The next era of growth is driven by business interoperability. With advancements in cloud technology, generative AI, and solutions that combine services and software, companies are outpacing the competition not just through superior products, but by forming stronger partnerships, optimizing go-to-market strategies, and developing better business models. At WorkSpan, we are at the forefront of this revolution, building the world’s largest trusted co-selling network.
Our platform hosts seven of the world’s ten largest partner ecosystems, managing a $50B customer pipeline. Companies like AWS, Google, Microsoft, MongoDB, PagerDuty, and Databricks trust WorkSpan to enhance their ecosystem strategies.
Backed by Insight Partners, Mayfield, and M12, and having secured a $30M Series C, WorkSpan is poised to shape the future of B2B partnerships. Join us to make a tangible impact and drive this change.
Why Join Us:
About the team:
The Engineering Team at WorkSpan is a dynamic and innovative group dedicated to pushing the boundaries of technology to drive our platform's success. Composed of talented individuals with diverse backgrounds and expertise, our team thrives on collaboration, creativity, and a passion for problem-solving.

At the heart of our engineering philosophy is a commitment to excellence and continuous improvement. We leverage cutting-edge tools and methodologies to develop robust, scalable, and user-centric solutions that empower our customers to achieve their business objectives.
With a focus on agility and adaptability, our Engineering Team embraces challenges and adopts new technologies to stay ahead in a rapidly evolving landscape. From front-end development to backend architecture, our team is at the forefront of shaping the future of collaborative business networks.
Driven by a shared vision and a relentless pursuit of excellence, the Engineering Team at WorkSpan is poised to tackle the most complex challenges and deliver innovative solutions that propel our platform and our customers' success forward.
Join our team for the opportunity to:
Own your results and make a tangible impact on the business
Be treated like the expert you are
Work with smart, passionate people every day
Job Summary: The QE Architect will be an exemplar in AI-driven quality engineering, responsible for reviewing and testing AI-first knowledge products and agents. They will review the underlying AI-based algorithms to ensure correctness, robustness, and business impact. The QE Architect will provide in-depth technical reviews of system designs; define, review, and validate test cases from a customer’s perspective; and apply mathematical models to evaluate platform capabilities. This role requires strong analytical skills, domain expertise in AI quality, and the ability to champion engineering excellence.
Key Responsibilities:
Algorithm Quality Assurance: Review and test AI/ML algorithms to validate correctness, efficiency, and real-world performance.
Design & Code Reviews: Collaborate with development teams to review system architectures, designs, and implementation from a quality perspective.
Customer-Centric Testing: Define and review test strategies, ensuring test cases align with customer expectations and real-world scenarios.
Model Validation: Develop and apply mathematical and statistical models to validate AI performance and platform capabilities.
AI and Data Validation: Validate the correctness and performance of AI solutions leveraging AWS Bedrock, vector databases, and LLMs such as Claude Sonnet.
Automation & Customization Testing: Lead the quality strategy for our Python-based automation framework and ensure the flexibility of our highly customizable platform for customer needs.
Quality Strategy & Best Practices: Define and champion best practices for AI-driven quality engineering, ensuring high test coverage and automation.
Performance & Scalability Testing: Validate AI models and platforms for scalability, performance, and edge-case handling.
Collaboration & Mentorship: Serve as a mentor to quality engineers and developers, fostering a culture of quality and AI excellence.
AI & LLM Metrics Publication: Define and publish key metrics for AI and Large Language Model (LLM) validations, ensuring transparency and continuous improvements in AI quality.
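To illustrate the kind of LLM validation metrics this role would define and publish, here is a minimal sketch in Python (the language of our automation framework). The data, function names, and the single exact-match metric are illustrative assumptions, not part of our actual framework; a production evaluation suite would add many more metrics (semantic similarity, bias checks, latency percentiles, and so on).

```python
def exact_match_rate(predictions, references):
    """Fraction of model outputs that exactly match the reference answer
    (case- and whitespace-insensitive). A deliberately simple baseline metric."""
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references)


def publish_metrics(predictions, references):
    """Assemble a metrics report suitable for a dashboard or CI quality gate.
    Hypothetical report shape; real keys would follow team conventions."""
    return {
        "exact_match": exact_match_rate(predictions, references),
        "num_samples": len(references),
    }


if __name__ == "__main__":
    # Illustrative labeled evaluation set.
    preds = ["Paris", "42", "blue"]
    refs = ["paris", "42", "green"]
    print(publish_metrics(preds, refs))  # 2 of 3 answers match
```

Publishing a report like this on every model or prompt change is one concrete way to deliver the transparency and continuous improvement described above.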
Required Skills & Experience:
Must have:
8+ years of experience in Quality Engineering, with at least 2+ years testing products built on statistical or operations research (OR) based algorithms. 1+ years of experience testing products using AI/ML is a distinct plus.
Strong knowledge of AI/ML algorithms and ability to validate their accuracy, bias, and performance.
Experience in testing AI-driven applications, including data validation, model performance validation, and AI explainability.
Strong analytical and problem-solving skills, with a deep understanding of AI risks and mitigation strategies.
Ability to collaborate effectively across teams and influence engineering best practices.
Desirable and a strong plus:
Solid experience in reviewing architectures, test strategies, and automation frameworks.
Knowledge of statistical analysis, probability, and mathematical modeling to validate AI systems.
Experience with AWS Bedrock, vector databases, and LLMs such as Claude Sonnet.
Experience in performance testing of AI models and large-scale distributed systems.