At the end of Day 2 of the TestMu Conference 2023, we reflect on a day filled with learning and inspiration. From the first session to the final chat exchange, we experienced innovation and togetherness. Throughout the day, we explored various aspects of testing, including AI’s impact and automation’s development, led by industry experts. These discussions have left a lasting impact, inspiring us beyond our screens.
We’re ready to carry its lessons and inspiration as we conclude this exceptional day. The connections we’ve made and the insights we’ve gained will light the path ahead, allowing us to shape the future of testing.
Let’s see some highlights from Day 2 of the TestMu Conference 2023.
Welcome Note: Day 2 by Manoj Kumar
Manoj Kumar greeted all the speakers and attendees with a warm welcome, infusing everyone with excitement as we were on the brink of commencing Day 2 of TestMu 2023.
As we delved into Day 2, everyone was encouraged to prepare for an even more captivating array of talks, sessions, and discussions. The schedule featured various intriguing topics and esteemed speakers, including Maaret Pyhäjärvi, Paul Grizzaffi, Andrew Knight, Mathias Bynens, Christian Bromann, and our keynote speaker, Mahesh Venkataraman. The anticipation was palpable for the exclusive preview of the “Future of Quality Assurance” survey, reflecting the collective insights of our community and underscoring the significance of collaboration and shared viewpoints.
Regardless of whether attendees are seasoned testing professionals or embarking on their journey, the event promised something for everyone. The expectation was for further enlightening talks that delve deeper into the testing domain, panel discussions that explore diverse perspectives, and sessions meticulously designed to provide tangible takeaways, enhancing the attendees’ testing practices.
The contests, certification marathons, and #LambdaTestYourApps challenges offered compelling opportunities for attendees to showcase their abilities and gain prominence in this dynamic community.
Expanding the Horizon of Innovation in Testing by Mahesh Venkataraman
Over the last twenty years, there has been a significant transformation in testing processes and technology. However, the concept of innovative testing has revolved chiefly around test automation. Recently, AI-driven testing has gained popularity. But should innovation in testing be limited to only automation or AI-driven methods? Sometimes, the most valuable ideas come from outside the industry. Can we find inspiration from how other fields have transformed their products and practices? Can we extract and apply their core principles to testing?
This session by Mahesh encouraged contemplation by delving into how we can reshape, rethink, and reposition testing for the benefit of all stakeholders. By scrutinizing the challenges encountered in contemporary software delivery and capitalizing on innovation principles borrowed from various domains, we can visualize a testing future that adds extra value to everyone involved.
Key Takeaways: Understanding the implicit and explicit challenges facing modern software delivery and how generic innovation principles can be applied to bring about a win-win for all — opening new career pathways for practitioners, creating business value for customers, and generating new revenue streams for tools and service providers.
About the Speaker:
With more than 34 years of experience in the Information Technology sector, Mahesh Venkataraman has held diverse roles encompassing technology management, service management, and business management.
Panel Discussion: Evolution of Testing in the Age of DevOps
Software engineering teams have warmly embraced DevOps to achieve intelligent, swift, and daily shipping. Yet, the question remains: Does this guarantee confident shipping? Amid the DevOps era, continuous testing offers solutions and hurdles as testing methodologies evolve.
Our distinguished panel of industry luminaries engaged in a conversation about this evolution, shedding light on their contributions to aiding clients in overhauling their testing approaches through DevOps.
About Panelists:
With approximately twenty years of software testing experience, Asmita Parab is a seasoned professional dedicated to ensuring the delivery of top-notch software products. As Head of Testing at UST Product Engineering, she leads a team of skilled professionals, driving excellence in testing practices and guaranteeing software application reliability. Committed to continuous improvement, she stays updated with emerging testing trends, shaping industry best practices within the organization. Asmita’s expertise ensures high-quality standards for customers and organizational growth.
A Business and Technology leadership veteran with over two decades of experience, Bhushan Bagi excels in scaling businesses and fostering high-performance teams. Currently overseeing Business for Quality Engineering at Wipro, he spearheads various aspects from Go-to-Market strategies to industry engagement. Bhushan’s transformational expertise spans multiple domains, making him a sought-after consultant for business and technology growth.
Harleen Bedi, a senior IT consultant, specializes in developing and selling IT offerings related to quality engineering and emerging technologies. She crafts and deploys QE strategies and innovations for enterprises, driving business objectives for clients. With her focus on AI, Cloud, Big Data, and more, Harleen is pivotal in aligning technology advancements with quality engineering.
Mallika Fernandes is an IT leader with an impressive 24-year innovation journey. As part of the Cloud First group at Accenture, she leads Quality Engineering innovation and automation, holding eleven patents for her pioneering work. Her passion for AI/ML and Cloud Quality transformation is reflected in her contributions.
Vikul Gupta, the Head of NextGen CoE at Qualitest, a modern quality engineering company, boasts over twenty years of experience with Tier 1 companies. His expertise spans quality engineering transformation, NextGen solutions, and co-innovation with partners worldwide. With a robust technological background encompassing AI/ML, DevOps, Cloud, and more, Vikul brings domain-specific insights to the forefront of his leadership roles.
My Crafting Project Became a Critical Infrastructure by Elizabeth Zagroba
Frustrated with the usual testing process, Elizabeth wrote a Python script that called the APIs, built and deployed the app, and printed updates in the terminal. Initially addressing her immediate needs, it unexpectedly automated a manual step in her team’s release process.
Other teams adopted it, expanding its functionality. She managed code submissions, even those she disagreed with, to keep things unblocked. Eventually, maintaining the code became burdensome, and she stopped. However, renewed interest sparked when a merge request came in, leading to collaborative improvements and the addition of tests. This rejuvenated her enthusiasm for maintaining the script, which had grown into a vital piece of infrastructure.
Key Takeaways:
Good collaboration takes time and energy.
Small things for one use can grow into bigger things with many benefits.
Pick up the work for the skills you want to build.
About the Speaker:
Elizabeth serves as the Quality Lead at Mendix in The Netherlands. Her role involves enhancing exploratory testing by orchestrating collaborative “mob” testing sessions, effectively addressing gaps, and ensuring that the “it should just work” principle holds true. She fosters a shared comprehension of projects, offers critical insights, and supports team members beyond formal management channels. Additionally, she adeptly crafts API tests and communicates proficiently in English, making her a key asset in ensuring quality and cohesion within the team.
Let’s Play Rhetoric for All Things Testing by Maaret Pyhäjärvi
Remote screen sharing provided the platform for an adapted version of the public speaking game Rhetoric. Willing volunteers took part during the session, delivering concise two-minute talks on diverse testing topics. Guided by dice rolls, players encountered a variety of speaking challenges:
TOPIC: Presented a framing word and six constraints to tailor the talk’s style.
CHALLENGE: Introduced specific speaking constraints.
QUESTION: Supplied audience prompts paired with six constraints.
REFLECTION: Granted the opportunity to speak freely on any topic.
CHOICE: Allowed participants to select from any of the four options mentioned.
About the Speaker:
Maaret Pyhäjärvi has showcased her expertise as an exceptional exploratory tester while holding the role of Development Manager at Vaisala. She is proficient as a tester, (polyglot) programmer, speaker, author, and community facilitator.
Staying Ahead In The Tech World by Rahul Parwal and Ajay Balamurugadas
Amid rapid technological change, maintaining a competitive edge demands continuous learning. This talk delved into strategies for honing testing skills:
Building Your Toolkit: Understanding the value of a versatile toolkit and mastering tool selection amidst many options.
Leveraging Social Media: Discovering how staying informed through social platforms amplifies professional prowess.
Unlocking Automation: Exploring automation’s role, not solely in testing but also in daily tasks via micro tools.
Personal Insights: Gaining pragmatic insights from speakers’ experiences in tool selection and testing.
Key Takeaways:
Toolkit Significance: Learn to create a comprehensive toolkit with fitting tools.
Social Media’s Edge: Uncover how staying connected online enhances your testing prowess.
Automation Unveiled: Embrace automation’s power using micro tools.
Practical Insights: Benefit from firsthand insights to thrive in the testing tech landscape.
About Speakers:
Ajay Balamurugadas, known as ‘ajay184f’ in the testing community, is a seasoned expert with extensive experience redefining testing methodologies. With a distinguished background, he has co-founded Weekend Testing, authored multiple insightful books, and holds the position of Senior Director, QE at GSPANN Technologies.
Rahul Parwal is a proficient Software Tester and generalist. As a Senior Software Engineer at IFM Engineering in India, he specializes in testing IoT systems encompassing Unit, API, Web, and Mobile Testing. Fluent in C# and Python, Rahul’s expertise is well-rounded. He actively contributes to the testing community through various channels, sharing his insights on LinkedIn, Twitter, his blog, YouTube, and meetups.
Balancing the Test Pyramid, the AWS way!
The AWS team delved into their comprehensive testing approach, amalgamating hybrid UI and API testing with synthetic canary testing.
Their methodology responded to the challenge of balancing test coverage and efficiency while maintaining superior quality. Practical techniques and frameworks employed by AWS teams seamlessly integrated UI and API testing, boosting coverage across the software stack.
Additionally, they showcased the application of synthetic canary testing, putting real-world scenarios to the test in production to ensure operational excellence (OE) metrics coverage. By simulating actual production traffic and comparing outcomes against established benchmarks, the team proactively identified anomalies and potential issues, reinforcing system reliability and scalability.
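To make the canary idea concrete, here is a minimal Python sketch of a synthetic probe, offered as our own illustration rather than AWS’s implementation: it hits a hypothetical production endpoint, compares status and latency against a benchmark, and flags anomalies.

```python
# A minimal sketch (not AWS's implementation) of a synthetic canary check:
# probe a production endpoint, compare status and latency against a benchmark,
# and flag anomalies. The URL and thresholds are illustrative.
import time
import urllib.request

CANARY_URL = "https://example.com/health"   # hypothetical endpoint
MAX_LATENCY_SECONDS = 1.5                   # illustrative benchmark

def run_canary() -> bool:
    start = time.monotonic()
    with urllib.request.urlopen(CANARY_URL, timeout=10) as response:
        status = response.status
    latency = time.monotonic() - start
    healthy = status == 200 and latency <= MAX_LATENCY_SECONDS
    print(f"status={status} latency={latency:.2f}s healthy={healthy}")
    return healthy

if __name__ == "__main__":
    # In practice a scheduler (for example, a cron job) would run this
    # continuously and publish the result as an operational-excellence metric.
    run_canary()
```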
Key Takeaways:
Hybrid Testing Approach: The AWS team’s hybrid testing approach, blending UI and API testing, struck a balance between test coverage and efficiency.
Expanded Test Coverage: Understanding how AWS leveraged hybrid testing to simultaneously validate user interface interactions and backend functionality, enhancing test coverage.
Operational Excellence: Gaining insights into leveraging synthetic canary testing to fortify your organization’s testing endeavors for system reliability and availability.
Practical Insights: Exploring the tools and frameworks that AWS teams employed in implementing the hybrid UI and API testing strategy, with actionable techniques for enhancing personal testing strategies.
About the Speaker:
Min Xu brings substantial expertise and a robust background in quality and engineering. In her recent role managing engineering teams at AWS, her influence was significant. With over 15 years of industry experience, she has contributed to Amazon’s pursuit of product quality and customer satisfaction over her five-year tenure there, holding multiple positions in quality and engineering management throughout her career.
Expect to Inspect — Performing Code Inspections on Your Automation by Paul Grizzaffi
Automation development is indeed a form of software development. Regardless of using drag-and-drop or record-and-playback tools, there’s code running behind the scenes.
Treating automation as software development is essential to avoid pitfalls. Just as in software development, code inspection plays a crucial role. In this session, Paul Grizzaffi explained the importance of code inspections for automation, highlighting differences from product software reviews and sharing real-life issues discovered during these assessments.
Key Takeaways:
Value of Inspections
Business-Driven Inspection Approach
Utilization of Tools
Illustrative Examples
About the Speaker:
Paul Grizzaffi, a Senior Automation Architect at Vaco, is passionate about delivering technology solutions for the testing, QE, and QA realms. His role spans automation assessments, implementations, and contributions to the broader testing community.
An accomplished speaker and writer, Paul has presented at local and national conferences and is associated with Software Test Professionals and STPCon. He holds advisory roles and memberships in industry boards such as the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas.
Test Observability — A Paradigm Shift from Automation to Autonomous to Deep Observability by Vijay Kumar Sharma
The software industry has witnessed several transformations over time, often encountering disruptions every five years. Software testing, too, has stayed connected to the latest trends and technologies. Testing strategies aligned with agile development, rapid deployments, and heightened customer expectations for reliability and user-friendly interfaces; like the business logic they verify, they had to grow swiftly and dependably.
Quality engineering (QE) processes evolved from test automation to autonomous testing, and the recent session delved into a new growth requirement: test observability. Test observability involved extracting continuous insights from automation infrastructure to guide decisions about product stability, reliability, and speed gaps in continuous deployment. It also streamlined resource allocation for tests, providing a holistic system view through automated testing.
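One lightweight way to begin instrumenting for observability, offered here as our own sketch rather than tooling from the session, is to emit structured events from the test runner itself. The pytest hook below writes each test’s outcome and duration to a JSON Lines file that a dashboard could ingest to track stability and speed over time.

```python
# conftest.py -- a minimal, hypothetical sketch of test observability: a pytest
# hook that streams each test's outcome and duration to a JSON Lines file.
# The file name and fields are illustrative, not from the session.
import json
import time

def pytest_runtest_logreport(report):
    # "call" is the phase in which the test body actually ran.
    if report.when != "call":
        return
    record = {
        "test": report.nodeid,
        "outcome": report.outcome,               # "passed", "failed", or "skipped"
        "duration_seconds": round(report.duration, 3),
        "timestamp": time.time(),
    }
    with open("test-observability.jsonl", "a") as sink:
        sink.write(json.dumps(record) + "\n")
```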
Key Takeaways: In the recently concluded session, the focus remained on value-driven testing achieved through optimal technology utilization for informed decision-making and intelligent execution.
About the Speaker:
Vijay boasts over 18 years of experience in Quality Engineering, primarily affiliated with Adobe and Sumo Logic. He has showcased his expertise by speaking at numerous testing conferences across India. He intends to propose a session titled ‘Test Observability and Its Significance in the Current Landscape of Rapidly Evolving Tech Enterprises.’
Advanced Strategies for Rest API Testing by Julio de Lima
Tired of the oversimplified view of REST API testing? Julio dove deeper into advanced strategies, covering areas like contract testing, architecture-style adherence, security, and more. Attendees came away with tools and tips to elevate their REST API testing, insights into complex components, and the skills to define tailored testing techniques for more efficient planning and strategy.
Key Takeaways
Julio comprehensively covered crucial facets of REST API testing, encompassing contract testing, backwards-compatibility testing, validation of adherence to the REST architectural style, token structure evaluation, REST API heuristic testing, external service simulation, security testing, and performance testing.
He elaborated on the significance of each topic, detailing the steps for each type of testing, highlighting applicable tools, and offering illustrative examples for better comprehension.
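As a flavor of what such a check can look like, here is a minimal, hypothetical Python sketch of contract-style testing that validates a REST response against an agreed schema. The endpoint, schema, and libraries (requests and jsonschema) are our own illustrative choices, not necessarily the ones Julio demonstrated.

```python
# A minimal, hypothetical sketch of contract-style schema checking of a REST
# response. The endpoint and schema are illustrative; dedicated contract-testing
# tools exist, but plain schema validation conveys the idea.
import requests
from jsonschema import validate

USER_SCHEMA = {
    "type": "object",
    "required": ["id", "name", "email"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
}

def test_get_user_matches_contract():
    response = requests.get("https://api.example.com/users/1", timeout=10)
    assert response.status_code == 200
    # Fails the test if the payload drifts from the agreed contract.
    validate(instance=response.json(), schema=USER_SCHEMA)
```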
About the Speaker:
Júlio de Lima is a specialist in Software Testing with 13 years of experience. Júlio has a Bachelor’s Degree in Software Engineering, a specialization in Teaching in Higher Education, and a Master’s Degree in Electrical and Computational Engineering with a focus on Testing and Artificial Intelligence.
A Live Intro to Python Testing by Andrew Knight
Python proved itself as an exceptional language for test automation, celebrated for its concise syntax and extensive package library. In the recently concluded session, Andrew guided participants through the realm of Python-driven testing via live coding, an interactive experience without slides! The spotlight was on project setup with pytest and Playwright, crafting unit, API, and UI tests collaboratively. As the session concluded, attendees were well-prepared to embark on their own test automation journey with Python, armed with additional resources for further learning.
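For readers who missed the live coding, here is a minimal sketch of the kind of project the session built: a plain pytest unit test alongside a Playwright UI test. The example site and helper function are illustrative assumptions, not Andrew’s actual demo code.

```python
# A minimal sketch in the spirit of the session: a unit test and a Playwright UI
# test driven by pytest. Assumes pytest, playwright, and pytest-playwright are
# installed ("pip install pytest playwright pytest-playwright && playwright install").
import re

from playwright.sync_api import Page, expect

def multiply(a: int, b: int) -> int:
    return a * b

def test_multiply_unit():
    # Plain pytest unit test: no fixtures, just an assertion.
    assert multiply(3, 4) == 12

def test_homepage_title(page: Page):
    # UI test: the "page" fixture comes from the pytest-playwright plugin.
    page.goto("https://playwright.dev/")
    expect(page).to_have_title(re.compile("Playwright"))
```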
About the Speaker:
Andrew Knight, also known as “Pandy,” is the Automation Panda. He’s a software quality champion who loves to help people build better-quality software. An avid supporter of open-source software, Pandy is a Playwright Ambassador and the lead developer for Boa Constrictor, the .NET Screenplay Pattern.
Open Source for Fun and Profit: Opportunities for Personal and Professional Growth
Irrespective of your skill level, open-source projects present distinctive avenues for knowledge-sharing and mutual learning. From crafting documentation to fixing bugs and adding features, dedicating time to open-source initiatives yields both short-term and lasting rewards. Perhaps you want to explore a new language or technology but aren’t sure where to begin, or you aim to refine your abilities and gain valuable insights from project maintainers.
The prospect of putting yourself out there can be daunting, but the rewards of expanding your network and expertise are invaluable. In the recently concluded session, the example of the Bitcoin open-source ecosystem illustrated that opportunities abound for everyone.
Key Takeaways
Emphasis on Collaboration and Innovation: The session highlighted the significance of open source in fostering collaboration and driving innovation.
Identifying Contribution Opportunities: Attendees learned how to identify active and well-maintained open-source projects to contribute to, enhancing their engagement in the community.
Understanding Open Source Stacks: The session provided insights into the composition and characteristics of open source stacks.
About the speaker
Felipe has been in the tech industry for almost twenty years and has been a Senior Software Engineer in Test at Netflix for the past six years, where he helps build the UI delivered to millions of Smart TVs and other streaming devices around the world.
Chrome ❤️ Testing
The talk provided an overview of the recent initiatives undertaken by the Chrome team to enhance support for testing and automation scenarios, focusing on “Chrome for Testing” and the newly introduced Headless mode of Chrome.
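To ground those two topics, here is a minimal Python and Selenium sketch, our own assumption rather than material from the talk, that exercises the new Headless mode. The --headless=new flag selects the rewritten Headless implementation, and recent Selenium releases can fetch a matching Chrome for Testing binary automatically, though setups vary.

```python
# A minimal sketch (not from the talk) of driving Chrome's new Headless mode
# from Python with Selenium. The target URL is illustrative.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # new Headless mode
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.example.com")
    print(driver.title)  # e.g., "Example Domain"
finally:
    driver.quit()
```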
About the speaker
Mathias is a web standards enthusiast from Belgium who currently works on Chrome DevTools. He likes HTML, CSS, JavaScript, Unicode, performance, and security.
Quality in Digital Transformation
In the concluded panel discussion, titled ‘Quality in Digital Transformation,’ the panelists delved into the interconnectedness of quality and digital transformation. Esteemed leaders across various industries shared their perspectives on upholding standards, ensuring smooth user experiences, and mitigating risks in a dynamically shifting technological landscape. They provided insights into establishing and maintaining quality processes conducive to agile transformation and securing digital assets.
Furthermore, the discussion explored harnessing data-driven decision-making to oversee quality and performance, and the strategies to ensure quality assurance and compliance in the digital realm. The panel shed light on how quality assurance is pivotal in driving successful digital transformation for businesses.
About Panelists:
With over 15 years of experience in the technology industry, Anish Ohri has played a vital role in advancing various innovative products and solutions across diverse domains, including Publishing, Finance, Multimedia, e-commerce, Gaming, and Enterprise Software.
Manish is a Quality Engineering enthusiast known for his expertise in developing and deploying quality software. He has actively contributed to open-source projects like Puppeteer and Playwright and advocates for balanced testing strategies. His discussions revolve around testing event-driven systems, gRPC constructs, and Contract Testing.
With over 20 years of experience, Todd Lemmonds is a visionary in software quality assurance. He champions early and frequent testing, driving shift-left testing strategies. Todd emphasizes tester involvement during story refinement, integration of appropriate tests into automated pipelines, and the right test types at suitable development stages. His mission is to create an environment where testers thrive and enhance their skills for modern software development.
Robert Gonzalez is Vice President of Engineering at SugarCRM, a prominent CRM software company. His role involves steering engineering initiatives and fostering innovation within the CRM realm. Robert leads a skilled team and contributes significantly to developing and enhancing SugarCRM’s top-tier products, ensuring superior quality, functionality, and customer contentment.
As Director of Quality at Hudl, Seema Prabhu drives a quality-centric culture and sets up high-performance teams. With a passion for quality and years of experience, she excels in leadership, process establishment, coaching, and mentoring. Seema advocates for efficient testing and shares her expertise through speaking engagements at meetups and conferences.
Component Testing with WebdriverIO
An informative session emphasized the growing significance of web component testing in the rapidly evolving landscape of front-end frameworks. The session shed light on how testing individual UI components has become a pivotal aspect of testing stacks, offering the advantage of thoroughly examining various features within an element. Doing so effectively reduces the reliance on end-to-end tests, which are generally slower to run.
The session, hosted by Christian Bromann, the Founding Engineer at Stateful, delved into these novel browser runner capabilities. Through engaging live demonstrations, attendees were treated to firsthand experiences of testing components in various popular frameworks, including Vue, Svelte, React, and Preact. The session showcased the remarkable ease and efficiency with which component testing can now be approached.
About the speaker
Christian Bromann is a Full-stack Engineer passionate about Open Source and Open Standards. A driven individual, he adapts to any situation and has a proven ability to grow himself and others. He is a quality-focused engineer with a background in automation technologies and test-driven development.
Test Automation with SWAG
This enlightening session addressed the crucial matter of how to effectively supply automation frameworks with an unending stream of test data. The session explored a range of solutions, including both traditional and emerging tools, that cater to the test data-driven approach. Among these, a prevalent method involves storing all input values within storage files such as CSV, YAML, and JSON. Another viable option is harnessing the capabilities of databases, which resolves various challenges while meeting the dynamic data requirements of automated scripts.
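As a concrete illustration of the file-based approach, here is a small, hypothetical pytest sketch that reads login cases from a CSV file and feeds them into a parametrized test. The file name, columns, and helper function are assumptions made for the example, not tooling shown in the session.

```python
# A minimal, hypothetical sketch of file-based data-driven testing: test inputs
# live in a CSV file and feed a pytest test through parametrize.
import csv
from pathlib import Path

import pytest

def load_cases(path: str = "login_cases.csv"):
    # Each row: username, password, expected ("allow" or "deny").
    with Path(path).open(newline="") as handle:
        return [(r["username"], r["password"], r["expected"]) for r in csv.DictReader(handle)]

def login_allowed(username: str, password: str) -> bool:
    # Stand-in for a call into the real system under test.
    return bool(username) and len(password) >= 8

@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login(username, password, expected):
    assert ("allow" if login_allowed(username, password) else "deny") == expected
```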
A noteworthy highlight of the session was the introduction of an innovative API cloud solution. This solution simplifies the process of interfacing with multiple databases, eliminating the need for integrating drivers into various automation frameworks. Attendees were presented with a seamless way to establish communication with different databases, streamlining the process without the hassle of managing multiple drivers. The session successfully conveyed how this solution enhances the efficiency and flexibility of automation frameworks.
Key Takeaways
The session included insights into data-driven automation testing, the utilization of dynamic test data, the diversification of databases, and the importance of documenting test data.
About the speaker
Garvit Chandna is Head of Test Engineering at Equinox, with 14 years of experience handling globally distributed automation and manual test engineering teams. He has wide experience in management and in architecting complex automation frameworks.
Wrapping up Day 2!
A warm and sincere thank you to our esteemed speakers who have significantly contributed to shaping the success of Day 2 at TestMu 2023. The event was executed meticulously, showcasing insights from experienced speakers spanning the global testing community.
As we bring Day 2 to a close, we invite all to join us for Day 3, where the momentum of productivity and innovation continues unabated. Together, let’s delve into novel testing paradigms, actively engage with our community, and collectively define the future of testing.
For those who have been with us since the inception of the TestMu Conference in 2022, your unwavering support has been truly invaluable. Let’s continue the journey towards a world with minimal bugs, embracing the testing revolution by securing your spot at the LambdaTest TestMu Conference 2023.
Become a trailblazer in shaping the testing landscape. Your participation remains pivotal as we stride into Day 3, navigating the currents of technology and propelling meaningful change. We extend our heartfelt gratitude for your involvement in this remarkable event. Anticipating another extraordinary day ahead!
Stay inquisitive, stay engaged, and wholeheartedly embrace the future of testing!