Syncfusion Code Studio now available
Code Studio is an AI-powered IDE that offers capabilities like autocompletion, code generation and explanations, refactoring of selected code blocks, and multistep agent automation for large-scale tasks.
Customers can use their preferred LLM to power Code Studio, and will also get access to security and governance features like SSO, role-based access controls, and usage analytics.
“Every technology leader is seeking a responsible path to scale with AI,” said Daniel Jebaraj, CEO of Syncfusion. “With Code Studio, we’re helping enterprise teams harness AI on their own terms, maintaining a balance of productivity, transparency, and control in a single environment.”
Linkerd to get MCP support
Buoyant, the company behind Linkerd, announced plans to add MCP support to the project. The support will give users more visibility into their MCP traffic, including metrics on resource, tool, and prompt usage such as failure rates, latency, and volume of data transmitted.
Additionally, Linkerd’s zero-trust framework can be used to apply fine-grained authorization policies for MCP calls, allowing companies to restrict access to specific tools or resources based on the identity of the agent.
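Linkerd’s MCP policy surface hasn’t been published yet, so the details may change, but the decision it describes is easy to sketch in isolation. The minimal Python fragment below is purely illustrative and not a Linkerd API: the POLICY table, the is_allowed helper, and the identity strings are assumptions, meant only to show the kind of per-agent, per-tool check a mesh proxy could make before forwarding an MCP tools/call request.

```python
# Hypothetical sketch of identity-based authorization for MCP tool calls.
# Nothing here is a Linkerd API; the policy table, helper, and identity
# strings are illustrative assumptions only.

# Map each workload identity (e.g. a mesh mTLS identity) to the MCP tools
# it may invoke. An agent with no entry gets no access at all.
POLICY: dict[str, set[str]] = {
    "billing-agent.prod.serviceaccount.identity.linkerd.cluster.local": {"read_invoice"},
    "support-agent.prod.serviceaccount.identity.linkerd.cluster.local": {"read_invoice", "open_ticket"},
}

def is_allowed(agent_identity: str, tool_name: str) -> bool:
    """Return True if the given agent identity may call the given MCP tool."""
    return tool_name in POLICY.get(agent_identity, set())

# A proxy would evaluate a check like this before forwarding a tools/call request.
assert is_allowed("support-agent.prod.serviceaccount.identity.linkerd.cluster.local", "open_ticket")
assert not is_allowed("billing-agent.prod.serviceaccount.identity.linkerd.cluster.local", "open_ticket")
```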
OpenAI starts creating new benchmarks that more accurately evaluate AI models across different languages and cultures
English is spoken by only about 20% of the world’s population, yet existing AI benchmarks for multilingual models are falling short. For example, MMMLU has become saturated to the point that top models cluster near the high end of the scale, which OpenAI says makes it a poor indicator of real progress.
Additionally, existing multilingual benchmarks focus on translation and multiple-choice tasks, and don’t necessarily measure how well a model understands regional context, culture, and history, OpenAI explained.
To remedy these issues, OpenAI is building new benchmarks for different languages and regions of the world, starting with India, its second largest market. The new benchmark, IndQA, will “evaluate how well AI models understand and reason about questions that matter in Indian languages, across a wide range of cultural domains.”
There are 22 official languages in India, seven of which are spoken by at least 50 million people. IndQA includes 2,278 questions across 12 different languages and 10 cultural domains, and was created with help from 261 domain experts from the country, including journalists, linguists, scholars, artists, and industry practitioners.
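OpenAI hasn’t shared IndQA’s record format here, and the benchmark is graded against expert-written rubrics rather than simple right/wrong answers, but the roll-up those numbers imply is straightforward: score each question, then aggregate by language and by cultural domain. The Python sketch below assumes a hypothetical, simplified record shape (language, domain, correct fields) purely for illustration.

```python
from collections import defaultdict

# Hypothetical IndQA-style results: the field names and binary "correct"
# grade are simplifying assumptions, not OpenAI's published schema.
results = [
    {"language": "Hindi",   "domain": "Food & Cuisine", "correct": True},
    {"language": "Hindi",   "domain": "History",        "correct": False},
    {"language": "Bengali", "domain": "History",        "correct": True},
]

def accuracy_by(key: str) -> dict[str, float]:
    """Roll per-question grades up into accuracy per language or per domain."""
    totals, hits = defaultdict(int), defaultdict(int)
    for record in results:
        totals[record[key]] += 1
        hits[record[key]] += record["correct"]
    return {group: hits[group] / totals[group] for group in totals}

print(accuracy_by("language"))  # e.g. {'Hindi': 0.5, 'Bengali': 1.0}
print(accuracy_by("domain"))    # e.g. {'Food & Cuisine': 1.0, 'History': 0.5}
```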
SnapLogic introduces new capabilities for agents and AI governance
Agent Snap is a new execution engine that makes agent runs observable. The company compared it to onboarding a new employee: training and observing them before giving them greater responsibility.
Additionally, its new Agent Governance framework allows teams to ensure that agents are safely deployed, monitored, and compliant, and provides visibility into data provenance and usage.
“By combining agent creation, governance, and open interoperability with enterprise-grade resiliency and AI-ready data infrastructure, SnapLogic empowers organizations to move confidently into the agentic era, connecting humans, systems, and AI into one intelligent, secure, and scalable digital workforce,” the company wrote in a post.
Sauce Labs announces new data and analytics capabilities
Sauce AI for Insights allows development teams to turn their testing data into insights on builds, devices, and test performance, down to a user-by-user basis. Its AI agent will tailor its responses based on who is asking the question, such as a developer getting root cause analysis info while a QA manager gets release-readiness insights.
Each response comes with dynamically generated charts, data tables, and links to relevant test artifacts, as well as clear attribution as to how data was gathered and processed.
“What excites me most isn’t that we built AI agents for testing—it’s that we’ve democratized quality intelligence across every level of the organization,” said Shubha Govil, chief product officer at Sauce Labs. “For the first time, everyone from executives to junior developers can now participate in quality conversations that once required specialized expertise.”
Google Cloud’s Ironwood TPUs will soon be available
The new Tensor Processing Units (TPUs) will be available in the next few weeks. They were designed specifically for handling demanding workloads like large-scale model training and high-volume, low-latency AI inference and model serving.
Ironwood TPUs can scale up to 9,216 chips in a single unit with Inter-Chip Interconnect (ICI) networking at 9.6 Tb/s.
The company also announced a preview of N4A, a new virtual machine instance powered by its Arm-based Axion CPUs, as well as C4A, an Arm-based bare metal instance.
“Ultimately, whether you use Ironwood and Axion together or mix and match them with the other compute options available on AI Hypercomputer, this system-level approach gives you the ultimate flexibility and capability for the most demanding workloads,” the company wrote in a blog post.
DefectDojo announces security agent
DefectDojo Sensei acts like a security consultant, and is able to answer questions about cybersecurity programs managed through DefectDojo.
Key capabilities include evolution algorithms for self-improvement, generation of tool recommendations for security issues, analysis of current tools, creation of customer-specific KPIs, and summaries of key findings.
It is currently in alpha, and is expected to become generally available by the end of the year, the company says.
Testlio expands its crowdsourced testing platform to provide human-in-the-loop testing for AI solutions
Testlio, a company that offers crowdsourced software testing, has announced a new end-to-end testing solution designed specifically for testing AI solutions.
Leveraging Testlio’s community of over 80,000 testers, this new solution provides human-in-the-loop validation for each stage of AI development.
“Trust, quality, and reliability of AI-powered applications rely on both technology and people,” said Summer Weisberg, COO and Interim CEO at Testlio. “Our managed service platform, combined with the scale and expertise of the Testlio Community, brings human intelligence and automation together so organizations can accelerate AI innovation without sacrificing quality or safety.”
Kong’s Insomnia 12 release adds capabilities to help with MCP server development
The latest release of Insomnia aims to give MCP developers a test-iterate-debug workflow so they can quickly build and validate their MCP servers.
Developers will now be able to connect directly to their MCP servers, manually invoke tools with custom parameters, inspect protocol-level and authentication messages, and see responses.
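As a point of reference, this is roughly what that connect-and-invoke loop looks like when done by hand with the official MCP Python SDK; Insomnia 12 wraps the same steps in a UI. The server command (my_mcp_server.py), tool name (search_docs), and arguments below are placeholders for whatever your own server exposes.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: launch your own MCP server over stdio.
server = StdioServerParameters(command="python", args=["my_mcp_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the tools the server exposes...
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # ...then invoke one with custom parameters and inspect the response.
            result = await session.call_tool("search_docs", arguments={"query": "rate limits"})
            print(result.content)

asyncio.run(main())
```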
Insomnia 12 also adds support for generating mock servers from OpenAPI spec documents, JSON samples, or a URL. “What used to require hours of manual set up, like defining endpoints or crafting realistic responses, now happens almost instantaneously with AI. Mock servers can now transform from a ‘nice to have if you have the time to set them up’ into an essential part of a developer’s workflow, allowing you to test faster without manual overhead,” Kong wrote in a blog post.
OpenAI and AWS announce $38 billion deal for compute infrastructure
AWS and OpenAI announced a new partnership that will have OpenAI’s workloads running on AWS’s infrastructure.
AWS will build compute infrastructure for OpenAI that is optimized for AI processing efficiency and performance. Specifically, the company will cluster NVIDIA GPUs (GB200s and GB300s) on Amazon EC2 UltraServers.
OpenAI will commit $38 billion to Amazon over the next several years and will begin using AWS infrastructure immediately, with full capacity expected by the end of 2026 and the ability to scale beyond that as needed.

