June was a bit poorer in AI news for the first time this year, but there were still a few things worth mentioning.
And why is it that this month hasn't been so busy? In previous months, we've seen several big releases at once - and now it's been rather quiet. In fact, companies have been pretty consistent about when they release models, and now they're back to focusing on improving them. Of course, the onset of summer may have played a role, or maybe it's starting to show that scaling models isn't everything.
Gemini 2.5 pro and Flash
- They are now stable and production-ready.
- They can be used in applications that serve end users - i.e. applications in production.
- In addition, Google is releasing Flash 2.5 Lite - a faster version of Flash. however, due to pricing adjustments, in some cases it doesn't make sense to use it.
- Useful links: Deepmind Google
ChatGPT adds connectors
- It now supports Hubspot, Salesforce, Teams and more.
- OpenAI plans to further expand the MCP server network and allow users to use application data in ChatGPT. However, this may have security implications.
- Useful links: ChatGPT for Business Updates
DeepSearch via API
- It is also newly usable in applications used by third-party developers.
- It has a lot of potential - it depends on what the developers build on it.
- Useful links: Docs Cursor
Cursor issues background agents
- Allows you to run some programming tasks directly from Slack.
- However, this brings with it a number of security risks.
- It points to the need for rules of use at the company (firm) level
During June, the weaknesses of LLM applications were also addressed
- Attacks on agents using "poisoned" prompts are beginning to appear.
- It is extremely difficult to defend against such attacks - and not always a fake MCP server is to blame.
- The premise of the attack is that the LLM can execute commands independently, and the attacker can get into it to send the prompt.
Inception Labs releases Mercury model
- The model is based on a diffusion architecture.
- It achieves the performance of the GPT-4.1 Nano, but is 7x faster - at about the same price.
- Models of this type could be the future.
- Useful links: LinkedIn Post Artificial Analysis
Author

Jakub Vacek
Software AI EngineerSenior backend developer specializing in microservices architecture, with a growing focus on AI. Skilled in TypeScript, Node.js, React.js, Nest.js, and building AI-powered solutions.