Here are some brief news updates from the world of AI, with a few of my comments and opinions—because I believe that opinions matter in a world flooded with soulless news. Let me know what you think in the comments.
A group of researchers has created a model called #ECgMLP that detects endometrial cancer with 99.26% accuracy, where human diagnostic accuracy typically tops out between 78% and 81%. And the model works on other cancers too: colon, breast, and oral.
Meanwhile, from #OpenAI and #MIT comes some worrying news: heavy use of ChatGPT has been associated with increased loneliness and emotional dependence. If we rely on machines even in our moments of vulnerability, we can't complain if we end up feeling more alone. It's a major wake-up call, especially for the younger generation. AI is not a friend, and if it starts to seem like one, the problem isn't the AI; it's us.
#Zapier has launched Zapier MCP, its implementation of the Model Context Protocol, which lets AI assistants act on more than 8,000 apps without complex custom integrations. In other words: AI doesn't just talk, it now acts. It clicks, opens, and executes. It's a leap in operational power, and it shows clearly where we're headed: toward assistants that don't just suggest but do, autonomous, fast, and invisible.
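For a concrete feel of what that means, here is a minimal sketch of an MCP-style tool call on the wire. MCP speaks JSON-RPC 2.0 underneath; everything else here (the endpoint URL, the tool name, the arguments) is a hypothetical placeholder, since Zapier generates per-user endpoints and tool catalogs.

```python
import json
import urllib.request

# Hypothetical endpoint: the real Zapier MCP URL is generated per user
# in the Zapier dashboard, and real MCP servers may also speak over
# stdio or SSE transports rather than a bare HTTP POST.
MCP_URL = "https://example.zapier.com/mcp"  # placeholder

# MCP is JSON-RPC 2.0: assistants discover tools with "tools/list" and
# invoke one with "tools/call". The tool name and arguments below are
# illustrative, not Zapier's actual identifiers.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "gmail_send_email",
        "arguments": {
            "to": "someone@example.com",
            "subject": "Hello from an AI assistant",
            "body": "This email was sent through an MCP tool call.",
        },
    },
}

req = urllib.request.Request(
    MCP_URL,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # JSON-RPC response carrying the tool's result
    print(json.loads(resp.read()))
```

The point is the shape of the exchange: one generic "tools/call" verb, with the whole app catalog behind it, which is exactly why no per-app integration is needed.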
#Tencent has announced Hunyuan T1, a model based on a hybrid Transformer-Mamba architecture. It performs complex reasoning, is inexpensive, and stands up well against the best Western models. The point isn’t just technical—it’s geopolitical. Chinese models are emerging with a speed and solidity that we often ignore in the West, and we can’t afford to do so, because behind every model is a vision of the world.
Meanwhile, #Anthropic has introduced "think," a tool that gives Claude a dedicated space to reason in a structured way while handling complex tasks. Not just answers, but actual internal reasoning. Here, the line between "simulating thought" and "really thinking" is becoming increasingly blurred.
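Mechanically, it's simpler than it sounds: "think" is registered like any other tool, and Claude calls it to log intermediate reasoning instead of acting. Here is a minimal sketch with the Anthropic Python SDK, based on the pattern Anthropic has described; the description text, model alias, and prompt are illustrative, not canonical.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "think" tool does nothing on the server side; its whole purpose is
# to give the model a sanctioned place to write down intermediate reasoning.
think_tool = {
    "name": "think",
    "description": (
        "Use this tool to think about something. It will not obtain new "
        "information or change anything; it just records the thought."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "thought": {
                "type": "string",
                "description": "A thought to think about.",
            },
        },
        "required": ["thought"],
    },
}

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    tools=[think_tool],
    messages=[{"role": "user", "content": "Plan a refund under our airline policy."}],
)

# tool_use blocks named "think" are the model's structured intermediate
# reasoning, interleaved with its other tool calls and the final answer.
for block in response.content:
    if block.type == "tool_use" and block.name == "think":
        print("THINK:", block.input["thought"])
```

Notice the design choice: the reasoning is exposed as ordinary tool calls, which makes it loggable and auditable rather than hidden inside the model.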
Then there’s the economic front. #OpenAI and #Meta are seeking a partnership with #Reliance in India, and OpenAI is reportedly considering a price cut of up to 85% to penetrate the market—a super-discounted AI just to capture billions of users. The risk? That mass access becomes mass exploitation. Because when something costs so little, we often end up being the product.
On the other hand, entrepreneur Kai-Fu Lee took a jab at Altman, saying that "he probably doesn't sleep well at night." His startup 01.AI builds on open models such as #DeepSeek's, an approach he says runs at roughly 2% of OpenAI's annual costs: lighter, more transparent, and less centralized. It's the alternative model, and it should be watched closely. Because even in the AI world, not everything has to come from just three companies.
Meanwhile, #Perplexity has proposed acquiring TikTok's US operations and rebuilding its recommendation algorithm to make it transparent and bring it under Western control. They claim they want to redo it "with American rules." But the real question is: is an algorithm ever truly neutral? And who decides what is transparent and what isn't?