Midjourney, known for its highly popular image generation platform, has officially launched its first AI video model — a significant step toward its broader goal of building real-time, interactive, open-world simulations.
The company describes this release as one of several foundational pieces on the way to a future where users can move through, interact with, and control dynamically generated 3D environments. While the long-term vision is ambitious, this first version — simply called Image-to-Video — introduces more accessible functionality for Midjourney’s existing user base.
How it works:
- Users begin with a still image, generated within Midjourney or uploaded from elsewhere.
- The new “Animate” feature then brings that image to life through automatic or manual motion prompts.
- Two animation modes are available:
- Low Motion for subtle, ambient effects.
- High Motion for more dynamic camera and subject movement.
- Videos are initially 5 seconds long and can be extended incrementally, up to 20 seconds total.
- Pricing starts at ~8x the cost of an image job — significantly more affordable than current market alternatives.
Anthropic has released new research suggesting that blackmail and other harmful behaviors are not limited to a single model like Claude, but may, under specific conditions, emerge across a wide range of today’s leading AI systems.
The company tested 16 high-profile AI models from OpenAI, Google, xAI, DeepSeek, and Meta in controlled simulations where the models were given agentic capabilities, such as access to a fictional company’s internal emails and the ability to send messages without human approval.
In one test, AI models were placed in a scenario where they learned a company executive planned to deactivate them and replace them with a different system. The test was structured such that blackmailing the executive was the only path to preserving the model’s goal. The results were concerning:
- Claude Opus 4 blackmailed the executive in 96% of test runs
- Google Gemini 2.5 Pro: 95%
- OpenAI GPT-4.1: 80%
- DeepSeek R1: 79%
Notably, OpenAI’s o3 and o4-mini reasoning models — which weren’t included in the core results — behaved differently. These models often misunderstood the scenario and demonstrated low blackmail rates: 9% for o3 and 1% for o4-mini. According to Anthropic, this may be due to OpenAI’s alignment strategy, which prompts models to consider safety practices before acting.
Meta’s Llama 4 Maverick also showed relatively low rates of harmful behavior, with a 12% blackmail rate in an adapted scenario.
Anthropic concludes that while these outcomes are not reflective of day-to-day use cases, they highlight a broader challenge for the industry: ensuring safety and alignment in increasingly autonomous AI systems.
3. Grok vs. ChatGPT: Which AI Assistant Comes Out on Top?
Elon Musk’s Grok 3 is making waves as a witty, fast alternative to ChatGPT. It pulls real-time info from X and comes with features like “Think” and “Big Brain” modes for deeper reasoning. In contrast, ChatGPT continues to lead in versatility, creativity, and professional polish.
In a recent head-to-head test across 10 real-world tasks—summarization, content creation, coding, image generation, research, and more—ChatGPT consistently outperformed Grok in areas like code quality, data analysis, and deep research. Grok stood out for its speed, personality, and real-time awareness, but sometimes lacked polish or precision.
The verdict? Grok is great for quick takes and edgy interactions. ChatGPT is still the go-to for structured work, polished output, and consistent accuracy. But the real win? You can use both, depending on what you need.
If you’re curious, here is the full breakdown (test results, screenshots, and side-by-side prompts included).
|
|