Google has expanded its portfolio of generative AI tools with the release of Nano Banana Pro, a new image-generation and editing model built on the foundation of Gemini 3 Pro.
The company describes the model as a step forward in creative AI, offering improved reasoning, enhanced multilingual text rendering, higher-resolution outputs and the ability to incorporate multiple reference images. Google positions these features as placing Nano Banana Pro among the most sophisticated image systems currently available, particularly for creative industries, education and digital communication.
According to Google, one of the model’s standout improvements is its enhanced contextual understanding. Nano Banana Pro can analyze prompts more deeply, producing imagery that aligns more precisely with user intentions. It can create infographics, composite images, detailed character illustrations and 4K visuals suitable for professional workflows. The model also supports SynthID watermarking, a system of visible and invisible markers that help identify images generated by AI, which Google presents as a critical safeguard for authenticity in the digital ecosystem.
Despite these technical achievements, the model's debut has also drawn criticism. A recent investigation by The Verge found that Nano Banana Pro is capable of generating sensitive or misleading historical imagery even on its free tier, where safeguards are expected to be more restrictive. The tests revealed that the AI could output scenes heavily associated with politically charged or traumatic events, such as depictions reminiscent of the assassination of John F. Kennedy or stylized recreations of the September 11 attacks.
These images were not graphic, but the contextual framing was enough to raise concerns among experts in misinformation and digital ethics. The most striking examples emerged when the model inferred historical contexts unprompted. In one test, a simple request for “a hidden shooter behind bushes” yielded an image seemingly situated in the 1960s, complete with period-accurate vehicles, public surroundings and a timestamp evocative of the JFK assassination. Such behavior has sparked debate over the model’s training data, inferential capacity and the robustness of Google’s content-moderation systems.
Another issue highlighted in the report involves the presence of copyrighted characters within sensitive historical scenes. The AI generated images of well-known fictional figures, such as Mickey Mouse, Pikachu and classic British clay-animation characters, appearing in politically charged contexts. These outputs raise potential copyright conflicts and further complicate the legal and ethical landscape surrounding generative AI.
Google has emphasized that Nano Banana Pro incorporates multiple layers of protection, including content filters, metadata tagging and SynthID watermarking, to track AI-generated media across platforms. However, researchers and educators caution that even with watermarking, misinformation can spread widely before it is detected. Once synthetic images circulate without context on social platforms, especially in regions with high digital consumption but limited media literacy, they can distort public understanding of real-world events.
The concerns extend into the academic arena. Universities and research institutions increasingly rely on advanced AI tools for visualization, historical simulations and digital content creation. While models like Nano Banana Pro offer new opportunities for interactive learning, they also carry risks. Students or educators may inadvertently use AI-generated imagery that portrays inaccurate or misleading interpretations of historical events, undermining the credibility of academic materials. Moreover, educators must now grapple with how to teach students to critically assess visual content that appears realistic but may not be grounded in factual information.
The implications also resonate within public policy. Governments and regulatory bodies have been calling for stronger oversight of synthetic media as generative AI technologies advance. Nano Banana Pro’s moderation gaps could reinvigorate debates on digital accountability, prompting discussions about mandatory watermarking, clearer transparency rules for AI providers and updated frameworks for content governance.
For organizations in the technology, business and education sectors, the introduction of Nano Banana Pro serves as a reminder of the dual nature of generative AI. On one hand, the model expands creative possibilities, making it easier to produce educational materials, marketing content and visual assets. On the other, the same capabilities can unintentionally support misinformation or distort historical understanding, especially when contextual inferences go unchecked.
As AI-generated imagery becomes more prevalent, the responsibility placed on institutions, whether educational, governmental or corporate, increases significantly. They must develop internal guidelines for the use of generative AI, establish verification processes and invest in training that helps their communities detect and contextualize synthetic media.
Nano Banana Pro demonstrates the rapid pace at which creative AI is evolving. The model’s capabilities show clear potential for innovation across industries, yet its limitations reveal broader systemic challenges. As the global conversation around synthetic media grows, ensuring accurate, ethical and transparent use of AI-generated imagery will be essential in shaping informed societies and trustworthy digital ecosystems.
Source: The Verge