New Feature Floods Platform with AI-Generated Visuals, But Raises Questions About Content Moderation

Perplexity, an AI search startup, has upgraded its chatbot on X with a new feature that allows users to generate short, eight-second video clips with sound using AI. The ‘Ask Perplexity’ bot on X can produce creative visuals and audio, including dialogue, in response to a user’s prompt. However, this new video generation feature has raised concerns about the spread of misinformation on X, a platform that has already been criticized for its lax content moderation.
The feature has sparked a surge in demand, with users on X posting wildly imaginative AI-generated videos depicting fictional scenarios involving real-life celebrities, politicians, and world leaders. Perplexity has acknowledged the spike in traffic, warning that video generation may take longer than expected. The company says it has implemented strong content filters to prevent misuse of the feature, but experts remain concerned about the potential for misinformation to spread.
The rivalry between Perplexity and Grok, the AI model developed by Elon Musk’s xAI venture, has also intensified. While Grok does not yet offer video generation, Perplexity has taken the lead in this area, raising questions about the future of AI-powered content on X.
Beyond its presence on X, Perplexity has been working to make its AI chatbot more widely accessible by rolling out its services on WhatsApp. In April, Perplexity AI became available directly on the messaging platform, allowing users to access the AI-powered answer engine without downloading a separate app or signing up. The move broadens Perplexity’s reach, but it also raises questions about AI-generated content spreading beyond the boundaries of social media platforms.
However, Perplexity is also facing legal challenges from various publishers, including the BBC, which has accused the company of training its “default AI model” on the UK broadcaster’s content. Perplexity has dismissed the claims as “manipulative and opportunistic,” saying the publisher has a “fundamental misunderstanding of technology, the internet and intellectual property law.” The BBC has threatened to take legal action against Perplexity unless the company stops scraping its content, deletes existing copies used to train its AI systems, and submits a proposal for financial compensation.
The controversy surrounding Perplexity’s AI chatbot highlights the need for greater regulation and oversight of AI-powered content on social media platforms. As AI technology continues to evolve and improve, it is essential that companies like Perplexity prioritize transparency and accountability in their use of AI, and that regulators take steps to prevent the spread of misinformation and protect users from potential harm.
