The Perils of Letting AI Plan Your Next Adventure

As artificial intelligence becomes more integrated into everyday life, its role in travel planning has grown significantly. Tools like ChatGPT, Microsoft Copilot, and Google Gemini are increasingly being used by travelers to create itineraries, offering convenience and inspiration. However, this reliance on AI comes with risks, as some users are finding themselves in situations that are not only inconvenient but potentially dangerous.

Miguel Angel Gongora Meza, founder of Evolution Treks Peru, recounted a troubling story of two tourists misled by an AI-generated itinerary. It directed them to a non-existent destination called the “Sacred Canyon of Humantay,” a name stitched together from two unrelated places. The pair paid nearly $160 to reach a remote area without a guide, risking altitude sickness and exposure at an elevation with thin air and no phone signal to call for help.

Such incidents are not isolated. Dana Yao and her husband recently planned a romantic hike to Mount Misen in Japan using ChatGPT. Following the AI’s instructions, they arrived at the summit in time for sunset, only to discover that the ropeway had already closed, leaving them stranded on the mountain.

According to a 2024 survey, 37% of travelers who used AI for planning reported that the tools provided insufficient information, while 33% said they received false information. Rayid Ghani, a machine learning expert, explains that AI systems like ChatGPT generate responses based on statistical patterns in text, which can lead to “hallucinations” — fabricated information that sounds plausible.

The issue extends beyond travel planning. AI is increasingly used to alter videos and images, blurring the lines between reality and AI-generated content. For example, a recent incident involved a couple who traveled to Malaysia based on a TikTok video they believed showed a scenic cable car. Upon arrival, they discovered that no such structure existed — the video had been entirely AI-generated.

In August 2025, YouTube faced backlash after it was revealed that AI had been used to subtly alter content creators’ videos without their consent, modifying elements like clothing, hair, and facial features. Similarly, Netflix faced criticism earlier in 2025 for using AI to “remaster” old sitcoms, resulting in surreal distortions of beloved 1980s and ’90s television stars.

As AI continues to evolve, experts warn that users must remain vigilant and verify information, especially when planning trips to unfamiliar destinations. While efforts are underway to regulate AI and make its outputs more transparent, including proposals in the EU and the U.S. to require watermarks or other distinguishing features, the challenge remains significant.

For now, travelers are advised to double-check AI-generated advice and stay adaptable in the face of unexpected changes. As Javier Labourt, a clinical psychotherapist, notes, the essence of travel lies in adaptability and openness — qualities that remain essential, whether or not AI is involved.

Experts like Ghani suggest being as specific as possible in queries and verifying everything an AI tool returns. He acknowledges, however, that travel poses a particular problem: travelers are usually asking about destinations they have never visited, so errors are harder to spot. A suggestion that sounds too perfect should be treated as a red flag.

In the end, verifying AI-generated information can be just as laborious as planning the old-fashioned way. Yet, as AI continues to shape how we interact with the world, the balance between convenience and accuracy has never been more critical.
