In this essay, Lauren Slyman argues that AI is moving “too fast and too slow” at once because the hype cycle is accelerating faster than enterprise systems can actually absorb the technology. Using the dot-com era as a reference point, she suggests today’s AI market is less a broad bubble than a concentrated bet on a small number of companies, which shifts the risk from a full-market collapse to localized overexpectation and stalled deployments.

Slyman grounds the argument in deployment realities and research showing a gap between task-level gains and system-level reliability. She points to evidence that many AI initiatives still struggle to reach production, and that even when tools improve ideation or speed up discrete tasks, they can introduce downstream failure modes in debugging, integration, governance, and trust. She closes with an operator-facing framework: define ROI in operational terms (latency, errors, decision cycles, resilience), invest in organizational readiness (data, workflows, governance), and slow the narrative so internal expectations do not outrun what the system can safely support.

In the middle of a snowy night in New York, the bitter cold took one last bite at me as I scurried through the restaurant doors to meet a friend for dinner. A tired hostess led me to a dimly lit table and a warm, familiar smile. We caught up briefly (holiday plans, moving, the usual) but quickly transitioned to our favorite topic: the subtle shifts AI is driving in our daily lives and work.

We landed on a simple conclusion: AI is moving too quickly and too slowly at the same time.

The ecosystem feels unbalanced

More precisely, the AI ecosystem is unbalanced and slowly morphing into something that resembles the lobby of a venture capital firm. Consumers do not care whether companies hit projected revenue at record speed, and employees are not exactly thrilled when they see AI systems that might put their jobs at risk. Yet there is increasing pressure to optimize for perception and growth rather than for systems that work in practice.

Speed is often praised in theory, but it does not translate cleanly to transformational shifts like generative AI. Empirically, most AI deployments still do not reach production. Estimates cited by the RAND Corporation suggest AI and ML projects fail at roughly twice the rate of non-AI technology projects, which points to a bottleneck in surrounding systems rather than model capability.

Where venture logic breaks in enterprise reality

This is where a venture mindset conflicts with how enterprises operate. For AI implementation to work, it must balance the risk tolerance of venture capital with the steadiness of established companies. VCs are great at tolerating failure and delaying near-term return. At large organizations, AI deployment is regulated to the point where failure can have immediate operational or reputational impact.

You need elements from both approaches for scaled deployments. You can optimize for speed to increase iteration and discovery, but optimizing for reliability reduces failure in production.

Heavy investment without clear ROI also poses a risk of retrenchment, which we saw during the dot-com era, when capital was abundant as long as a company told a compelling story fueled by short-term acceleration. What many did not foresee were the long-term collapses that followed from short-term incentives. This does not mean history is repeating itself, but the similarities are worth addressing, or at least discussing over dinner.

Above, long-run valuation data from Robert Shiller’s CAPE index shows that, while the dot-com era represented a broad, market-wide valuation extreme, today’s AI market elevation is highly concentrated. This concentration shifts the risk from a system-wide collapse to localized overexpectation, where a small number of companies are expected to deliver disproportionate outcomes.

Similar to the dot-com era, capital is moving faster than infrastructure. This is partly because valuations are driven by projected futures, and success is measured by the speed of adoption. Companies that optimize for press and visionary narratives, while deferring the work of building systems that can withstand regulation and cost, risk setting themselves up for stalled deployments and costly reversals.

The dot-com crash in the early 2000s was driven by an unsustainable speculative bubble, with investment flowing into internet companies with weak business models. It was fueled by investor enthusiasm, overpriced stocks, flawed spending patterns, low interest rates, large infrastructure bets, and regulatory shifts (for example, the 1996 Telecommunications Act), alongside aggressive promotion of unprofitable tech stocks. The parallel is not exact, but the pattern is familiar: capital rewards future potential before systems are ready to deliver it.

ROI needs an enterprise definition

Before causing any worry, I’d like to be clear that AI is not doomed to repeat this history. However, it is showing similarities that directly affect how it is positioned. The likelihood of a repeat depends on how ROI is framed. When ROI is defined through a VC lens (rapid scaling, high tolerance for failure), it becomes susceptible to incentives that, applied to enterprises, are frankly nonsensical and costly. Enterprises do not get ten shots to justify one win, and their chance of success relies heavily on what is rarely openly invested in: regulatory compliance, reputational awareness, and operational constraints. For enterprises, “moving fast and breaking things,” overindexed, leads to implementation issues and lawsuits. Speed is not inherently negative; early experimentation can surface valuable insights. But speed without constraints only increases operational and reputational risk.

Typically, ROI from a VC perspective assumes that value appears as explosive growth or revenue acceleration, based on a power-law distribution of returns. This framing does not translate cleanly to AI, where most returns accrue gradually (e.g., less cognitive load, fewer manual handoffs, improvements to existing systems). Enterprises that chase hype metrics (e.g., moonshots, premature deployments, and launches that cater more to headlines than to technical feasibility) jeopardize the compounding value that comes from patient and, often, boring process work. Most organizations cannot maximize both short-term wins and compounding system improvements.
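To put a number on “compounding value,” here is a toy sketch. The 1% weekly gain and the 50% one-off win are assumptions chosen for illustration, not figures from any study:

```python
def compounded_gain(per_period_gain: float, periods: int) -> float:
    """Total improvement factor from a small gain repeated every period."""
    return (1 + per_period_gain) ** periods

# A "boring" 1% weekly process improvement, sustained for two years:
steady = compounded_gain(0.01, 104)  # roughly 2.8x

# A one-time 50% "moonshot" win that never compounds:
moonshot = 1.5
```

Under these assumptions, the patient path overtakes the headline win within the first year, which is the essay’s argument for protecting the boring process work.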

Three practical takeaways from the table

First, define ROI operationally. Measure it through latency reduction, error prevention, improved decision cycles, and system resilience. For many enterprises, early ROI can appear in the low single digits and cannot be measured in isolation. This is what early infrastructure value looks like. Without these signals, organizations risk abandoning systems that are actually improving core operations.
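As a sketch of what that operational definition could look like in practice (metric names, structure, and numbers below are invented for illustration, not drawn from any real deployment):

```python
from dataclasses import dataclass

@dataclass
class OpsSnapshot:
    """One period of operational metrics (illustrative names)."""
    p95_latency_ms: float       # 95th-percentile response latency
    error_rate: float           # errors per request, 0..1
    decision_cycle_days: float  # average time from question to decision

def operational_roi(before: OpsSnapshot, after: OpsSnapshot) -> dict:
    """Relative improvement per metric; positive means better."""
    return {
        "latency_reduction": 1 - after.p95_latency_ms / before.p95_latency_ms,
        "error_reduction": 1 - after.error_rate / before.error_rate,
        "cycle_reduction": 1 - after.decision_cycle_days / before.decision_cycle_days,
    }

before = OpsSnapshot(p95_latency_ms=850, error_rate=0.042, decision_cycle_days=6.0)
after = OpsSnapshot(p95_latency_ms=820, error_rate=0.040, decision_cycle_days=5.8)
gains = operational_roi(before, after)
# Every gain lands in the low single digits (roughly 3-5%) -- the kind of
# early infrastructure signal that is easy to dismiss if you only track revenue.
```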

Second, organizations must invest in readiness. Data quality, governance, workflows, and system clarity cannot be optional. Putting the cart before the horse has been overpraised over the past few years. The horse, to be clear, is very good, but model capability cannot compensate for systems that can’t support it. Skipping this step may accelerate initial deployment, but it increases long-term failure rates and limits scalability.

Third, leaders must slow the narrative to prevent it from distorting internal expectations. Right now, everyone is being told to move quickly and “win the race” when, in reality, only a small subset of organizations needs to. While overpromising transformation and timelines is not new, sustaining this narrative in practice will not lead to a smooth transition toward AI. What we will see, however, is compromised workflows and products that feel rushed or unnecessary. This only results in backlash, unsustainable adoption, and a cheapening of what could be transformational. The cost of overstated urgency shows up as failed projects and erosion of trust in AI systems themselves.

Evidence: task gains, system-level tradeoffs

The pattern across studies is consistent: AI improves task-level performance but introduces new failure modes at the system level. BCG, with the support of scholars from Harvard Business School, MIT Sloan School of Management, the Wharton School at the University of Pennsylvania, and the University of Warwick, found that participants who used GPT-4 for creative product innovation performed 40% better than those who completed the same task without it. In fact, 90% of the 750 participating BCG consultants improved on tasks involving ideation and content creation when using GPT-4. Yet on tasks involving business problem-solving, GPT-4 users performed 23% worse than those working without it.

A widely cited study on GitHub Copilot found that developers completed a programming task 55.8% faster with AI assistance. However, the study measures task completion in isolation, so it does not account for downstream work (code review, debugging, integration, long-term maintenance) that, in practice, can dominate development time. In other words, faster coding does not necessarily mean faster delivery. Anthropic, meanwhile, found in more recent research that while AI can speed up tasks (sometimes by 80%), the speed is not free: in one evaluation, the AI-assisted group scored 17% lower than those who coded without AI. Notably, the largest gap appeared in debugging knowledge, suggesting that developers’ ability to understand when and why code is incorrect or fails may be a key problem to solve as AI is integrated further into software development.
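The gap between a task-level speedup and a delivery-level one can be made concrete with a back-of-the-envelope Amdahl’s-law sketch. The coding-time shares below are illustrative assumptions, not figures from the study, and “55.8% faster” is read here as a 55.8% reduction in task time:

```python
def delivery_speedup(task_share: float, task_speedup: float) -> float:
    """Overall speedup when only a fraction of delivery time is accelerated
    (Amdahl's law): task_share is the fraction of total delivery time spent
    on the accelerated task; task_speedup is the factor that task gets faster."""
    return 1.0 / ((1.0 - task_share) + task_share / task_speedup)

# Reading "55.8% faster" as a 55.8% reduction in task time (~2.26x):
task_speedup = 1.0 / (1.0 - 0.558)

# If raw coding is only a slice of end-to-end delivery (shares are assumed):
for coding_share in (0.2, 0.3, 0.5):
    overall = delivery_speedup(coding_share, task_speedup)
    print(f"coding = {coding_share:.0%} of delivery -> {overall:.2f}x overall")
```

Even with coding at half of delivery time, a ~2.26x coding speedup yields less than 1.4x end to end, consistent with the point that faster coding does not automatically mean faster delivery.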

“Rule #00001 my CTO taught me: never, ever, never, ever touch the production database.”

Jason Lemkin, founder of the startup SaaStr, after posting on X in July 2025 that Replit “destroyed” their production database without consent.

Individually, these results vary, but together they point to a fairly steady pattern. The key variable is the speed at which this is happening. It’s easy to say “we don’t know yet” in any field of science, especially one as vast as artificial intelligence. But we should not overlook the rate of advancement, and therefore the rate at which these studies will become either irrelevant or remnants of “I told you so” chronicles. In 2024, for instance, generative AI attracted $33.9 billion globally in private investment, an 18.7% increase from 2023, for a field that has been formally studied for roughly 70 years.1 For context, other long-standing scientific fields have seen slower relative increases in funding. AI is accelerating outputs, but it is not yet accelerating outcomes.

1 Although efforts in and adjacent to the field of Artificial Intelligence date back to the early 1900s, the official birthdate of the field is 1956, at the Dartmouth Summer Research Project.

Closing Thoughts

As the waiter began clearing our table, we landed on one final, fairly paradoxical conclusion: AI is simultaneously overhyped and incredibly useful. Right now, AI does not need more hype to succeed. If the dot-com era taught us anything, it’s that the technologies that survive are the ones that can outlast their pitch. The question is not whether to adopt AI. It’s how to balance speed with durability in a way that aligns with an organization’s constraints.

As we exited into the brisk cold, the loud restaurant faded into peaceful silence as we walked along the snow-lined sidewalks of Gramercy Park, passing bus stops flashing “AI is here” ads. We hugged goodbye with the unspoken agreement that such powerful and transformational systems must be treated with steady and delicate care. History tends to favor the quiet, focused, and patient teams, even if they’re not the most exciting to put on a bus stop ad.

Disclaimer: The views expressed here are my own and do not reflect those of my employer.

🤝 Thanks to Our Premier Partner: Roam

Roam is the virtual workspace our team relies on to stay connected across time zones. It makes collaboration feel natural with shared spaces, private rooms, and built-in AI tools.

Roam’s focus on human-centered collaboration is why they’re our Premier Partner, supporting our mission to connect the builders and leaders shaping the future of AI.

Experience Roam yourself with a free 14-day trial!

🫵 Do You Belong on Our Newsletter?

Share your message with the world’s largest AI community. To inquire about partnership availability, reach out to our team below.

The AI Collective is a community of volunteers, made for volunteers. All proceeds directly fund future initiatives that benefit this community.


Thank you to the thousands of volunteers around the world who make this work possible. We truly could not do this without you.

About the Author

Lauren Slyman is a UX Researcher, leading security and quality research for enterprise-grade engineering systems, focused on software engineers and their use of AI. Previously, she consulted for Fortune 500 companies, helping executives navigate software adoption. Beyond this, Lauren is currently writing a book on AI, has launched a fashion app called "Fitting", and is active in NYC’s AI and tech networks. Passionate about leveraging technology to help others, she is committed to preserving integrity at the root of innovation.
