The false promise of technology

Part 7 of a series on toxic technology.

For all the wide-eyed futurology surrounding the potential of digital technology, it commonly fails to meet expectations. Start-ups fail considerably more often than they succeed. Digital ‘transformation’ initiatives commonly overrun and under-deliver. Many products and features fail the ultimate test – users just don’t find them useful. This isn’t surprising – building with digital technology is experimentation, trying something new.

In some respects, the optimism of digital is a virtue. It creates the conditions for the boldest reforms and the biggest ideas. But this optimism becomes a problem when it comes with a disregard for pragmatic pessimism. Optimism is toxic when it leads to avoiding questions of feasibility, viability and the conditions for success.

Digital technology is a particularly dangerous place for this blinkered optimism. It’s an unpredictable medium, in an industry bursting with hype. If you’re willing to believe it, the world’s hardest problems can be solved with AI, or blockchains, or internets-of-things. If you do believe it, you’re almost certainly going to be disappointed.

In markets

Companies can tap into broader market optimism for the potential of hyped technologies. This is typically by expressing an intent to use them, rather than showing proven success. In November 2019 there was a flurry of articles reporting that HSBC would use blockchain for a ‘custody platform’ by March 2020.

“HSBC aims to shift $20 billion worth of assets to a new blockchain-based custody platform by March”

November 2019 Reuters news article

There is no corresponding press coverage in March 2020 to indicate this really happened. But the announcement had already served its purpose – generating positive market sentiment for HSBC. It’s unlikely blockchain was a necessary technology to solve the problem, but its use turns a bland corporate IT initiative into a story about supposed innovation.

In governments

Governments frequently make assertions of future delivery. This helps cement strategy across huge bureaucracies, and encourages public engagement. It is also low risk to reputation, because governments are pretty effective at managing the message around their technology delivery failures.

A current example of a government promise on technology comes from the UK government’s National Data Strategy:

“We will drive data discoverability across government through developing an Integrated Data Platform for government, which will be a safe, secure and trusted infrastructure for government’s own data.”

UK government National Data Strategy, September 2020

This announcement sets false expectations:

  • That much of the UK government’s data can be put onto a singular ‘platform’.
  • That a named thing needs to exist called the “Integrated Data Platform”.
  • That such a thing will come into existence reasonably soon (perhaps within the next spending cycle in the UK government).
  • And, at some point, funding will be allocated to make this all happen.

The language may appear subtle, but published strategies start to shape the funding pots, and the £multi-million programmes that emerge. The complexity involved in solving government’s data problems is immense, but a ‘single platform’ is appealingly simple to fund and easy to announce. The wider strategy does explore the complexity of data in government, but these more concrete, announce-able things will have a more enduring impact.

In politics

As technology plays a greater role in society, we’re likely to see ever more technology-based promises from parties seeking votes. This from the 2019 UK Conservative Manifesto is an example:

“We will use new air traffic control technology to cut the time aircraft spend waiting to land”

2019 Conservative Manifesto

This promise of a real-world outcome, achieved by introducing new technology, is hugely appealing to many voters. But how this will be done, and the uncertainty involved, is not forthcoming. Which technology would be used, and why? When would aircraft waiting times reduce? And at what cost to taxpayers? Instead of a strategy, we’re asked to vote for a promise that cannot be made, and the belief that a government will be competent enough to deliver on it. Whilst this pattern of trust in political promises isn’t new, the unpredictability of digital technology will increasingly expose the gap between intent and delivery.

Expectation vs reality

It’s hard to trust assertions about when and how digital technology can deliver outcomes. The internal complexity and interdependencies of digital technology mean even the most experienced professionals rarely get their estimates right. Most software developers can tell you about the time they spent days solving a problem they thought would take a few minutes.

Technology initiatives don’t overrun or under-deliver – instead, people have irrational expectations of certainty. CEOs, Ministers and managers crave this certainty to build a reputation as a person-of-their-word. Customers, workforces and electorates crave this certainty because it reduces anxiety around something they need, or want.

This mismatch of expectation and reality isn’t just a disappointment or a financial write-off. Tactics and fire-fighting must make up for failed plans – reducing the quality of digital technology. These are the ideal conditions for toxic technology to emerge – poorly designed, insecure and unstable technology which is harmful to the organisation and its users. Sometimes organisations work this way for years, reinforcing the culture of false promises. Inevitably it backfires spectacularly, as with the glitchy dystopia of Cyberpunk 2077 or the IT meltdown at TSB. If organisations don’t learn from consistent failure to meet expectations, then they normalise the accumulation of toxic technology.

The accumulation of broken promises

The tech industry narrative of success is dominated by survivor bias of the few commercial giants – each an example of when bold predictions came to pass (at least in their origin stories). Yet the majority of organisations are filled with toxic legacy and broken promises. The historic strategies of these organisations did not anticipate today’s glacial pace of change, system workarounds and disruptive incidents.

Inside established sectors, like government and traditional banking, these false promises repeat cycle after cycle. Somehow, the next Big Technology Transformation will solve all the problems, despite being funded, governed and promised in the same way as previous attempts.

The powerful – CEOs, government ministers, or those with the ability to gain media or community attention – have a unique role in how promises are made. There is pressure on them to make promises – relieving anxiety that something needed will be delivered, or creating a buzz around a future product. At worst, this is a tool of command and control management – creating unrealistic expectations in public to motivate teams in private. But even the most throw-away announcements can be exploited by command and control cultures. Leaders must take care when announcing new investments, plans or strategies – they communicate intent, but they also set expectations far beyond the leader’s control – and these expectations are not always reasonable.

Making better promises

We still need to make promises, and to do so with optimism. It’s essential to positive, forward-looking organisational cultures. Optimism is what encourages teams to take risks and try something new. Promises are how trust is built. 

But we all need to make better promises when it comes to digital technology.

Leaders should balance their optimism with pragmatic pessimism. They should ensure this is woven into the culture, practice and process they encourage. They must make sure they don’t consistently over-promise and under-deliver, even when the culture might reward it. They must be wary of where the burden of expectation will fall, as often it will fall on others.

Funders and those that govern should recognise that upfront large-scale approval for funding and headcount encourages false promises. They should work to reform accounting and governance based on greater trust within organisations, making it more incremental and iterative. They should listen to digital leaders and delivery teams as peers, putting aside their role in ‘holding to account’ to collaborate on strategy and reform.

Technologists should use their experience to counter naive optimism. They must remember what caught them by surprise – when the simple became complicated, and the complicated became simple. They must be aware of their own bias in setting expectations of delivery, particularly when estimates depend upon their expertise. Technologists must be skeptical of hype, even when technology is exciting and new.

Designers should value and understand their medium of delivery as this is where viability and feasibility becomes clearer. Like an architect appreciates wood, steel and construction techniques, a designer of digital technology must appreciate software, data and the craft of technologists. This can be done best in a multidisciplinary team – finding the right experts for the right design challenges.

Product managers should embrace risk as a peer to value as a core part of their discipline. This helps ground bold ideas in achievable reality – it ensures product managers have a more complete set of information with which to make decisions, and make promises.

Delivery professionals should embrace uncertainty. They shouldn’t accept the false certainty of specific and singular estimates (e.g. “complete by July next year”) – ensuring they consider a range of potential outcomes from the best to the worst cases. They should be skeptical of estimates and conscious of bias. Delivery professionals should take a lead in exposing and challenging unreasonable expectations from leaders. Delivery professionals should hold to account those who consistently set poor expectations or provide inaccurate estimates.
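The range-based estimating described above can be sketched with a simple Monte Carlo simulation. This is an illustrative example only – the workstream names and three-point estimates are invented, and the triangular distribution is just one plausible way to model a best/likely/worst-case spread:

```python
import random

# Hypothetical three-point estimates, in weeks, for each workstream:
# (best case, most likely, worst case). Illustrative numbers only.
tasks = {
    "build API": (2, 4, 10),
    "migrate data": (3, 6, 16),
    "user testing": (1, 2, 5),
}

def simulate_delivery(tasks, runs=10_000):
    """Sample each task from a triangular distribution and sum the
    results, producing a distribution of total delivery time."""
    totals = []
    for _ in range(runs):
        total = sum(random.triangular(best, worst, likely)
                    for (best, likely, worst) in tasks.values())
        totals.append(total)
    totals.sort()
    # Report percentiles rather than a single date.
    return {p: totals[int(runs * p / 100)] for p in (10, 50, 90)}

percentiles = simulate_delivery(tasks)
print(f"50% chance of finishing within {percentiles[50]:.1f} weeks")
print(f"90% chance of finishing within {percentiles[90]:.1f} weeks")
```

The point of the output is the gap between the 50th and 90th percentiles: instead of promising “complete by July”, a team can say “likely by July, almost certainly by October”, which sets an honest expectation.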

Sign up for the next instalment

The next part of this series will be published soon. Sign up for email updates below.


The symptoms of toxic technology

Part 2 of a series on toxic technology. In part one I introduced toxic technology, and defined what I mean by the term.

The characteristics of toxic technology can be invisible to people working in organisations. It is hard to directly observe the security flaws, tangled architecture or unreadable code without expertise. But the symptoms of toxic technology are common and visible.


Toxic technology makes delivery slower, harder and riskier. This is caused by shortcomings in the less visible design of technology: the code and architecture. This internal design has a significant effect on the ease and risk of change. Some change can be simple, like adding lego blocks to an already assembled structure. But some changes require the existing structure to be pulled apart, or new kinds of lego block to be fabricated. The most complex changes ripple out into connected systems. Toxicity makes even the simplest changes surprisingly complicated.

There’s no perfect internal design for technology – it’s a balancing act between preparing for the future vs. reacting to immediate needs. Preparing involves making technology more malleable and understandable (known as refactoring), and therefore ready for likely future changes. But teams can be wasteful if they over-prepare, investing in changes to anticipate something that never happens. When the pressure to deliver is high, teams refactor less. Failing to refactor over a long time will make technology brittle and hard to understand – change becomes harder and harder to achieve. If the pain of changing technology is too high, it will become perceived as ‘legacy’.
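To make the idea of refactoring concrete, here is a minimal, hypothetical sketch – the names and numbers are invented. The behaviour of the two functions is identical, but the second separates the concerns that the first entangles, making future change easier:

```python
# Before: discount and tax logic entangled in magic numbers.
# Changing the tax rate means hunting through every branch.
def price_before(order):
    if order["type"] == "retail":
        return order["amount"] * 0.9 * 1.2
    else:
        return order["amount"] * 0.8 * 1.2

# After refactoring: behaviour unchanged, but discount and tax are
# named, separated concerns - easier to understand and to change.
DISCOUNTS = {"retail": 0.9, "wholesale": 0.8}
TAX_RATE = 1.2

def price_after(order):
    discount = DISCOUNTS[order["type"]]
    return order["amount"] * discount * TAX_RATE

order = {"type": "retail", "amount": 100}
assert price_before(order) == price_after(order)  # same behaviour
```

The refactored version is not “more prepared for everything” – it is prepared for the changes that seem likely (new discount tiers, a tax change). Over-preparing, as the paragraph above notes, carries its own cost.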

Where legacy technology exists, its impact on delivery can be hard to avoid. If legacy systems hold data, or process transactions which are important to the organisation, then they become bottlenecks for delivery. It can be very challenging to replace legacy systems, so it is common for new technology to layer on top of the old, accumulating over time. Teams must be wary of the infectious nature of toxic technology – when new technology integrates with legacy technology, some changes can only move at the pace of the slowest system.


The ease of changing technology is also important for its operation – it improves the ability to manage failure. Teams need to be able to release fixes with confidence, knowing they’re unlikely to cause side effects. To help detect and diagnose failure, operators sometimes need to add new ways to observe how the system is behaving. Toxic technology increases the risk of accidentally making things worse when trying to deal with an emergency.

Toxic technology makes systems brittle. If changes cause unwanted side effects, small incidents can grow into catastrophic failures. Legacy systems will often grow a reputation for stability, but this reputation is a result of infrequent change. When incidents do occur, it takes a long time to recover and the impact on the organisation and users is more severe. 

The organisation

A catastrophic IT outage is a big threat to organisations. It creates an existential threat to commercial organisations, and can cause the loss of trust in a public institution. In 2019 British Airways and TSB experienced outages which impacted customers over several days. When outages last this long, compound effects appear – customers shop elsewhere, and public service users can see impacts on their lives. Modern technology companies, currently less burdened by toxic technology, have outages which usually last only a few minutes or hours. Sometimes the press coverage of these is high-profile, but compound effects are avoided as business returns to normal. However, in the public and banking sectors, the loss of critical systems for several days is becoming increasingly normal.

Communications around these kinds of persistent outages state, or imply, that there is a single root cause. The truth is that any major technology failure will have multiple complex causes: a succession of technical and organisational failures, alongside human error and bad luck. But organisations are unlikely to reveal these complex causes publicly, because it begins to expose the depth of toxic technology the organisation relies upon. Where they have accumulated toxic technology at scale, small failures can cascade into catastrophes.

Toxic technology is a cyber security risk. Neglect means that technology is not kept up-to-date, and vulnerabilities emerge. Pressure on delivery means good security practices are routinely deprioritised. Meanwhile, cyber crime is increasing year-by-year, in terms of both frequency and sophistication. Insufficient cyber security, caused by toxic technology, will someday result in the failure of a major corporation or institution.

Challenging new legislation like GDPR is beginning to take effect, with British Airways fined £183 million in July 2019 for a data breach. Their rival EasyJet is being investigated for the unauthorised access of 9 million customers’ personal data. Most organisations being fined so far are established organisations with high volumes of toxic technology. Legislation of technology is expanding further – including accessibility, online platforms, and encryption – meaning toxic technology is also becoming a legal concern.

Where technology is subject to standards, the impact of toxic technology can be critical to doing business. Toxic technology is very likely to breach standards through poor security, or a failure to keep up with changing requirements. PCI-DSS is a standard used by the payment cards industry to protect customers’ financial information. Failing to meet the standard can result in legal action, the cost of fraud, loss of revenue and a range of other negative impacts.


Toxic technology places a financial burden on organisations. It makes it harder to be strategic and innovative because resources drain away on tactical or emergency manoeuvres. Only the highest priority work can be done, because the cost and risk of change has risen so high.

Toxic technology can get stuck – becoming both too hard to change and too costly to replace. But even if remaining unchanged, costs increase because of a changing context. Most technology is connected with networks, platforms, APIs and ad-hoc user integrations. Whenever external systems change, there are costs to re-integrating the technology. New security vulnerabilities mean new investment in cyber defences. Changing supporting platforms can trigger expensive migrations. People costs increase as access to niche skills and knowledge becomes harder. Commercial contract renewals are renegotiated at higher prices to support ageing technology. And throughout this process of toxic technology growing in cost, modern replacements tend to become cheaper, but remain out of reach due to the cost of switching.

The simplest way for toxic technology to impact an organisation’s finances is by buying or building something that’s not needed. The technology industry is full of ‘silver bullet’ solutions to complex problems, particularly in domains like data warehousing and cyber security. Faced with a complex public health and economic challenge like Covid-19, governments across the world have responded by spending significant sums on apps. Whilst these apps sound intuitively useful, and are very announceable for political leaders, there’s no evidence to show they’re effective. If they’re not scrapped entirely, governments are left with complex sustainability and data privacy challenges.


Pain and frustration are common experiences when using technology. But design is even more challenging when the medium is brittle and slow-to-change. Technology provides its most painful experiences inside large slow-moving institutions, where toxicity has accumulated over decades. Inside these organisations, staff are mandated to use dire technology in order to go about their daily duties. In August 2019, Dr Dominic Pimenta described his experience as a junior doctor in the UK’s National Health Service:

His experiences are very typical for public servants and administrative staff in large, established organisations across the world.

It is common for users of toxic systems to increase their usability with a layer of spreadsheets and paper-based work-arounds. Whilst this can optimise use for long-term users, it makes it harder for new users to learn. As new staff enter the workforce with the raised expectations of internet-era services, they will be less tolerant of technology which is frustrating and confusing to use. The impact of poor user experience is likely to hit certain groups more than others, with the effect of excluding those with permanent, temporary or situational disabilities.

Service outages, caused by neglect of technology, can erode trust with users, and prevent them from meeting their needs. The impact can range from the inconvenient to the life-threatening. Outages to services such as medical advice, security monitoring, housing, and access to money could have enormous impacts on the lives of users. 

Toxic technology is also more vulnerable to cyber attack, which can have a significant impact on users, such as the suicides following the Ashley Madison breach, or the leak of the HIV status of 14,000 individuals in Singapore.

Institutions such as governments or monopoly service providers present a big risk to their users if toxic technology accumulates. When the quality of the user experience diminishes, there is no choice of alternative, and users must suffer the consequences.

The symptoms are everywhere

These symptoms are common in larger, more stable organisations. But their causes are systemic, and so start-ups and high-growth tech companies are not immune. Without bold new approaches to building and sustaining technology, supported by changes to how we fund, staff and govern teams, the outcome is the same: the accumulation of complexity and toxicity.

These systemic causes, and ways to mitigate them are the subject of future instalments.

Continue reading part 3.
