Of all the technologies that have transformed society over the last fifty years, Artificial Intelligence, or AI, is arguably the most radical. Originally a purely academic field of study, AI has rapidly developed into a commercial concern over the last 20 years, forming a burgeoning branch of computer science and now promising to transform life as we know it.
With the global AI market forecast to reach US$267 billion by 2027 and to contribute US$15.7 trillion to the global economy by 2030, there is clearly a lot of economic upside, but many commentators believe the true impacts will extend far beyond mere financial output.
“We are at the beginning of a revolution that is fundamentally changing the way we live, work, and relate to one another,” Klaus Schwab, founder and executive chairman of the World Economic Forum, predicts. “In its scale, scope and complexity, what I consider to be the fourth industrial revolution is unlike anything humankind has experienced before.”
Alongside genuinely ground-breaking advancements in technology, however, AI has also become a go-to term for empty marketing spiel, often added to software companies’ advertising as a meaningless claim of superior operation.
“There is no official or generally agreed-upon definition of artificial intelligence,” Devin Coldewey of TechCrunch notes. “But this lack of consensus hasn’t stopped companies great and small from including AI as a revolutionary new feature in their smart TVs, smart plugs, smart headphones and other smart macguffins.”
Despite these concerns, AI is regularly seen as a top priority by business leaders, with research from consultancy Gartner predicting that AI software revenues will grow by over 21 per cent in 2022, hitting more than US$62.5 billion globally.
“The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity,” Alys Woodward, senior research director at Gartner, explains. “Successful AI business outcomes will depend on the careful selection of use cases. Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders.”
So just what impact will AI truly have in the 21st Century, and is it really the new industrial revolution?
Although the term AI is bandied about with much licence, there remains no universally agreed definition of what it means, and this is in part due to the difficulty in defining exactly what intelligence itself is.
In the broadest terms, the concept of AI relates to the ability of any non-human system to perform tasks commonly associated with intelligent beings, i.e. humans. Traditionally, this meant a computer or computer-controlled robot carrying out a task that required reasoning or advanced pattern-recognition typical of a human, but increasingly AI is producing results far beyond the capabilities of any human, and this is largely due to the success of one approach: machine learning.
This method, pioneered in the early 21st Century, transformed pattern-recognition and allowed systems to interpret vast amounts of data and then automatically update their own models, rapidly improving accuracy and efficiency. This in turn has spurred applications across industry and led many sectors to adopt an AI-first approach.
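The self-updating loop described above can be sketched in a few lines of plain Python. This is a minimal, illustrative example only: a toy model repeatedly compares its predictions against data and adjusts its own parameters to reduce error (here, via gradient descent on a simple linear model). The data set and learning rate are invented for the illustration and do not come from any real system.

```python
# Toy data roughly following the rule y = 2x + 1.
data = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]

w, b = 0.0, 0.0        # model parameters, initially naive
learning_rate = 0.05   # how aggressively the model corrects itself

for step in range(2000):
    # Measure how wrong the current model is (gradient of mean squared error).
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    # The "self-updating" step: nudge the parameters to reduce the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # the learned parameters approach 2 and 1
```

No one told the program that the answer was "multiply by 2 and add 1"; it recovered that pattern from the data alone, which is the essence of the approach described above, scaled down from millions of parameters to two.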
Analysis by management consulting firm McKinsey shows that 56 per cent of businesses reported AI adoption in at least one function in 2021, a 50 per cent increase on 2020.
“I see an AI first world,” Bill McDermott, ex-CEO of enterprise technology firm SAP, said.
“I think very strongly that intelligent applications will fundamentally change the way you do work in the enterprise and the way you collaborate with your trading partners outside of the enterprise.”
As AI has rapidly proliferated across sectors, however, it has raised a number of systemic issues around trust and explainability. One of the key advantages of AI – namely that it can learn unaided – is also its major downfall, as business-owners and users have precious little insight into the workings behind many of the outputs that AI systems produce.
A global AI survey, conducted by technology firm IBM, shows that the rate of adoption of AI in business has historically lagged behind the level of interest, and that while this may be changing, concerns over trustworthiness still impede uptake. Out of more than 5,500 businesses surveyed, over 90 per cent felt that ‘trustworthy and explainable’ AI is critical to business, with respondents ‘acutely aware’ of the importance of having trustworthy AI but the majority facing ‘significant barriers’ to achieving this.
“As organizations move to a post-pandemic world, data from the Global AI Adoption Index 2021 underscores a major uptick in AI investment,” Rob Thomas, Senior Vice President, IBM Cloud and Data Platform, explains. “A large majority of those investments continue to be focused on the three key capabilities that define AI for business – automating IT and processes, building trust in AI outcomes, and understanding the language of business.”
More than 50 per cent of companies now cite obstacles in deploying AI – most notably lack of skills, inflexible governance tools, and biased data.
As well as the deep trust issues that firms must tackle if they are to deploy AI successfully, there are also rising computing costs, with some researchers now suggesting that we may be nearing the computational limits of deep learning.
“Deep learning’s recent history has been one of achievement: from triumphing over humans in the game of Go to world-leading performance in image recognition, voice recognition, translation, and other tasks. But this progress has come with a voracious appetite for computing power,” Neil Thompson, research scientist at MIT, explains. “Extrapolating forward this reliance reveals that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable.”
For businesses this cost can mount quickly, especially as AI switches from being a competitive advantage to a competitive necessity. To stand out from the crowd, many firms need to ensure their AI is more accurate and efficient than the competition and in some sectors this is already fuelling an ‘arms race’, with investment increasing exponentially to ensure that companies control the data and computing power necessary to achieve domination.
To further complicate matters, many businesses fail to achieve the commercial benefits of AI due to poor implementation or badly designed project deployment. Analysis by consulting firm Capgemini shows that a mere 27 per cent of data-related projects are ultimately successful, and up to 85 per cent of AI projects never deliver the expected return on investment (ROI).
Despite the challenges that businesses face in deploying AI, the scale of what it can deliver means that many commentators already believe it is the central element of a conceptual revolution, known as Industry 4.0.
Under this theory, Industry 4.0 follows three previous industrial revolutions that span the last two to three hundred years. The first industrial revolution introduced steam power and transformed manufacturing in the early 19th Century. The second revolution introduced electricity, railroad networks and the telegraph in the early 20th Century, allowing mass production and long-distance communication. The third revolution, in the latter half of the 20th Century, introduced the computer and digital technology, changing communications and processing power. This fourth revolution is now predicted to introduce an era of rapid change in technology, industrial manufacturing and society that will see interconnectivity, decentralised operation and smart automation becoming dominant.
Nick Davis, lead of Digital Transformation at consulting firm Deloitte, sees this fourth industrial revolution as “about bringing a lot more intelligence into how technology is used in manufacturing in particular, and a convergence of digital technologies. So things like the use of the cloud, the internet of things, AI and these various different forms make manufacturing production more optimised in terms of efficiency and more sustainable in terms of low impact on the environment.”
First coined by Schwab of the WEF in 2015, the term ‘Industry 4.0’ has since been widely adopted by business, but some critics argue that it is little more than marketing.
“Unfortunately, there are a couple of problems with this current vision of Industry 4.0,” Elhay Farkash, CEO of software firm Zira comments, “the conversations are being led by marketing departments to provide a new spin on current capabilities… Industry 4.0 provides marketers a framework to position some technologies that are getting a bit long in the tooth.”
Despite this scepticism, Farkash does believe that Industry 4.0 is coming, and when it does, it will be more than a marketing pitch – but he also highlights the numerous other technologies that will drive this shift alongside AI, suggesting AI may be a consequence of a wider revolution rather than the fundamental cause. Survey data by Statista shows that 72 per cent of respondents see the Internet of Things (IoT) as the leading technology for Industry 4.0, ahead of AI at 68 per cent.
That AI will play a key part in societal change over the coming decades seems beyond doubt, but whether it is simply one more element of a complex transformation or the single ground-breaking technology that underpins a revolution will likely be something that only the historians can decide.
SEE ALSO: Artificial Intelligence in Marketing