A Big Tech trust reckoning may be sparked by the Microsoft-CrowdStrike outage, which could jeopardize tech companies’ AI ambitions.

Experts emphasize the importance of government regulation and investment in security to mitigate the risks associated with artificial intelligence (AI). A recent faulty software update from CrowdStrike caused a global Microsoft IT outage, severely disrupting operations across many industries. The incident underscores the fragility of technological systems and the potential risks that come with AI.

The massive IT outage that impacted companies worldwide illustrates the deep integration of society with Big Tech and the potential for widespread chaos from a single error. This incident raises critical questions about whether Big Tech can be trusted to safeguard powerful technologies like AI effectively.

The outage, caused by a faulty software update from cybersecurity firm CrowdStrike, triggered failures in Microsoft Windows systems that affected airlines, banks, retailers, emergency services, and healthcare providers globally. Although CrowdStrike deployed a fix, many systems remained offline on Friday as companies worked to restore their services, some requiring manual updates.

Gary Marcus, an AI researcher and founder of Geometric Intelligence, which Uber acquired in 2016, stated that the Microsoft-CrowdStrike outage should serve as a “wake-up call” to consumers. He warned that the impact of a similar issue with AI would be exponentially greater. Marcus emphasized that if a single bug could take down critical sectors like airlines and banks, the readiness for artificial general intelligence (AGI) — AI capable of human-like reasoning and judgment — is highly questionable.

Marcus, who has previously criticized OpenAI, noted that the current systems in place could pose significant problems, as consumers are granting enormous power to Big Tech companies and AI. Dan O’Dowd, founder of The Dawn Project, which campaigns against Tesla’s self-driving systems, pointed out that the CrowdStrike-Microsoft situation highlights the insecurity and unreliability of critical infrastructures. He argued that Big Tech companies often rush products to market, evaluating systems based on their performance “most of the time.”

This rush to market is evident in the surge of AI products and offerings over the past six months. Despite transforming how people work, these AI models have also produced notable errors, such as Google’s AI suggesting users put glue on pizza and Gemini generating historically inaccurate images. Several companies, including OpenAI, Microsoft, Google, and Adobe, have had to roll back or delay AI offerings because of issues revealed during public launches.

While these mistakes or delays might seem minor, the potential risks could become more severe as technology advances. A risk assessment report on AI commissioned by the US Department of State earlier this year indicated that AI poses a high risk of weaponization, potentially leading to biowarfare, mass cyber-attacks, disinformation campaigns, or autonomous robots, with catastrophic consequences, including human extinction.

Javad Abed, assistant professor of information systems at Johns Hopkins’ Carey Business School, noted that incidents like the Microsoft-CrowdStrike outage occur because companies view cybersecurity as a cost rather than a necessary investment. He advocated for Big Tech companies to adopt alternative vendors and multi-layered defense strategies, emphasizing that investing in cybersecurity is far more prudent than facing potential financial losses and damage to reputation and customer trust later.

Public trust in institutions has declined steadily over the past five years, with a 2023 study by the Brookings Institution highlighting a pronounced erosion of confidence in the technology sector. Big Tech companies like Facebook, Amazon, and Google saw the sharpest drop in trust, with confidence ratings falling by 13% to 18%. This trust is likely to continue being tested as consumers and employees of the companies affected by the IT outage grapple with the reality of how a software update can bring operations to a halt.

Sanjay Patnaik, a director at the Brookings Institution, criticized the government for failing to regulate social media and AI adequately, warning that without proper defenses such technology could become a national security threat. Patnaik emphasized that Big Tech has had “free rein,” but the recent outage has made companies realize the need for more stringent regulation.

Marcus agreed that companies cannot be trusted to build reliable infrastructure independently. He warned that the outage should serve as a reminder that allowing AI systems to operate unregulated is a dangerous gamble, potentially leading to severe consequences.
