
What Responsible AI Means Outside of Big Tech

As part of Solutions Review’s Expert Insights Series—a collection of contributed articles written by industry experts in enterprise software categories—Kevin Miller, the Chief Technology Officer of IFS in North America, discusses what responsible AI might look like outside of “big tech” markets.

In November 2022, OpenAI released ChatGPT, built on GPT-3.5, for public use, and after its explosive popularity followed with GPT-4 on March 14, 2023. Later that month, more than 1,100 tech leaders, including Elon Musk and Steve Wozniak, signed an open letter asking AI labs to pause the training of systems more powerful than GPT-4 for six months. Shortly after, Italy became the first Western country to temporarily block ChatGPT, and the European Union and China announced plans to regulate AI. The debate over ethical AI, and the fear that the unknowable intelligence of our creation would wipe out humanity, was reignited once again.

When we think of responsible AI, what usually comes to mind is its effect on tech companies: how AI development will be regulated and what new capabilities will emerge as a result. Now that intelligent machines are becoming ubiquitous across the economy, the debate extends to how AI affects those outside tech. For many in an industry like manufacturing, the main concern is not whether AI will become sentient. It is about understanding the advice and decisions AI models produce, and about detecting malware, as organizations increasingly integrate and rely on these systems.

Real-World Uses of AI in Sectors Outside Tech 

The ideal outcome for AI is to make a better world. Take manufacturing or utilities: AI can free up precious time and resources by automating workloads, supporting better business decisions, and streamlining operations. Predictive maintenance is just one example of where AI can plug in to simplify field service operations by identifying machinery maintenance needs before the service team is deployed. This lets businesses reclaim time that would otherwise be spent diagnosing an issue and traveling back and forth to the site, and redirect it toward higher-value work or simply completing the repair faster.
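
To make that concrete, here is a minimal sketch of what such a predictive-maintenance check might look like in Python. The sensor name, threshold, and window size are illustrative assumptions rather than any vendor's actual implementation; real deployments typically learn these limits from historical failure data.

    # A minimal sketch of a predictive-maintenance check: flag an asset for service
    # when its recent vibration readings trend above a baseline, before dispatching
    # a technician. Sensor names and thresholds are illustrative assumptions.
    from statistics import mean

    BASELINE_VIBRATION_MM_S = 4.5   # assumed healthy ceiling for this asset class
    WINDOW = 10                     # number of recent readings to consider

    def needs_service(readings_mm_s: list[float]) -> bool:
        """Return True when the rolling average of recent readings exceeds baseline."""
        recent = readings_mm_s[-WINDOW:]
        return len(recent) == WINDOW and mean(recent) > BASELINE_VIBRATION_MM_S

    # Example: a pump whose vibration is creeping up gets flagged before it fails,
    # so the service visit is scheduled with the diagnosis already in hand.
    pump_history = [3.8, 3.9, 4.1, 4.2, 4.4, 4.6, 4.7, 4.9, 5.0, 5.2, 5.3]
    print(needs_service(pump_history))  # True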

What is Explainable AI, and Why is it Key to Successful AI Deployment? 

When it comes to responsible AI, there are two essential aspects to consider. The first is practical: is the system making the right decisions for the right reasons? Explainable AI is hugely important here, because it lets us understand why a model makes the decisions it does and, when it makes a wrong one, trace the path it took. Often this becomes a cycle in which machine learning feeds the AI system and the system in turn produces more data for the model. Faulty reasoning pollutes that output, resulting in unusable data and untrustworthy decision-making.
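
As a rough illustration of what explainability can look like in practice, the sketch below trains a small decision tree on synthetic data and then prints both the features that drove its decisions and the rules it learned. The feature names and data are assumptions invented for this example; the point is simply that a model's reasoning can be inspected rather than taken on faith.

    # A minimal explainability sketch: inspect which inputs a model relied on and
    # the decision rules it learned. Feature names and data are synthetic assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Hypothetical sensor features: vibration, temperature, hours since last service
    X = rng.random((200, 3))
    # Synthetic label: "needs maintenance" when vibration and runtime are both high
    y = ((X[:, 0] > 0.7) & (X[:, 2] > 0.5)).astype(int)

    feature_names = ["vibration", "temperature", "hours_since_service"]
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Which inputs drove the model's decisions overall
    for name, importance in zip(feature_names, model.feature_importances_):
        print(f"{name}: {importance:.2f}")

    # A human-readable trace of the rules the tree actually learned
    print(export_text(model, feature_names=feature_names))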

The second, ethical aspect centers on the cybersecurity concerns surrounding AI. Ransomware presents a significant problem for any AI system: beyond simply delivering malware to shut down a business, what if it is used for more insidious, discreet purposes? If malware corrupts the data in an AI system and quietly warps the algorithm, the consequences can be far more disastrous, damaging the products and the company's reputation.
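
One simple illustration of guarding against that kind of silent corruption is an integrity check on training data before it ever reaches the model. The file layout and manifest below are hypothetical, not any product's API; the technique is just comparing each batch's SHA-256 digest against one recorded when the data was known to be trustworthy.

    # A minimal sketch: verify each training batch against a digest recorded at
    # ingestion time, and refuse to retrain if anything has been tampered with.
    # The paths and manifest format here are hypothetical assumptions.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_batches(data_dir: Path, manifest_path: Path) -> list[str]:
        """Compare current digests to the trusted manifest; return tampered files."""
        manifest = json.loads(manifest_path.read_text())
        return [name for name, expected in manifest.items()
                if sha256_of(data_dir / name) != expected]

    if __name__ == "__main__":
        bad = verify_batches(Path("training_data"), Path("manifest.json"))
        if bad:
            raise SystemExit(f"Corrupted batches, halting retraining: {bad}")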

Why the Biggest Threat From AI is Malware 

The more autonomous and intelligent AI systems become, the greater the risk that a malicious actor infiltrates and corrupts them without shutting them down entirely, which makes the intrusion less likely to be detected and fixed promptly. With less human intervention, malware, whose entire goal is to propagate an attack and spread quickly across the IT estate, has more opportunity to slip by unnoticed.

Cybersecurity, especially zero-trust and isolation principles, becomes critical to the safe and responsible use of AI, from ensuring software produces the right proofs and audit trails to separating the duties and permission sets for each task or user. In this way, the practical and ethical aspects go hand in hand in creating responsible AI, which can then be used as intended to drive business decision-making.
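
A minimal sketch of that separation-of-duties idea, with role and permission names invented purely for illustration, might look like the following: each pipeline task runs under a role that holds only the permissions it needs, so a compromised training job cannot touch raw sensor feeds.

    # A minimal least-privilege sketch for an AI pipeline. Role and permission
    # names are illustrative assumptions, not a specific framework's API.
    from dataclasses import dataclass

    PERMISSIONS = {
        "data_ingestion": {"read_sensors", "write_staging"},
        "model_training": {"read_staging", "write_model_registry"},
        "inference":      {"read_model_registry", "write_predictions"},
    }

    @dataclass
    class Task:
        role: str
        required: set[str]

    def authorize(task: Task) -> None:
        """Refuse to run a task whose role lacks any required permission."""
        granted = PERMISSIONS.get(task.role, set())
        missing = task.required - granted
        if missing:
            raise PermissionError(f"{task.role} lacks permissions: {sorted(missing)}")

    # Training can touch the model registry but never raw sensor feeds,
    # so a compromised training task cannot tamper with upstream data.
    authorize(Task(role="model_training", required={"read_staging", "write_model_registry"}))
    try:
        authorize(Task(role="model_training", required={"read_sensors"}))
    except PermissionError as err:
        print(err)  # model_training lacks permissions: ['read_sensors']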

Of course, the question remains: how do we ensure the AI we are developing is both practical and ethical? ChatGPT has proven more efficient and capable with each iteration while growing in popularity at the same time. Fear of the unknown will always be present, and for valid reasons, but it is no more likely that people will stop building new AI tools than that we will stop exploring space or the deep sea. The goal is instead to ensure we understand how AI works, make it work for us, and protect it against attacks from bad actors.



