AI, or Artificial Intelligence, has gained prominence over the past year, primarily thanks to publicly available services such as OpenAI's ChatGPT.
But in recent weeks and months, the rapidly expanding use cases of this and other services have left many people jittery about artificial intelligence news – especially lawmakers.
AI is Everywhere
Every day, there is new artificial intelligence tech news showing how far the technology has advanced. From passing the bar exam to clearing medical licensing tests, AI seems to break new ground each day.
The advancement of AI – measured not in months or years, but in weeks and days – has many concerned. No, we are not talking about paranoia or siding with Elon Musk's doomsday outlook. Yet some concerns are real. The impact of artificial intelligence technology penetrating our society is already clear.
Students are already using it to churn out assignments for them. Spreading like a pandemic, unfiltered access to AI has educational institutions scrambling to deploy AI detection tools.
Formula 1 king Michael Schumacher’s family has already sued a German magazine that published a simulated interview with the racing legend.
JP Morgan has already developed an AI tool that reads and analyzes Federal Reserve statements to generate trading signals.
Some of these cases are interesting, and nearly all are intriguing. But where does it all end? What will AI be used for next?
Now, before you classify this thought as paranoia and accuse us of wearing tinfoil hats, understand that we are by no means against the development and deployment of AI. However, there is a difference between using AI for good and leveraging it for nefarious purposes.
This is what the proposed EU AI Act is attempting to tackle.
What Does the EU’s Artificial Intelligence Act Propose?
Rather than wait for an unfortunate scenario to be brought to light by artificial intelligence news, EU legislators are looking into how to curb misuse of the technology in a more proactive manner.
The proposed draft is already available online, with the bill designed to ensure that artificial intelligence technology, and the firms developing it, are clear on what they can and cannot do. At the same time, the bill proposes a systematic procedure to ensure transparency in how AI systems use data.
In short, the act has set forward three classifications of artificial intelligence technology.
AI applications with “Unacceptable Risks” will be banned. This includes the use of AI by authorities or entities to collect citizens' data in real time for law enforcement in ways that could violate rights or manipulate sentiment.
“High Risk” uses of AI are planned to be heavily regulated, with transparency requirements to ensure that AI technology is only used for purposes that do not violate laws or rights. An example is text-scanning technology that helps firms and employers sift through resumes to find the most suitable candidate. Under the proposed regulation, underhanded use of such AI to weed out “undesired” candidates based on race or other traits would be curbed.
The third category has no definition as such but includes all AI applications that are neither banned nor categorized as high risk. According to the proposed bill, these applications will largely be left unregulated.
Why Should We Care?
Big data is already a mature and well-understood technology. Using algorithms, firms and governments can process large amounts of data to identify trends and extract useful information.
When paired with artificial intelligence technology, this can have a profound impact. AI can be used to make real-time decisions, such as analyzing your face to predict what content to show you. It can also go through your personal posts on social media to determine not only where you lean politically, but also how to influence your vote in the next election.
Do you think that is overestimating the power of AI? Authorities in China are already using artificial intelligence technology to “predict” whether citizens are likely to commit crimes. Does this remind you of something? Go watch Tom Cruise's movie Minority Report.
Will the Act Stifle AI Development?
Short answer: no. Artificial intelligence technology is still nascent and has a long way to go. Like any new technology, it has the potential to help humanity or to be used against it. Proper regulation like the EU AI Act, though it may seem to slow the pace of AI development, is a good example of how this largely unregulated area should be addressed.
With more changes and updates to the proposed bill expected in the coming days, proper execution can help continue fostering the growth of AI while ensuring that the technology is not used for morally questionable purposes.
