Be Tech Ready!!

EU’s AI Act might be bad news for small firms 

The US is warning that the EU’s new AI Act could end up hurting smaller European businesses while handing an advantage to larger companies that can easily afford the expensive compliance requirements. Bloomberg got its hands on documents revealing that the State Department is worried about the European Parliament’s take on the upcoming law.

They’re especially concerned about the regulations for large language models (LLMs), which are the backbone of most AI tools that create content. The analysis discovered that several of these regulations were “blurry or unclear.” Additionally, it expressed worries about the act’s emphasis on the hazards associated with creating AI models, rather than those linked to their actual utilization.

Will the new AI Act affect productivity? 

Washington cautioned that the new rules might slash productivity, trigger job shifts, and put a damper on investment in research and development and business growth in the area, messing with the competitiveness of European businesses.

Insiders familiar with the situation told the newspaper that the US input has already been passed on to EU leaders.

On the other hand, European businesses have voiced comparable fears. In June, executives from several major companies in the bloc conveyed “significant worries” in a letter dispatched to the Parliament, Commission, and member states.

“The draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” said the letter.


European firms not in favor of the new AI Act

The people who signed the letter include top dogs from major companies like Heineken, Carrefour, and Renault, as well as heads of tech firms such as Ubisoft, TomTom, and Mistral AI. They’re saying the AI Act will make companies bail out of the bloc, investors shift their money elsewhere, and slow down progress in Europe.

Their main worry stems from a recent tweak to the rules. On June 14, the European Parliament tacked on fresh demands for generative AI tools like ChatGPT, which, according to the signers, would saddle companies with “unfairly high” compliance costs and liability risks.

They’re cautioning that this will shove Europe even further back compared to the US in AI advancement. They add that this effect will stretch from the economy to culture, as big language models will be woven into everything from search engines to digital assistants.

“States with the most powerful large language models will have a decisive competitive advantage… Europe cannot afford to stay on the sidelines,” said the letter.

When will the new rules come into effect?

Meanwhile, certain EU nations, such as Italy, have begun regulating generative AI even before the act comes into force. In a July survey, European consumers said they believe the technology should be subject to strict regulation.

The survey revealed that AI-driven online search is one of the top three most appealing applications in every country except Spain, where it comes in fourth. People are also stoked about AI being incorporated into healthcare diagnostics, roadside assistance, and suggestions for flights and hotels. Yet most of the Europeans polled reckon that society isn’t quite prepared for AI. This feeling is particularly strong in France and Germany, voiced by 74% and 70% of participants, respectively.

The AI Act is set to kick in around late 2025 or early 2026. Facing pushback from the business world and warnings of existential risk, it will need to strike a delicate balance between ensuring safety and fostering innovation.


Is there really a risk of extinction?

Some of Europe’s top tech whizzes teamed up with a worldwide squad of IT gurus to sound the alarm that AI might bring about the end of the world. The warning, put out by the non-profit Center for AI Safety, has gathered signatures from various business honchos, scholars, and public figures. 

This list features Sam Altman, the head honcho at OpenAI, Kevin Scott, the bigwig at Microsoft, and, um, the musician Grimes. Surprisingly, her ex-boyfriend Elon Musk was nowhere to be found, despite his extensive history of expressing worries about the sector.

A big chunk of the people who signed up are from Europe. Some of the notable names include Demis Hassabis, the CEO of Google DeepMind, who hails from London, Kersti Kaljulaid, the former president of Estonia, and Geoffrey Hinton, a British smarty-pants who won the Turing Award and recently left Google to yap about the perils of AI.

The announcement adds to the flurry of recent warnings about the life-and-death risks brought on by AI. Just in the past couple of months, big shots in the industry have asked for a halt in training supercharged AI systems due to concerns about the threats to mankind. Healthcare experts have requested a break in the development of artificial general intelligence. Musk has cautioned that AI might bring about “the end of civilization,” and Google boss Sundar Pichai has confessed that the perils “give him sleepless nights.”

Skeptics might point out that many of the people raising the alarm are also pushing back against any AI rules that could hurt their businesses.

Business leaders have some solutions in mind

On top of voicing their concerns, the business heads came up with some ideas. Their main proposal is to limit EU regulations to general principles using a risk-based method, managed by a specific body that can adjust to new advancements and risks. They emphasized that this procedure should be developed through discussions with the business sector.

The folks who signed also showed some love for parts of the AI Act. They gave a thumbs-up to compulsory safety checks for new systems, a standard label on AI-made content, and a responsibility to be careful when creating models. But these gestures didn’t exactly win the hearts of the lawmakers. Dragos Tudorache, who co-headed the making of the AI Act, quickly shot down the letter.

The good news for the writers is that they still have plenty of time to pen more letters. The AI Act isn’t predicted to take effect until 2026 at the earliest.

Is ChatGPT accurate and should we believe what it says?

Modern chatbots like ChatGPT can churn out dozens of words per second, making them incredibly useful for sifting through and understanding vast amounts of data. With over 500GB of training data and an estimated 300 billion words processed, this AI language model is also capable of providing answers to many factual questions. However, despite how human-like ChatGPT’s responses may appear, a critical question lingers: just how accurate is the information it offers?

While ChatGPT can be really informative most of the time, you’ve likely heard about loads of controversies surrounding generative AI. From racial biases to harmful content, there’s a whole history of issues to think about before fully trusting anything generated by AI.

How accurate is ChatGPT?

ChatGPT can definitely be accurate, especially when it comes to straightforward questions with clear answers. For well-established information, ChatGPT can pull relevant data from its training and provide truthful responses. So for a question like “What is the capital of France?”, you’re highly likely to receive the correct answer.

However, chatbots like ChatGPT often make up information when they come across a new or challenging question. This happens because generative language models are programmed to mimic how humans write, not how we think. As a result, their logical reasoning abilities are quite limited.

The issue with ChatGPT’s accuracy goes beyond what you might expect. It frequently includes completely made-up details and comes up with convincing-sounding facts in response to certain prompts. While the chatbot’s creator has implemented various safeguards to prevent these fabrications, as our tests will demonstrate later in this article, they’re not entirely foolproof.

If you’re looking for solid evidence, numerous studies have thoroughly tested ChatGPT’s accuracy, revealing a consistent pattern. ChatGPT tends to have a surprisingly high accuracy rate for typical questions. For instance, in a medical study, the chatbot scored a median rating of 5.5 out of 6. However, ChatGPT’s regular updates can sometimes backfire, affecting its accuracy and usefulness.

Another study by researchers from UC Berkeley and Stanford University found that the chatbot’s ability to identify prime numbers dropped from an impressive 84% accuracy to just 51% within three months. In summary, it’s best not to fully trust ChatGPT’s responses without fact-checking them first.
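Primality is one of the few benchmark tasks where the ground truth is trivially machine-checkable, which is exactly why a drop like the one in that study is easy to measure, and why you can fact-check such answers yourself. Here is a minimal trial-division sketch in Python (the function name and the sample number are illustrative, not taken from the study):

```python
def is_prime(n: int) -> bool:
    """Deterministic primality check by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2  # only odd candidates need checking
    return True

# Ground-truth checks like this let researchers score an LLM's
# "is this number prime?" answers automatically.
print(is_prime(17077))  # True
print(is_prime(17078))  # False
```

For the small integers used in such benchmarks, trial division is more than fast enough; a graded checker only needs to compare the model’s yes/no answer against this function’s output.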

Can ChatGPT’s accuracy be improved?

If you’re just a casual ChatGPT user, you might not have thought about upgrading to the paid tier. However, if you heavily rely on the chatbot’s responses, it’s worth considering. Upgrading can significantly boost its accuracy, making it a top priority. This is because with the $20 ChatGPT Plus subscription, you’ll gain access to the more advanced GPT-4 Turbo language model.

The GPT-4 language model outshines its predecessor, GPT-3.5, which still powers the basic chatbot experience today. OpenAI reports that the newer model scored in the 89th percentile for SAT Math, 90th percentile for the Uniform Bar Exam, and 80th percentile for the GRE Quantitative. These results are almost across the board better than those of GPT-3.5.

Scoring in the 80th to 90th percentile is impressive, but it still falls short of the expertise of top human professionals in those fields. However, ChatGPT Plus also gives you access to web browsing support, which means the chatbot can look up information on Wikipedia and other online sources. It’s like having live research at your fingertips, similar to how we use Google to find the right answers.

Is ChatGPT better than Gemini?

ChatGPT and Google Gemini are starting to seem like siblings ever since Gemini Ultra 1.0 came out, giving GPT-4 some serious competition. They both offer free features, almost identical subscription plans, and their interfaces and functionalities are pretty much alike. However, if you dig deeper, that’s where you’ll find the true distinctions – in their language models.

The major difference between ChatGPT and Gemini is where they get their intelligence from. GPT-3.5’s knowledge is limited to January 2022, while GPT-4 extends to April 2023. But Gemini? It’s like a sponge soaking up fresh information from the web as it happens. What’s more, it’s selective – it only pulls data from sources that match specific topics, such as coding or the latest scientific breakthroughs.

With ChatGPT, it’s all about which version you’re using. If you’re sticking with the free version, you’re working with OpenAI’s GPT-3.5 or GPT-4. But if you’ve splurged on ChatGPT Plus, you’re diving into the premium features.

Now, Gemini offers three options: Gemini Pro, Gemini Ultra, and Gemini Nano. Pro is your versatile choice, Ultra is for those big tasks, and Nano is the compact version for mobile use. The Ultra 1.0 is the engine behind the subscription-based Gemini Advanced, outpacing the free Pro version with its speed and intelligence.

Vishal Kawadkar
About author

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.