A recent survey of 2,778 AI researchers suggests there is a worrying chance that artificial intelligence could lead to the extinction of humanity. Slightly more than half of those polled believe there is at least a 5% chance of humans facing extinction, along with other “really bad outcomes.”
On average, survey participants predicted a 10% likelihood that machines would surpass humans in “every imaginable task” by 2027. They also foresaw a 50% chance of this happening by 2047. But it’s not all bad news: 68.3% of those surveyed believe that “positive outcomes from superhuman AI” are more probable than negative ones.
Above all, the survey underscores the considerable disagreement and uncertainty among researchers, with widespread differences of opinion on whether progress in AI should be accelerated or decelerated.
The 5% figure, however, speaks volumes, pointing to a threat that the field itself considers noteworthy.
“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” survey author Katja Grace of the Machine Intelligence Research Institute in California told New Scientist. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”
As the survey notes, “forecasting is difficult in general, and subject-matter experts have been observed to perform poorly. Our participants’ expertise is in AI, and they do not, to our knowledge, have any unusual skill at forecasting in general.”
But that doesn’t imply their opinion should be dismissed.
“While unreliable, educated guesses are what we must all rely on, and theirs are informed by expertise in the relevant field,” the researchers write. “These forecasts should be part of a broader set of evidence from sources such as trends in computer hardware, advancements in AI capabilities, economic analyses, and insights from forecasting experts.”
In the near term, rather than a doomsday scenario brought about by AI, most researchers in the survey cautioned against more immediate issues: deepfakes, the manipulation of public opinion, the development of harmful viruses, and AI systems that let individuals thrive at the expense of others.
With the upcoming US presidential election, everyone will be closely watching AI and its unsettling ability to manipulate the truth in a convincing manner.
Conversational AI systems such as chatbots have a troubling tendency to make up information: they generate false details and present them as if they were real. Researchers from the Oxford Internet Institute warn that these fabrications, which the field calls AI hallucinations, not only pose a range of risks but also directly undermine scientific accuracy and truth. At best, hallucinations hold back the full realization of artificial intelligence’s potential; at worst, they cause harm to real people. As generative AI becomes more prevalent, the alarm bells are growing louder.
In a paper published in Nature Human Behaviour, researchers at the Oxford Internet Institute point out that while large language models (LLMs) are designed to provide helpful and persuasive answers, there is no firm guarantee that they will be consistently accurate or aligned with the facts.
We currently treat LLMs as information hubs, dishing out details whenever we throw questions at them. But here’s the catch: the information they absorb isn’t always accurate. One major reason is that these models draw heavily on online sources, which can be loaded with false claims, opinions, and plainly incorrect information.
“People using LLMs often anthropomorphise the technology, where they trust it as a human-like information source,” explained Professor Brent Mittelstadt, co-author of the paper. “This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident sounding, well-written text. The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.”
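The paper’s core point can be made concrete with a small sketch of our own (it is not from the Oxford study): a language model scores text by how plausible the wording is, not by whether the claim is true. The model choice (the openly available GPT-2, via the Hugging Face transformers library) and the example sentences below are purely illustrative assumptions.

```python
# Illustrative sketch only: the score below measures how plausible the
# *wording* of a sentence looks to a small language model (GPT-2), not
# whether the claim is factually correct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_token_nll(text: str) -> float:
    """Average per-token negative log-likelihood; lower means the text reads as more 'likely' to the model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

claims = [
    "Water boils at 100 degrees Celsius at sea level.",  # true
    "Water boils at 150 degrees Celsius at sea level.",  # false but fluent
]
for claim in claims:
    print(f"{avg_token_nll(claim):.2f}  {claim}")

# Nothing in this objective checks facts: a fluent fabrication can earn a
# perfectly respectable score, which is why confident-sounding answers are
# not evidence of accuracy.
```

The point is not that models never get facts right, only that factual accuracy is not what the underlying objective rewards, and that gap is what the Oxford researchers are flagging.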
As artificial intelligence advances relentlessly, researchers and tech experts are exploring new possibilities, such as AI health coaches that analyze health data and advise users on how to maintain their well-being. There is plenty of evidence that wearables do bring some advantages: a 2022 review of scientific studies covering more than 160,000 participants found that people assigned to wear activity trackers took about 1,800 extra steps a day and lost roughly two pounds.
Carol Maher, a professor of population and digital health at the University of South Australia and a co-author of the review, explains that wearables impact behavior in different ways: they motivate users to set goals, let them track key metrics, and alert them when they drift from their objectives.
However, as noted by Andrew Beam, an assistant professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health and a researcher in medical artificial intelligence, these impacts usually fade away as time goes on.
Shwetak Patel, a computer science and engineering professor at the University of Washington and the director of health technologies at Google, explains that accurately measuring the metrics we care about, such as counting steps from a wrist-worn accelerometer, already requires AI, albeit a straightforward and unglamorous kind.
But, he adds, there’s a lot more it can already achieve: “AI can stretch the capability of that sensor to do things that we may not have thought were possible.”
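To make that point concrete, here is a minimal sketch of the “straightforward and unglamorous” kind of processing Patel is describing: counting steps from raw accelerometer samples with a low-pass filter and simple peak detection. The sampling rate, thresholds, and synthetic walking data are illustrative assumptions of ours, not any wearable maker’s actual pipeline.

```python
# A toy step counter: low-pass filter the acceleration magnitude, then count
# peaks at a walking-like cadence. All parameters are illustrative guesses.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50  # assumed accelerometer sampling rate in Hz

def count_steps(accel_xyz: np.ndarray, fs: int = FS) -> int:
    """Estimate the number of steps in an (N, 3) array of accelerometer samples."""
    # Acceleration magnitude with the constant gravity component removed.
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    magnitude -= magnitude.mean()

    # Keep the ~1-3 Hz rhythm of walking, drop higher-frequency jitter.
    b, a = butter(N=4, Wn=3 / (fs / 2), btype="low")
    smoothed = filtfilt(b, a, magnitude)

    # Each sufficiently tall peak, at least ~0.3 s from the previous one,
    # is counted as one step.
    peaks, _ = find_peaks(smoothed, height=0.5, distance=int(0.3 * fs))
    return len(peaks)

if __name__ == "__main__":
    # Synthetic 10-second "walk" at about 2 steps per second, plus noise.
    t = np.arange(0, 10, 1 / FS)
    accel = np.column_stack([
        0.2 * np.random.randn(t.size),              # x: sensor noise
        0.2 * np.random.randn(t.size),              # y: sensor noise
        9.8 + 1.5 * np.sin(2 * np.pi * 2.0 * t),    # z: gravity + bounce
    ])
    print("estimated steps:", count_steps(accel))   # roughly 20 expected
```

Stretching that same sensor to infer things a simple filter cannot, which is what Patel is hinting at, is where the heavier machine learning comes in.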