AI Fear

Sep. 19th, 2025 11:39 am
[personal profile] paserbyp

I asked ChatGPT for AI's own take on whether we should fear AI. Here's the response:

"Fear of AI should be replaced with informed vigilance. While powerful AI systems present legitimate concerns around job displacement, privacy, security, and autonomous decision-making, these risks can be managed through robust governance frameworks, transparency requirements, and human oversight mechanisms."

So, the technology itself is neither inherently good nor evil—its impact depends entirely on how we design, deploy, and regulate it. Rather than fear, we need thoughtful engagement from diverse stakeholders to ensure AI development aligns with human values and benefits humanity broadly while minimizing potential harms.

Among AI professionals, the greatest fears center on both immediate and long-term risks. Many worry about AI eroding core human capabilities such as critical thinking, creativity, and empathy as people increasingly defer decision-making to algorithms. There is also growing concern about AI exacerbating existing inequalities and bias, since systems trained on limited datasets can perpetuate discrimination.

Some experts, particularly those impressed by recent advances in large language models, fear the potential development of superintelligent AI that could act beyond human control. This remains hotly debated, with figures like Meta's Yann LeCun dismissing existential threats while others like Nate Soares warn of catastrophic risks. Many researchers emphasize that unregulated AI development focused on profit maximization poses immediate societal dangers that shouldn't be overlooked.

Expert estimates on AI catastrophic risks vary dramatically. In a notable survey of AI researchers, probability assessments of extinction-level events by 2070 ranged from virtually zero (0.00002%) to alarmingly high (>77%). A separate survey found half of AI researchers placed the risk at 5% or higher.

Among business leaders, perspectives are equally divided. Forty-two percent of CEOs surveyed believe AI could destroy humanity within 5 to 10 years, while 58% dismiss such concerns entirely.

Most experts agree that while outright extinction would be technically difficult to bring about, the way AI risks intersect with nuclear weapons, bioterrorism, and critical infrastructure warrants serious preventative measures.

[personal profile] anais_pf posting in [community profile] thefridayfive
These questions were originally suggested by [livejournal.com profile] polypolyglot.

1. Do you believe you can have more than one soulmate in life?

2. Are you with that soulmate now?

3. If not, how long did your relationship with your soulmate last?

4. Do you still think about your soulmate, if you are not together?

5. If you're not together, do you think your soulmate still thinks about you?

Copy and paste to your own journal, then reply to this post with a link to your answers. If your journal is private or friends-only, you can post your full answers in the comments below.

If you'd like to suggest questions for a future Friday Five, then do so on DreamWidth or LiveJournal. Old sets that were used have been deleted, so we encourage you to suggest some more!

Proton

Sep. 14th, 2025 09:57 am
[personal profile] paserbyp
Proton Mail, an encrypted email service, allegedly disabled the accounts of two journalists investigating cybersecurity breaches in the South Korean government.

Proton is commonly used by people seeking highly secure communications and has been blocked in countries with strict internet censorship, like Russia and Turkey. Many news organizations use the service to manage tips.

The two journalists were working on an article about an “APT,” or advanced persistent threat, that had penetrated computer networks at numerous vital government agencies in South Korea, including the Ministry of Foreign Affairs and the military’s Defense Counterintelligence Command.

The journalists had set up a new Proton Mail account to manage "responsible disclosures" for the article, the process by which ethical hackers report vulnerabilities to the affected organizations. A week after the article was published, the journalists found that the account they had set up for responsible disclosure notifications had been suspended. A day later, one of the journalists allegedly found that his personal Proton Mail account had also been suspended.

Phrack, a hacker-focused magazine that published the article, attacked Proton in an X post, asking, “Why are you cancelling journalists and ghosting us?”

In a reply on X, Proton’s official account said the company was “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service,” leading to their disabling. A CERT is an official government agency working on cybersecurity, such as the US Computer Emergency Readiness Team (US-CERT) in the Department of Homeland Security.

Proton’s CEO later announced that the accounts had been reinstated, following another post (https://x.com/ProtonPrivacy/status/1965828424963895605) in which the company said it does “stand with journalists,” but that it “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

The relationship between encrypted messaging services and governments remains a point of tension in 2025. Just last month, the UK government dropped its mandate requiring Apple to provide backdoor access to Americans' iCloud data.

[personal profile] anais_pf posting in [community profile] thefridayfive
These questions were originally suggested by [livejournal.com profile] wownelwow.

1. What is your favourite fruit?

2. What is the last book you read?

3. Do you like any of your school photos?

4. Do you ever blow-dry your armpits to get the deodorant to dry quicker?

5. What was the last film you watched?

Copy and paste to your own journal, then reply to this post with a link to your answers. If your journal is private or friends-only, you can post your full answers in the comments below.

If you'd like to suggest questions for a future Friday Five, then do so on DreamWidth or LiveJournal. Old sets that were used have been deleted, so we encourage you to suggest some more!

[personal profile] paserbyp
Pioneering computer scientist Geoffrey Hinton, whose work has earned him a Nobel Prize and the moniker “godfather of AI,” said artificial intelligence will spark a surge in unemployment and a boom in profits.

In a wide-ranging interview with the Financial Times, the former Google scientist cleared the air about why he left the tech giant, raised alarms on potential threats from AI, and revealed how he uses the technology. But he also predicted who the winners and losers will be.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

That echoes comments he gave to Fortune last month, when he said AI companies are more concerned with short-term profits than the long-term consequences of the technology.

For now, layoffs haven’t spiked, but evidence is mounting that AI is shrinking opportunities, especially at the entry level where recent college graduates start their careers.

A survey from the New York Fed found that companies using AI are much more likely to retrain their employees than fire them, though layoffs are expected to rise in the coming months.

Hinton has said previously that healthcare is the one industry that will be safe from the potential jobs armageddon.

“If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he explained on the Diary of a CEO YouTube series in June. “There’s almost no limit to how much health care people can absorb—[patients] always want more health care if there’s no cost to it.”

Still, Hinton believes that jobs built around mundane tasks will be taken over by AI, while some jobs requiring a high level of skill will be spared.

In his interview with the FT, he also dismissed OpenAI CEO Sam Altman’s idea of paying a universal basic income as AI disrupts the economy and reduces demand for workers, saying it “won’t deal with human dignity” and the value people derive from having jobs.

Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.

In his view, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.

In his FT interview, he warned AI could help someone build a bioweapon and lamented the Trump administration’s unwillingness to regulate AI more closely, arguing that China is taking the threat more seriously. But he also acknowledged potential upside from AI amid its immense possibilities and uncertainties.

“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” Hinton said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are.”

Meanwhile, he told the FT how he uses AI in his own life, saying OpenAI’s ChatGPT is his product of choice. While he mostly uses the chatbot for research, Hinton revealed that a former girlfriend used ChatGPT “to tell me what a rat I was” during their breakup.

“She got the chatbot to explain how awful my behavior was and gave it to me. I didn’t think I had been a rat, so it didn’t make me feel too bad . . . I met somebody I liked more, you know how it goes,” he quipped.

Hinton also explained why he left Google in 2023. While media reports have said he quit so he could speak more freely about the dangers of AI, the 77-year-old Nobel laureate denied that was the reason.

“I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said. “I had worked very hard for 55 years, and I felt it was time to retire . . . And I thought, since I am leaving anyway, I could talk about the risks.”

AI Jazz

Sep. 6th, 2025 12:29 pm
