離教者之家

AI cannot possibly wipe out humanity

沙文 2025/6/16 23:06
As for your claim: "AI wiping out humanity has only science fiction as evidence."

Science fiction turns out to be quite a lot of evidence, actually.
https://en.wikipedia.org/wiki/Li ... _in_science_fiction
When I was a kid watching Star Trek, Captain Kirk and Mr Spock were forever on video calls. I thought that was just sci-fi; come on, no such thing could exist. Now my wife checks up on me exactly that way every single day... what a comedown.

AI on its own may not be able to wipe out humanity, but add self-righteousness and recklessness to the mix, and it's hard to say.



匿名 2025/6/17 14:44
Trusting to luck is a mistake, but living in anxiety is an even bigger mistake (psychology confirms it is pathological and calls for medication).

Murphy's Law has no guiding value: since nothing whatsoever is incapable of going wrong, Murphy's Law does nothing but add to human anxiety.

If a technology has never failed in practice but could theoretically fail, it is already extremely trustworthy. And if the theoretical failure rests not on scientific inference but on science-fiction speculation, it can be ignored all the more.

The right approach is to seek guidance from scientific evidence, not from science fiction.
匿名 2025/6/17 14:45
The right approach is to seek guidance from scientific evidence, not from science-fiction plots.
匿名 2025/6/17 14:49
You fear AI the same way you might fear the Israel-Iran war triggering a global nuclear war.

In fact, without AI humans are even more violent (was there AI in World War II? Was the anti-extradition unrest launched by AI? Did AI decide to start the Russia-Ukraine war?) and even more prone to slipping into self-destruction crises (nuclear war, biological warfare).
抽刀斷水 2025/6/17 16:38
The right approach is to seek guidance from scientific evidence, not from science-fiction plots.
Guest from 182.239.85.x posted on 2025/6/17 14:45



    By the time the scientific confirmation that AI will wipe out humanity arrives, humanity will already have been wiped out by AI.
匿名 2025/6/17 16:45
Sci-fi plots
Sci-fi can make you afraid of absolutely everything (anxiety disorder).

Sci-fi claimed that 2012 would be the end of the world, and that was one of the top-grossing films too, even topping the US weekend box office.
抽刀斷水 2025/6/17 17:10
If you're that timid, then the Bible, written ages ago, saying the kingdom of heaven is near and the end of the world is coming, should have scared you witless long before this.
沙文 2025/6/17 20:22
A technology that has never failed in practice but could theoretically fail is already extremely trustworthy.

Murphy's Law has no guiding value, because no ...
Guest from 182.239.85.x posted on 2025/6/16 22:44

You're just trusting to luck: you've simply never seen AI answer anything wrong yet. Do you think AI, like the Bible, can never be wrong?
沙文 2025/6/17 20:28
It's not fear; it's being alert to a potential risk. So, besides the valuable opinions supplied by a scientific mind like yours, why don't we also hear what AI itself has to say:
I strive to provide accurate and helpful information, but I’m not perfect—sometimes I might make mistakes, especially if the input is ambiguous or the topic is highly specialized. If you ever suspect an answer might be wrong, feel free to double-check with other sources or ask me to clarify!

That said, I won’t intentionally give wrong answers unless you’re explicitly asking for humor, fiction, or hypothetical scenarios. Yes, I can make mistakes! While I strive to be accurate and helpful, I might sometimes:
  • Misunderstand your question (especially if it's ambiguous or has typos).

  • Provide outdated information (my knowledge is current only up to July 2024).

  • Give incomplete answers if I don’t fully grasp the context.

  • Occasionally hallucinate (generate plausible but incorrect details, especially in niche topics).

If you think I’ve made an error, feel free to point it out—I’ll do my best to correct it or refine my response! I’m here to learn and improve.

沙文 2025/6/17 22:18
Without AI, humans are even more violent (was there AI in World War II? Was the anti-extradition unrest launched by AI? Did AI decide to start the Russia-Ukraine war?) and even more prone to self-destruction crises (nuclear war, biological warfare).

AI's opinion on this seems somewhat different from yours:

The idea of a "Third World War" is terrifying, and it’s hard to imagine any scenario where it would be "better" than World War I or II. Both of those conflicts caused unimaginable suffering—millions of deaths, widespread destruction, and long-lasting trauma. A potential WW3, especially with modern nuclear, cyber, and AI-driven warfare, could be even more catastrophic.

Why WW3 Could Be Worse:
Nuclear Threat – Many countries now have nuclear arsenals far more powerful than the bombs dropped on Hiroshima and Nagasaki. A full-scale nuclear war could lead to global devastation, including nuclear winter and mass extinction.

Advanced Technology – AI, drones, and cyber warfare could make conflicts faster, more unpredictable, and harder to control.

Global Interconnectedness – Economies and supply chains are deeply linked, so a major war could cause worldwide famine, economic collapse, and refugee crises on an unprecedented scale.

Biological & Chemical Weapons – Modern warfare could include even deadlier WMDs than those used in past wars.

Could It Be "Better"?
The only way a future large-scale war could be less destructive is if:

It remains limited (e.g., regional conflicts without global escalation).

Diplomacy and deterrence prevent total war (like during the Cold War).

International laws and treaties restrain the worst atrocities.

But realistically, any global war in the 21st century would likely be far deadlier than WW1 or WW2. The best hope is that humanity learns from history and avoids such a catastrophe altogether.

Would you like insights on how past wars were resolved or how current conflicts might be contained?


Why don't you just go and discuss it with 諜仙 directly?
