Science fiction actually has quite a track record: https://en.wikipedia.org/wiki/Li ... _in_science_fiction When I was a kid watching Star Trek, Captain Kirk and Mr Spock were forever on video calls. I thought that was just sci-fi, come on, no way anything like that could exist. Now my wife checks up on me on video exactly like that every single day... what a miserable fate. AI on its own may not be able to wipe out humanity, but add self-righteousness and recklessness, and it's a lot harder to say.
Murphy's Law offers no guidance, because nothing is incapable of going wrong, so all Murphy's Law does is add to human anxiety. If something has never failed in practice but could theoretically fail, it is already a highly trustworthy technology. If the theoretical failure rests not on scientific inference but on science-fiction speculation, it can be ignored all the more. The right approach is to seek guidance from scientific evidence, not from science fiction.
In fact, without AI humans are more violent (was there AI in WWII? Were the anti-extradition riots started by AI? Did AI decide to launch the Russia-Ukraine war?) and more prone to slipping into self-destruction crises (nuclear war, biological warfare).
If you wait for scientific proof that AI will wipe out humanity, by then humanity will already have been wiped out by AI.
Science fiction can make you afraid of everything (anxiety disorder). Science fiction can claim the world ends in 2012, and that was even one of the box-office hits, topping the US weekend box office at one point.
"If something has never failed in practice but could theoretically fail, it is already a highly trustworthy technology." You're just trusting to luck because you haven't seen AI get something wrong yet. You think AI is like the Bible and can never be wrong?
I strive to provide accurate and helpful information, but I’m not perfect—sometimes I might make mistakes, especially if the input is ambiguous or the topic is highly specialized. If you ever suspect an answer might be wrong, feel free to double-check with other sources or ask me to clarify! That said, I won’t intentionally give wrong answers unless you’re explicitly asking for humor, fiction, or hypothetical scenarios. Yes, I can make mistakes! While I strive to be accurate and helpful, I might sometimes:
If you think I’ve made an error, feel free to point it out—I’ll do my best to correct it or refine my response! I’m here to learn and improve.
AI's opinion on this seems a bit different from yours:

The idea of a "Third World War" is terrifying, and it’s hard to imagine any scenario where it would be "better" than World War I or II. Both of those conflicts caused unimaginable suffering—millions of deaths, widespread destruction, and long-lasting trauma. A potential WW3, especially with modern nuclear, cyber, and AI-driven warfare, could be even more catastrophic.

Why WW3 Could Be Worse:
1. Nuclear Threat – Many countries now have nuclear arsenals far more powerful than the bombs dropped on Hiroshima and Nagasaki. A full-scale nuclear war could lead to global devastation, including nuclear winter and mass extinction.
2. Advanced Technology – AI, drones, and cyber warfare could make conflicts faster, more unpredictable, and harder to control.
3. Global Interconnectedness – Economies and supply chains are deeply linked, so a major war could cause worldwide famine, economic collapse, and refugee crises on an unprecedented scale.
4. Biological & Chemical Weapons – Modern warfare could include even deadlier WMDs than those used in past wars.

Could It Be "Better"? The only way a future large-scale war could be less destructive is if:
- It remains limited (e.g., regional conflicts without global escalation).
- Diplomacy and deterrence prevent total war (like during the Cold War).
- International laws and treaties restrain the worst atrocities.

But realistically, any global war in the 21st century would likely be far deadlier than WW1 or WW2. The best hope is that humanity learns from history and avoids such a catastrophe altogether. Would you like insights on how past wars were resolved or how current conflicts might be contained?

Why don't you just go discuss it with 諜仙 directly?
If humans believe AI cannot possibly wipe out humanity, then it becomes possible. If they believe AI can possibly wipe out humanity, then it becomes impossible.
Hey, have you solved science's replication crisis yet? Or are you treating it like the 85,000 housing pledge and just not mentioning it so it doesn't exist? Last time you seemed to say fifty-something percent of scientific laws can't be trusted, didn't you?
This paradox still depends on how humans see it, but humanity isn't just one single brain; everyone's view is different.
Reply to Guest from 182.239.85.x: Donald Trump is trying to solve it, isn't he? He has declared war on Harvard.
It sounds a bit odd, though: if the AI administrator believes AI cannot possibly wipe out humanity, then it becomes possible. If the AI administrator believes AI can possibly wipe out humanity, then it becomes impossible.
So Donald Trump hasn't solved it, and you still have blind faith in science? Besides, Donald Trump going after Harvard has nothing to do with solving the replication crisis. Put a bit of a cap on your bullshitting and this discussion will be a bit more efficient. Saying one thing one moment and another the next, or making things up on the spot, are two habits to avoid, and I will keep reminding you of both.
Perfectly normal. When a driver thinks Murphy's Law isn't a scientific law and has no guidance value, the odds of killing someone on the road go up. Just like a pilot who thinks that with TCAS a mid-air collision can never happen.
Reply to Guest from 124.244.37.x: Donald Trump is going after Harvard over several things:
1) Shielding antisemitic protests and pro-Palestinian protests.
2) DEI (saying Harvard doesn't even know 2+2, i.e. Harvard won't acknowledge that a man is a man and a woman is a woman; in other words, attacking Harvard for backing the transgender movement).
3) Science has a very serious Replication Crisis, meaning science itself isn't really all that trustworthy. The implication: even though Harvard ranks high in science, that doesn't mean Harvard is sound, so there's no need to defer to it.
Saying something isn't the same as answering the question. Please think it through before you try again.