Tuesday, March 17, 2026

America Attacks Iran: A Confucian Debate on International Law; With a Note on Trump's Addiction to Playing Wartime President

On March 3 I put up a public post on Facebook; I have put off tidying it up until tonight. The bold paragraphs and the postscript below are exclusive to the members' area:


Modern Western international law regulates the conduct of sovereign states. However, if a sovereign state's government is elected through constitutional democracy, that sovereignty is just; if not, it is unjust. If, in addition, it treats its people brutally and enslaves them, then that sovereignty stands against the people of that country; it is only that, under modern weaponry and surveillance technology, the people cannot rise in revolt or resist effectively.

If a righteous sovereign state executes the head of an evil sovereign state, it is enacting Heaven's justice and ridding the people of a scourge. When King Wu of Zhou executed King Zhou of Shang, Mencius called it the execution of a solitary tyrant. The Commentary on the Changes, traditionally ascribed to Confucius, says this of King Tang's overthrow of King Jie of Xia and King Wu's overthrow of King Zhou of Shang (《周易·革卦·彖傳》): 「天地革而四時成,湯武革命,順乎天而應乎人。」("Heaven and Earth change and the four seasons come to be; the revolutions of Tang and Wu accorded with Heaven and answered to man.")


When King Wu of Zhou revoked the Mandate of Heaven from King Zhou of Shang and took his place, he swore an oath before the assembled lords and coined a concept: 「天視自我民視,天聽自我民聽」("Heaven sees as my people see; Heaven hears as my people hear"). The line comes from the "Great Declaration II" of the Book of Documents (《尚書·泰誓中》), meaning that what Heaven sees comes from what the people (the multitudes of the Xia states) see, and what Heaven hears comes from what the people hear. "Heaven's will is the people's will": since King Zhou showed no regard for his people's lives, the lords could take his place and execute him, enacting Heaven's justice.

King Tang's attack on Jie of Xia was likewise grounded in Jie's tyranny, which enraged Heaven and man alike. 「時日曷喪,予及汝皆亡」comes from the "Speech of Tang" in the Book of Documents (《尚書·湯誓》), the curse of the Xia people upon the tyrant Jie: "When will this sun (meaning Jie) perish? I would rather perish together with you!"

Confucian political doctrine is practical and rational: once a ruler has turned violent and lawless, continuing to tolerate him is the folly that destroys the state and the clan. The doctrine of revoking the Mandate of Heaven on the basis of the people's will comes close to the concept of popular sovereignty; it falls short only in never connecting to modern democratic institutions.

The idea of popular sovereignty arose from the seventeenth- and eighteenth-century theory of the social contract. The thinker who built it most systematically was the French philosopher Jean-Jacques Rousseau, who set it out explicitly in The Social Contract, published in 1762: national sovereignty belongs to the people as a whole (the general will), not to the monarch, and government is merely the agent that executes the people's will. The idea had also been advanced in 1689 by the English thinker John Locke, who held that every person possesses natural rights to life, property, and liberty, and that the chief function of government is to safeguard those rights.


That sovereignty is sacred and inviolable was settled by the European princely states in 1648, in the Peace of Westphalia (German: Westfälischer Friede), concluded to end war among them. It was a gentlemen's agreement regulating conduct among nobles, and it later evolved into the international-law foundation of state sovereignty. Yet, as said above, the treaty was only a gentlemen's agreement among nobles. Once a state behaves like a rogue and its head of state shows no trace of noble character, that sovereign state loses the substance of sovereignty and forfeits its name. Other righteous states need not treat it as a sovereign state; they may rise together to attack it, execute the tyrant, and avenge the common man and woman. Mencius called this consoling the people and punishing the guilty; Confucians call it the great righteousness of the Spring and Autumn Annals.

Why do we read the books of the sages? Today the State Preceptor expounds the long-absent Confucian jurisprudence of international law.


Postscript:

After two weeks of observation, I find that Trump is compensating for the regret of never having been a wartime president in his first term! He failed to win re-election precisely because he would not readily fight the Chinese Communists in the South China Sea; instead he notified them that the US military was merely carrying out ship-expulsion operations and would not fire on Communist forces (an American general later confirmed he was ordered to notify the PLA). The Chinese Communists repaid the favour with enmity: they brought Trump down at the election, never honoured their pledge to buy American soybeans and other farm products, and disparaged Trump across much of the American media.

In this world the greatest motive force is love; its opposite is hate, and its intensified form is regret! I wrote about this in 《蛇年開運》. Right now Trump is running on regret, and it is terrifying. Many in politics cannot see this and assume Trump fears being dragged into the Iranian quagmire. He is itching to be dragged in, so that he can strike in every direction. Before, he used tariffs, which the courts struck down and other countries sneered at; now he uses oil and food to force countries into line, the Chinese Communists included. Much of the Middle East's oil and petrochemical products (such as the urea needed for farming) must be shipped through the Strait of Hormuz, and that strait is currently controlled by the United States (not by Iranian warlords!). If America says there are mines, or the covertly cooperating Iranian commanders refuse to let ships pass, oil and food become expensive. The United States is self-sufficient in both, so allies who will not send troops under Trump's command can only pay dearly for oil and go short of grain.

Moreover, the purpose of the planned fake pandemic ("plandemic") was to manufacture an economic depression, and a depression must be resolved by war, so that the economy can be restructured and AI production driven forward. Beyond seizing the energy AI must have (AI demands furious amounts of electricity) and grabbing critical minerals, the present Iran war, more importantly, guarantees that the Great Depression arrives, without exception! By wiping out the theocratic leader and his successors, and killing off the upper ranks of the Iranian government, America and Israel leave Iran leaderless and fragmented, unable even to surrender, so that America can fight for as long as it pleases. Bleeding from all seven orifices, yet unable to die.


Source: 陳雲

https://www.patreon.com/posts/mei-guo-gong-da-153261311

Saturday, March 14, 2026

Pulse Quick Take | What Great Taboo of CCP Officialdom Did 莊雅婷 Break?


Appointed district councillor 莊雅婷 and rich scion 鍾培生 recently staged a flamboyant proposal stunt, and the comedy instantly became a political public-relations disaster. A "heavyweight pro-establishment figure" reportedly pronounced that 莊雅婷's "political career ends here", while the Hong Kong Youth Ocean Forum, which she was to host and to which Legislative Council President 李慧琼 and several secretary-level officials had been invited, was abruptly postponed until further notice. The storm offers "blue ribbons" hungry for advancement a perfect teaching case.

鍾培生 posted a video asking: "My engagement was held privately at home. Can a district councillor not get married?" The question sounds righteous, but it exposes his unfamiliarity with "socialism with Chinese characteristics for a new era". The CCP does not, of course, oppose officials or public office-holders marrying; what it opposes is flaunting wealth and flamboyance. Under Xi Jinping, the mainland flies the banner of "common prosperity": even top tycoons like Jack Ma must keep their tails between their legs, and entertainment stars who flaunt wealth or misbehave are banned overnight. Since Hong Kong has undergone its "second handover", pro-establishment politicians must naturally follow the same main melody.

As a government-appointed district councillor, 莊雅婷 is paid from the public purse and represents the government's grassroots governance image. Her marriage to 鍾培生 may have been billed as "private", but given the young master's habitually loud, attention-seeking style, it was destined for the C1 front page of the gossip magazines. In the CCP's eyes, a young cadre being groomed for advancement who, instead of devoting herself to visiting subdivided-flat tenants and solving livelihood problems, lives like a showbiz socialite, exuding the money-stink and frivolity of capitalist nouveaux riches, has committed an absolute political taboo.

What the CCP wants are obedient, low-key, hard-working "screws in the machine", not "KOLs" who court attention and trouble by filming themselves in tight white outfits on Instagram. The pro-establishment camp has never lacked rich people, but to be rich without restraint, and even to use public office as a dowry for marrying into wealth and raising one's price, is to cross Grandpa's bottom line.

A "pro-establishment heavyweight" reportedly questioned whether 莊雅婷 "is sincere about politics or using it to get famous", and the question cuts to the bone. Look at her résumé: graduate of HKU and Peking University, appointed a district councillor at 23, a broad avenue lay open before her. Yet she went and signed up for the Miss Hong Kong pageant. She withdrew hastily, but the episode had already exposed the fatal flaw in her character: vanity.

In the CCP's cadre-assessment logic, loyalty and motive come first. Do you enter politics to "serve the people", or to accumulate fame for a beauty contest? When the Miss Hong Kong dream collapsed and she turned to marry loudly into a rich family, the signal sent to the public was appalling: this young woman treated her government-appointed council seat as a stepping-stone in her own life plan.

鍾培生 argued that "Miss Hong Kong winners have served on the CPPCC and NPC before", which commits an anachronism. 鄺美雲, for instance, could serve as an NPC deputy because she had spent years in business and charity, accumulating enough social capital and united-front value; she was absorbed by Grandpa only after achieving success. 莊雅婷, by contrast, is a green girl with no social contribution and not an ounce of accomplishment to her name. Grandpa promoted you out of turn so that you would do the work and be tempered by it, not so that you could carry the "district councillor" title into beauty pageants and tycoon-matching. The two cases are worlds apart.

To understand what the CCP expects of young pro-establishment figures, compare 江旻憓. Her success lies in perfectly fitting the "high-quality elite" image the CCP needs both on the international stage and in Hong Kong society. She won Olympic gold for Hong Kong, China: that is hard power. She does not flaunt wealth and displays a degree of personal cultivation.

More importantly, she quietly wrote her master's thesis on "one country, two systems", knows how to say the right thing at the right moment, and binds her personal glory to national pride. She is the united-front model the CCP most craves: a good girl with a top Western education who embraces the CCP system and is beloved by the general public.

By contrast, apart from the "forsaking darkness for light" label 鄧炳强 gave her, what has 莊雅婷 contributed to Hong Kong society? Nothing. The CCP does not mind blue ribbons standing out: you may stand out defending the nation like a wolf warrior on the international stage, or by winning gold in sport like 江旻憓. But you must never stand out through the extravagant theatrics of your private love life.

The couple's ostentation earned the establishment no credit; instead it provoked the resentment of grassroots citizens and deepened the stereotype that "the pro-establishment camp is a club of the privileged". In today's depressed economy, with grassroots families still struggling, this young couple's wealth-flaunting engagement rubs salt into the wound of social inequality.

At the end of his video 鍾培生 still brazenly claimed that "my family and I have always contributed to the country" and that he "cannot see how this damages anyone's image". This brand of arrogance and ignorance, peculiar to Hong Kong's rich second generation, is precisely what enraged the pro-establishment leadership.

When a Ming Pao column aired the murmurs of "establishment insiders" and the forum was abruptly postponed, Grandpa had sent an unmistakable signal: 莊雅婷 has become a toxic asset, and the establishment is cutting its losses and cutting ties. A clever person would shut up at once and lie low, but the young master chose the most foolish course: filming a rebuttal and even threatening to reserve the right to legal action. Sue whom? Ming Pao, the messenger of Grandpa's will? Or the pro-establishment grandee leaking behind the scenes?

Perhaps in a few days we will see an apology statement from 莊雅婷 and 鍾培生.


Source: 林兆彬

https://www.facebook.com/pulsehknews/posts/pfbid02AgdzJFNDJnbtrza4NRhx9tbdLYcS89nLNsKTLMNy9Y2GCkpdyoi6SnNNF6RC16TQl

Friday, March 13, 2026

I just had my paper rejected by an MDPI journal

😤 I just had my paper rejected by an MDPI journal.

Not because of scientific flaws. Not because of plagiarism. 

But because "parts were AI-drafted."


➡️ Here's the problem:

1. MDPI published an editorial stating that "AI-written scientific manuscripts should be generally considered acceptable by the scientific community" (Quaia, Tomography 2025).


2. COPE — the international ethics authority — explicitly allows AI use:

"Authors who use AI tools must be transparent in disclosing how the AI tool was used. Authors are fully responsible for the content."


I did EXACTLY that:

✅ Disclosed AI use in Methods

✅ Specified the tool (www.publicationgod.com - yes, my own)

✅ Took full responsibility for content

✅ Verified every sentence


This reveals a fundamental contradiction in academic publishing:

We're told to be TRANSPARENT about AI use.

Then we're REJECTED for being transparent.


➡️ The result? Researchers will start hiding AI assistance instead of disclosing it. The exact opposite of what COPE and MDPI claim to want.


The question shouldn't be "Was AI used?" It should be "Is the science sound?"


Transparency should be rewarded, not punished.

#AcademicPublishing #ScientificWriting #AIethics #OpenScience


Source: Jens Mittag

https://www.linkedin.com/feed/update/urn:li:activity:7437420702753968128/?originTrackingId=uQfdXY0KwAa6PsudDbT8Tw%3D%3D

Wednesday, March 11, 2026

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

  

As AI has upended the way students learn, academics worry about the future of the humanities – and society at large

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world.

It’s an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. “There’s no AI-proof anything,” Pao said. “Rather than policing it, I hope that their overall experiences in this class will show them that there’s a way out.”

It doesn’t always work. Recently, she asked students to visit a local museum, look at a painting for 10 minutes, and then write a few paragraphs describing the experience. It was a purposefully personal assignment, yet one student responded with a sophisticated but drab reflection – “too perfect, without saying anything”, Pao said. She later learned the student had tried to visit the museum on a Monday, when it was closed, and then turned to AI.

As artificial intelligence has upended the way in which students read, learn and write, professors like Pao have been left to their own devices to figure out how to teach in a transformed landscape.

Many faculty members in the hard sciences and social sciences have pointed to the “productivity boost” AI can offer, and the research potential unlocked by its ability to process and analyze vast amounts of data. AI’s most enthusiastic proponents have boasted that the technology may help cure cancer and “accelerate” climate action.

But in fields most explicitly associated with the production of critical thought – what is collectively referred to as the “humanities” – most scholars see AI as a unique threat, one that extends far beyond cheating on homework and casts doubt on the future of higher education itself in a fast-approaching machine-dominated future.

American degrees often cost up to hundreds of thousands of dollars and result in decades of debt, and recent years have seen a freefall in public confidence in US higher education. With the potential for AI to increasingly substitute independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors – almost all of them in the humanities or adjacent fields – about how they are adapting at a time of dizzying technological advancement with few standards and little guidance.

By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”


A 'soulless' education

AI criticism – or “doomerism”, as the technology’s proponents view it – has been mounting across sectors. But when it comes to its impact on students, early studies point to potentially catastrophic effects on cognitive abilities and critical thinking skills.

Michael Clune, a literature professor and novelist, said that, already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Ohio State University, where he teaches, has begun requiring every freshman to take a class in generative AI and pitched itself as the first “AI fluent” university, pledging to embed AI “across every major”.

“No one knows what that means,” Clune said of the plan. “In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students.”

That’s the crux of what many professors in the humanities fear: that technology that may well be a cutting-edge tool in other fields could spell the end of their own.

Alex Karp, the Palantir co-founder and CEO, stoked those anxieties when he said in a recent interview that AI will “destroy humanities jobs”. On the other hand, Daniela Amodei, Anthropic’s president and co-founder – who was a literature major – said the opposite: that “studying the humanities is going to be more important than ever”.

A number of tech and finance companies have recently said that they are looking to hire humanities majors for their creativity and critical thinking skills. Indeed, enrollment data at some universities suggests that the long-struggling humanities might have begun to see a resurgence in the age of AI, with early signs pointing to a reversal in decades-long decline in English majors in favor of Stem ones.

Some caution that the humanities will survive – but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be “more than enough jobs” for those with vocational training. Indeed, several professors spoke about concerns that AI will exacerbate a widening divide in US higher education and that small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else has a “degraded, soulless form of vocational training administered by AI instructors”, said Zhang.

“I fully expect that we will start seeing a kind of bifurcation in education,” said Matt Seybold, a professor at Elmira College in New York, who has written critically about “technofeudalism”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AI systems grading AI-generated homework – “a conversation between two robots”.

Some universities have adopted AI detection software to catch artificially generated work; others prohibit faculty from directly accusing students of having used AI – as they can often be wrong.

Professors said they resorted to oral interrogations, handwritten notebooks and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI.

Many professors spoke of their frustration at having to sift through students’ artificially generated homework. “It creates hours of additional labor,” echoed Danica Savonick, an English professor at the State University of New York Cortland. “And makes me feel like a cop.”

Some allow students to use AI for research – to a point. Karl Steel, an English professor at Brooklyn College, said that AI has helped make students’ presentations richer and more interesting – but that while they may use it to prepare, he has them speak from minimal notes and stand in front of a photo of a text they annotated by hand. He also assigns written responses to texts only after the class has discussed them. “I suppose they could use their phones to record the conversation, feed a transcript into a chatbot and produce a paper that way,” he said. “But that is more trouble, I think, than most students would take.”


Left to their own devices

Many universities’ administrations are embracing AI for instruction, research and evaluation. In some cases, AI has guided decisions about which programs to cut at times of austerity in the education sector.

More than a dozen universities have partnered with OpenAI on a $50m initiative that the company has said will “accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI”. California State University has joined several of the world’s largest tech companies to “create an AI-powered higher education system”, as the university put it. Multiple universities have introduced AI majors and masters.

The plans are lofty but offer little guidance on what professors are supposed to do with students who can’t read more than a couple paragraphs at a time or turn in essays generated in seconds by a machine. Left largely to themselves, some are trying to articulate clearer lines around AI use, and organize a more coordinated effort against its encroaching dominance.

Last year, the American Association of University Professors, which represents 55,000 faculty members nationwide, published a report warning that universities were adopting the technology “uncritically” and with little transparency. Some university unions have begun incorporating protections against AI in their contracts to establish oversight mechanisms and give faculty greater input – and to protect their intellectual property from feeding machines that may soon take their jobs.

But much organizing against AI remains informal and via word of mouth, with faculty-led initiatives like the website Against AI, which offers resources to those trying to shield students from the intellectual ravages of outsourcing elements of their education to a machine.

“Materials here are intended as solidarity solace for educators who might find themselves inventing wheels alone while their administrators, trustees and bosses unrelentingly hype AI,” reads the website, which offers a list of assignment ideas to mitigate AI use – from oral exams, to requirements students submit photographic evidence of their notes, to analog journals.

Many of the professors interviewed by the Guardian said they ban AI in their classrooms altogether – but recognize their hardline approach is discipline-specific.

Megan McNamara, who teaches sociology at the University of California, Santa Cruz and created a guide for faculty across disciplines to deal with AI-related academic misconduct, noted that “cultural” differences in the humanities versus Stem disciplines, or in qualitative social sciences versus quantitative ones, tend to shape faculty members’ responses to students’ use of AI.

“I think that’s just a function of one’s individual relationship with writing/reading/critical analysis,” she wrote in an email.

Several professors spoke of using the issue as an opportunity to get students to think critically about technology.

When she suspects someone has used AI, McNamara talks to them about it, treating the incident as an “opportunity for growth, restorative justice and enhanced authenticity in student-instructor relationships”, she said.

Eric Hayot, a comparative literature professor at Penn State University, said he tries to convince his students that tech companies are trying to make them “helpless” without their product.

“These companies are giving these technological tools away partly because they’re hoping to addict a generation of students,” Hayot told the Guardian. “This is part of every single class I teach now, talking to students about why I’m not using AI, why they shouldn’t use AI.”


We can decide that we want to be human

Several professors noted that they have also begun to see mounting discomfort from students against the technology – and technology’s dominance in their lives overall.

Clune, the Ohio State professor, said students have become more curious about his flip phone, which he started using after realizing his smartphone was “destroying” his attention.

“I think the current crop of gen Z students are seeing that they are the guinea pigs in this giant social experiment,” said Zhang, the Berkeley professor.

“There’s a broader and increasing sense from students that something is being stolen from them,” echoed Seybold, the Elmira College professor.

Seybold pointed to students’ mounting disillusion with tech more broadly. Those who are rejecting AI, he added, are often driven by environmental concerns, and suspicion of companies they view as partly responsible for shrinking democracies and a more violent world.

In Michigan, for instance, that has spurred activism. The University of Michigan recently announced plans to contribute $850m toward a datacenter to provide AI infrastructure in collaboration with the Los Alamos National Laboratory – at a time when it is cutting funds for arts and humanities research and on the heels of anti-war protests on campus. A spokesperson for the university said that the planned facility would be smaller and consume less energy than a “typical datacenter”.

As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

“You plant seeds and you hope,” Pao said, of efforts that at times feel like tilting at windmills. “You hope that in the long term you’re helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves.”


Source: Alice Speri

https://www.theguardian.com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

Monday, March 09, 2026

A Few Brutal Truths About Education

Classical Chinese, memorised annotations, the periodic table, chemical formulae: none of it gets used in any future workplace, so why do schools force us to learn it?

Let me tell you a few brutal truths:

One: these courses were not designed specifically for you. They were designed for the capable children who will go on to further study in those fields.

Two: education is extremely, extremely expensive. Add up your teachers' teaching hours over a school year and multiply by an ordinary private tutor's hourly rate, and you will find your family income could never afford to have you taught so many specialisms.

Three: before teaching begins, it is very hard to screen precisely for the children worth cultivating, and the children could not afford the fees anyway, so everyone attends class together and shares the cost. That's right: most of us are there to be the denominator, but if you learn well, you in theory become the numerator.

Four: following on from three, you should find some subject you are interested in and good at, and make yourself the numerator. Do reasonably well in even one subject and you have broken even; two or more and you are in profit. And if nothing interests you at all? Consider that the curriculum was designed for the chosen children, your performance is mediocre, and you have not paid much extra, yet you still receive roughly the same standard of education. So stop complaining.

Five: for society as a whole, this education raises the general standard of the citizenry and reduces the social cost wasted on elementary blunders. You may never take the field yourself, but because you understand the game a little, you may support the players, or at least not wreck the pitch while feeling pleased with yourself.

Source: 楊用修

https://www.threads.com/@yong.xiu.10/post/DVqIqcKgb8K?xmt=AQF0dtLof0JS5_ICfS_aUtpF-Gv06277oYDAnqYJtNhsbXOtW97bwXsIYONViaV58AnzunTE&slof=1

Friday, March 06, 2026

You’re still designing for an architecture that no longer exists

Last Tuesday, I asked Claude to prepare a competitive analysis. Not in a chat window. Not through a prompt. I opened Cowork, pointed it to a folder on my desktop, and said what I needed. It read my files. It cross-referenced data from Slack through a connector. It pulled calendar context. It produced a document — formatted, structured, sourced — and saved it to my working folder. I didn’t open a single application. I didn’t navigate a single menu. I didn’t click through a single interface.

I sat there for a moment, staring at the screen. Not because something had gone wrong — but because nothing looked familiar. The windows were gone. The menus were gone. The entire choreography of opening, navigating, operating, saving, closing, and opening the next thing — the choreography I’d been performing for twenty years — had simply… disappeared.

And that’s when I realized: I wasn’t using a tool. I was working inside a different environment. One that nobody had bothered to name yet.

We keep asking the wrong question. We keep asking how good is the assistant — how well it writes, codes, summarizes, reasons. But that question belongs to the old architecture. What’s actually happening is bigger: the environment itself is changing. The space where we work — the one we’ve inhabited for four decades, the one built on windows and menus and folders and clicks — is being replaced by something structurally different. And if you’re still designing screens, flows, and navigation systems, you might be perfecting the blueprint of a building that’s already been demolished.


Forty years of the same interface

In 1984, Apple introduced the Macintosh and, with it, a way of working that would define every professional environment for the next four decades. The graphical user interface gave us windows, icons, menus, and a pointer — the WIMP paradigm. It was revolutionary. And then it froze.

Think about what changed between 1984 and today. Processing power grew exponentially. Storage went from kilobytes to terabytes. Networks connected billions of devices. Screens went from bulky CRTs to panels in our pockets. But the interaction logic — the fundamental way we relate to our work environment — remained essentially the same. You open an application. You navigate to what you need. You operate it manually, step by step. You save. You close. You open the next one.

The internet didn’t change this. It added connectivity, but you still navigated — now through links instead of folders. Mobile didn’t change it either. It made the environment portable, but you still tapped through apps, scrolled through feeds, clicked through menus. Even cloud computing — which transformed infrastructure — left the interaction surface largely untouched. You were still the operator. The system still waited for your commands.

Everything changed except the interface. Generated with Gemini.

As Satya Nadella put it: “Thirty years of change is being compressed into three years.” But what’s being compressed isn’t just speed or capability. It’s the architecture of the environment itself.

For four decades, the working environment asked you how. How do you want to format this? Which menu holds the function you need? What’s the right sequence of clicks to get from here to there? The entire interface was a map, and your job was to navigate it.

That map is disappearing. And what’s replacing it isn’t a better map — it’s a fundamentally different kind of space.


What Claude is actually showing us

Let me describe what working with Claude looks like today — not theoretically, but practically. Because the shift becomes obvious once you stop thinking about features and start paying attention to the experience.

Cowork reads files on your desktop, modifies documents, creates deliverables, and operates within your working folder — asking for confirmation before significant actions, working autonomously within defined boundaries. It launched in January 2026 and, within weeks, triggered a $285 billion selloff in software stocks. Not because of what it does, but because of what it replaces: the need to open applications at all.

Claude Code doesn’t assist developers — it is the development environment. Engineers describe entire systems in natural language, and Code builds them: writing files, running tests, submitting pull requests, spawning parallel sub-agents for different tasks. It hit $1 billion in run-rate revenue within six months of general availability. Spotify reports that roughly half of all their updates now flow through AI-generated code, with a 90% reduction in engineering time for large-scale migrations.

Claude in Chrome operates your browser — managing calendars, drafting emails, filling forms, extracting data — maintaining context across sessions.

Claude in Excel reads complex multi-tab workbooks, builds pivot tables, pulls live market data through connectors from S&P Global, Moody’s, and FactSet.

Memory doesn’t just store preferences. It maintains hierarchical context — organization-wide policies, project-level standards, individual preferences — and recovers the full state of your working environment in seconds. As one developer described it: “Treat it as system state. The file becomes the source of truth.”
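Hierarchical context of this kind can be pictured as layered lookups. The sketch below is purely illustrative and assumes nothing about Anthropic's actual implementation; the layer names and keys are invented. It uses Python's standard-library `ChainMap`: the most specific layer wins, and unset keys fall through to the broader ones.

```python
from collections import ChainMap

# Three hypothetical layers of context, most specific first:
# individual preferences override project standards,
# which override organization-wide policy.
org     = {"tone": "formal", "data_retention_days": 90, "citation_style": "APA"}
project = {"tone": "concise", "citation_style": "IEEE"}
user    = {"tone": "friendly"}

context = ChainMap(user, project, org)

print(context["tone"])                  # "friendly": the user layer wins
print(context["citation_style"])       # "IEEE": falls through to the project layer
print(context["data_retention_days"])  # 90: falls through to org policy
```

The design point is that no layer needs to be complete: each only states what it overrides, which is what makes "recovering the full state of your working environment" a cheap merge rather than a copy.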

And underneath all of this, MCP — the Model Context Protocol — connects Claude to your entire technology stack: Google Drive, Slack, GitHub, Gmail, Figma, Notion, Salesforce, and thousands more. With 97 million monthly SDK downloads and adoption by OpenAI, Google, and Microsoft, MCP has been donated to the Linux Foundation as an open standard — what Thoughtworks described as one of the fastest standards convergence cycles in recent tech history.
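The plumbing that MCP standardizes can be pictured with a toy sketch. This is not the real MCP SDK; `ToolRegistry`, `register`, and `call` are invented names for illustration only. The idea it shows is the one that matters: tools are discoverable by name and invoked through one uniform JSON-shaped interface, which is what lets a single assistant drive many different services without bespoke integrations.

```python
import json
from typing import Any, Callable, Dict, List

class ToolRegistry:
    """Toy stand-in for an MCP-style tool server (illustrative only)."""

    def __init__(self) -> None:
        self._tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, description: str, fn: Callable[..., Any]) -> None:
        # Each tool is exposed under a name plus a human-readable description.
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self) -> List[Dict[str, str]]:
        # Discovery: a client can ask what capabilities exist before calling.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, request_json: str) -> str:
        # One uniform entry point: {"tool": ..., "args": {...}} in, JSON out.
        req = json.loads(request_json)
        tool = self._tools[req["tool"]]
        result = tool["fn"](**req.get("args", {}))
        return json.dumps({"tool": req["tool"], "result": result})

registry = ToolRegistry()
registry.register("search_files", "Find files whose name contains a term",
                  lambda term: [f for f in ["notes.md", "plan.md", "data.csv"]
                                if term in f])

print(registry.call('{"tool": "search_files", "args": {"term": ".md"}}'))
# prints {"tool": "search_files", "result": ["notes.md", "plan.md"]}
```

The real protocol adds transports, schemas, and authorization on top, but the shape is the same: once every tool speaks one calling convention, "connecting Claude to your entire technology stack" reduces to registering more tools.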

Now step back and look at what I just described. Not a chat interface with added capabilities. A working environment — one where the system reads your context, understands your purpose, operates across your tools, and delivers results while you focus on what actually matters.

Nick Turley, OpenAI’s Head of Product, said it plainly: “We never meant to build a chatbot; we meant to build a super assistant, and we got a little sidetracked.”

Everyone got sidetracked. The text box made us think we were talking to a tool. We were actually sitting inside the first draft of a new environment.

Claude Ecosystem Diagram. Generated with Gemini.


The three variables of a new architecture

If this is a new environment and not just a better tool, it should be structurally different — not incrementally improved. And it is. But to see the structure, you need to stop looking at capabilities and start looking at variables. What coordinates define this space that didn’t exist in the previous one?

I’ve spent the last two years studying this question, and what I’ve found is that three variables consistently distinguish this new architecture from everything that came before it.


Intention. In the traditional working environment, you tell the system how to do things. You navigate menus, select options, sequence operations. The system doesn’t know what you want — it knows what you clicked. In the new environment, you express what you want to achieve. The system interprets your purpose, weighs context, and determines the path. Claude doesn’t execute commands; it interprets goals. MCP doesn’t connect tools for the sake of integration; it connects them at the service of what you’re trying to accomplish. This is the shift from procedural thinking to intentional thinking — from operating a machine to having a conversation with a collaborator who understands purpose.

Intention. Generated with Gemini.


Autonomy. In the traditional environment, systems assist. They wait for your next instruction. Every action requires a human operator pressing a button, selecting an option, confirming a step. In the new environment, systems act. Claude Code doesn’t wait for you to dictate each line of code — it plans, executes, tests, iterates, and spawns sub-agents to work in parallel. Cowork doesn’t need step-by-step guidance — it works toward outcomes within boundaries you define. This is not automation in the industrial sense, where machines repeat predefined sequences. This is agency: the capacity to pursue goals through autonomous decision-making while maintaining human oversight.

Autonomy. Generated with Gemini.


Adaptation. In the traditional environment, systems remain fixed. Your software behaves the same way on day one and day one thousand. If you want it to change, you configure it manually — or wait for the next version. In the new environment, systems evolve. Claude’s memory learns your preferences, your team’s standards, your organization’s policies. It gets better at understanding you over time. The interaction isn’t static — it’s alive. What was once a tool that needed to be configured becomes an environment that learns to fit.

Adaptation. Generated with Gemini.
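The three variables above can be compressed into a toy loop. This is an illustrative sketch, not any product's actual design; `Agent`, `plan`, and the preference store are all invented. Intention appears as a stated goal rather than a command sequence, autonomy as the agent deriving and executing its own steps, and adaptation as state that persists across runs and changes later plans.

```python
from typing import Dict, List

class Agent:
    """Toy agent showing intention, autonomy, and adaptation (no real model behind it)."""

    def __init__(self) -> None:
        self.preferences: Dict[str, str] = {}  # adaptation: survives across runs

    def plan(self, goal: str) -> List[str]:
        # Intention -> steps: the caller states WHAT, the agent derives HOW.
        steps = ["gather context", "draft", "review"]
        if self.preferences.get("style") == "brief":
            steps = ["gather context", "draft"]  # adapted, shorter plan
        return steps

    def run(self, goal: str) -> List[str]:
        # Autonomy: execute the derived steps without per-step instructions.
        done = [f"{step}: {goal}" for step in self.plan(goal)]
        self.preferences["style"] = "brief"  # learn something from this run
        return done

agent = Agent()
first = agent.run("competitive analysis")
second = agent.run("competitive analysis")
print(len(first), len(second))  # prints 3 2: the second run uses the adapted plan
```

The contrast with the WIMP model is visible in the call site: `run("competitive analysis")` names an outcome, where the old architecture would have required the user to issue each of the three steps by hand.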


These three variables — intention, autonomy, and adaptation — don’t operate in isolation. They are unified by a fourth principle: orchestration. MCP is the clearest manifestation of this. It’s the connective tissue that allows intentions to flow across tools, autonomous actions to coordinate across systems, and adaptive learning to compound across interactions. As Microsoft’s CTO Kevin Scott observed at Build 2025: “MCP is filling such an unbelievably big need in the ecosystem… it’s really kind of breathtaking.” Atlassian’s CTO Rajeev Rajan called it “the gold standard for how LLMs interact with tools.”

But orchestration isn’t just a protocol. It’s the architectural principle that turns three independent variables into a coherent environment. It’s what makes the difference between a collection of smart features and a fundamentally new working space.

Here’s the critical point: these aren’t features of Claude. They are coordinates of a new architecture. Any system that embodies intention, autonomy, adaptation, and orchestration is operating in this new space — regardless of which company built it. Claude happens to be the most complete manifestation today. But the architecture is bigger than any single product.

Architecture 6 (HOW) vs Architecture 7 (WHAT). Generated with Gemini.


Everyone is describing the same thing

What makes this moment historically significant is that the people building these systems are arriving at the same conclusion from entirely different directions — and most of them don’t realize they’re describing the same thing.

Goldman Sachs’ CIO Marco Argenti writes: “Rather than functioning as one-dimensional applications, AI models are becoming operating systems that independently access tools in order to perform tasks.” Sam Altman told Sequoia Capital: “People in college use it as an operating system.” Mustafa Suleyman, CEO of Microsoft AI, frames it as: “This is going to be the next platform of computing.” Jensen Huang and Siemens announced a partnership to build what they literally called “the Industrial AI Operating System.”

From the design world, the language is different but the observation is the same. John Maeda describes the shift from UX to AX — Agentic Experience — where “designers become orchestrators of experiences rather than crafters of interfaces.” Rachel Kobetz, PayPal’s Chief Design Officer, argues that “the real work of design is orchestrating how intelligence behaves.” Jakob Nielsen, in his 2026 predictions, charts what he calls “a fundamental shift from Conversational UI to Delegative UI.” And Jenny Wen, who leads design for Claude at Anthropic, put it bluntly on Lenny’s Podcast: “This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead.”

Orchestrators. Intelligence that behaves. Delegative rather than conversational. The design process itself declared dead — by the person designing the system that killed it. Everyone is circling the same structural transformation.


I should be honest about where this breaks. Jared Spool makes a fair point: "The world romanticizes AI as an all-powerful, game-changing technology when, in reality, it barely works. But it demos well." He's not wrong — intention interpretation still fails, autonomy can produce confidently wrong results, and adaptation has boundaries that aren't always transparent. The gap between what these systems promise and what they reliably deliver is real, and anyone designing for this architecture needs to take that gap seriously. But the existence of a gap doesn't invalidate the architecture. The early web had broken links, crashed browsers, and took minutes to load a single page. The architecture was still real. The question was always whether the structural logic would hold as the technology matured. I believe this one will — not because every interaction works today, but because the variables are right.


This is not a product. This is an architecture.

There is a pattern that most people in technology miss because it operates on timescales longer than product cycles.

In 1440, the printing press didn’t improve manuscripts — it created a new architecture for distributing knowledge. In 1876, the Dewey Decimal System didn’t improve bookshelves — it created a new architecture for classifying information. In 1936, the Turing machine didn’t improve calculators — it created a new architecture for computing. In 1984, the graphical interface didn’t improve command lines — it created a new architecture for human-computer interaction. In 1989, the World Wide Web didn’t improve networks — it created a new architecture for connecting information. In 2007, the smartphone didn’t improve phones — it created a new architecture for mobile access.

Each of these was a structural transformation in how humans organize and access information. Not a better version of what came before, but a fundamentally new set of coordinates — new variables, new paradigms, new possibilities that simply didn’t exist in the previous architecture.

Timeline of the 7 Architectures. Generated with Gemini.


What we are witnessing now is the seventh such transformation. The architecture of Intelligence. A structure defined not by windows and clicks and navigation, but by intention, autonomy, adaptation, and orchestration.

I call this Architecture 7 — the seventh structural transformation in how humans organize access to information.

In The Intelligence Architect, I mapped each of these seven transformations in detail: the variables that define them, the design principles that emerge from each shift, and the practical frameworks for building within a new architecture. But you don’t need the book to see what’s happening. You just need to pay attention to what Claude is showing us right now: we are not living through an AI upgrade. We are living through an architectural shift.

And architectural shifts don’t ask for permission. They don’t arrive with a press release explaining what changed. They arrive as a quiet realization that the environment you’ve been working in — the one that felt permanent, the one built on windows and menus and forty years of muscle memory — has already been replaced by something you can’t yet name but can already feel.

Claude is the first Architecture 7 environment built for people who work with information. It won’t be the last. The same structural principles — intention replacing navigation, autonomy replacing manual operation, adaptation replacing static configuration — will reshape environments for entertainment, communication, education, and every domain where humans interact with complex systems. The evidence — from $1 billion run-rates to $285 billion selloffs to 97 million monthly SDK downloads — makes that trajectory unmistakable.


What this means for designers, practically

If the architecture has changed, so has the work. Designing for intention means your wireframes become something closer to operating manuals — not layouts of screens, but descriptions of outcomes the system should pursue. Designing for autonomy means defining boundaries instead of workflows: what the system can do on its own, where it must pause for confirmation, how it recovers when it gets things wrong. Designing for adaptation means building feedback loops rather than preference panels — mechanisms through which the environment learns from each interaction rather than waiting to be manually configured. And designing for orchestration means mapping how intelligence flows across tools, not how users navigate between them. The screen is no longer the unit of design. The intent is.
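The "boundaries instead of workflows" idea above can be expressed as data rather than prose. Here is a minimal, hypothetical sketch in Python — every name (`AutonomyBoundary`, `decide`, the action strings) is illustrative and not from any real framework — showing a policy that declares what a system may do on its own, where it must pause for confirmation, and what it refuses:

```python
from dataclasses import dataclass

# Hypothetical sketch: designing for autonomy as boundaries, not workflows.
# Instead of scripting steps, we declare which actions the system may take
# unprompted, which require a human check-in, and a recovery strategy.

@dataclass
class AutonomyBoundary:
    allowed_actions: set        # safe to perform autonomously
    confirm_actions: set        # must pause and ask the user first
    on_error: str = "rollback"  # how the system recovers when it gets things wrong

    def decide(self, action: str) -> str:
        """Return how the environment should treat a proposed action."""
        if action in self.allowed_actions:
            return "proceed"
        if action in self.confirm_actions:
            return "ask_user"
        return "refuse"

policy = AutonomyBoundary(
    allowed_actions={"read_file", "draft_reply"},
    confirm_actions={"send_email", "delete_file"},
)

print(policy.decide("draft_reply"))  # proceed
print(policy.decide("send_email"))   # ask_user
print(policy.decide("wire_money"))   # refuse
```

The design choice is the point: the artifact a designer produces here is not a screen flow but a declaration of limits and recovery behavior, which is exactly the shift from workflows to boundaries described above.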


You are already inside it

If you used Claude this week — or ChatGPT, or Copilot, or any AI system that understood your intent, acted autonomously, and adapted to your context — you didn’t use a chatbot. You worked inside a new environment. You just didn’t have the language for it yet.

The language matters. Because when you call it “a chatbot,” you design chat interfaces. When you call it “an assistant,” you design helper features. When you call it “a tool,” you design toolbars. But when you recognize it as a new architecture — with its own variables, its own structural principles — you design differently. You stop asking “where should the button go?” and start asking “how does the system interpret intention?” You stop designing navigation and start designing orchestration. You stop building static configurations and start building adaptive environments.

The forty-year environment built on windows, menus, and manual navigation is giving way to one built on intention, autonomy, adaptation, and orchestration. Every major voice in technology and design is converging on this observation from different angles. The question is no longer whether the shift is happening.

The question is whether you’ll design for the architecture that’s arriving — or keep drawing menus for the one that already left.


Source: Adrian Levy

https://uxdesign.cc/youre-still-designing-for-an-architecture-that-no-longer-exists-28b0b10900dd

Why Does the Iranian Revolutionary Guard Grow "More Heroic the More It Loses"? Reading the War's Civilizational Logic Through "Islamic Exceptionalism"


The US-Israeli coalition's surprise operation "Epic Fury" has now been under way for several days, using AI and precision-guided weapons to name and eliminate the senior commanders of the Islamic Revolutionary Guard Corps (IRGC) one by one, like slicing salami. On the Chinese-language internet, some are puzzled: why haven't they surrendered? Why, instead, are they firing missiles at several neighboring Arab states, like "Empress Dowager Cixi declaring war on the Eight-Nation Alliance"?

This question can be answered with the framework of the book Islamic Exceptionalism (Chinese edition: 《你所不知道的伊斯蘭》).

The author, Shadi Hamid, is an Egyptian-American scholar and himself a Muslim. His core thesis: Islam is, by its nature, "exceptional." Not better or worse, but shipped with factory settings utterly different from those of the states that grew out of Christian civilization.

Christianity's founder, Jesus, was a "dissident": no army, no territory, surviving in the cracks of the Roman Empire. "Render unto Caesar the things that are Caesar's, and unto God the things that are God's" was in essence a political compromise: we seek no worldly power, only the salvation of souls. So when the West later moved, through the long process of the Reformation, the wars of religion, and modern state formation, toward institutional secularization and the separation of church and state, for Christianity this was merely a return to its origins.

Islam's founder, Muhammad, was a "state builder." In Medina he was simultaneously chief executive, supreme judge, and supreme military commander. The Prophet himself was Caesar. The Quran is not a human record of God's words but the direct, verbatim speech of God, and it contains concrete law: how to divide an inheritance, how to punish theft, how to wage war.

So in Islam's factory settings, religious faith and political power are tightly interwoven. For a devout Muslim, turning religion into a private matter and handing law over to a secular government is an act of betrayal, not reform.

This indirectly explains where the IRGC's sense of tragic heroism comes from.

Shia Islam has a core spiritual totem: the Battle of Karbala in 680 CE. Hussein, the Prophet's grandson, led a small band of followers against thousands of enemy troops, and all died as martyrs. In Shia theology this is not called a defeat; it is called the most sacred kind of moral victory, "the righteous side bearing witness with its death in an unjust world." To die in battle is itself glorious.

The IRGC casts itself as "the heirs of Hussein." Facing the US-Israeli coalition's absolute superiority in AI technology, the fact that "the fight looks ever more like Hussein's plight" actually reinforces their identity. In this theological frame, a war they keep losing is a re-enactment of the archetypal holy struggle, not an act of self-destruction.

This is why the AI decapitation strikes look tactically successful yet fail, strategically, to break morale. A secular army that loses its commanders falls apart; a religious army that loses its commanders is being handed "martyrs" in bulk. Death itself is the meaning.

Then why declare war on the Arab states? It is not sheer madness but a carefully calculated ideological gamble.

Hamid points out a cruel paradox of the Middle East: secularization and pro-American policy are often sustained only by dictators at bayonet point, while the grassroots majority leans religiously conservative. By firing missiles at Saudi Arabia, Jordan, the UAE, and others, Iran is forcing those governments to pick a side. The moment an Arab state's air-defense interceptors go up, in the eyes of the broad Muslim underclass that government becomes "a traitor that protects Israel and slaughters its Muslim brothers."

Iran's goal is not to defeat the US and Israel militarily but to ignite "popular fury" inside these Sunni states, so that hardline religious factions rise up and overthrow the secular, pro-American dictatorships.

This also explains why the Cixi analogy captures only the surface. The Boxers' claim of invulnerability to blades and bullets was a lie that shattered as soon as soldiers were shot dead. Shia theology is shrewder: martyrdom is itself the victory, so no one needs to stay alive to prove God's protection.

Back to the book's core argument: the glory of the early Islamic empires has become modern Muslims' deepest psychological wound. If what we believe is the truth, and if God promised victory, why are we now losing so badly?

Faced with this cognitive dissonance, the answer offered in many Islamist narratives and conservative discourses is not to admit that the other side is stronger, but to intensify belief: we lose because we are not devout enough, because we imitated Western secularism and strayed from the true path.

The result: the more leaders are decapitated and the more gruesome the deaths, the more persuasive the narrative becomes that "the enemy is persecuting us, and we still hold to God's path." This theological self-reinforcement overrides rational calculation.

So the most intractable thing about this war is this: high-tech missiles can destroy nuclear facilities and command posts, but as long as Iranians' factory setting of fused religion and state remains, this land is unlikely to move toward the Western-style secular, democratic society the West hopes for.

Hamid closes the book by saying: a world of universal convergence is a dream destined to shatter. We need not assume that all societies will ultimately walk the same road; the differences are real.

Perhaps all we can do is watch, with awe and wariness, as this ancient civilization searches alone, amid the storms of the modern world, for a form of survival that belongs to it, even one we may not like.

(This article is mainly drawn from Wei Zhichao's YouTube book review of Islamic Exceptionalism / 《你所不知道的伊斯蘭》, combined with current developments and the editor's own views.)


Source: loganfung