Tuesday, March 17, 2026

America Attacks Iran: A Confucian Debate on International Law; with a Note on Trump's Addiction to Being a Wartime President

On March 3 I put up a public post on Facebook; I have dithered until tonight, when I finally had the mood to tidy it up. In what follows, the bold passages and the postscript are exclusive to the members' area:


Modern Western international law regulates the conduct of sovereign states. Yet if a sovereign state's government is elected through constitutional democracy, that sovereignty is just; if not, it is unjust. If the state moreover treats its people with cruelty and enslaves them, then that sovereignty is set against its own people; it is only that, under modern weaponry and surveillance technology, the people cannot rise in revolt and cannot resist effectively.

When a righteous sovereign state slays the head of an evil sovereign state, it enforces Heaven's way and rids the people of a scourge. When King Wu of Zhou slew King Zhou of Shang, Mencius called it the execution of a solitary despot, not regicide. The Yizhuan, the commentary on the Book of Changes attributed to Confucius, says of King Tang of Shang overthrowing King Jie of Xia and King Wu of Zhou overthrowing King Zhou of Shang (《周易·革卦·彖傳》): "Heaven and earth bring revolution, and the four seasons are accomplished. Tang and Wu carried out revolution in obedience to Heaven and in response to man."


When King Wu of Zhou revoked King Zhou of Shang's Mandate of Heaven and took his place, he swore an oath before the assembled lords and created a concept: "Heaven sees as my people see; Heaven hears as my people hear." The line comes from the Book of Documents (《尚書·泰誓中》): what Heaven sees comes from what the people (the multitudes of the Chinese states) see, and what Heaven hears comes from what the people hear. Heaven's will is the people's will. Because King Zhou showed no regard for his people's lives, the lords could replace him and put him to death, enforcing Heaven's way.

King Tang's campaign against Jie of Xia likewise rested on Jie's tyranny having provoked the wrath of Heaven and man. "When will this sun perish? I would gladly perish together with you" comes from the Book of Documents (《尚書·湯誓》): the common people of Xia cursing the tyrant Jie. It means: "When will this sun (meaning Jie) finally die? I would rather go down together with you!"

Confucian political doctrine is a practical rationality: once rulers turn tyrannical and lawless, continuing to tolerate them is the folly that destroys the state and exterminates the people. The doctrine of revoking the Mandate of Heaven on the basis of the people's will comes close to the concept of popular sovereignty; it falls short only in never having been connected to modern democratic institutions.

The idea of popular sovereignty arose from the social contract theories of the seventeenth and eighteenth centuries. Its most systematic builder was the French thinker Jean-Jacques Rousseau, who set it out explicitly in The Social Contract (1762): sovereignty belongs to the people as a whole (the general will), not to a monarch; government is merely the agent that executes the people's will. The notion had already been advanced in 1689 by the English thinker John Locke, who held that every person possesses natural rights to life, property and liberty, and that the chief function of government is to safeguard those rights.


The sanctity and inviolability of sovereignty goes back to the Peace of Westphalia of 1648 (German: Westfälischer Friede), concluded by the European princes to end their wars. It was a gentlemen's agreement regulating conduct among aristocrats, and it later evolved into the international-law foundation of state sovereignty. Yet, as noted above, it was only a gentlemen's agreement among aristocrats. Once a state behaves like a thug and its head of state shows not a shred of noble character, that state loses both the substance and the name of sovereignty. Other righteous states need not treat it as sovereign; they may join forces to attack it and slay the tyrant, avenging the common man and woman. Mencius called this consoling the people and punishing the guilty; Confucians call it the great righteousness of the Spring and Autumn Annals.

We read the books of the sages: to what end? A Confucian jurisprudence of international law, long absent from the world, the State Preceptor expounds today.


Postscript:

Having observed him for two weeks, I find that Trump is compensating for the regret of never having been a wartime president in his first term! He failed to win re-election precisely because he would not readily fight the Chinese Communist Party in the South China Sea; instead he had the CCP notified that the US Navy was merely conducting ship expulsions and would not open fire on Communist forces (an American general later confirmed that he was ordered to notify the PLA). The CCP repaid kindness with malice: it helped bring Trump down at the election, never honored its promise to buy American soybeans and other farm products, and smeared him across a good many American media outlets.

The greatest motive force in this world is love; its opposite is hatred; and the intensified form of hatred is regret! I wrote about this in 《蛇年開運》 (Good Fortune in the Year of the Snake). Right now Trump is running on regret, and that is utterly fearsome. Many in political circles cannot see this and assume Trump is afraid of being dragged into the Iranian quagmire. On the contrary, he is itching to wade in and strike in every direction. He used tariffs before, and the courts struck them down while other countries sneered; now he is using oil and food to coerce other countries into line, the CCP included. Much of the Middle East's oil and petrochemical products (such as the urea that farming requires) must be shipped through the Strait of Hormuz, and that strait is at present controlled by the United States (not by Iran's warlords!). If the Americans say there are mines, or if those covertly cooperating Iranian commanders refuse to let ships through, oil and food turn expensive. The United States is self-sufficient in both, so allies who will not send troops and submit to Trump's command can only pay through the nose for oil and go hungry for grain.

Moreover, the purpose of the planned fake pandemic (the "plandemic") was to manufacture an economic depression, and a depression must be resolved by war, so that the economy can be restructured and AI production driven forward. The present war on Iran, besides letting America seize the energy that AI must have (AI demands frantic power generation to support it) and grab critical minerals, above all ensures that the Great Depression arrives, with no exceptions! By killing the theocratic leader and his successors and wiping out the upper ranks of the Iranian government, the United States and Israel leave Iran headless and fragmented, each faction for itself, unable even to surrender, so America can fight for as long as it pleases. Bleeding from all seven orifices, yet unable to die.


Source: 陳雲

https://www.patreon.com/posts/mei-guo-gong-da-153261311

Saturday, March 14, 2026

Pulse Quick Take | What Great Taboo of CCP Officialdom Did 莊雅婷 Break?


Appointed district councillor 莊雅婷 and rich scion 鍾培生 recently staged a flamboyant proposal show, and the comedy turned instantly into a political public-relations disaster. A "heavyweight pro-establishment figure" reportedly pronounced that 莊雅婷's "political career ends here", while the "Hong Kong Youth Ocean Forum", which she was to host and which Legislative Council president Starry Lee (李慧琼) and several bureau-level officials were to attend, was abruptly postponed until further notice. The episode offers a first-rate teaching case for "blue-ribbon" loyalists hungry for advancement.

鍾培生 shot back in a video: "My engagement was held at home behind closed doors. Is a district councillor not allowed to marry?" The question sounds righteous, but it exposes his unfamiliarity with "socialism with Chinese characteristics for a new era". The CCP naturally has no objection to officials or public office-holders marrying; what it objects to is flaunting wealth and putting on a show. Under Xi Jinping the mainland flies the banner of "common prosperity": even top tycoons like Jack Ma must tuck their tails between their legs, and any entertainment star who flaunts wealth or misbehaves is banned overnight. Since Hong Kong has undergone its "second return", pro-establishment politicians must likewise keep to this main melody.

As a government-appointed district councillor, 莊雅婷 draws public money and embodies the government's image of grassroots governance. Her wedding may have been billed as "behind closed doors", but given 鍾培生's habitually flamboyant, attention-seeking style, it was bound to become front-page fodder for the gossip magazines. In the CCP's eyes, a young cadre marked out for grooming who, instead of devoting herself to visiting subdivided-flat tenants in her district and easing livelihood hardships, lives like a showbiz "socialite starlet", exuding the money-stink and frivolity of a capitalist parvenu, has committed an absolute political taboo.

What the CCP needs are obedient, low-key, hard-working "screws in the machine", not "KOLs" who court attention and stir up trouble by posing on Instagram in tight white outfits. The establishment camp has never lacked rich people; but to be rich without restraint, and even to treat public office as a dowry for marrying into a wealthy clan and raising one's own price, is to touch Grandpa's (Beijing's) bottom line.

A "pro-establishment heavyweight" is said to have asked whether 莊雅婷 "is sincere about politics or merely using it to get famous", and the question hits the nail on the head. Look at her résumé: a graduate of HKU and Peking University, appointed a district councillor at 23, with a broad road ahead of her. Yet she went off and entered the "Miss Hong Kong" pageant. She withdrew in a hurry, but the episode had already exposed the fatal flaw in her character: vanity.

In the CCP's logic of cadre assessment, loyalty and motive come first. Did you enter politics to "serve the people", or to build name recognition for a beauty contest? When the Miss Hong Kong dream shattered, she turned around and married loudly into a wealthy family, sending the public a thoroughly damaging signal: this young woman treats her appointed government seat as a stepping stone in her own life plan.

鍾培生 countered that "Miss Hong Kong winners have served as CPPCC delegates and NPC deputies before", which is an anachronistic fallacy. Cally Kwong (鄺美雲), for instance, could serve as an NPC deputy because she had spent years in business and charity, accumulating enough social capital and united-front value; Grandpa absorbed her only after she had already made her name. 莊雅婷, by contrast, is a green girl with no social contribution and not an inch of merit to her name. Grandpa promoted her out of turn so that she would do the work and be tempered by it, not so that she could carry the title "district councillor" into beauty pageants and tycoon-matchmaking. The two cases are worlds apart.

To understand what the CCP expects of young establishment figures, take Vivian Kong (江旻憓) as the contrast. Her success lies in perfectly fitting the "high-quality elite" image the CCP needs both on the international stage and inside Hong Kong society. She won glory for China Hong Kong with an Olympic gold medal: that is hard power. She does not flaunt wealth and displays genuine personal cultivation.

More important still, she quietly wrote her master's thesis on "one country, two systems" and knows how to say the right thing at the right moment, binding her personal glory to national pride. She is exactly the "united-front model" the CCP most craves: a good girl with a top Western education who embraces the CCP system and is adored by the general public.

Look back at 莊雅婷: apart from the "turning from darkness to the light" label that Chris Tang (鄧炳强) pinned on her, what has she contributed to Hong Kong society? Nothing. The CCP does not mind blue-ribbons standing out. You may stand out wolf-warrior style, defending the nation on the international stage; you may stand out by winning gold in sporting competition, like Vivian Kong; but you must never stand out through the extravagant theatrics of your private love life.

The couple's ostentation not only won the establishment no credit; it provoked the resentment of grassroots citizens and deepened the stereotype that "the establishment is a club of the privileged". With the economy in the doldrums and grassroots families still struggling bitterly, this young pair's wealth-flaunting engagement rubs salt straight into the wound of the city's wealth gap.

At the end of his video, 鍾培生 still brazenly claimed that "my family and I have always contributed to the country" and that he "cannot see any damage to our image". This arrogance and obliviousness, peculiar to Hong Kong's rich second generation, is precisely what has enraged the establishment's upper ranks.

When a Ming Pao column aired the words of "establishment insiders" and the forum was suddenly postponed, Grandpa's signal was already unmistakable: "莊雅婷 has become a negative asset, and the establishment is cutting its losses and severing ties." A clever person would have shut up at once and kept a low profile, but the young master chose the most foolish course: filming a rebuttal and even declaring that he reserves the right to pursue legal action. Pursue whom? Ming Pao, the conduit of Grandpa's will? Or the establishment grandees feeding it the story?

A few days from now, we may well be reading an apology statement from 莊雅婷 and 鍾培生.


Source: 林兆彬

https://www.facebook.com/pulsehknews/posts/pfbid02AgdzJFNDJnbtrza4NRhx9tbdLYcS89nLNsKTLMNy9Y2GCkpdyoi6SnNNF6RC16TQl

Friday, March 13, 2026

I just had my paper rejected by an MDPI journal

😤 I just had my paper rejected by an MDPI journal.

Not because of scientific flaws. Not because of plagiarism. 

But because "parts were AI-drafted."


➡️ Here's the problem:

1. MDPI published an editorial stating that "AI-written scientific manuscripts should be generally considered acceptable by the scientific community" (Quaia, Tomography 2025).


2. COPE — the international ethics authority — explicitly allows AI use:

"Authors who use AI tools must be transparent in disclosing how the AI tool was used. Authors are fully responsible for the content."


I did EXACTLY that:

✅ Disclosed AI use in Methods

✅ Specified the tool (www.publicationgod.com - yes, my own)

✅ Took full responsibility for content

✅ Verified every sentence


This reveals a fundamental contradiction in academic publishing:

We're told to be TRANSPARENT about AI use.

Then we're REJECTED for being transparent.


➡️ The result? Researchers will start hiding AI assistance instead of disclosing it. The exact opposite of what COPE and MDPI claim to want.


The question shouldn't be "Was AI used?" It should be "Is the science sound?"


Transparency should be rewarded, not punished.

#AcademicPublishing #ScientificWriting #AIethics #OpenScience


Source: Jens Mittag

https://www.linkedin.com/feed/update/urn:li:activity:7437420702753968128/?originTrackingId=uQfdXY0KwAa6PsudDbT8Tw%3D%3D

Wednesday, March 11, 2026

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

  

As AI has upended the way students learn, academics worry about the future of the humanities – and society at large

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world.

It’s an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. “There’s no AI-proof anything,” Pao said. “Rather than policing it, I hope that their overall experiences in this class will show them that there’s a way out.”

It doesn’t always work. Recently, she asked students to visit a local museum, look at a painting for 10 minutes, and then write a few paragraphs describing the experience. It was a purposefully personal assignment, yet one student responded with a sophisticated but drab reflection – “too perfect, without saying anything”, Pao said. She later learned the student had tried to visit the museum on a Monday, when it was closed, and then turned to AI.

As artificial intelligence has upended the way in which students read, learn and write, professors like Pao have been left to their own devices to figure out how to teach in a transformed landscape.

Many faculty members in the hard sciences and social sciences have pointed to the “productivity boost” AI can offer, and the research potential unlocked by its ability to process and analyze vast amounts of data. AI’s most enthusiastic proponents have boasted that the technology may help cure cancer and “accelerate” climate action.

But in fields most explicitly associated with the production of critical thought – what is collectively referred to as the “humanities” – most scholars see AI as a unique threat, one that extends far beyond cheating on homework and casts doubt on the future of higher education itself in a fast-approaching machine-dominated future.

American degrees often cost up to hundreds of thousands of dollars and result in decades of debt, and recent years have seen a freefall in public confidence in US higher education. With the potential for AI to increasingly substitute independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors – almost all of them in the humanities or adjacent fields – about how they are adapting at a time of dizzying technological advancement with few standards and little guidance.

By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”


A 'soulless' education

AI criticism – or “doomerism”, as the technology’s proponents view it – has been mounting across sectors. But when it comes to its impact on students, early studies point to potentially catastrophic effects on cognitive abilities and critical thinking skills.

Michael Clune, a literature professor and novelist, said that, already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Ohio State University, where he teaches, has begun requiring every freshman to take a class in generative AI and pitched itself as the first “AI fluent” university, pledging to embed AI “across every major”.

“No one knows what that means,” Clune said of the plan. “In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students.”

That’s the crux of what many professors in the humanities fear: that technology that may well be a cutting-edge tool in other fields could spell the end of their own.

Alex Karp, the Palantir co-founder and CEO, stoked those anxieties when he said in a recent interview that AI will “destroy humanities jobs”. On the other hand, Daniela Amodei, Anthropic’s president and co-founder – who was a literature major – said the opposite: that “studying the humanities is going to be more important than ever”.

A number of tech and finance companies have recently said that they are looking to hire humanities majors for their creativity and critical thinking skills. Indeed, enrollment data at some universities suggests that the long-struggling humanities might have begun to see a resurgence in the age of AI, with early signs pointing to a reversal in decades-long decline in English majors in favor of Stem ones.

Some caution that the humanities will survive – but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be “more than enough jobs” for those with vocational training. Indeed, several professors spoke about concerns that AI will exacerbate a widening divide in US higher education and that small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else has a “degraded, soulless form of vocational training administered by AI instructors”, said Zhang.

“I fully expect that we will start seeing a kind of bifurcation in education,” said Matt Seybold, a professor at Elmira College in New York, who has written critically about “technofeudalism”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AI systems grading AI-generated homework – “a conversation between two robots”.

Some universities have adopted AI detection software to catch artificially generated work; others prohibit faculty from directly accusing students of having used AI – as they can often be wrong.

Professors said they resorted to oral interrogations, handwritten notebooks and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI.

Many professors spoke of their frustration at having to sift through students’ artificially generated homework. “It creates hours of additional labor,” echoed Danica Savonick, an English professor at the State University of New York Cortland. “And makes me feel like a cop.”

Some allow students to use AI for research – to a point. Karl Steel, an English professor at Brooklyn College, said that AI has helped make students’ presentations richer and more interesting – but that while they may use it to prepare, he has them speak from minimal notes and stand in front of a photo of a text they annotated by hand. He also assigns written responses to texts only after the class has discussed them. “I suppose they could use their phones to record the conversation, feed a transcript into a chatbot and produce a paper that way,” he said. “But that is more trouble, I think, than most students would take.”


Left to their own devices

Many universities’ administrations are embracing AI for instruction, research and evaluation. In some cases, AI has guided decisions about which programs to cut at times of austerity in the education sector.

More than a dozen universities have partnered with OpenAI on a $50m initiative that the company has said will “accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI”. California State University has joined several of the world’s largest tech companies to “create an AI-powered higher education system”, as the university put it. Multiple universities have introduced AI majors and masters.

The plans are lofty but offer little guidance on what professors are supposed to do with students who can’t read more than a couple paragraphs at a time or turn in essays generated in seconds by a machine. Left largely to themselves, some are trying to articulate clearer lines around AI use, and organize a more coordinated effort against its encroaching dominance.

Last year, the American Association of University Professors, which represents 55,000 faculty members nationwide, published a report warning that universities were adopting the technology “uncritically” and with little transparency. Some university unions have begun incorporating protections against AI in their contracts to establish oversight mechanisms and give faculty greater input – and to protect their intellectual property from feeding machines that may soon take their jobs.

But much organizing against AI remains informal and via word of mouth, with faculty-led initiatives like the website Against AI, which offers resources to those trying to shield students from the intellectual ravages of outsourcing elements of their education to a machine.

“Materials here are intended as solidarity solace for educators who might find themselves inventing wheels alone while their administrators, trustees and bosses unrelentingly hype AI,” reads the website, which offers a list of assignment ideas to mitigate AI use – from oral exams, to requirements students submit photographic evidence of their notes, to analog journals.

Many of the professors interviewed by the Guardian said they ban AI in their classrooms altogether – but recognize their hardline approach is discipline-specific.

Megan McNamara, who teaches sociology at the University of California, Santa Cruz and created a guide for faculty across disciplines to deal with AI-related academic misconduct, noted that “cultural” differences in the humanities versus Stem disciplines, or in qualitative social sciences versus quantitative ones, tend to shape faculty members’ responses to students’ use of AI.

“I think that’s just a function of one’s individual relationship with writing/reading/critical analysis,” she wrote in an email.

Several professors spoke of using the issue as an opportunity to get students to think critically about technology.

When she suspects someone has used AI, McNamara talks to them about it, treating the incident as an “opportunity for growth, restorative justice and enhanced authenticity in student-instructor relationships”, she said.

Eric Hayot, a comparative literature professor at Penn State University, said he tries to convince his students that tech companies are trying to make them “helpless” without their product.

"These companies are giving these technological tools away partly because they're hoping to addict a generation of students," Hayot told the Guardian. "This is part of every single class I teach now, talking to students about why I'm not using AI, why they shouldn't use AI."


We can decide that we want to be human

Several professors noted that they have also begun to see mounting discomfort from students against the technology – and technology’s dominance in their lives overall.

Clune, the Ohio State professor, said students have become more curious about his flip phone, which he started using after realizing his smartphone was “destroying” his attention.

“I think the current crop of gen Z students are seeing that they are the guinea pigs in this giant social experiment,” said Zhang, the Berkeley professor.

“There’s a broader and increasing sense from students that something is being stolen from them,” echoed Seybold, the Elmira College professor.

Seybold pointed to students’ mounting disillusion with tech more broadly. Those who are rejecting AI, he added, are often driven by environmental concerns, and suspicion of companies they view as partly responsible for shrinking democracies and a more violent world.

In Michigan, for instance, that has spurred activism. The University of Michigan recently announced plans to contribute $850m toward a datacenter to provide AI infrastructure in collaboration with the Los Alamos National Laboratory – at a time when it is cutting funds for arts and humanities research and on the heels of anti-war protests on campus. A spokesperson for the university said that the planned facility would be smaller and consume less energy than a “typical datacenter”.

As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

"You plant seeds and you hope," Pao said, of efforts that at times feel like tilting at windmills. "You hope that in the long term you're helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves."


Source: Alice Speri

https://www.theguardian.com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

Monday, March 09, 2026

A Few Brutal Truths About Education

Classical Chinese, memorized annotations, the periodic table and chemical formulas: none of it gets used later in the workplace, so why do schools force us to learn it?

Let me tell you a few brutal truths:

1. These courses were not designed specifically for you. They were designed for the capable kids who can go on to further study in those fields.

2. Education is extremely, extremely expensive. Add up your teachers' teaching hours over a school year and multiply by an ordinary private tutor's basic hourly rate, and you will find your family income could never afford having you taught this many specialties.

3. Before teaching begins, it is very hard to screen precisely for the children worth cultivating, and the children could never afford the fees on their own anyway. So everyone attends class together and shares the cost. Yes, most of us are there to be the denominator; but if you learn well, you in theory become the numerator.

4. Following from point three: find a subject you are interested in and good at, and make yourself the numerator. If you learn even one subject reasonably well, you have already broken even; two or more and you are in profit. And if nothing interests you at all? Consider that the curriculum was designed for the chosen kids, that your performance is nothing special, and that you did not pay much extra, yet you still receive an education of roughly the same standard. So stop complaining.

5. For society as a whole, this education raises the general level of the citizenry and reduces the social cost wasted on elementary mistakes. It is like sport: you may never take the field, but because you understand the game a little, you may come to support the players, or at least not trash the pitch while feeling pleased with yourself.

Source: 楊用修

https://www.threads.com/@yong.xiu.10/post/DVqIqcKgb8K?xmt=AQF0dtLof0JS5_ICfS_aUtpF-Gv06277oYDAnqYJtNhsbXOtW97bwXsIYONViaV58AnzunTE&slof=1

Friday, March 06, 2026

You’re still designing for an architecture that no longer exists

Last Tuesday, I asked Claude to prepare a competitive analysis. Not in a chat window. Not through a prompt. I opened Cowork, pointed it to a folder on my desktop, and said what I needed. It read my files. It cross-referenced data from Slack through a connector. It pulled calendar context. It produced a document — formatted, structured, sourced — and saved it to my working folder. I didn’t open a single application. I didn’t navigate a single menu. I didn’t click through a single interface.

I sat there for a moment, staring at the screen. Not because something had gone wrong — but because nothing looked familiar. The windows were gone. The menus were gone. The entire choreography of opening, navigating, operating, saving, closing, and opening the next thing — the choreography I’d been performing for twenty years — had simply… disappeared.

And that’s when I realized: I wasn’t using a tool. I was working inside a different environment. One that nobody had bothered to name yet.

We keep asking the wrong question. We keep asking how good is the assistant — how well it writes, codes, summarizes, reasons. But that question belongs to the old architecture. What’s actually happening is bigger: the environment itself is changing. The space where we work — the one we’ve inhabited for four decades, the one built on windows and menus and folders and clicks — is being replaced by something structurally different. And if you’re still designing screens, flows, and navigation systems, you might be perfecting the blueprint of a building that’s already been demolished.


Forty years of the same interface

In 1984, Apple introduced the Macintosh and, with it, a way of working that would define every professional environment for the next four decades. The graphical user interface gave us windows, icons, menus, and a pointer — the WIMP paradigm. It was revolutionary. And then it froze.

Think about what changed between 1984 and today. Processing power grew exponentially. Storage went from kilobytes to terabytes. Networks connected billions of devices. Screens went from bulky CRTs to panels in our pockets. But the interaction logic — the fundamental way we relate to our work environment — remained essentially the same. You open an application. You navigate to what you need. You operate it manually, step by step. You save. You close. You open the next one.

The internet didn’t change this. It added connectivity, but you still navigated — now through links instead of folders. Mobile didn’t change it either. It made the environment portable, but you still tapped through apps, scrolled through feeds, clicked through menus. Even cloud computing — which transformed infrastructure — left the interaction surface largely untouched. You were still the operator. The system still waited for your commands.

Everything changed except the interface. Generated with Gemini.

As Satya Nadella put it: “Thirty years of change is being compressed into three years.” But what’s being compressed isn’t just speed or capability. It’s the architecture of the environment itself.

For four decades, the working environment asked you how. How do you want to format this? Which menu holds the function you need? What’s the right sequence of clicks to get from here to there? The entire interface was a map, and your job was to navigate it.

That map is disappearing. And what’s replacing it isn’t a better map — it’s a fundamentally different kind of space.


What Claude is actually showing us

Let me describe what working with Claude looks like today — not theoretically, but practically. Because the shift becomes obvious once you stop thinking about features and start paying attention to the experience.

Cowork reads files on your desktop, modifies documents, creates deliverables, and operates within your working folder — asking for confirmation before significant actions, working autonomously within defined boundaries. It launched in January 2026 and, within weeks, triggered a $285 billion selloff in software stocks. Not because of what it does, but because of what it replaces: the need to open applications at all.

Claude Code doesn’t assist developers — it is the development environment. Engineers describe entire systems in natural language, and Code builds them: writing files, running tests, submitting pull requests, spawning parallel sub-agents for different tasks. It hit $1 billion in run-rate revenue within six months of general availability. Spotify reports that roughly half of all their updates now flow through AI-generated code, with a 90% reduction in engineering time for large-scale migrations.

Claude in Chrome operates your browser — managing calendars, drafting emails, filling forms, extracting data — maintaining context across sessions.

Claude in Excel reads complex multi-tab workbooks, builds pivot tables, pulls live market data through connectors from S&P Global, Moody’s, and FactSet.

Memory doesn’t just store preferences. It maintains hierarchical context — organization-wide policies, project-level standards, individual preferences — and recovers the full state of your working environment in seconds. As one developer described it: “Treat it as system state. The file becomes the source of truth.”

And underneath all of this, MCP — the Model Context Protocol — connects Claude to your entire technology stack: Google Drive, Slack, GitHub, Gmail, Figma, Notion, Salesforce, and thousands more. With 97 million monthly SDK downloads and adoption by OpenAI, Google, and Microsoft, MCP has been donated to the Linux Foundation as an open standard — what Thoughtworks described as one of the fastest standards convergence cycles in recent tech history.
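Under the hood, MCP messages are plain JSON-RPC 2.0. As a minimal sketch of the envelope a client sends when it invokes a server-side tool (the tool name "search_files" and its argument are hypothetical, invented for illustration; only the "tools/call" method and the params shape follow the published MCP specification):

```python
import json


def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool
    invocation. Illustrative only: a real client would serialize this
    and send it to an MCP server over stdio or HTTP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_name,       # which tool the server should run
            "arguments": arguments,  # tool-specific input, per its schema
        },
    }


# Hypothetical usage: ask a made-up "search_files" tool for matches.
request = make_tool_call(1, "search_files", {"query": "competitive analysis"})
print(json.dumps(request, indent=2))
```

The point of the uniform envelope is that any client speaking this shape can drive any conforming server, which is why the protocol, rather than any single integration, is doing the orchestration work described above.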

Now step back and look at what I just described. Not a chat interface with added capabilities. A working environment — one where the system reads your context, understands your purpose, operates across your tools, and delivers results while you focus on what actually matters.

Nick Turley, OpenAI’s Head of Product, said it plainly: “We never meant to build a chatbot; we meant to build a super assistant, and we got a little sidetracked.”

Everyone got sidetracked. The text box made us think we were talking to a tool. We were actually sitting inside the first draft of a new environment.

Claude Ecosystem Diagram. Generated with Gemini.


The three variables of a new architecture

If this is a new environment and not just a better tool, it should be structurally different — not incrementally improved. And it is. But to see the structure, you need to stop looking at capabilities and start looking at variables. What coordinates define this space that didn’t exist in the previous one?

I’ve spent the last two years studying this question, and what I’ve found is that three variables consistently distinguish this new architecture from everything that came before it.


Intention. In the traditional working environment, you tell the system how to do things. You navigate menus, select options, sequence operations. The system doesn’t know what you want — it knows what you clicked. In the new environment, you express what you want to achieve. The system interprets your purpose, weighs context, and determines the path. Claude doesn’t execute commands; it interprets goals. MCP doesn’t connect tools for the sake of integration; it connects them at the service of what you’re trying to accomplish. This is the shift from procedural thinking to intentional thinking — from operating a machine to having a conversation with a collaborator who understands purpose.

Intention. Generated with Gemini.


Autonomy. In the traditional environment, systems assist. They wait for your next instruction. Every action requires a human operator pressing a button, selecting an option, confirming a step. In the new environment, systems act. Claude Code doesn’t wait for you to dictate each line of code — it plans, executes, tests, iterates, and spawns sub-agents to work in parallel. Cowork doesn’t need step-by-step guidance — it works toward outcomes within boundaries you define. This is not automation in the industrial sense, where machines repeat predefined sequences. This is agency: the capacity to pursue goals through autonomous decision-making while maintaining human oversight.

Autonomy. Generated with Gemini.


Adaptation. In the traditional environment, systems remain fixed. Your software behaves the same way on day one and day one thousand. If you want it to change, you configure it manually — or wait for the next version. In the new environment, systems evolve. Claude’s memory learns your preferences, your team’s standards, your organization’s policies. It gets better at understanding you over time. The interaction isn’t static — it’s alive. What was once a tool that needed to be configured becomes an environment that learns to fit.

Adaptation. Generated with Gemini.


These three variables — intention, autonomy, and adaptation — don’t operate in isolation. They are unified by a fourth principle: orchestration. MCP is the clearest manifestation of this. It’s the connective tissue that allows intentions to flow across tools, autonomous actions to coordinate across systems, and adaptive learning to compound across interactions. As Microsoft’s CTO Kevin Scott observed at Build 2025: “MCP is filling such an unbelievably big need in the ecosystem… it’s really kind of breathtaking.” Atlassian’s CTO Rajeev Rajan called it “the gold standard for how LLMs interact with tools.”

But orchestration isn’t just a protocol. It’s the architectural principle that turns three independent variables into a coherent environment. It’s what makes the difference between a collection of smart features and a fundamentally new working space.

Here’s the critical point: these aren’t features of Claude. They are coordinates of a new architecture. Any system that embodies intention, autonomy, adaptation, and orchestration is operating in this new space — regardless of which company built it. Claude happens to be the most complete manifestation today. But the architecture is bigger than any single product.

Architecture 6 (HOW) vs Architecture 7 (WHAT). Generated with Gemini.


Everyone is describing the same thing

What makes this moment historically significant is that the people building these systems are arriving at the same conclusion from entirely different directions — and most of them don’t realize they’re describing the same thing.

Goldman Sachs’ CIO Marco Argenti writes: “Rather than functioning as one-dimensional applications, AI models are becoming operating systems that independently access tools in order to perform tasks.” Sam Altman told Sequoia Capital: “People in college use it as an operating system.” Mustafa Suleyman, CEO of Microsoft AI, frames it as: “This is going to be the next platform of computing.” Jensen Huang and Siemens announced a partnership to build what they literally called “the Industrial AI Operating System.”

From the design world, the language is different but the observation is the same. John Maeda describes the shift from UX to AX — Agentic Experience — where “designers become orchestrators of experiences rather than crafters of interfaces.” Rachel Kobetz, PayPal’s Chief Design Officer, argues that “the real work of design is orchestrating how intelligence behaves.” Jakob Nielsen, in his 2026 predictions, charts what he calls “a fundamental shift from Conversational UI to Delegative UI.” And Jenny Wen, who leads design for Claude at Anthropic, put it bluntly on Lenny’s Podcast: “This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead.”

Orchestrators. Intelligence that behaves. Delegative rather than conversational. The design process itself declared dead — by the person designing the system that killed it. Everyone is circling the same structural transformation.


I should be honest about where this breaks. Jared Spool makes a fair point: “The world romanticizes AI as an all-powerful, game-changing technology when, in reality, it barely works. But it demos well.” He’s not wrong — intention interpretation still fails, autonomy can produce confidently wrong results, and adaptation has boundaries that aren’t always transparent. The gap between what these systems promise and what they reliably deliver is real, and anyone designing for this architecture needs to take that gap seriously. But the existence of a gap doesn’t invalidate the architecture. The early web had broken links, crashed browsers, and took minutes to load a single page. The architecture was still real. The question was always whether the structural logic would hold as the technology matured. I believe this one will — not because every interaction works today, but because the variables are right.


This is not a product. This is an architecture.

There is a pattern that most people in technology miss because it operates on timescales longer than product cycles.

In 1440, the printing press didn’t improve manuscripts — it created a new architecture for distributing knowledge. In 1876, the Dewey Decimal System didn’t improve bookshelves — it created a new architecture for classifying information. In 1936, the Turing machine didn’t improve calculators — it created a new architecture for computing. In 1984, the graphical interface didn’t improve command lines — it created a new architecture for human-computer interaction. In 1989, the World Wide Web didn’t improve networks — it created a new architecture for connecting information. In 2007, the smartphone didn’t improve phones — it created a new architecture for mobile access.

Each of these was a structural transformation in how humans organize and access information. Not a better version of what came before, but a fundamentally new set of coordinates — new variables, new paradigms, new possibilities that simply didn’t exist in the previous architecture.

Timeline of the 7 Architectures. Generated with Gemini.


What we are witnessing now is the seventh such transformation. The architecture of Intelligence. A structure defined not by windows and clicks and navigation, but by intention, autonomy, adaptation, and orchestration.

I call this Architecture 7 — the seventh structural transformation in how humans organize access to information.

In The Intelligence Architect, I mapped each of these seven transformations in detail: the variables that define them, the design principles that emerge from each shift, and the practical frameworks for building within a new architecture. But you don’t need the book to see what’s happening. You just need to pay attention to what Claude is showing us right now: we are not living through an AI upgrade. We are living through an architectural shift.

And architectural shifts don’t ask for permission. They don’t arrive with a press release explaining what changed. They arrive as a quiet realization that the environment you’ve been working in — the one that felt permanent, the one built on windows and menus and forty years of muscle memory — has already been replaced by something you can’t yet name but can already feel.

Claude is the first Architecture 7 environment built for people who work with information. It won’t be the last. The same structural principles — intention replacing navigation, autonomy replacing manual operation, adaptation replacing static configuration — will reshape environments for entertainment, communication, education, and every domain where humans interact with complex systems. The evidence — from $1 billion run-rates to $285 billion selloffs to 97 million monthly SDK downloads — makes that trajectory unmistakable.


What this means for designers, practically

If the architecture has changed, so has the work. Designing for intention means your wireframes become something closer to operating manuals — not layouts of screens, but descriptions of outcomes the system should pursue. Designing for autonomy means defining boundaries instead of workflows: what the system can do on its own, where it must pause for confirmation, how it recovers when it gets things wrong. Designing for adaptation means building feedback loops rather than preference panels — mechanisms through which the environment learns from each interaction rather than waiting to be manually configured. And designing for orchestration means mapping how intelligence flows across tools, not how users navigate between them. The screen is no longer the unit of design. The intent is.
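The boundary-before-workflow idea above can be made concrete with a small sketch. This is purely illustrative — the class and action names are invented for this example and are not part of any real agent framework:

```python
# Hypothetical sketch: designing for autonomy as boundary definitions
# rather than step-by-step workflows. The policy says what the system
# may do on its own, where it must pause for confirmation, and that
# everything else is refused.
from dataclasses import dataclass, field


@dataclass
class AutonomyPolicy:
    """Boundaries for an autonomous system, not a scripted workflow."""
    allowed: set = field(default_factory=set)  # actions run without asking
    confirm: set = field(default_factory=set)  # actions that pause for approval

    def decide(self, action: str) -> str:
        """Map an intended action to run / ask_user / refuse."""
        if action in self.allowed:
            return "run"
        if action in self.confirm:
            return "ask_user"
        return "refuse"  # default boundary: anything unlisted is out


policy = AutonomyPolicy(
    allowed={"search_files", "summarize"},
    confirm={"send_email", "delete_file"},
)

print(policy.decide("summarize"))    # run
print(policy.decide("send_email"))   # ask_user
print(policy.decide("format_disk"))  # refuse
```

The design decision lives in the sets, not in any sequence of steps — which is the point of designing boundaries instead of workflows.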


You are already inside it

If you used Claude this week — or ChatGPT, or Copilot, or any AI system that understood your intent, acted autonomously, and adapted to your context — you didn’t use a chatbot. You worked inside a new environment. You just didn’t have the language for it yet.

The language matters. Because when you call it “a chatbot,” you design chat interfaces. When you call it “an assistant,” you design helper features. When you call it “a tool,” you design toolbars. But when you recognize it as a new architecture — with its own variables, its own structural principles — you design differently. You stop asking “where should the button go?” and start asking “how does the system interpret intention?” You stop designing navigation and start designing orchestration. You stop building static configurations and start building adaptive environments.

The forty-year environment built on windows, menus, and manual navigation is giving way to one built on intention, autonomy, adaptation, and orchestration. Every major voice in technology and design is converging on this observation from different angles. The question is no longer whether the shift is happening.

The question is whether you’ll design for the architecture that’s arriving — or keep drawing menus for the one that already left.


Source: Adrian Levy

https://uxdesign.cc/youre-still-designing-for-an-architecture-that-no-longer-exists-28b0b10900dd

Friday, February 13, 2026

26 Rules to Be a Better Thinker in 2026

A couple of years ago, I asked Robert Greene what he thought about AI. “I think back to when I was 19 years old and in college,” Robert said. It was a class where they read and translated classical Greek texts. “They gave us a passage of Thucydides, the hardest writer of all to read in ancient Greek,” he explained. “I had this one paragraph I must have spent ten hours trying to translate…That had an incredible impact on me. It developed character, patience, and discipline that helps me even to this day. What if I had ChatGPT, and I put the passage in there, and it gave me the translation right away? The whole thinking process would have been annihilated right there.”

What does he mean by “thinking process”? He means the slow, tedious, difficult work of figuring something out for yourself. The discipline. The patience. The hours and hours of sitting with frustration and confusion on your way to knowledge and understanding.

This is why I do all my research on physical notecards. It is not fast, easy, or efficient. And that is the point. Writing things down by hand forces me to engage and struggle with the material for an extended period of time. It forces me to take my time. To go over things again and again. To be immersed. To be focused, patient, and disciplined. To come to understand things deeply.

People are talking about what AI is going to replace, that it’s the sum total of all human knowledge, that it’s going to make expertise obsolete. And it’s true it will do a lot and it is unbelievably powerful, but in many ways it makes thinking even more important. You have to be able to interpret what it spits out. You need to know when something’s off. Without domain expertise, without the ability to think critically, to question, to push back, you’ll be fooled. Again and again.

The irony of AI, this cutting-edge technology, is that it makes the humanities more valuable than ever. It makes brainpower even more important. Reading. Knowing things. Having taste. Understanding context. Detecting lies or nonsense. In short: being a discerning, critical, clear thinker.

The tools are only getting more powerful. The noise is only getting louder. We’re being bombarded with more information than any generation in history, and I worry — from some of the emails I get, from the comments I see — that too many people just don’t have the ability to wrap their heads around what’s being thrown at them. Which makes clear thinking one of the most essential skills of our time.


What follows is my advice for what you’re going to need more than ever in this brave new world — 26 rules for becoming a better thinker.


– Take another think. The problem with our thoughts is that they’re often wrong — sometimes preposterously so. Nothing illustrates this quite like what’s called an “eggcorn,” words or expressions we confidently mishear and then contort to match our misperception. “All for not” instead of all for naught. “All intensive purposes” instead of all intents and purposes. But the greatest eggcorn is doubly ironic: people who say “you’ve got another thing coming” are, in fact, proving the point of the actual expression, “you’ve got another think coming.” We need to be able to slow down and use a second think. Especially when we’re sure what we think is right. (And by the way, at least 50% of the time I have to ask ChatGPT to think again because its answers are obviously wrong).


– Take walks. For centuries, thinkers have walked many miles a day — because they had to, because they were bored, because they wanted to escape the putrid cities they lived in, because they wanted to get their blood flowing. In the process, they discovered an important side-effect: it cleared their minds and made them better thinkers. Tesla discovered the rotating magnetic field — one of the most important scientific discoveries in modern history — on a walk through a Budapest park in 1882. Hemingway took long walks along the quais in Paris whenever he was stuck and needed to think. Nietzsche — who conceived of Thus Spoke Zarathustra on a long walk — said: “It is only ideas gained from walking that have any worth.” I have never taken a walk without thinking, after, “I am so glad I did that.”


– Embrace contradiction. F. Scott Fitzgerald said, “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.” The world is complicated, ambiguous, paradoxical. To make sense of it, you must be able to balance conflicting truths.


– But don’t confuse complexity with nonsense. Stupid people are especially good at having a bunch of contradictory thoughts in their head at once. So the first-rate mind Fitzgerald described isn’t just about tolerating contradiction — it’s really about the ability to examine and interrogate it. It’s asking, Does this actually make sense?


– Go to first principles. Aristotle taught that one must go to the origins of things, go all the way to the primary truth of the matter, instead of just accepting common observation or belief. Don’t just blindly accept what everyone else seems to say or believe. Go to first principles. Instead of engaging with an issue from a headline, a tweet, or a take, go to the beginning. Break things down and build them back up. Put every idea to the test, the Stoics said. The good thinker approaches things with a fresh set of eyes and an open mind.


– Think for yourself. Generally, people just do what other people are doing and want what other people want and think what other people think. This was the insight of the philosopher René Girard, who coined the theory of mimetic desire. He believed that since we don’t know what we want, we end up being drawn — subconsciously or overtly — to what others want. We don’t think for ourselves, we follow tradition or the crowd.


– Don’t be contrarian for contrarian’s sake. Peter Thiel, widely considered a “contrarian,” (and a big fan of Girard) once told me that being a contrarian is actually a bad way to go. You can’t just take what everyone else thinks and put a minus sign in front of it. That’s not thinking for yourself. So in fact, if you find yourself constantly in opposition to everyone and everything (or most consensuses) that’s probably a sign you’re not doing much thinking. You’re just being reactionary.


– Ask good questions. When Isidor Rabi came home from school each day, his mother didn’t ask about grades or tests. “Izzy,” she would say, “did you ask a good question today?” This doesn’t seem like much, and yet it is everything. After all, questions drive discovery. The habit of asking questions turned Rabi into one of the greatest physicists of his time — a Nobel Prize winner whose work led to the invention of the MRI. Questions are the key not just to knowledge but to success, discovery, and mastery. They’re how we learn and how we get better. And they don’t have to be brilliant, probing, or incisive. They can be simple: “What do you mean?” They can be inquisitive: “How does that work?” They can aim for clarity: “Sorry, I didn’t understand, can you explain it another way?” The point is to stay curious. To never stop asking questions.


– Watch your information diet. When I’m not feeling great physically — tired, irritable, sluggish — usually it’s because I’m eating poorly. In the same way, when I feel mentally scattered and distracted — I know it’s time to focus on cleaning up my information diet. In programming, there’s a saying: “garbage in, garbage out.” Aim to let in the opposite of garbage. Because that leads to the opposite of garbage coming out.


– Go deep. I thought I knew a lot about Lincoln. I’d read biographies, watched documentaries, interviewed scholars, visited the sites. I’d even written about him in my books. So when I sat down to write about him in Part III of Wisdom Takes Work, I thought I was set. I wasn’t even close. So I went deeper. I read Hay and Nicolay. Doris Kearns Goodwin’s 944-page Team of Rivals. Michael Gerhardt’s 496-page book on Lincoln’s mentors. David S. Reynolds’s 1088-page Abe. David Herbert Donald’s 720-page Lincoln. Garry Wills’s Pulitzer Prize-winning book on the Gettysburg Address. I spoke with the documentarian Ken Burns about him, and Doris too. I read Lincoln’s letters and speeches. I went, multiple times while writing the book, to the Lincoln Memorial. In the end, I spent hundreds of hours reading thousands and thousands of pages on the man. Basically, I “dug deeply,” as Lincoln’s law partner once said of Lincoln’s own approach to learning, in order to get to the “nub” of a subject. This is a skill you need. Whether you’re an author, politician, lawyer, entrepreneur, scientist, educator, parent — you have to be able to pursue an idea, a question, a thread of curiosity until you’ve gotten to the nub and wrapped your head completely around it.


– Don’t just read, re-read. A lot of people read, not enough people re-read. Don’t just read books, re-read books. There’s a great line the Stoics loved — that we never step in the same river twice. The books don’t change, but you do.


– Seek out people who disagree with you. In 1961, the Navy sent Commander James Stockdale to Stanford to study Marxist theory. Not criticisms of Marxism — primary sources. Marx. Lenin. The works. His parents had taught him: you can’t compete against something you don’t understand. A few years later, Stockdale was shot down over North Vietnam and spent seven years being tortured in the Hanoi Hilton. His knowledge of Marxism proved essential — he understood the ideology better than his interrogators did. Seneca said we should read dangerous ideas “like a spy in the enemy’s camp.”


– Ego is the enemy. Epictetus reminds us that “it’s impossible to learn that which you think you already know.” The physicist John Wheeler said that “as our island of knowledge grows, so does the shore of our ignorance.” Conceitedness is the primary impediment to wisdom. That’s something I often find with AI, its quickness and confidence in its answers…which are laughably wrong. If you want to stay humble, focus on all that you still don’t know. After all, isn’t that the Socratic method?


– Beware the Gell-Mann amnesia effect. Named after the Nobel Prize-winning physicist Murray Gell-Mann, the Gell-Mann amnesia effect is the term for a familiar experience: You read an article about something you know well, and you recognize that it’s full of errors, it’s missing context, it’s grossly oversimplifying things. You can’t believe something so bad got published. Then you turn to an article on something you know little about — foreign policy, international affairs, the economy, pop culture — and believe every word. It’s not just that the media exaggerates and sensationalizes. It’s actually worse: Most of the time they don’t even know what they’re talking about. The same goes for AI, which is trained on many of those error-filled sources. I’ve had ChatGPT confidently butcher things I know well. Why would I unquestioningly trust it on things I don’t? The problem is we don’t know what we don’t know. Which means we don’t know when we’re being fooled.


– Be flexible. A colleague of Churchill once observed that Churchill “venerated tradition but ridiculed convention.” The past was important, but it was not a prison. The old ways — what the Romans called the mos maiorum — were important but not to be mistaken as perfect. Plenty of people have been buried in coffins of their own making. Before their time too. Because they couldn’t understand that “the way they’d always done things” wasn’t working anymore. Or that “the way they were raised” wasn’t acceptable anymore. We must cultivate the capacity for change, for flexibility and adaptability. Continuously, constantly.


– Empty the cup. There is an old Zen story about a master who receives a student for tea. As the visitor extends their cup, the master pours…and pours, and pours. The cup begins to overflow. Finally, the student says something: “Stop! The cup is full. It can hold no more.” “Yes,” the master replies. “And your mind is like this cup, full of opinions and speculations. How am I to show you Zen unless you empty your cup?” This is a message about the perils of ego, obviously. It’s a message about keeping an open mind. Because the cup also does not have to be full to cause problems. “If this vessel is not clean,” the Roman poet Horace said in the first century BC, “then whatever you pour in goes sour.”


– Seek understanding, not trivia. Whenever you’re consuming anything, don’t just try to find random pieces of information. What’s the point of that? The point is to understand, to build a foundation of real, true wisdom — that you can turn to and apply in your actual life. On the literary snobs who speculate for hours about whether The Iliad or The Odyssey was written first, or who the real author was (a debate that rages on today), Seneca said, “Far too many good brains have been afflicted by the pointless enthusiasm for useless knowledge.”


– Write to think right. Peter Burke, one of Montaigne’s biographers, believed that Montaigne’s essays were precisely that, a man’s “attempt to catch himself in the act of thinking.” Montaigne said that he wrote as though he was speaking to another person. But that doesn’t mean his essays were casual or off the cuff. Montaigne had to sit and really think — the act of his thoughts flowing from his brain, down his arm, through his pen, and onto the page was a process by which much reflection was transcribed, and, since he continued to edit his writing until the day he died, refined. Only a fool goes with their first thought. A wise person takes time to contemplate.


– Create a second brain — a collection of ideas, quotes, observations, and information gathered over time. As Seneca wrote: “We should hunt out the helpful pieces of teaching and the spirited and noble-minded sayings which are capable of immediate practical application — not far-fetched or archaic expressions or extravagant metaphors and figures of speech — and learn them so well that words become works.” (Here’s a video on my method).


– Cultivate empathy. Empathy is as much a practical skill as it is a moral one. If you don’t have the ability to think about what other people think about this or that situation, to imagine how something looks from someone else’s perspective, then you have a very limited view of reality.


– Look at the fish. When Samuel Scudder interviewed for a job with the great Harvard biologist Louis Agassiz in 1864, Agassiz placed a dead fish on a tray in front of him. “Look at the fish,” he said, and then he left the room. Scudder picked it up, turned it over, counted the scales, and drew it. When Agassiz returned, he was unimpressed. “You have not looked very carefully,” he said. “You haven’t even seen one of the most conspicuous features of the animal, which is as plainly before your eyes as the fish itself; look again, look again!” This went on for three days. “Look, look, look,” Agassiz would say. What did Scudder ultimately discover about the fish? Nothing. It wasn’t about the fish. It was about focus — looking long enough and hard enough to truly see what’s in front of you. This is the skill that good, clear, deep thinking depends on.


– Find your scene. “Tell me who you consort with,” Goethe said, “and I will tell you who you are.” You need to find a scene that challenges you, inspires you, exposes you to new ideas, holds you accountable, and pushes you beyond your limits. Put yourself in rooms where you’re the least knowledgeable person. Observe. Ask questions. That uncomfortable feeling when your assumptions are challenged? Seek it out. Let it humble you.


– Assemble a board of directors. It’s important to have a mentor. It’s important to have a scene. But at the highest levels, we must develop a board of directors — people who advise and consult, who check and even correct you. This isn’t a formality but an essential practice to always be learning and improving. Whose collective experiences are you drawing on? Who in your life can tell you that you’re wrong? That you’re being an idiot? We need other voices around us. We need help. We need to be able to yield. Only a fool declines this priceless resource.


– Beware your inner child. Where do your own emotional patterns get in the way of clear thinking? When you’re hurt or betrayed or unexpectedly challenged, pay attention to how you react. Notice the “age” of that reaction. Is it mature, measured, proportional? Or does it feel more like a wounded eight-year-old lashing out? That’s your inner child — the pain you still carry from early experiences, hijacking your adult mind. Good thinking requires the ability to recognize when your inner child has taken the wheel. This is another benefit of having a board of directors — they can serve as parents to our inner child.


– Keep your identity small. This is a rule from the great Paul Graham. His point was that the more you identify with things — being a member of a certain political party, being seen as smart, being seen as someone who drives a fancy car or someone who belongs to this club or that ideology — the harder it is for you to change your mind or entertain new points of view. Stay a free agent!


– Do the work. In Wisdom Takes Work, I quote Seneca: “No man was ever wise by chance.” We must get it ourselves. We cannot delegate it to someone or something else. There is no technology that can do it for you. There is no app. There is no prompt, no shortcut or summary or step-by-step formula. There is no LLM that can spit it out in thirty seconds.


Source: Ryan Holiday

https://ryanholiday.medium.com/26-rules-to-be-a-better-thinker-in-2026-6393399aad3d

Saturday, January 31, 2026

Mandatory seatbelts: what problem were they meant to solve? Did the Legislative Council think this through before legislating penalties?

 

Figure 1: In Greek mythology, Odysseus shoots an arrow through the eyes of twelve axes. Image source: internet

The mandatory seatbelt law ceased to be enforced yesterday. The seatbelt farce, like the great fire at Wang Fuk Court in Tai Po, arose from a chain of unfavourable factors: officials at every level neglected their gatekeeping duties, the factors linked up, and disaster followed. At Wang Fuk Court there were seven: the fire-retardant mesh was substandard; the windows were sealed with styrofoam; seven blocks were undergoing external-wall renovation at once; it was winter, windy and dry; the wind direction favoured the fire's spread; the fire alarms had been switched off; and the fire hoses had no water supply (factors such as the Fire Services Department refusing reinforcements from Shenzhen are not counted). Only with all seven strung together could the great fire happen. Had the wind that day blown from the hills toward the sea, only one or two blocks would have been lost.

This is rather like Homer's Greek epic, The Odyssey. In it, Odysseus has been away from Ithaca so long that on his return not even the gatekeeper recognizes him, and only his archery can prove he is the king: he draws his bow and sends an arrow through the holes of twelve axe-heads.

The seatbelt law was likewise an administrative disaster; fortunately it produced only two minor incidents, an old man getting into a fight and a passenger trapped by a seatbelt, rather than a full calamity. This administrative disaster also required a chain of factors. First, when the Department of Justice drafted the law and submitted it to the Legislative Council for scrutiny, its staff failed to make clear that the commencement date and scope of enforcement covered only buses newly registered on or after 25 January 2026; that is, at first the law would merely be on standby, to be publicized but not enforced. This had to be explained plainly to the Council. Second, legislators scrutinized the bill carelessly. Third, the government had to publicize the law before it took effect, but the department in charge of publicity had no legal adviser vet the legislation. Fourth, after the publicity, no media outlet or concern group vetted the legislation. Fifth, once the law took effect and public anger boiled over, senior officials held forth self-righteously while no one in the bureaucracy, top to bottom, vetted the legislation. Only with these five factors strung together could this farce occur.

The government and the media alike seemed bewitched. This resembles the circumstances of the Wang Fuk Court fire, and it is an omen of Hong Kong's failing fortunes.

Figure 2: The troubles of Uncle Chan on the bus. Image source: online, TVB news screenshot

On the seatbelt affair, I set out the legal principles in this column a few days ago; the veteran politician Chan Yuen-han and former legislator Doreen Kong Yuk-foon then exposed the law in quick succession.


At noon today I posted a bawdy piece of political commentary on Facebook, by way of mockery:

Hong Kong's law mandating seatbelts for bus passengers is actually not wrong: it applies only to new buses registered on or after 25 January 2026. In drafting the law, the government recognized that the current bus fleet is unsuited to mandatory seatbelts, because a bus must reserve room for passengers to move before the mandate can be enforced; otherwise things would be thrust into places that inconvenience other passengers, though some might also rather look forward to the thrusting. And a window-seat passenger wanting to alight must first ask the passenger beside them to unbuckle, and room must be left in front as well, or the two could easily end up in the "Guanyin on the lotus" position, stuck at play for a good few minutes and missing the stop.

With a seatbelt fastened, it is hard to reach the bell to get off, so new buses must have the bell mounted on the seatback in front, so that passengers can ring it without standing up.

Social justice must also be served: the government cannot legislate while leaving standing passengers' safety unprotected, so the newly registered buses envisaged must ban standing, and there will be no double-deckers, lest passengers spend too long outside the seatbelt's protection on the stairs.

In a word: to implement the new law, the bus companies must commission new purpose-built buses, something like a deluxe, enlarged minibus of no more than forty seats. Where will the money come from? Massive government subsidy, of course. With capacity reduced and revenues down, the bus companies will raise fares, and the government will have to subsidize fares as well.

And yet such a thoroughly considered legislative intent went unnoticed by the legislators, unnoticed by the government's own publicity people, and was simply buried. Isn't that a pity?


Appendix: news summary

The new rule mandating seatbelts on buses has stirred continuing controversy. Former Legislative Council member Doreen Kong Yuk-foon posted on the evening of 29 January, pointing out that, on the wording of the legislation, the penalty for not wearing a seatbelt applies only to buses first registered on or after 25 January this year, when the law came into effect, and simply does not apply to the vast majority of buses now in service.

Secretary for Transport and Logistics Mable Chan met the press on the afternoon of 30 January. She admitted that, having sought the Department of Justice's legal advice, the seatbelt provisions were "technically deficient" and "failed to fully reflect the legislative intent"; "for the sake of clarity", the relevant provisions would be deleted as soon as possible, "which is to say, for now there is no statutory requirement for passengers to wear seatbelts on franchised buses."

Mable Chan added that once the provisions had been refined, the Legislative Council would be consulted before they were reintroduced.

Mable Chan's entire press session lasted about ten minutes, but she repeatedly declined to answer when asked whether she would apologize for the blunder, and likewise gave no answer on how long the "speedy" deletion of the provisions would take.


Source: 陳雲

https://www.patreon.com/posts/ba-shi-quan-dai-149563559

Friday, January 30, 2026

How to Become a Better Learner

Everybody hates that feeling when you spend three weeks reading a book, and a month later somebody asks you about it and you can’t remember a damn thing you read. Not only does it make you feel stupid, but it also makes you wonder why the hell you wasted a couple dozen hours of your life on a bunch of words that didn’t stick.

There are better and worse ways to learn. And interestingly, despite all of the babbling that goes on in school when you’re a kid about what you need to learn, not much is said about how to learn effectively.

And when I say “to learn effectively” what I mean is A) to not just accumulate knowledge but B) to be able to apply that knowledge effectively at some point in the future.

By this definition, much of what you did in school was not learning. It was temporary exercises in memorization. By this definition, most of the seminars and courses and books and conferences people spend money on are not learning either.

Something is not truly learned until it changes you in some way, no matter how subtle or simple.


1. Memory Is Based on Relevance

Since my book came out last year, I’ve done probably 500 different promotional things for it. But one of my favorites was participating in this online book club called Mentor Box.

Like most book clubs, Mentor Box sends you a couple of books each month and you’re supposed to read them. But what’s cool about Mentor Box is not only do they send you the books, they send you study material related to the books, as well as video interviews with the authors. And their study materials, instead of working like school, which asks you to repeat information from the book to help you memorize it, are designed to force you to apply the lessons to various areas of your life.

That’s because memory works based on relevance. We’re selfish creatures by nature and we only remember what our brain has deemed important to our own lives. You can learn the coolest thing in the world, but if you don’t find a way to make it relevant to you and your well-being in some way, your brain will conveniently forget it.

If you want to remember information, then you need to stop and take a second to ask yourself, “How is this relevant to me?” or “How can I apply this in my life?” You basically have to get personal with it. And if you’re not willing to get personal or think about your own life critically in that way, then most of the information you consume will just wash away.

Mentor Box is a great tool, but you can do this on your own when you’re reading at home. You can go out and buy a notebook (or keep a folder on your computer) and every time you come across something interesting in a book, write down its application or relevance to something in your life — how you can use the concept, how it explains something in your past, how it can help with your problems, etc.

Basically, you need to approach whatever material you’re studying with a clear purpose in your mind. You can’t just read a book to say you read it. That’s like dating someone just to say you dated them. It’s empty and pointless and soon you’ll forget it ever happened. Go into everything you read with a clear purpose of what you want to get out of it, then do the extra mental steps to make sure that happens.


2. Memory Functions by Association, Not by Blind Recall

We’ve all had that experience of watching a documentary or something, and then when we try to think back a couple days later, not remembering what was in it.

That’s because blindly recalling information out of the blue rarely works, and is not an efficient way for our brain to work.

Our memory works via associations. For instance, I saw a documentary a few years ago about the Soviet Union hockey team. It was one of those things that I not only forgot what was in it, but forgot that I had even watched it.

Then, a couple months ago, I was talking to a guy who was writing a book about teamwork. He mentioned something about hockey and the documentary immediately shot back into my head. I started describing it to him, and suddenly flashes of various scenes and interviews started returning to my conscious memory.

The information had always been in my head, it just hadn’t been accessible because it wasn’t associated or relevant to anything I was discussing.

Understanding that memory works in this way is useful though because it means you can become more economical in what you choose to remember and what you don’t.

In this day and age, where we can Google and Wiki everything, sometimes just remembering the core idea or general principle behind a book or article is useful enough in and of itself. I couldn’t tell you the studies or statistics about men’s job prospects and college graduation rates, but I do know they’re declining and I do remember there’s a famous article from The Atlantic (and book) that I could easily look up if I wanted to know all of that stuff (I just did, it’s here). I remember the principle point is that new technologies are creating an economy where the skills that men excel at are no longer as useful as those that women excel at. I couldn’t tell you anything else about the article, but I know enough to find it, pull it up and grab whatever facts I may need and then move on.


3. Reading Does Not Have to Be Linear

Another mistake a lot of people make is assuming that they have to read everything, line by line, one after another. This is not only not true, but it’s often a waste of time and energy.

If you’re reading a nonfiction book and you already understand the main idea of a paragraph, skip to the next one. If you’re reading a study or story that you’ve heard before, skip it (unless you want to reinforce it, of course). If a book is kind of bad and there’s really only one chapter that sounds appealing, just read that chapter and put the rest away.

When you buy a book, you’re not buying the words, you’re buying the useful ideas. The job of the writer is simply to convey those ideas as efficiently as possible. If the writer is doing a poor job of that, then take it upon yourself and act accordingly.

The point of a book (or article, or video, or podcast) is to glean the information that is relevant and important to you. Not to finish it or to understand every word. What matters is the principle or key idea. Everything else is merely a vehicle designed to get that principle or idea to as many people as possible. Once you’ve received that principle/idea, there’s no reason to feel obligated to sit there and read/watch/listen to the rest (unless you want to).


4. Thinking Critically and Asking the Right Questions

Everything you read should be questioned. You should question the author’s biases, whether they’re interpreting information correctly, whether they’re overlooking something.

One thing I try to force myself to do, especially when I’m reading something I agree with, is ask, “How could this potentially be wrong?”

You’ll be surprised how often you come up with stuff.

Other useful questions to ask after everything you read include:

  • “How does the author benefit from writing this?”
  • “Is this something relevant to my own life and happiness? Is it worth remembering?”
  • “What’s the underlying principle here? How could it be applied to other areas of life?”

The truth is, there’s little that we know with absolute certainty. Most models and theories have little empirical support behind them (looking at you, personality tests), and outside of the hard sciences, much of the academic research out there is flimsy at best, and outright misleading or wrong at worst.

Everything should be taken with a grain of salt (including what I’m writing right here), for the simple reason that almost everything is largely uncertain. And it’s the ability to navigate those uncertainties effectively that will determine the depth of your knowledge and understanding, NOT the simple ability to memorize a bunch of facts and numbers.


Source: Mark Manson

https://markmanson.medium.com/how-to-become-a-better-learner-c1def7f9d5da

Thursday, January 29, 2026

Mandatory seatbelts: what is the underlying problem? Did the legislature consider it before legislating penalties?

 


Why has 陳雲, Hong Kong’s self-styled tutelary deity, not vigorously opposed mandatory seatbelts on buses this time? That is the most mysterious part of this seatbelt controversy!

Last year, I single-handedly opposed the plastic-bag levy, making three videos and writing more than a dozen public posts, and with a former leader of the All-China Federation of Returned Overseas Chinese (全國僑聯) joining the opposing camp, the scheme fell through. On January 25 this year, mandatory seatbelts arrived, and apart from circulating some mocking posts and news items, I did not publicly campaign against them. There are two reasons. First, I do not want the government to target me again. Second, I know this law cannot be enforced. The plastic-bag levy was planned to be implemented through government rubbish bags, so it had to be opposed. Enforcing mandatory seatbelts would require large numbers of enforcement officers conducting inspections, obstructing bus operations, so enforcement will be weak. When a law cannot be applied to the masses, people will treat it laxly, and in effect seatbelt wearing will become voluntary.

To exercise our minds a little, let us think this through together: on what basis can the government compel bus passengers to wear seatbelts, on pain of a HK$5,000 fine and three months’ imprisonment (the maximum penalty)?


The principles of justice and the jurisprudential considerations behind mandatory seatbelts


If I were in the legislature and faced with legislation of this kind, the jurisprudential principles I would consider are as follows:

1. First, investigate the actual situation abroad and in Hong Kong. At present, seatbelt mandates elsewhere are limited to cross-border coaches; on urban buses with standing room, wearing a seatbelt (where fitted) is voluntary, not compulsory. Hong Kong, too, originally mandated seatbelts only on minibuses, which have no standing room. The rationale elsewhere is that cross-border coaches are prone to accidents, hence the mandate, but the penalties are nowhere near as heavy as Hong Kong’s.

2. The problematics of the matter lie here: seatbelts protect seated passengers, so is the government abandoning the standing passengers to their fate? Whether one gets a seat depends on the place and time of boarding, so are those who board at an unfavourable place and an unfavourable time abandoned? This is the deepest principle of justice in whether to mandate seatbelts: if the government is to protect passengers, it must protect all of them as one body, and cannot abandon the standing passengers. On the basis of this latent jurisprudential principle (governments elsewhere may not have debated it in these terms, and I believe they have not), urban buses elsewhere can only encourage seatbelt use and cannot legislate to compel it. Hence, on October 25 last year, I made one public appeal on Facebook: if seatbelts are to be mandatory, buses must abolish standing room! (The post is reproduced in the appendix to this article.) Of course, I would not spell out the jurisprudential reasoning! That is an extremely expensive academic secret.

That concludes the discussion of seatbelts.

(From the above discussion, you will understand why the deep state will never allow 陳雲 into the legislature: I would raise the people’s intelligence, which does not suit the magnates and tycoons who exploit the toiling masses. The arguments I use in debate are not limited to class struggle; they draw on the universal rationality that originates in ancient Greek logic.)


Appendix: 陳雲’s public post calling for the abolition of standing room on buses

陳雲: Sunday sermon. Once bus passengers must buckle their seatbelts, the chances of transmitting disease via skin, sweat and saliva increase greatly. The same goes for minibuses. Does any medical body dare (I say dare!!!!!!!!!!) to test the bacterial load on the seatbelts of these public transport vehicles, and the chemical contamination from whatever disinfection (if any) they receive?

Which term of the Legislative Council passed these laws? Presumably, one in which the pan-democrats were still present.

Why does the legislation not confine itself to dangerous seats (those with no seat in front to shield you, from which you would be thrown straight forward in an accident) but extend to all seats? For convenience? Or does it presuppose that only passengers in dangerous seats will have the sense to buckle up themselves, while the rest may choose freely?

Moreover, for safety’s sake, please legislate to abolish standing room on buses, and do as minibuses do: depart once every seat is filled!

No rites of deliverance.

(Reading intelligence threshold: Homo sapiens.)


Source: 陳雲

https://www.patreon.com/posts/qiang-zhi-quan-149390337


[2] 2025-10-26 11:54 https://www.facebook.com/wan.chin.75/posts/pfbid0zyddDyVN9ehvtX2foNrg7FduRyr6qg2qzrAmRgsSCphsLWDAZ68QUV95seBce81jl

Tuesday, January 27, 2026

English professors double down on requiring printed copies of readings

Amid the rise of artificial intelligence and concerns about distraction, more English professors are turning to no-technology policies that prioritize physical books and reading packets.


This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence.

Many English professors have identified the use of chatbots as harmful to critical thinking and writing. Now, professors who had previously allowed screens in class are tightening technology restrictions.

Professor Kim Shirkhani, who teaches “Reading and Writing the Modern Essay,” explained that for about a decade prior to this semester, she did not require printed readings. This semester, she is requiring all students to have printed options.

“Over the years I’ve found that when students read on paper they're more likely to read carefully, and less likely in a pinch to read on their phones or rely on chatbot summaries,” Shirkhani wrote to the News. “This improves the quality of class time by orders of magnitude.”

As the course director for “Reading and Writing the Modern Essay,” Shirkhani leaves the decision of allowing technology in the classroom up to each individual instructor. Yet others have followed her practice.

Last semester, professor Pamela Newton, who also teaches the course, allowed students to bring readings either on tablets or in printed form. While laptops felt like a “wall” in class, Newton said, students could use iPads to annotate readings and lay them flat on the table during discussions. However, Newton said she felt “paranoid” that students could be texting during class.

This semester, Newton has removed the option to bring iPads to class, except for accessibility needs, as a part of the general movement in the “Reading and Writing the Modern Essay” seminars to “swim against the tide of AI use,” reduce “the infiltration of tech,” and “go back to pen and paper,” she said.

Regarding the printing cost, Newton and Shirkhani both emphasized that Yale has programs to help students who need financial assistance paying for printing.

“I totally get that cost and the burden of that cost,” Newton said in an interview. “I kind of feel like there's going to be a book in most classes that you have to buy, and the course package just sort of replaces a physics textbook.”

Spring semester courses offered a total of 34 TYCO packets this year, up from 20 at the same point last spring, according to archived versions of the TYCO Student Course Packet website. Fall semester courses increased from 30 packets in 2024 to 35 last semester.

TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.

Other English professors are maintaining preexisting no-technology policies.

Professor Nancy Yousef, continuing from her approach at previous schools, has kept a requirement for printed readings.

“The English classroom is increasingly a kind of special place where it’s still possible to converse without the screen,” Yousef said in a phone interview. “AI only seems to make it more imperative to make sure that students are having a direct experience with the text.”

Yousef explained that literature courses are a “practice of attention and a practice of learning how to ask a good question.” Yousef said she hopes students come away from class with greater questions and increased engagement with the texts rather than “a set of bullet points that can go on a PowerPoint.”

Writing professor Anne Fadiman wrote to the News that she asks students to buy the course packet and, when possible, to use physical copies of the books.

“When you read a book or a printed course packet, you turn real pages instead of scrolling, so you have a different, more direct, and (I think) more focused relationship with the words,” Fadiman wrote.

Professors who continue to allow technology in their classroom cite printing costs and concerns about paper usage.

Professor Stephanie Kelley does not require students to bring printed readings and allows technology “for accessibility, cost-related and environmental reasons.” While she has noticed students being distracted during class, such as by online shopping, she wrote to the News that “it can be a lot of paper, most of it going straight in the bin once class is done.”

Kelley wrote that she wonders why the discussion of course material costs “more often falls on humanities classes rather than those with required textbooks that are often prohibitively expensive to rent or purchase.”

In the fall, Yale College Council Senators Siena Valdivia ’28, Alex Chen ’28 and Alexander Medel ’27 — who is a staff writer for the News — sponsored a $3,500 stipend prioritizing first-generation, low-income students to receive financial aid for printing costs. Medel and Senator Aaron Lin ’28 also sponsored a $6,000 stipend to “alleviate the cost of course materials and textbooks for Yale College students.” These stipends come from the YCC budget.


“In an ideal world, printing would be subsumed into the fiscal responsibilities of the university. But under further priority reconfiguring in light of the endowment tax, any such changes face an uphill climb,” Chen wrote to the News, referring to the upcoming increase in the federal tax on Yale’s investment returns, which was enacted as part of President Donald Trump’s One Big Beautiful Bill Act last year.

For Yale students, printing one double-sided black-and-white page on a University printer costs 12 cents.
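That per-page rate makes it easy to estimate what self-printing a packet would cost. As a minimal sketch (the 12-cent rate is from the article; the `packet_cost` helper and the example page counts are hypothetical, and binding is not included):

```python
import math

# Yale's stated rate: 12 cents per double-sided black-and-white page (sheet).
RATE_PER_SHEET = 0.12

def packet_cost(reading_pages: int) -> float:
    """Estimated cost of self-printing `reading_pages` pages, double-sided."""
    sheets = math.ceil(reading_pages / 2)  # two reading pages fit on one sheet
    return round(sheets * RATE_PER_SHEET, 2)

# A hypothetical 300-page course packet:
print(packet_cost(300))  # prints 18.0
```

Even a long packet self-printed at this rate lands well under the $150 figure quoted for the largest bound TYCO packets, though that comparison ignores binding and the students' own time.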


Source: Jolynda Wang

https://yaledailynews.com/articles/english-professors-double-down-on-requiring-printed-copies-of-readings