Friday, March 13, 2026

I just had my paper rejected by an MDPI journal

😤 I just had my paper rejected by an MDPI journal.

Not because of scientific flaws. Not because of plagiarism. 

But because "parts were AI-drafted."


➡️ Here's the problem:

1. MDPI published an editorial stating that "AI-written scientific manuscripts should be generally considered acceptable by the scientific community" (Quaia, Tomography 2025).


2. COPE — the international ethics authority — explicitly allows AI use:

"Authors who use AI tools must be transparent in disclosing how the AI tool was used. Authors are fully responsible for the content."


I did EXACTLY that:

✅ Disclosed AI use in Methods

✅ Specified the tool (www.publicationgod.com - yes, my own)

✅ Took full responsibility for content

✅ Verified every sentence


This reveals a fundamental contradiction in academic publishing:

We're told to be TRANSPARENT about AI use.

Then we're REJECTED for being transparent.


➡️ The result? Researchers will start hiding AI assistance instead of disclosing it. The exact opposite of what COPE and MDPI claim to want.


The question shouldn't be "Was AI used?" It should be "Is the science sound?"


Transparency should be rewarded, not punished.

#AcademicPublishing #ScientificWriting #AIethics #OpenScience


Source: Jens Mittag

https://www.linkedin.com/feed/update/urn:li:activity:7437420702753968128/?originTrackingId=uQfdXY0KwAa6PsudDbT8Tw%3D%3D

Wednesday, March 11, 2026

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI


As AI has upended the way students learn, academics worry about the future of the humanities – and society at large

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world.

It’s an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. “There’s no AI-proof anything,” Pao said. “Rather than policing it, I hope that their overall experiences in this class will show them that there’s a way out.”

It doesn’t always work. Recently, she asked students to visit a local museum, look at a painting for 10 minutes, and then write a few paragraphs describing the experience. It was a purposefully personal assignment, yet one student responded with a sophisticated but drab reflection – “too perfect, without saying anything”, Pao said. She later learned the student had tried to visit the museum on a Monday, when it was closed, and then turned to AI.

As artificial intelligence has upended the way in which students read, learn and write, professors like Pao have been left to their own devices to figure out how to teach in a transformed landscape.

Many faculty members in the hard sciences and social sciences have pointed to the “productivity boost” AI can offer, and the research potential unlocked by its ability to process and analyze vast amounts of data. AI’s most enthusiastic proponents have boasted that the technology may help cure cancer and “accelerate” climate action.

But in fields most explicitly associated with the production of critical thought – what is collectively referred to as the “humanities” – most scholars see AI as a unique threat, one that extends far beyond cheating on homework and casts doubt on the future of higher education itself in a fast-approaching machine-dominated future.

American degrees often cost up to hundreds of thousands of dollars and result in decades of debt, and recent years have seen a freefall in public confidence in US higher education. With the potential for AI to increasingly substitute independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors – almost all of them in the humanities or adjacent fields – about how they are adapting at a time of dizzying technological advancement with few standards and little guidance.

By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”


A 'soulless' education

AI criticism – or “doomerism”, as the technology’s proponents view it – has been mounting across sectors. But when it comes to its impact on students, early studies point to potentially catastrophic effects on cognitive abilities and critical thinking skills.

Michael Clune, a literature professor and novelist, said that, already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Ohio State University, where he teaches, has begun requiring every freshman to take a class in generative AI and pitched itself as the first “AI fluent” university, pledging to embed AI “across every major”.

“No one knows what that means,” Clune said of the plan. “In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students.”

That’s the crux of what many professors in the humanities fear: that technology that may well be a cutting-edge tool in other fields could spell the end of their own.

Alex Karp, the Palantir co-founder and CEO, stoked those anxieties when he said in a recent interview that AI will “destroy humanities jobs”. On the other hand, Daniela Amodei, Anthropic’s president and co-founder – who was a literature major – said the opposite: that “studying the humanities is going to be more important than ever”.

A number of tech and finance companies have recently said that they are looking to hire humanities majors for their creativity and critical thinking skills. Indeed, enrollment data at some universities suggests that the long-struggling humanities might have begun to see a resurgence in the age of AI, with early signs pointing to a reversal of the decades-long decline in English majors in favor of Stem ones.

Some caution that the humanities will survive – but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be “more than enough jobs” for those with vocational training. Indeed, several professors spoke about concerns that AI will exacerbate a widening divide in US higher education and that small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else has a “degraded, soulless form of vocational training administered by AI instructors”, said Zhang.

“I fully expect that we will start seeing a kind of bifurcation in education,” said Matt Seybold, a professor at Elmira College in New York, who has written critically about “technofeudalism”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AI systems grading AI-generated homework – “a conversation between two robots”.

Some universities have adopted AI detection software to catch artificially generated work; others prohibit faculty from directly accusing students of having used AI – as they can often be wrong.

Professors said they resorted to oral interrogations, handwritten notebooks and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI.

Many professors spoke of their frustration at having to sift through students’ artificially generated homework. “It creates hours of additional labor,” echoed Danica Savonick, an English professor at the State University of New York Cortland. “And makes me feel like a cop.”

Some allow students to use AI for research – to a point. Karl Steel, an English professor at Brooklyn College, said that AI has helped make students’ presentations richer and more interesting – but that while they may use it to prepare, he has them speak from minimal notes and stand in front of a photo of a text they annotated by hand. He also assigns written responses to texts only after the class has discussed them. “I suppose they could use their phones to record the conversation, feed a transcript into a chatbot and produce a paper that way,” he said. “But that is more trouble, I think, than most students would take.”


Left to their own devices

Many universities’ administrations are embracing AI for instruction, research and evaluation. In some cases, AI has guided decisions about which programs to cut at times of austerity in the education sector.

More than a dozen universities have partnered with OpenAI on a $50m initiative that the company has said will “accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI”. California State University has joined several of the world’s largest tech companies to “create an AI-powered higher education system”, as the university put it. Multiple universities have introduced AI majors and masters.

The plans are lofty but offer little guidance on what professors are supposed to do with students who can’t read more than a couple paragraphs at a time or turn in essays generated in seconds by a machine. Left largely to themselves, some are trying to articulate clearer lines around AI use, and organize a more coordinated effort against its encroaching dominance.

Last year, the American Association of University Professors, which represents 55,000 faculty members nationwide, published a report warning that universities were adopting the technology “uncritically” and with little transparency. Some university unions have begun incorporating protections against AI in their contracts to establish oversight mechanisms and give faculty greater input – and to protect their intellectual property from feeding machines that may soon take their jobs.

But much organizing against AI remains informal and via word of mouth, with faculty-led initiatives like the website Against AI, which offers resources to those trying to shield students from the intellectual ravages of outsourcing elements of their education to a machine.

“Materials here are intended as solidarity solace for educators who might find themselves inventing wheels alone while their administrators, trustees and bosses unrelentingly hype AI,” reads the website, which offers a list of assignment ideas to mitigate AI use – from oral exams, to requirements students submit photographic evidence of their notes, to analog journals.

Many of the professors interviewed by the Guardian said they ban AI in their classrooms altogether – but recognize their hardline approach is discipline-specific.

Megan McNamara, who teaches sociology at the University of California, Santa Cruz and created a guide for faculty across disciplines to deal with AI-related academic misconduct, noted that “cultural” differences in the humanities versus Stem disciplines, or in qualitative social sciences versus quantitative ones, tend to shape faculty members’ responses to students’ use of AI.

“I think that’s just a function of one’s individual relationship with writing/reading/critical analysis,” she wrote in an email.

Several professors spoke of using the issue as an opportunity to get students to think critically about technology.

When she suspects someone has used AI, McNamara talks to them about it, treating the incident as an “opportunity for growth, restorative justice and enhanced authenticity in student-instructor relationships”, she said.

Eric Hayot, a comparative literature professor at Penn State University, said he tries to convince his students that tech companies are trying to make them “helpless” without their product.

“These companies are giving these technological tools away partly because they’re hoping to addict a generation of students,” Hayot told the Guardian. “This is part of every single class I teach now, talking to students about why I’m not using AI, why they shouldn’t use AI.”


We can decide that we want to be human

Several professors noted that they have also begun to see mounting discomfort among students with the technology – and with technology’s dominance in their lives overall.

Clune, the Ohio State professor, said students have become more curious about his flip phone, which he started using after realizing his smartphone was “destroying” his attention.

“I think the current crop of gen Z students are seeing that they are the guinea pigs in this giant social experiment,” said Zhang, the Berkeley professor.

“There’s a broader and increasing sense from students that something is being stolen from them,” echoed Seybold, the Elmira College professor.

Seybold pointed to students’ mounting disillusion with tech more broadly. Those who are rejecting AI, he added, are often driven by environmental concerns, and suspicion of companies they view as partly responsible for shrinking democracies and a more violent world.

In Michigan, for instance, that has spurred activism. The University of Michigan recently announced plans to contribute $850m toward a datacenter to provide AI infrastructure in collaboration with the Los Alamos National Laboratory – at a time when it is cutting funds for arts and humanities research and on the heels of anti-war protests on campus. A spokesperson for the university said that the planned facility would be smaller and consume less energy than a “typical datacenter”.

As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

“You plant seeds and you hope,” Pao said, of efforts that at times feel like tilting at windmills. “You hope that in the long term you’re helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves.”


Source: Alice Speri

https://www.theguardian.com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

Friday, March 06, 2026

You’re still designing for an architecture that no longer exists

Last Tuesday, I asked Claude to prepare a competitive analysis. Not in a chat window. Not through a prompt. I opened Cowork, pointed it to a folder on my desktop, and said what I needed. It read my files. It cross-referenced data from Slack through a connector. It pulled calendar context. It produced a document — formatted, structured, sourced — and saved it to my working folder. I didn’t open a single application. I didn’t navigate a single menu. I didn’t click through a single interface.

I sat there for a moment, staring at the screen. Not because something had gone wrong — but because nothing looked familiar. The windows were gone. The menus were gone. The entire choreography of opening, navigating, operating, saving, closing, and opening the next thing — the choreography I’d been performing for twenty years — had simply… disappeared.

And that’s when I realized: I wasn’t using a tool. I was working inside a different environment. One that nobody had bothered to name yet.

We keep asking the wrong question. We keep asking how good is the assistant — how well it writes, codes, summarizes, reasons. But that question belongs to the old architecture. What’s actually happening is bigger: the environment itself is changing. The space where we work — the one we’ve inhabited for four decades, the one built on windows and menus and folders and clicks — is being replaced by something structurally different. And if you’re still designing screens, flows, and navigation systems, you might be perfecting the blueprint of a building that’s already been demolished.


Forty years of the same interface

In 1984, Apple introduced the Macintosh and, with it, a way of working that would define every professional environment for the next four decades. The graphical user interface gave us windows, icons, menus, and a pointer — the WIMP paradigm. It was revolutionary. And then it froze.

Think about what changed between 1984 and today. Processing power grew exponentially. Storage went from kilobytes to terabytes. Networks connected billions of devices. Screens went from bulky CRTs to panels in our pockets. But the interaction logic — the fundamental way we relate to our work environment — remained essentially the same. You open an application. You navigate to what you need. You operate it manually, step by step. You save. You close. You open the next one.

The internet didn’t change this. It added connectivity, but you still navigated — now through links instead of folders. Mobile didn’t change it either. It made the environment portable, but you still tapped through apps, scrolled through feeds, clicked through menus. Even cloud computing — which transformed infrastructure — left the interaction surface largely untouched. You were still the operator. The system still waited for your commands.

Everything changed except the interface. Generated with Gemini.

As Satya Nadella put it: “Thirty years of change is being compressed into three years.” But what’s being compressed isn’t just speed or capability. It’s the architecture of the environment itself.

For four decades, the working environment asked you how. How do you want to format this? Which menu holds the function you need? What’s the right sequence of clicks to get from here to there? The entire interface was a map, and your job was to navigate it.

That map is disappearing. And what’s replacing it isn’t a better map — it’s a fundamentally different kind of space.


What Claude is actually showing us

Let me describe what working with Claude looks like today — not theoretically, but practically. Because the shift becomes obvious once you stop thinking about features and start paying attention to the experience.

Cowork reads files on your desktop, modifies documents, creates deliverables, and operates within your working folder — asking for confirmation before significant actions, working autonomously within defined boundaries. It launched in January 2026 and, within weeks, triggered a $285 billion selloff in software stocks. Not because of what it does, but because of what it replaces: the need to open applications at all.

Claude Code doesn’t assist developers — it is the development environment. Engineers describe entire systems in natural language, and Code builds them: writing files, running tests, submitting pull requests, spawning parallel sub-agents for different tasks. It hit $1 billion in run-rate revenue within six months of general availability. Spotify reports that roughly half of all their updates now flow through AI-generated code, with a 90% reduction in engineering time for large-scale migrations.

Claude in Chrome operates your browser — managing calendars, drafting emails, filling forms, extracting data — maintaining context across sessions.

Claude in Excel reads complex multi-tab workbooks, builds pivot tables, pulls live market data through connectors from S&P Global, Moody’s, and FactSet.

Memory doesn’t just store preferences. It maintains hierarchical context — organization-wide policies, project-level standards, individual preferences — and recovers the full state of your working environment in seconds. As one developer described it: “Treat it as system state. The file becomes the source of truth.”

And underneath all of this, MCP — the Model Context Protocol — connects Claude to your entire technology stack: Google Drive, Slack, GitHub, Gmail, Figma, Notion, Salesforce, and thousands more. With 97 million monthly SDK downloads and adoption by OpenAI, Google, and Microsoft, MCP has been donated to the Linux Foundation as an open standard — what Thoughtworks described as one of the fastest standards convergence cycles in recent tech history.
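To make the protocol concrete: MCP is built on JSON-RPC 2.0, in which a client asks a server to invoke one of its exposed tools via a `tools/call` request. The sketch below builds such a message in plain Python; the tool name `search_messages` and its argument are hypothetical stand-ins for what a Slack-style connector might expose, not part of any real server's contract.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (MCP messages follow JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool on a Slack-style connector: search recent messages.
req = make_tool_call(1, "search_messages", {"query": "competitive analysis"})
print(json.dumps(req, indent=2))
```

The point of the uniform envelope is that the client never needs tool-specific plumbing: whatever the server exposes (files, calendars, CRM records) is reached through the same `tools/call` shape.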

Now step back and look at what I just described. Not a chat interface with added capabilities. A working environment — one where the system reads your context, understands your purpose, operates across your tools, and delivers results while you focus on what actually matters.

Nick Turley, OpenAI’s Head of Product, said it plainly: “We never meant to build a chatbot; we meant to build a super assistant, and we got a little sidetracked.”

Everyone got sidetracked. The text box made us think we were talking to a tool. We were actually sitting inside the first draft of a new environment.

Claude Ecosystem Diagram. Generated with Gemini.


The three variables of a new architecture

If this is a new environment and not just a better tool, it should be structurally different — not incrementally improved. And it is. But to see the structure, you need to stop looking at capabilities and start looking at variables. What coordinates define this space that didn’t exist in the previous one?

I’ve spent the last two years studying this question, and what I’ve found is that three variables consistently distinguish this new architecture from everything that came before it.


Intention. In the traditional working environment, you tell the system how to do things. You navigate menus, select options, sequence operations. The system doesn’t know what you want — it knows what you clicked. In the new environment, you express what you want to achieve. The system interprets your purpose, weighs context, and determines the path. Claude doesn’t execute commands; it interprets goals. MCP doesn’t connect tools for the sake of integration; it connects them at the service of what you’re trying to accomplish. This is the shift from procedural thinking to intentional thinking — from operating a machine to having a conversation with a collaborator who understands purpose.

Intention. Generated with Gemini.


Autonomy. In the traditional environment, systems assist. They wait for your next instruction. Every action requires a human operator pressing a button, selecting an option, confirming a step. In the new environment, systems act. Claude Code doesn’t wait for you to dictate each line of code — it plans, executes, tests, iterates, and spawns sub-agents to work in parallel. Cowork doesn’t need step-by-step guidance — it works toward outcomes within boundaries you define. This is not automation in the industrial sense, where machines repeat predefined sequences. This is agency: the capacity to pursue goals through autonomous decision-making while maintaining human oversight.

Autonomy. Generated with Gemini.


Adaptation. In the traditional environment, systems remain fixed. Your software behaves the same way on day one and day one thousand. If you want it to change, you configure it manually — or wait for the next version. In the new environment, systems evolve. Claude’s memory learns your preferences, your team’s standards, your organization’s policies. It gets better at understanding you over time. The interaction isn’t static — it’s alive. What was once a tool that needed to be configured becomes an environment that learns to fit.

Adaptation. Generated with Gemini.


These three variables — intention, autonomy, and adaptation — don’t operate in isolation. They are unified by a fourth principle: orchestration. MCP is the clearest manifestation of this. It’s the connective tissue that allows intentions to flow across tools, autonomous actions to coordinate across systems, and adaptive learning to compound across interactions. As Microsoft’s CTO Kevin Scott observed at Build 2025: “MCP is filling such an unbelievably big need in the ecosystem… it’s really kind of breathtaking.” Atlassian’s CTO Rajeev Rajan called it “the gold standard for how LLMs interact with tools.”

But orchestration isn’t just a protocol. It’s the architectural principle that turns three independent variables into a coherent environment. It’s what makes the difference between a collection of smart features and a fundamentally new working space.

Here’s the critical point: these aren’t features of Claude. They are coordinates of a new architecture. Any system that embodies intention, autonomy, adaptation, and orchestration is operating in this new space — regardless of which company built it. Claude happens to be the most complete manifestation today. But the architecture is bigger than any single product.

Architecture 6 (HOW) vs Architecture 7 (WHAT). Generated with Gemini.


Everyone is describing the same thing

What makes this moment historically significant is that the people building these systems are arriving at the same conclusion from entirely different directions — and most of them don’t realize they’re describing the same thing.

Goldman Sachs’ CIO Marco Argenti writes: “Rather than functioning as one-dimensional applications, AI models are becoming operating systems that independently access tools in order to perform tasks.” Sam Altman told Sequoia Capital: “People in college use it as an operating system.” Mustafa Suleyman, CEO of Microsoft AI, frames it as: “This is going to be the next platform of computing.” Jensen Huang and Siemens announced a partnership to build what they literally called “the Industrial AI Operating System.”

From the design world, the language is different but the observation is the same. John Maeda describes the shift from UX to AX — Agentic Experience — where “designers become orchestrators of experiences rather than crafters of interfaces.” Rachel Kobetz, PayPal’s Chief Design Officer, argues that “the real work of design is orchestrating how intelligence behaves.” Jakob Nielsen, in his 2026 predictions, charts what he calls “a fundamental shift from Conversational UI to Delegative UI.” And Jenny Wen, who leads design for Claude at Anthropic, put it bluntly on Lenny’s Podcast: “This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead.”

Orchestrators. Intelligence that behaves. Delegative rather than conversational. The design process itself declared dead — by the person designing the system that killed it. Everyone is circling the same structural transformation.


I should be honest about where this breaks. Jared Spool makes a fair point: “The world romanticizes AI as an all-powerful, game-changing technology when, in reality, it barely works. But it demos well.” He’s not wrong — intention interpretation still fails, autonomy can produce confidently wrong results, and adaptation has boundaries that aren’t always transparent. The gap between what these systems promise and what they reliably deliver is real, and anyone designing for this architecture needs to take that gap seriously. But the existence of a gap doesn’t invalidate the architecture. The early web had broken links, crashed browsers, and took minutes to load a single page. The architecture was still real. The question was always whether the structural logic would hold as the technology matured. I believe this one will — not because every interaction works today, but because the variables are right.


This is not a product. This is an architecture.

There is a pattern that most people in technology miss because it operates on timescales longer than product cycles.

In 1440, the printing press didn’t improve manuscripts — it created a new architecture for distributing knowledge. In 1876, the Dewey Decimal System didn’t improve bookshelves — it created a new architecture for classifying information. In 1936, the Turing machine didn’t improve calculators — it created a new architecture for computing. In 1984, the graphical interface didn’t improve command lines — it created a new architecture for human-computer interaction. In 1989, the World Wide Web didn’t improve networks — it created a new architecture for connecting information. In 2007, the smartphone didn’t improve phones — it created a new architecture for mobile access.

Each of these was a structural transformation in how humans organize and access information. Not a better version of what came before, but a fundamentally new set of coordinates — new variables, new paradigms, new possibilities that simply didn’t exist in the previous architecture.

Timeline of the 7 Architectures. Generated with Gemini.


What we are witnessing now is the seventh such transformation. The architecture of Intelligence. A structure defined not by windows and clicks and navigation, but by intention, autonomy, adaptation, and orchestration.

I call this Architecture 7 — the seventh structural transformation in how humans organize access to information.

In The Intelligence Architect, I mapped each of these seven transformations in detail: the variables that define them, the design principles that emerge from each shift, and the practical frameworks for building within a new architecture. But you don’t need the book to see what’s happening. You just need to pay attention to what Claude is showing us right now: we are not living through an AI upgrade. We are living through an architectural shift.

And architectural shifts don’t ask for permission. They don’t arrive with a press release explaining what changed. They arrive as a quiet realization that the environment you’ve been working in — the one that felt permanent, the one built on windows and menus and forty years of muscle memory — has already been replaced by something you can’t yet name but can already feel.

Claude is the first Architecture 7 environment built for people who work with information. It won’t be the last. The same structural principles — intention replacing navigation, autonomy replacing manual operation, adaptation replacing static configuration — will reshape environments for entertainment, communication, education, and every domain where humans interact with complex systems. The evidence — from $1 billion run-rates to $285 billion selloffs to 97 million monthly SDK downloads — makes that trajectory unmistakable.


What this means for designers, practically

If the architecture has changed, so has the work. Designing for intention means your wireframes become something closer to operating manuals — not layouts of screens, but descriptions of outcomes the system should pursue. Designing for autonomy means defining boundaries instead of workflows: what the system can do on its own, where it must pause for confirmation, how it recovers when it gets things wrong. Designing for adaptation means building feedback loops rather than preference panels — mechanisms through which the environment learns from each interaction rather than waiting to be manually configured. And designing for orchestration means mapping how intelligence flows across tools, not how users navigate between them. The screen is no longer the unit of design. The intent is.
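"Boundaries instead of workflows" can be made tangible with a toy policy: instead of scripting each step an agent takes, the designer declares which actions it may take unsupervised, which require a human pause, and which are off-limits. Everything here — the action names, the three tiers, the default — is a hypothetical illustration of the design stance, not any product's actual API.

```python
# Hypothetical boundary spec for an autonomous agent. The designer writes
# this once; the agent then plans its own steps within these limits.
BOUNDARIES = {
    "allowed": {"read_file", "draft_document", "search_workspace"},
    "confirm_first": {"send_email", "delete_file", "modify_calendar"},
    "forbidden": {"access_payment_data"},
}

def gate(action: str) -> str:
    """Decide how the environment handles an action the agent proposes."""
    if action in BOUNDARIES["forbidden"]:
        return "refuse"
    if action in BOUNDARIES["confirm_first"]:
        return "pause_for_confirmation"
    if action in BOUNDARIES["allowed"]:
        return "proceed"
    # Unknown actions default to human review rather than silent execution.
    return "pause_for_confirmation"
```

Note what is absent: no sequence, no flowchart. The design artifact is the policy itself — which is exactly the shift from designing workflows to designing boundaries.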


You are already inside it

If you used Claude this week — or ChatGPT, or Copilot, or any AI system that understood your intent, acted autonomously, and adapted to your context — you didn’t use a chatbot. You worked inside a new environment. You just didn’t have the language for it yet.

The language matters. Because when you call it “a chatbot,” you design chat interfaces. When you call it “an assistant,” you design helper features. When you call it “a tool,” you design toolbars. But when you recognize it as a new architecture — with its own variables, its own structural principles — you design differently. You stop asking “where should the button go?” and start asking “how does the system interpret intention?” You stop designing navigation and start designing orchestration. You stop building static configurations and start building adaptive environments.

The forty-year environment built on windows, menus, and manual navigation is giving way to one built on intention, autonomy, adaptation, and orchestration. Every major voice in technology and design is converging on this observation from different angles. The question is no longer whether the shift is happening.

The question is whether you’ll design for the architecture that’s arriving — or keep drawing menus for the one that already left.


Source: Adrian Levy

https://uxdesign.cc/youre-still-designing-for-an-architecture-that-no-longer-exists-28b0b10900dd

Friday, February 13, 2026

26 Rules to Be a Better Thinker in 2026

A couple of years ago, I asked Robert Greene what he thought about AI. “I think back to when I was 19 years old and in college,” Robert said. It was a class where they had to read and translate classical Greek texts. “They gave us a passage of Thucydides, the hardest writer of all to read in ancient Greek,” he explained. “I had this one paragraph I must have spent ten hours trying to translate…That had an incredible impact on me. It developed character, patience, and discipline that helps me even to this day. What if I had ChatGPT, and I put the passage in there, and it gave me the translation right away? The whole thinking process would have been annihilated right there.”

What does he mean by “thinking process”? He means the slow, tedious, difficult work of figuring something out for yourself. The discipline. The patience. The hours and hours of sitting with frustration and confusion on your way to knowledge and understanding.

This is why I do all my research on physical notecards. It is not fast, easy, or efficient. And that is the point. Writing things down by hand forces me to engage and struggle with the material for an extended period of time. It forces me to take my time. To go over things again and again. To be immersed. To be focused, patient, and disciplined. To come to understand things deeply.

People are talking about what AI is going to replace, that it’s the sum total of all human knowledge, that it’s going to make expertise obsolete. And it’s true it will do a lot and it is unbelievably powerful, but in many ways it makes thinking even more important. You have to be able to interpret what it spits out. You need to know when something’s off. Without domain expertise, without the ability to think critically, to question, to push back, you’ll be fooled. Again and again.

The irony of AI, this cutting-edge technology, is that it makes the humanities more valuable than ever. It makes brainpower even more important. Reading. Knowing things. Having taste. Understanding context. Detecting lies or nonsense. In short: being a discerning, critical, clear thinker.

The tools are only getting more powerful. The noise is only getting louder. We’re being bombarded with more information than any generation in history, and I worry — from some of the emails I get, from the comments I see — that too many people just don’t have the ability to wrap their heads around what’s being thrown at them. Which makes clear thinking one of the most essential skills of our time.


What follows is my advice for what you’re going to need more than ever in this brave new world — 26 rules for becoming a better thinker.


– Take another think. The problem with our thoughts is that they’re often wrong — sometimes preposterously so. Nothing illustrates this quite like what’s called an “eggcorn,” words or expressions we confidently mishear and then contort to match our misperception. “All for not” instead of all for naught. “All intensive purposes” instead of all intents and purposes. But the greatest eggcorn is doubly ironic: people who say “you’ve got another thing coming” are, in fact, proving the point of the actual expression, “you’ve got another think coming.” We need to be able to slow down and use a second think. Especially when we’re sure what we think is right. (And by the way, at least 50% of the time I have to ask ChatGPT to think again because its answers are obviously wrong.)


– Take walks. For centuries, thinkers have walked many miles a day — because they had to, because they were bored, because they wanted to escape the putrid cities they lived in, because they wanted to get their blood flowing. In the process, they discovered an important side-effect: it cleared their minds and made them better thinkers. Tesla discovered the rotating magnetic field — one of the most important scientific discoveries in modern history — on a walk through a Budapest park in 1882. Hemingway took long walks along the quais in Paris whenever he was stuck and needed to think. Nietzsche — who conceived of Thus Spoke Zarathustra on a long walk — said: “It is only ideas gained from walking that have any worth.” I have never taken a walk without thinking, after, “I am so glad I did that.”


– Embrace contradiction. F. Scott Fitzgerald said, “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.” The world is complicated, ambiguous, paradoxical. To make sense of it, you must be able to balance conflicting truths.


– But don’t confuse complexity with nonsense. Stupid people are especially good at having a bunch of contradictory thoughts in their head at once. So the first-rate mind Fitzgerald described isn’t just about tolerating contradiction — it’s really about the ability to examine and interrogate it. It’s asking, Does this actually make sense?


– Go to first principles. Aristotle taught that one must go to the origins of things, go all the way to the primary truth of the matter, instead of just accepting common observation or belief. Don’t just blindly accept what everyone else seems to say or believe. Go to first principles. Instead of engaging with an issue from a headline, a tweet, or a take, go to the beginning. Break things down and build them back up. Put every idea to the test, the Stoics said. The good thinker approaches things with a fresh set of eyes and an open mind.


– Think for yourself. Generally, people just do what other people are doing and want what other people want and think what other people think. This was the insight of the philosopher René Girard, who coined the theory of mimetic desire. He believed that since we don’t know what we want, we end up being drawn — subconsciously or overtly — to what others want. We don’t think for ourselves, we follow tradition or the crowd.


– Don’t be contrarian for contrarian’s sake. Peter Thiel, widely considered a “contrarian” (and a big fan of Girard), once told me that being a contrarian is actually a bad way to go. You can’t just take what everyone else thinks and put a minus sign in front of it. That’s not thinking for yourself. So in fact, if you find yourself constantly in opposition to everyone and everything (or most consensuses), that’s probably a sign you’re not doing much thinking. You’re just being reactionary.


– Ask good questions. When Isidor Rabi came home from school each day, his mother didn’t ask about grades or tests. “Izzy,” she would say, “did you ask a good question today?” This doesn’t seem like much, and yet it is everything. After all, questions drive discovery. The habit of asking questions turned Rabi into one of the greatest physicists of his time — a Nobel Prize winner whose work led to the invention of the MRI. Questions are the key not just to knowledge but to success, discovery, and mastery. They’re how we learn and how we get better. And they don’t have to be brilliant, probing, or incisive. They can be simple: “What do you mean?” They can be inquisitive: “How does that work?” They can aim for clarity: “Sorry, I didn’t understand, can you explain it another way?” The point is to stay curious. To never stop asking questions.


– Watch your information diet. When I’m not feeling great physically — tired, irritable, sluggish — usually it’s because I’m eating poorly. In the same way, when I feel mentally scattered and distracted — I know it’s time to focus on cleaning up my information diet. In programming, there’s a saying: “garbage in, garbage out.” Aim to let in the opposite of garbage. Because that leads to the opposite of garbage coming out.


– Go deep. I thought I knew a lot about Lincoln. I’d read biographies, watched documentaries, interviewed scholars, visited the sites. I’d even written about him in my books. So when I sat down to write about him in Part III of Wisdom Takes Work, I thought I was set. I wasn’t even close. So I went deeper. I read Hay and Nicolay. Doris Kearns Goodwin’s 944-page Team of Rivals. Michael Gerhardt’s 496-page book on Lincoln’s mentors. David S. Reynolds’s 1088-page Abe. David Herbert Donald’s 720-page Lincoln. Garry Wills’s Pulitzer Prize-winning book on the Gettysburg Address. I spoke with the documentarian Ken Burns about him, and Doris too. I read Lincoln’s letters and speeches. I went, multiple times while writing the book, to the Lincoln Memorial. In the end, I spent hundreds of hours reading thousands and thousands of pages on the man. Basically, I “dug deeply,” as Lincoln’s law partner once said of Lincoln’s own approach to learning, in order to get to the “nub” of a subject. This is a skill you need. Whether you’re an author, politician, lawyer, entrepreneur, scientist, educator, parent — you have to be able to pursue an idea, a question, a thread of curiosity until you’ve gotten to the nub and wrapped your head completely around it.


– Don’t just read, re-read. A lot of people read, not enough people re-read. Don’t just read books, re-read books. There’s a great line the Stoics loved — that we never step in the same river twice. The books don’t change, but you do.


– Seek out people who disagree with you. In 1961, the Navy sent Commander James Stockdale to Stanford to study Marxist theory. Not criticisms of Marxism — primary sources. Marx. Lenin. The works. His parents had taught him: you can’t compete against something you don’t understand. A few years later, Stockdale was shot down over North Vietnam and spent seven years being tortured in the Hanoi Hilton. His knowledge of Marxism proved essential — he understood the ideology better than his interrogators did. Seneca said we should read dangerous ideas “like a spy in the enemy’s camp.”


– Ego is the enemy. Epictetus reminds us that “it’s impossible to learn that which you think you already know.” The physicist John Wheeler said that “as our island of knowledge grows, so does the shore of our ignorance.” Conceitedness is the primary impediment to wisdom. That’s something I often find with AI, its quickness and confidence in its answers…which are laughably wrong. If you want to stay humble, focus on all that you still don’t know. After all, isn’t that the Socratic method?


– Beware the Gell-Mann amnesia effect. Named after the Nobel Prize-winning physicist Murray Gell-Mann, the Gell-Mann amnesia effect is the term for a familiar experience: You read an article about something you know well, and you recognize that it’s full of errors, it’s missing context, it’s grossly oversimplifying things. You can’t believe something so bad got published. Then you turn to an article on something you know little about — foreign policy, international affairs, the economy, pop culture — and believe every word. It’s not just that the media exaggerates and sensationalizes. It’s actually worse: Most of the time they don’t even know what they’re talking about. The same goes for AI, which is trained on many of those error-filled sources. I’ve had ChatGPT confidently butcher things I know well. Why would I unquestioningly trust it on things I don’t? The problem is we don’t know what we don’t know. Which means we don’t know when we’re being fooled.


– Be flexible. A colleague of Churchill once observed that Churchill “venerated tradition but ridiculed convention.” The past was important, but it was not a prison. The old ways — what the Romans called the mos maiorum — were important but not to be mistaken as perfect. Plenty of people have been buried in coffins of their own making. Before their time too. Because they couldn’t understand that “the way they’d always done things” wasn’t working anymore. Or that “the way they were raised” wasn’t acceptable anymore. We must cultivate the capacity for change, for flexibility and adaptability. Continuously, constantly.


– Empty the cup. There is an old Zen story about a master who receives a student for tea. As the visitor extends their cup, the master pours…and pours, and pours. The cup begins to overflow. Finally, the student says something: “Stop! The cup is full. It can hold no more.” “Yes,” the master replies. “And your mind is like this cup, full of opinions and speculations. How am I to show you Zen unless you empty your cup?” This is a message about the perils of ego, obviously. It’s a message about keeping an open mind. Because the cup also does not have to be full to cause problems. “If this vessel is not clean,” the Roman poet Horace said in the first century BC, “then whatever you pour in goes sour.”


– Seek understanding, not trivia. Whenever you’re consuming anything, don’t just try to find random pieces of information. What’s the point of that? The point is to understand, to build a foundation of real, true wisdom — that you can turn to and apply in your actual life. On the literary snobs who speculate for hours about whether The Iliad or The Odyssey was written first, or who the real author was (a debate that rages on today), Seneca said, “Far too many good brains have been afflicted by the pointless enthusiasm for useless knowledge.”


– Write to think right. Peter Burke, one of Montaigne’s biographers, believed that Montaigne’s essays were precisely that, a man’s “attempt to catch himself in the act of thinking.” Montaigne said that he wrote as though he was speaking to another person. But that doesn’t mean his essays were casual or off the cuff. Montaigne had to sit and really think — the act of his thoughts flowing from his brain, down his arm, through his pen, and onto the page was a process by which much reflection was transcribed, and, since he continued to edit his writing until the day he died, refined. Only a fool goes with their first thought. A wise person takes time to contemplate.


– Create a second brain — a collection of ideas, quotes, observations, and information gathered over time. As Seneca wrote: “We should hunt out the helpful pieces of teaching and the spirited and noble-minded sayings which are capable of immediate practical application — not far-fetched or archaic expressions or extravagant metaphors and figures of speech — and learn them so well that words become works.” (Here’s a video on my method).


– Cultivate empathy. Empathy is as much a practical skill as it is a moral one. If you don’t have the ability to think about what other people think about this or that situation, to imagine how something looks from someone else’s perspective, then you have a very limited view of reality.


– Look at the fish. When Samuel Scudder interviewed for a job with the great Harvard biologist Louis Agassiz in 1864, Agassiz placed a dead fish on a tray in front of him. “Look at the fish,” Agassiz said, and then he left the room. Scudder picked it up, turned it over, counted the scales, and drew it. When Agassiz returned, he was unimpressed. “You have not looked very carefully,” he said. “You haven’t even seen one of the most conspicuous features of the animal, which is as plainly before your eyes as the fish itself; look again, look again!” This went on for three days. “Look, look, look,” Agassiz would say. What did Scudder ultimately discover about the fish? Nothing. It wasn’t about the fish. It was about focus — looking long enough and hard enough to truly see what’s in front of you. This is the skill that good, clear, deep thinking depends on.


– Find your scene. “Tell me who you consort with,” Goethe said, “and I will tell you who you are.” You need to find a scene that challenges you, inspires you, exposes you to new ideas, holds you accountable, and pushes you beyond your limits. Put yourself in rooms where you’re the least knowledgeable person. Observe. Ask questions. That uncomfortable feeling when your assumptions are challenged? Seek it out. Let it humble you.


– Assemble a board of directors. It’s important to have a mentor. It’s important to have a scene. But at the highest levels, we must develop a board of directors — people who advise and consult, who check and even correct you. This isn’t a formality but an essential practice to always be learning and improving. Whose collective experiences are you drawing on? Who in your life can tell you that you’re wrong? That you’re being an idiot? We need other voices around us. We need help. We need to be able to yield. Only a fool declines this priceless resource.


– Beware your inner child. Where do your own emotional patterns get in the way of clear thinking? When you’re hurt or betrayed or unexpectedly challenged, pay attention to how you react. Notice the “age” of that reaction. Is it mature, measured, proportional? Or does it feel more like a wounded eight-year-old lashing out? That’s your inner child — the pain you still carry from early experiences, hijacking your adult mind. Good thinking requires the ability to recognize when your inner child has taken the wheel. This is another benefit of having a board of directors — they can serve as parents to our inner child.


– Keep your identity small. This is a rule from the great Paul Graham. His point was that the more you identify with things — being a member of a certain political party, being seen as smart, being seen as someone who drives a fancy car or someone who belongs to this club or that ideology — the harder it is for you to change your mind or entertain new points of view. Stay a free agent!


– Do the work. In Wisdom Takes Work, I quote Seneca: “No man was ever wise by chance.” We must get it ourselves. We cannot delegate it to someone or something else. There is no technology that can do it for you. There is no app. There is no prompt, no shortcut or summary or step-by-step formula. There is no LLM that can spit it out in thirty seconds.


Source: Ryan Holiday

https://ryanholiday.medium.com/26-rules-to-be-a-better-thinker-in-2026-6393399aad3d

Saturday, January 31, 2026

Mandatory seatbelts: where was the problem awareness? Did the legislature think it through before legislating penalties?

Figure 1: In Greek myth, Odysseus shoots an arrow through the sockets of twelve axes. Image source: internet

Enforcement of the mandatory seatbelt law was suspended yesterday. The seatbelt farce, like the great fire at Wang Fuk Court in Tai Po, was a chain of unfavorable factors: because officials at every level of gatekeeping were negligent, these factors linked up into disaster. At Wang Fuk Court, seven factors joined in series: substandard fire-retardant netting, windows sealed with styrofoam, seven towers undergoing external-wall renovation at once, dry and windy winter weather, a wind direction favorable to spreading the fire, fire alarms switched off, and fire hoses without water (not counting factors such as the Fire Services Department's refusal of reinforcements from Shenzhen). Only with all seven linked did the great fire occur. Had the wind that day blown from the hills toward the sea, only one or two towers would have been lost.

This is like Homer's Greek epic The Odyssey. In it, Odysseus has been away from Ithaca so long that on his return even the gatekeepers cannot recognize him, and only his archery can prove he is the king: he draws his bow and shoots an arrow through the sockets of twelve axes.

The seatbelt law was likewise an administrative disaster; fortunately it produced only two minor incidents, an old man's brawl and a passenger trapped by a seatbelt, rather than a catastrophe. This administrative disaster also required a chain of factors. First, when the Department of Justice drafted the law and submitted it to the Legislative Council for scrutiny, staff did not clearly explain the commencement date and scope of enforcement: the requirement applied only to buses newly registered on or after 25 January 2026, which is to say the law was at first merely on standby, available for publicity but not yet for enforcement. This had to be explained clearly to the legislature. Second, legislators scrutinized the bill carelessly. Third, the government had to publicize the law before it took effect, but the department responsible for publicity had no legal adviser review the text. Fourth, after the publicity, no media outlet or concern group reviewed the text. Fifth, once the law took effect and public anger boiled over, senior officials spoke righteously while no one in any department, top to bottom, reviewed the text. Only with these five factors in series could this farce occur.

The government and the media all seemed bewitched. The situation resembles that of the Wang Fuk Court fire: an omen of Hong Kong's failing fortune.

Figure 2: The troubles of Uncle Chan on the bus. Image source: online, TVB news screenshot

On the seatbelt affair, I wrote about the legal principles in this column a few days ago; afterwards the veteran politician Chan Yuen-han (陳婉嫻) and former legislator Doreen Kong (江玉歡) exposed the law in quick succession.


At noon today I posted a piece of bawdy political commentary on Facebook, to mock it a little:

Hong Kong's law mandating seatbelts for bus passengers is actually not wrong: the law applies only to new buses registered on or after 25 January 2026. In drafting it, the government recognized that current buses are unsuitable for mandatory seatbelts, because buses must reserve room for passengers to move before enforcement is possible; otherwise something would be inserted into places that inconvenience other passengers, though some might rather look forward to the insertion. And a window-seat passenger wanting to alight must first ask the aisle passenger to unbuckle, and room must be reserved in front as well; otherwise you easily end up in the lotus position astride your neighbor, good for a few minutes at least, and miss your stop.

With a seatbelt fastened, pressing the bell to alight is difficult, so new buses must mount the bell on the seatback in front, so it can be pressed without standing.

Social justice must also be considered: the government cannot legislate while neglecting the safety of standing passengers. Hence the envisaged newly registered buses must surely ban standing, and there will be no double-deckers, lest passengers be out of the seatbelt's protection too long on the stairs.

In a word: to implement the new law, the bus companies must order new, purpose-built buses for the task! Something like deluxe, enlarged minibuses with no more than forty seats. Where will the money come from? A massive government subsidy, of course. And with reduced capacity the bus companies' revenue falls, so besides fare increases the government must also subsidize fares.

Such a considerate legislative intent, yet noticed by no legislator and no government publicist, buried just like that. Isn't it a pity?


Appendix: news summary

The new rule mandating seatbelts on buses has drawn continuous controversy. On the evening of 29 January, former Legislative Council member Doreen Kong (江玉歡) posted that, on the wording of the ordinance, the penalty for not wearing a seatbelt applies only to buses first registered on or after 25 January this year, the date the law took effect, and simply does not apply to the vast majority of buses currently in service.

Secretary for Transport and Logistics Mable Chan (陳美寶) met the press on the afternoon of 30 January and admitted that, after seeking the Department of Justice's legal advice, the seatbelt provisions were "technically deficient" and "failed to fully reflect the legislative intent." "For clarity," the relevant provisions would first be deleted as soon as possible, "which is to say, there will for now be no statutory requirement for passengers to wear seatbelts on franchised buses."

Chan added that after the provisions are improved, the Legislative Council will be consulted before they are reintroduced.

Chan's whole press appearance lasted about ten minutes, but she gave no response when repeatedly asked whether she would apologize for the blunder, nor to questions about how long the "speedy" deletion of the provisions would take.


Source: 陳雲

https://www.patreon.com/posts/ba-shi-quan-dai-149563559

Thursday, January 29, 2026

Mandatory seatbelts: where was the problem awareness? Did the legislature think it through before legislating penalties?


Why did Chin Wan (陳雲), Hong Kong's self-styled local earth god, not fight hard against mandatory bus seatbelts this time? That is the most mysterious part of this seatbelt controversy!

Last year I fought hard against the plastic bag charge, making three videos and writing more than a dozen public posts, and the matter was dropped after a former leader of the All-China Federation of Returned Overseas Chinese joined the opposition. On 25 January this year, mandatory seatbelts arrived, and beyond circulating some mocking posts and news items I did not publicly campaign against them, for two reasons. First, I did not want the government to target me again. Second, I knew the law could not be enforced. The bag charge was planned to be enforced through government rubbish bags, so it had to be opposed. Mandatory seatbelts would require large numbers of enforcement officers making inspections and obstructing bus operations, so enforcement capacity is weak. When a law cannot punish the many, people treat it laxly, and in the end it becomes voluntary seatbelt wearing.

To give everyone some mental exercise, let us think through together: on what basis can the government force bus passengers to wear seatbelts, on pain of a five-thousand-dollar fine and three months' imprisonment (the maximum penalty)?


The justice principles and jurisprudential considerations of mandatory seatbelts


If I were in the legislature facing such a bill, the legal principles I would consider are as follows:

1. First, investigate the actual situation abroad and in Hong Kong. At present, seatbelt mandates elsewhere are limited to cross-border coaches; on urban buses with standing room, wearing a seatbelt (where fitted) is voluntary, not compulsory. Hong Kong too originally limited the mandate to minibuses, which have no standing room. The rationale abroad is that cross-border coaches are prone to accidents, hence the mandate, but the penalties are nowhere near as heavy as Hong Kong's.

2. The problematics of the matter: seatbelts protect seated passengers, but what of the standing passengers with no seats? Is the government abandoning their protection and leaving them to fate? Whether you get a seat depends on where and when you board; are those who board at an unfavorable place and time therefore abandoned? This is the deepest justice principle of whether to mandate seatbelts: if the government is to protect passengers, it must protect all of them as one body; it cannot abandon the standing passengers. On this latent legal principle (governments elsewhere may never have debated it this way, and I believe they have not), urban buses elsewhere can only encourage seatbelt use, not mandate it by law. Hence on 25 October last year I made one public appeal on Facebook: if seatbelts are to be mandatory, buses must abolish standing room! (The post is in the appendix to this article.) Of course, I would not spell out the legal reasoning; that is extremely expensive academic secret knowledge.

That concludes the discussion of the seatbelt question.

(From the above discussion you will understand why the deep state will never allow Chin Wan into the legislature: I would raise the people's intelligence, to the disadvantage of the magnates and tycoons who exploit the toiling masses. The reasoning I use in debate is not limited to class struggle; it derives from the universal rationality of ancient Greek logic.)


Appendix: Chin Wan's public post calling for the abolition of standing room on buses

Chin Wan: Sunday sermon. Once buses require seatbelts to be fastened, the chances of transmitting infections carried by skin, sweat, and saliva rise greatly. Likewise for minibuses. Is there any medical body with the guts (I said guts!!!!!!!!!!) to test the bacterial load on the seatbelts of these public transport vehicles, and the chemical contamination from any disinfection?

Which term of the Legislative Council passed these laws? Presumably, of course, a term with the pan-democrats still in it.

Why was the legislation not limited to the dangerous seats (those with no seat in front to stop you flying straight out in a crash) but extended to all seats? Out of convenience? Or did it presume that only passengers in dangerous seats would have the sense to buckle up, while the rest could choose freely?

Moreover, for safety's sake, please legislate to abolish standing room on buses, as on minibuses: depart once every seat is filled!

No deliverance.

(Reading intelligence threshold: Homo sapiens)


Source: 陳雲

https://www.patreon.com/posts/qiang-zhi-quan-149390337


[2] 2025-10-26 11:54 https://www.facebook.com/wan.chin.75/posts/pfbid0zyddDyVN9ehvtX2foNrg7FduRyr6qg2qzrAmRgsSCphsLWDAZ68QUV95seBce81jl

Tuesday, January 06, 2026

東征西怨與簞食壺漿:評美國擄走委內瑞拉總統之是非

星期二深夜講道,評委內瑞拉,說春秋大義!有人說,特朗普攻入委內瑞拉,擄走馬杜羅,是違反國際法,破壞主權,如此,則普京在二〇二二年春天攻入基輔也是可以諒解云云。然則,此事在孔孟的春秋戰國,如何看待?


首先,看事實,普京攻入基輔,烏克蘭百姓有否上街跳舞慶祝?沒有。反之,俄軍攻入烏克蘭遇到的不是烏克蘭人民夾道歡迎,而是交相唾罵,烏克蘭阿婆將向日葵種子塞入俄國士兵衣袋,希望他能做到唯一一件好事:在他於烏克蘭戰場成為屍體之後,可以長出一株株向日葵。現在,特朗普攻入委內瑞拉,不單是委內瑞拉的人慶祝,英國也有人說,為什麼不擄走我們的首相Starmer?

至於委內瑞拉這些殘暴統治、通脹飛起、人民四處流竄逃命的失敗國家,其主權又是什麼回事呢?古人謂之:國將不國!


我舉出兩條古書,給大家看看孔孟之道。首先是東征西怨,其次是簞食壺漿。在我讀中學的香港英治時代,這是必須讀過的兩則古文或成語故事。


東征西怨:出兵征討東方,而西方的百姓埋怨不先來解救自己。語出《書經.仲虺之誥》:「乃葛伯仇餉,初征自葛,東征西夷怨,南征北狄怨。曰:『奚獨後予。』」形容人民對仁義之師的盼望。商湯王初次征伐葛伯,討伐東方異族,西方夷人就埋怨不先去救他們;征伐南方狄人,東方夷人也抱怨不先去救他們,說:「為什麼只留下我們?」你以為這是古書將商湯王的仁政浪漫化?你現在就親眼看到實例!窮國老百姓見到特朗普,都說Sir, this way.


簞食壺漿,以迎王師:以簞盛食,以壺盛漿來迎王師。指軍隊受到人民的擁護與愛戴,紛紛慰勞犒賞。語出《孟子.梁惠王下》:齊人伐燕,勝之。宣王問曰:「或謂寡人勿取,或謂寡人取之。以萬乘之國伐萬乘之國,五旬而舉之,人力不至於此。不取,必有天殃。取之,何如?」孟子對曰:「取之而燕民悅,則取之。古之人有行之者,武王是也。取之而燕民不悅,則勿取。古之人有行之者,文王是也。以萬乘之國伐萬乘之國,簞食壺漿,以迎王師,豈有它哉?避水火也。如水益深,如火益熱,亦運而已矣。」

語譯:齊國攻打燕國並取得勝利後,齊宣王詢問孟子:「有人說我不該佔領燕國,有人說應該佔領。我以強國攻打強國,只用了五十天就拿下,這不可能只是人力辦到的,不佔領恐怕會招致天譴。如果佔領,怎麼樣呢?」

孟子回答:「如果你佔領之後,燕國百姓感到歡喜,那就佔領,古人中有這麼做的,如武王。如果佔領而燕國百姓不悅,那就不要佔領,古人中也有這樣的,如文王。像這樣一個大國攻打另一大國,結果百姓端著飯食與水來迎接軍隊,這不是別的原因,而是因為他們要逃避暴政之苦。如果你帶去的不是解救,而是更嚴重的壓迫,那麼百姓就會像逃避水火一樣地逃離,政權也就會隨之覆滅。」


孔孟之道,是士大夫的常理,也是老百姓的常理。新課程不要背誦成語故事,不讀《孟子》,就是要下一代不懂得常理,只懂得國際法


Source: 陳雲

https://www.patreon.com/posts/dong-zheng-xi-yu-147558590

https://www.facebook.com/wan.chin.75/posts/pfbid04VqsV8Z4SiBFE9QUDqYZoPD8mURxZawNhBhqu38m7fJXC2GNbhGDDoyKJUaGN1iBl

Monday, January 05, 2026

Trump deploys matrix tactics militarily: Iran and Russia are in for a fright, and the CCP too can neither eat nor sleep in peace

 

Figure 1: After US forces raided Venezuela and carried off President Maduro and his wife, Trump named Mexico, Colombia, and Cuba as threats to US security, and declared that America absolutely needs Greenland. (Image sources: businessfocus, Sing Tao Headline, New York Post)

Trump's matrix tactics are a political linkage effect: using multiple political and economic strongpoints to trigger combined results. To execute matrix tactics, you need one grand program of action, divided into several agendas that all fold back into the program. When you meet an obstacle, you go around it and attack it from other strongpoints. This raid on Venezuela, for instance, came because Putin refused to agree to a ceasefire; Trump therefore first did the other, smaller things already ripe for action, lighting fires one by one to generate a linkage effect that ultimately burns Putin into accepting a ceasefire on worse terms. If Putin keeps holding out, the CCP may in the end be pressed into attacking Taiwan within a year or two, to be dealt with by America. A truly perilous game!

Tariff wars and trade sanctions are only routine, sustained pressure; the final resolution still requires the iron fist of war! Does everyone remember the Hudson Institute plan for taking over from the CCP that I wrote about? We discussed it in the culture salon too. Many thought it fantasy, but this raid on Venezuela and the takeover of the country show that America says what it means and does what it says. Venezuela's Chinese-supplied defense weaponry, its radar systems and the like, was routed wholesale under the test of the American raid; even the Huawei phone Maduro claimed could not be penetrated helped the American military track his position.


The first shot of the Red Horse and Red Goat

Entering 2026, the first fire of the Red Horse and Red Goat years (赤馬紅羊, traditionally years of calamity) was lit not in the Taiwan Strait or between China and Japan, but in South America. US forces raided Venezuela on 2 January and captured President Maduro and his wife alive. President Trump then named Mexico, Colombia, and Cuba as threats to US security; told accompanying reporters aboard Air Force One that he was watching the situation in Iran closely and that the authorities would suffer heavy blows if they killed protesters; and said that America absolutely needs Greenland.

Observers see this as confirming the Trump-style Monroe Doctrine set out in Washington's National Security Strategy late last year. China and several South American countries said that America's abduction of Maduro violated international law; within the Western camp, only European Commission President von der Leyen spoke up immediately, publicly supporting a peaceful democratic transition in Venezuela. Ukrainian President Zelensky seized the moment to issue a statement hinting that America should consider arresting Russian President Putin: "If such a dictator can be dealt with, then America knows what to do next."

Although at his 3 January press conference Trump explained the operation's legitimacy in detail, listing how the Maduro regime threatened America through drug smuggling, sending gangs across the border, stealing the assets of American oil companies, and bringing hostile powers into the Western Hemisphere; and although he explained that America is administering Venezuela temporarily because an immediate withdrawal and handover might repeat the failures of recent decades, and he would not risk letting power pass to another man with no care for the Venezuelan people's interests, my conclusion is four characters: matrix tactics. Xiang Zhuang performs the sword dance, but his target is the Duke of Pei. The big boss Trump means to strike is, presumably, at present neither eating nor sleeping in peace.


From oil power to hundredfold inflation: Venezuela's pro-China road

Figure 2: Venezuela is South America's great oil producer and its per-capita GDP once ranked fourth in the world, but the whole economy depends on oil revenue, leaving inflation hostage to oil prices and the economy in rapid decline. (Image sources: yahoo, HK01)

First, a word on Venezuela, South America's great oil producer (what it produces is heavy crude, harder to refine than the Arab states' light crude but cheap, and the CCP imports a great deal of it). Its per-capita GDP once ranked fourth in the world, but from the 1970s it nationalized the oil industry, and the whole economy came to depend on oil revenue, leaving inflation hostage to oil prices: from 7.2% in 1978 it surged to 81% in 1989, and the country had to borrow abroad.

From 2000, China exchanged loans and direct investment in energy, home appliances, cars, and defense systems for Venezuelan oil, a total of sixty billion US dollars to date, while Venezuela gradually distanced itself from Europe and America. After Maduro took office in 2014, world oil prices fell from 115 US dollars a barrel to 35 by February 2016; Venezuela's economy collapsed rapidly, inflation reached 800%, food ran short nationwide, and public order deteriorated. The IMF projects the country's inflation at 270% this year, soaring to 685% next year; ninety percent of the population lives below the poverty line, and over seven million people have fled abroad for a livelihood in the past thirteen years.


Riots erupt across Iran; Supreme Leader Khamenei reportedly set to flee to Russia

As for Iran, long under severe Western sanctions, its economy has stagnated for years with high unemployment and inflation. It later joined the China-promoted BRICS, and from 2022 sold ninety percent of its oil to China priced in renminbi, in exchange for Chinese cars, consumer goods, electronics, and other dual-use supplies. Recently, however, the rial has depreciated severely, the country faces its worst drought in forty years, food is no longer self-sufficient, and popular anger has boiled over; riots have spread nationwide since late last year, with at least fifteen dead and over six hundred arrested so far.

Figure 3: Riots broke out across Iran late last year over the currency's collapse; the Supreme Leader Khamenei reportedly plans to flee to Russia should the situation spin out of control. (Image sources: AP, FTVNEWS)

Iran's Supreme Leader Khamenei (Ayatollah Ali Khamenei) called the protesters foreign mercenaries and ordered the security forces to crush them. After Maduro's capture, the State Department posted a twelve-second clip on its Persian-language social account: remarks by War Secretary Hegseth, with Persian subtitles: "Maduro had his chance, just as Iran has had its chance, until the chance was gone and he lost it. He did certain things, and he bore the consequences." Observers took it as a message deliberately aimed at Khamenei.

Aboard Air Force One, Trump told accompanying reporters: "We are watching the situation in Iran closely. If they kill protesters as they did in the past, they will be hit very hard." The Times of London, citing intelligence, reported that Khamenei has long had a plan to flee into exile in Russia should Iran spin out of control and the army defect. If true, Iran's theocratic political system may be at its end.


Trump returns and deploys matrix tactics again: one stone, two birds against China and Russia


Figure 4: In the new US-China cold war, failed states like Venezuela and Iran hold considerable utility for Trump. (Image sources: yahoo finance, HK01)


What Venezuela and Iran have in common: both are oil states named and sanctioned by America; both are very friendly with China and Russia and joined the Belt and Road Initiative; and both have economies in decline, with the state on the brink of collapse. In the new US-China cold war, such failed states are all of considerable use to Trump, good for striking the ox from behind the mountain: he lights the fires one by one to generate a linkage effect, turning them against a Russia unwilling to end the war in Ukraine, until Putin can only accept a ceasefire on worse terms. At the same time, borrowing force to strike force: if Putin keeps holding out, the drag conveniently falls on the CCP as well. One stone, two birds.

In fact, Venezuela still owes China over ten billion US dollars in loans to be repaid in oil, yet it is already under temporary American trusteeship; and with Trump declaring that large American oil companies, the biggest in the world, will invest billions of dollars to repair the local facilities and extract oil, China's money is likely spilt water that cannot be recovered. Worse still, foreign media report that dozens of American investors plan to visit Venezuela this March; American capital looks set to replace the local Chinese capital soon, incidentally giving the American economy a lift.


Appendix: the US military discloses details of the capture of Venezuela's president

On 3 January 2026, President Trump held a White House press conference announcing that US forces had raided Venezuela the day before and captured President Nicolás Maduro and his wife Cilia Flores alive; no American soldiers died in the operation, and no military equipment was lost. He listed how the Maduro regime threatened America, including drug smuggling, sending gangs across the border, stealing American oil assets, and bringing hostile powers into the Western Hemisphere; and he announced that the United States would administer Venezuela temporarily until the country can carry out a "safe, proper and prudent transfer of power."

The operation, codenamed Operation Absolute Resolve, took the Americans months to plan. The CIA tracked Maduro's daily routine and coordinated with informants on the inside, while the military built a replica of Maduro's safe house for repeated rehearsals. They attempted to launch the strike several times, including over the recent Christmas period, but called it off each time for bad weather. The final go was given at 10 p.m. on 2 January: after airstrikes on the capital, Caracas, paralyzed the air-defense system, American helicopters swept into Maduro's residence to seize him.

It has long been rumored that this year's Nobel Peace Prize laureate, Venezuelan opposition leader María Corina Machado, will assume the presidency, but that remains unknown, as the country is for now under Maduro's deputy, Rodríguez (James Rodríguez), as acting president. Trump warned that if Rodríguez does not cooperate, she will pay an even heavier price than Maduro; Rodríguez has issued an open letter inviting American cooperation on a national development strategy, stressing her hope for peace.


Source: 陳雲

https://www.patreon.com/posts/te-lang-pu-zai-e-147468991

Saturday, January 03, 2026

Venezuela: five things you should know

The conflict between Washington and Caracas is escalating sharply. The United States has apparently attacked targets inside Venezuela. Although the conflict is ostensibly about drug enforcement, the deeper causes go far beyond that.

Caracas, Venezuela: the highways nearly deserted after the reported US airstrikes
Image source: Juan Barreto/AFP


(DW Chinese) The United States has evidently carried out military strikes on targets inside Venezuela, marking a grave escalation of the conflict between the two sides. Since September last year, US forces have repeatedly intercepted and attacked vessels in the Caribbean and the Pacific alleged to be carrying drugs. Here are five key pieces of background on the situation in Venezuela:


Venezuela has long been ruled by the left-wing authoritarian leader Nicolás Maduro
Image source: Ariana Cubillos/AP Photo/dpa/picture alliance

1. Authoritarian rule

Venezuela has been ruled since 2013 by the left-wing authoritarian leader Nicolás Maduro. After last year's fraud-tainted election, Maduro was sworn in for a term extended to 2031. International organizations and human rights activists accuse his government of suppressing dissent, unlawfully arresting opposition figures, and practicing torture and violence. Despite mass protests, Maduro's position remains secure, mainly thanks to the military's continued loyalty to him.

This year, Venezuelan opposition leader María Corina Machado was awarded the Nobel Peace Prize for "her struggle for a just and peaceful transition from dictatorship to democracy." Yet, under investigation for treason and in hiding for a year, she left the country in secret and arrived in Oslo only hours after the award ceremony had ended. Venezuelan prosecutors have declared her a fugitive; if she attempts to return, she faces arrest or a ban on entry.


Hamstrung by sanctions and by mismanagement and corruption at the state oil company PDVSA, the country now produces only about 1 million barrels a day, far below the nearly 3 million of twenty years ago
Image source: Yuri Cortez/AFP

2. The world's largest oil reserves

Venezuela is estimated to hold 303 billion barrels of oil reserves, the largest in the world. Most of it is heavy crude requiring special technology to extract and refine. Despite these staggering resources, sanctions together with mismanagement and corruption at the state oil company PDVSA have cut output to only about 1 million barrels a day, far below the nearly 3 million of twenty years ago. Since this year, the US oil major Chevron has resumed operations in Venezuela.

In mid-December, US forces boarded a Venezuelan tanker by force, worsening the situation further. Washington argued the vessel belonged to an illicit shipping network supporting foreign terrorist organizations; Caracas denounced the move as an act of "international piracy."


86% of Venezuelan households live below the poverty line
Image source: Juan Carlos Hernandez/ZUMA Wire/imago images

3. Extreme poverty

Despite rich resources of oil, gold, and rare earths, Venezuela is mired in extreme poverty. According to a report by the Venezuelan Finance Observatory (OVF), 86% of the country's households live below the poverty line. Average household income is just 231 US dollars a month, while a family's basic food expenses come to 391 dollars. Many households survive entirely on remittances from relatives abroad.


UN figures show 7.9 million Venezuelans have left their homeland, about a quarter of the population.
Image source: Herika Martinez/AFP/Getty Images

4. Refugee crisis

The severe economic crisis, compounded by state repression, has triggered a mass migration wave, and much of the highly skilled workforce is long gone. UN figures show 7.9 million Venezuelans have left their homeland, about a quarter of the population. Most of the migrants remain in neighboring Latin American countries, but many also head to the United States or Europe in search of a livelihood. This year, Venezuelans for the first time topped the list of asylum applicants received by the EU.


China is one of Venezuela's key allies
Image source: picture-alliance/Xinhua/Y. Dawei

5. Allied with America's adversaries

Maduro casts himself as a fighter against "Yankee imperialism" in America's "backyard." He has railed against imperialism for years, portraying Venezuela as a socialist model. The left-wing governments of Cuba and Nicaragua are his staunch supporters, and Cuban agents reportedly play a role in helping Maduro keep the army disciplined. Beyond that, Russia, China, and Iran are also key allies of Venezuela.


Source: 德聞 (德聞 is one of the collective pen names of DW Chinese). X: @dw_chinese