Wednesday, August 20, 2025

The health code is electronic house arrest

 


What is a health code? It is electronic house arrest (house arrest through an electronic device).

If you are not given a green health code, you cannot even step out of your own front door. Worse still, you cannot even enter the entrance of the building you live in; you must move into the temporary community quarantine facilities the government has prepared for you, built with the hands of Hong Kong's own medical staff: concentration camps, re-education camps, where you will study patriotic education until the quarantine authorities are satisfied and your code turns green. Even if the building watchman sympathises with you, he cannot open the door for you, because he has no DNA authentication of yours with which to instruct the electronic lock.

Electronic house arrest? Does that sound novel? It is not. We Hong Kongers have already rehearsed it, and the government and the public cooperated happily. On 25 March this year, at the height of the mask controversy, the government also rolled out electronic wristbands to quarantine airport arrivals suspected of infection.

At the time, some people wearing the wristbands happened to go out on the street, and, what a coincidence, even though the weather was still cold, they would roll up their long sleeves and expose the wristband. With Hong Kongers' habit of avoiding trouble, nobody would have reacted; yet, just as coincidentally, legislators' assistants would spot them and report them with great enthusiasm, and the reports would spread like wildfire on KOL pages. The people going out may not have been planted by the government, but the people doing the reporting were, for the most part, certainly in the government's good graces. Look at the yellow-ribbon legislators who so eagerly reported wristband-wearers seen outdoors, look at the faces they wore at the time, and you will see the tricks the Hong Kong communist government plays and how it deploys its personnel: these are the people you voted for, the people your taxes support, the blood-bun legislators who won their seats by stepping over the corpses lying in last year's streets.

The pride and the foolishness of Hong Kongers, that mean-spiritedness of bullying the weak while leaning on the strong, is a delicacy heaven has served up to the Communist Party. In Hong Kong, only 陳雲 knows how to protect you, and he is shut out by many newspapers and forums, which are, of course, also media and KOLs arranged by the Communist Party, and which the people of Hong Kong enthusiastically follow.

Feel free to post below the news reports, and the people involved, from the wristband-catching episode at the time.

Source: 陳雲

https://www.patreon.com/posts/jian-kang-ma-jiu-40635129

Tuesday, August 19, 2025

7 Visualization Hacks Every Data Analyst Should Know (But Most Don’t)

Last Tuesday, I was presenting quarterly sales insights to our C-suite when the CEO stopped me mid-sentence. “Hey, this chart is… confusing. Can you make it tell a story?”

That moment hit me hard. I’d spent 40+ hours analyzing customer behavior patterns, uncovered a 23% increase in retention after our product update, and built what I thought was a comprehensive dashboard. Yet my visualization failed the most basic test: clarity.

Here’s the uncomfortable truth — 67% of data analysts can crunch numbers like wizards, but their visualizations look like rainbow spaghetti threw up on a spreadsheet. I learned this the hard way after 8 years in analytics roles across fintech and e-commerce.

Today, I’m sharing 7 visualization hacks that transformed how I communicate data insights. These aren’t textbook theories; they’re battle-tested techniques that helped me go from “confusing presenter” to “data storytelling expert” in 6 months.


Why Most Data Visualizations Fail (And It’s Not What You Think)

Before jumping into the hacks, let’s address the elephant in the room. Most data analysts approach visualization like they approach SQL queries — technically correct but missing the human element.

I used to create charts that answered every possible question. Color-coded by region, segmented by time, filtered by product category. Technically impressive? Yes. Actionable for business decisions? Not really.

The problem isn’t technical skill. It’s empathy. We forget that our audience doesn’t live and breathe data like we do. They need guidance, context, and most importantly, a clear path to action.


Hack #1: The “So What?” Test for Every Chart

Every visualization should pass this simple test: A busy executive should understand the key insight within 5 seconds.

Before: I created a complex multi-line chart showing website traffic trends across 12 months for 8 different channels.

After: I highlighted the one insight that mattered — organic search traffic dropped 34% in Q3, directly correlating with our competitor’s aggressive SEO campaign.

Implementation: Add a single sentence annotation to every chart stating the main takeaway. Use tools like Tableau’s annotation feature or matplotlib’s annotate() or text() functions in Python.

# Example: Adding context to a matplotlib chart
import matplotlib.pyplot as plt

plt.annotate('Organic traffic declined 34% due to competitor SEO push',
             xy=(7, 15000), xytext=(8, 20000),
             arrowprops=dict(arrowstyle='->'))

Impact: My presentation time dropped from 45 minutes to 20 minutes, and stakeholder follow-up questions became more strategic instead of clarifying basic trends.


Hack #2: Color Psychology for Data Impact

Colors aren’t decoration; they’re communication tools. Most analysts use default color palettes that convey zero meaning.

The Framework:

  • Red: Problems, declines, urgent attention needed
  • Green: Success, growth, positive metrics
  • Blue/Gray: Neutral data, benchmarks, historical context
  • Orange/Yellow: Warnings, moderate concerns

Real Example: When presenting customer churn analysis, I colored churned segments in red, retained customers in green, and at-risk customers in orange. The executive team immediately focused on the orange segments — exactly where we needed intervention.

Pro Tip: Never use more than 4 colors in a single visualization. Your brain can only process so much before it gives up.
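
If you work in Python, one simple way to apply this framework is to define the semantic palette once and reuse it in every chart. Here is a minimal sketch; the segment names and numbers are illustrative, not the actual churn analysis:

# A reusable semantic palette: colors carry meaning, not decoration
import matplotlib.pyplot as plt

SEMANTIC_COLORS = {
    'problem': '#c0392b',   # red: declines, urgent attention
    'success': '#27ae60',   # green: growth, positive metrics
    'neutral': '#7f8c8d',   # gray: benchmarks, historical context
    'warning': '#e67e22',   # orange: moderate concerns
}

segments = ['Churned', 'At risk', 'Retained']
counts = [1200, 800, 6500]                      # illustrative numbers
colors = [SEMANTIC_COLORS['problem'],
          SEMANTIC_COLORS['warning'],
          SEMANTIC_COLORS['success']]

plt.bar(segments, counts, color=colors)
plt.title('Customer segments after churn analysis')
plt.show()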


Hack #3: The Data-to-Ink Ratio Revolution

This hack alone improved my visualization clarity by 60%. Remove everything that doesn’t directly support your insight.

What to Remove:

  • Unnecessary grid lines
  • Redundant legends
  • 3D effects (seriously, stop this)
  • Multiple y-axes unless absolutely critical

Before/After Example: My original sales dashboard had 47 visual elements. After applying data-to-ink principles, I reduced it to 12 elements. The result? Stakeholders could identify trends 3x faster.

Implementation in Excel:

  • Remove chart borders
  • Lighten grid lines to 25% opacity
  • Delete redundant axis labels
  • Use direct labeling instead of legends
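
The Excel checklist above translates almost one-to-one to matplotlib. A rough sketch of the same pruning, assuming a simple two-line traffic chart with made-up numbers:

# Stripping non-data ink from a matplotlib chart
import matplotlib.pyplot as plt

months = ['Jan', 'Feb', 'Mar', 'Apr']
organic = [12000, 11500, 9800, 7900]            # illustrative numbers
paid = [8000, 8200, 8600, 8900]

fig, ax = plt.subplots()
ax.plot(months, organic, color='#c0392b')
ax.plot(months, paid, color='#7f8c8d')

# Remove chart borders
for spine in ax.spines.values():
    spine.set_visible(False)

# Lighten grid lines instead of deleting them outright
ax.grid(axis='y', alpha=0.25)

# Direct labeling instead of a legend
ax.text(months[-1], organic[-1], '  Organic', color='#c0392b', va='center')
ax.text(months[-1], paid[-1], '  Paid', color='#7f8c8d', va='center')

plt.show()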


Hack #4: Progressive Disclosure for Complex Data

When you have complex data stories, don’t dump everything at once. Guide your audience through a logical sequence.

The 3-Layer Approach:

  1. Overview: High-level trend or summary metric
  2. Zoom: Segment or time-period focus
  3. Details: Specific data points or outliers

Case Study: Analyzing user engagement across our mobile app, I started with overall monthly active users (layer 1), then segmented by user acquisition channel (layer 2), and finally highlighted retention patterns for each channel (layer 3).

Result: Instead of one overwhelming dashboard, stakeholders could absorb insights incrementally, leading to more thoughtful discussions about each layer.
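
In code, the three layers are often just three aggregations of the same table. Below is a pandas sketch of the mobile-app case; the file name and the columns (user_id, month, channel, retained) are assumptions for illustration:

# Layer 1, 2, 3: the same data, progressively narrower
import pandas as pd

events = pd.read_csv('app_engagement.csv')      # hypothetical file and schema

# Layer 1: overview, monthly active users
mau = events.groupby('month')['user_id'].nunique()

# Layer 2: zoom, active users split by acquisition channel
mau_by_channel = events.groupby(['month', 'channel'])['user_id'].nunique().unstack()

# Layer 3: details, retention rate per channel
retention = events.groupby('channel')['retained'].mean().sort_values()

print(mau.tail(1))              # open with the headline number
print(mau_by_channel.tail(3))   # then the channel split
print(retention)                # then the detail that drives the discussion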


Hack #5: The Comparison Anchor Technique

Humans are terrible at interpreting absolute numbers but excellent at understanding comparisons. Always provide context.

Instead of: “We acquired 2,847 new customers this month”
Try: “We acquired 2,847 new customers — 23% above our target and the highest in 6 months”

Visual Implementation:

  • Add benchmark lines to show targets or historical averages
  • Use small multiples to compare similar metrics
  • Include percentage change annotations

Python Example:

# Adding a benchmark line to show context
import matplotlib.pyplot as plt

target_value = 2500  # example target; replace with your actual benchmark
plt.axhline(y=target_value, color='gray', linestyle='--',
            label=f'Target: {target_value}')
plt.legend()

Impact: When I started adding comparison anchors to our KPI reports, decision-making speed increased by 40% because stakeholders could immediately assess performance relative to expectations.


Hack #6: Interactive Filtering for Stakeholder Engagement

Static reports tell one story. Interactive dashboards let stakeholders discover their own insights.

Strategic Implementation:

  • Add filters for time periods, regions, or product categories
  • Enable drill-down capabilities from summary to detail views
  • Include hover tooltips for additional context without cluttering

Real Success Story: I built an interactive sales performance dashboard in Tableau where regional managers could filter by their territory. Suddenly, they were spending 30+ minutes exploring data instead of glancing at static reports for 2 minutes.

Tools Recommendation:

  • Beginner: Excel with slicers and pivot tables
  • Intermediate: Tableau Public or Power BI
  • Advanced: Python Plotly Dash or R Shiny
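
For the advanced end of that list, here is a minimal Plotly Dash sketch of the regional-filter idea. The CSV file and its columns (region, month, sales) are assumptions, and it targets the Dash 2.x API; treat it as a starting point, not the dashboard from the story:

# A one-filter interactive dashboard: pick a region, the chart updates
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
import pandas as pd

df = pd.read_csv('sales_performance.csv')   # hypothetical file with region, month, sales columns

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(
        id='region-filter',
        options=sorted(df['region'].unique()),
        placeholder='All regions',
    ),
    dcc.Graph(id='sales-chart'),
])

@app.callback(Output('sales-chart', 'figure'), Input('region-filter', 'value'))
def update_chart(region):
    # No selection means show every region; otherwise drill down to one territory
    view = df if region is None else df[df['region'] == region]
    return px.line(view, x='month', y='sales', color='region',
                   title='Monthly sales by region')

if __name__ == '__main__':
    app.run(debug=True)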


Hack #7: The Storytelling Arc Framework

Every great visualization follows a narrative structure: Setup → Conflict → Resolution.

  • Setup: Establish the baseline or normal state 
  • Conflict: Highlight the problem, opportunity, or change 
  • Resolution: Show the outcome or recommended action

Example Application: Analyzing customer support ticket volumes:

  • Setup: “Support tickets averaged 150/day in Q1”
  • Conflict: “Tickets spiked to 340/day after our product launch”
  • Resolution: “Implementing chatbot reduced tickets to 180/day within 2 weeks”

Visual Elements:

  • Use annotations to guide the narrative
  • Highlight the conflict point with contrasting colors
  • End with clear next steps or recommendations
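
Here is a rough matplotlib sketch of the support-ticket arc described above, with invented weekly values chosen to match the 150, 340, and 180 figures from the example:

# Setup -> Conflict -> Resolution on a single annotated chart
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
tickets = [150, 148, 152, 150, 340, 330, 310, 290, 250, 200, 185, 180]  # illustrative

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(weeks, tickets, color='#7f8c8d')
ax.scatter([5], [340], color='#c0392b', zorder=3)   # contrasting color on the conflict point

ax.annotate('Setup: ~150 tickets/day in Q1', xy=(2, 150), xytext=(1, 230),
            arrowprops=dict(arrowstyle='->'))
ax.annotate('Conflict: spike to 340/day after launch', xy=(5, 340), xytext=(6.5, 330),
            arrowprops=dict(arrowstyle='->'), color='#c0392b')
ax.annotate('Resolution: chatbot cuts tickets to 180/day', xy=(12, 180), xytext=(7.5, 210),
            arrowprops=dict(arrowstyle='->'), color='#27ae60')

ax.set_xlabel('Week')
ax.set_ylabel('Average daily tickets')
plt.tight_layout()
plt.show()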


The Career Impact: Why These Hacks Matter Beyond Pretty Charts

After implementing these 7 hacks consistently, my professional trajectory changed dramatically:

  • Promotion Speed: Advanced from Senior Analyst to Lead Data Scientist in 18 months
  • Stakeholder Trust: C-level executives started requesting me specifically for quarterly reviews
  • Project Success Rate: Data-driven initiatives I presented had 85% approval rate vs. industry average of 60%

More importantly, I stopped being the “chart guy” and became the “insights guy.” My visualizations weren’t just reporting data; they were driving business decisions.


Your Next Action: Pick One Hack and Implement It This Week

Don’t try to revolutionize all your visualizations overnight. Pick the hack that resonates most with your current challenges:

  • Struggling with stakeholder attention? Start with Hack #1 (So What Test)
  • Charts look cluttered? Apply Hack #3 (Data-to-Ink Ratio)
  • Audience seems confused? Try Hack #4 (Progressive Disclosure)

The goal isn’t perfection; it’s progress. Every small improvement in how you visualize data compounds into massive career advantages over time.

Remember: In a world drowning in data, the analyst who can tell compelling visual stories doesn’t just survive — they become indispensable.

What visualization challenge are you facing right now? Which hack will you try first?

----

Tired of wasting hours on messy data and interviews that go nowhere?

I created two resources that changed everything for me:

1️⃣ ChatGPT Prompt Bundle (15 Productivity Prompts)

🚀 Turn AI into your co-pilot — summarize data, build insights, and save hours every week.

2️⃣ Top 50 SQL Interview Questions

🎯 Real-world, scenario-based SQL challenges with clear explanations — built to help you ace any data role.

These aren’t cheat sheets. They’re career accelerators.

Make data work for you — not against you.

👉 Get them today and reclaim your time + confidence.


Source: Analyst Uttam

https://medium.com/ai-analytics-diaries/7-visualization-hacks-every-data-analyst-should-know-but-most-dont-e35954ef1102

Wednesday, August 13, 2025

Taiwan's Great Recall erupts in a Nazi Party scandal and draws condemnation from the German Institute Taipei. Why does the DPP camp accommodate such figures?

 

Figure 1: 閩南狼 revealed that the emblem of the 19 April recall rally was taken from the Nazi eagle. (Image: 中天新聞)


I had been watching this story for a long time, including while writing my commentary on Taiwan's Great Recall. Recently 閩南狼 and 八炯, the internet celebrities who promoted the Great Recall in Taiwan, split publicly and set off a scandal that embarrassed the DPP, so it is now worth commenting on.

閩南狼 accused 八炯 of wanting to take a Nazi line, publishing their private messages and alleging that 八炯 wanted to study the Nazis' methods of manipulating mass opinion and had used the Nazi eagle as the logo of a Great Recall rally. On 4 August, the German Institute Taipei (德國在台協會) declared that it condemns any statement or act that worships, glorifies or trivialises Nazism. 八炯 apologised in a video, said it had only been private self-mockery, and promised to cut back his offline activities. Earlier, on 16 April, the German Institute Taipei had already condemned publicity acts connected with the Great Recall, sternly denouncing the conduct of the recall campaigner 宋建樑 as "shameless" and stressing that Nazi symbols stand for "contempt for and persecution of humanity".

八炯 used the Nazi eagle to represent the 19 April rally, and President 賴清德 went so far as to praise that rally highly, prompting the KMT to ask: has 賴清德 been walking with the Nazis all along?


This is how the CCP has always operated. Plant the most extreme figures in the opposing camp, people who argue outrageously and care only about drawing a crowd. By rights such figures should quickly burn through their power to mobilise emotion, yet the stage, the media coverage and the commentary given to them (including the fiercest attacks and abuse) never dry up. Given enough time, they can push aside the exhausted old opponents and stand firm inside the enemy camp, forcing the other side to tolerate or even cooperate with them. Combined with modern media (once newspapers, magazines, radio and television; now the internet and social media as well), this is a modern political technique. It has no settled name; it may be called the method of planting fake enemies, and it is a branch of spycraft.

In earlier times, the creators of the "democratic China" discourse in Hong Kong (包錯石, who came over from the Taiwan-independence camp), the creators of the Taiwan-independence discourse, and in recent years the creators of the Hong Kong-independence discourse: all these makers of extreme rhetoric follow the same pattern. Of course there are occasional exceptions, radicals with no political backing, but they are like shooting stars with no staying power, flaring for a moment and then leaving, because making a living comes first. As for those who keep obtaining resources, publishing platforms and public attention while having no visible means of livelihood, it is hard to believe they have no peculiar background.

Figure 2: On 15 April this year, the New Taipei District Prosecutors Office summoned 宋建樑 (pictured), lead petitioner in the case to recall 李坤城. 宋建樑 entered the prosecutors office for further questioning wearing a swastika armband symbolising the Nazis, giving the Nazi salute and carrying Hitler's book "Mein Kampf". Image: 經濟日報


In recent years, in the so-called academic circles of Taiwan and, previously, Hong Kong, those who erect an "Inner Asian" view of history, claiming that China's civilisation, artefacts and thought all came from Inner Asia and prettifying the rule of the Mongols and other nomadic peoples, publishing book after book and posting endlessly online, belong to the same kind of discourse. The aim is to use this extreme rhetoric, which shuts out rational discussion, to attract the public, above all young rebels, to smother meaningful scholarly debate, and to convince outsiders that the anti-communist camp is full of such narrow-minded figures, so that they keep their distance.

This has been my observation since I took part in the overseas democracy movement, and it is also what I learned from my studies in Germany. During the localist movement I called such people sticky bombs smuggled into the camp, waiting for the moment to detonate and wipe the camp out.


News details:

On 3 August, 閩南狼 released a video accusing 八炯 of having privately studied how the Nazis mobilised the masses in order to "oppose communism", and even of wanting to invent a "new salute", to "treat certain people as the Jews", and to set up a "Nazi stormtrooper corps".

Adolf Hitler, leader of the Nazi regime, rose to power in 1933. He practised fascism, pushing policies centred on extreme nationalism and antisemitism, with the "Aryan" cast as the ideal symbol of a "pure German nation". During the Second World War nearly six million Jews were murdered in the Holocaust, and other minority groups were persecuted as well, including the Roma (Gypsies), homosexuals, and political dissidents such as communists.

The Nazi stormtroopers (Sturmabteilung, SA) were the Nazi Party's paramilitary organisation; they took part in social surveillance and political struggle, using intimidation and violence against opponents denounced by the party.

The video also mentioned that 八炯 had used the "Nazi eagle" as a visual symbol at the recall rally on 19 April, and that the emblem appeared repeatedly in the recall movement.


Source: 陳雲

https://www.patreon.com/posts/tai-wan-da-ba-na-136392138

Thursday, August 07, 2025

Mitschuldigkeit, a German word of crushing weight

(The attached clip was filmed by someone on Facebook last year: outsourced grass-cutting workers in a park in Sheung Shui pretending to work while damaging tree roots with their trimmers. I did not save the link to the Facebook post; perhaps I will find it later.)


The day before yesterday, a fellow reader asked me in this column: "When these otherwise healthy trees are blown down by wind and rain, will the government replant them?"

My answer: "No. Even if a seed sprouts a sapling on the same spot, the grass-cutting workers will cut it down! What is practised in this place is an utterly terrifying politics of non-life.

"The point is this: the workers could spare the saplings, yet they would rather cut them down with their trimmers! Such are the ordinary common folk of Hong Kong."

During my years of study in Germany I spent a full five years trying to understand the political workings of Nazi Germany, East Germany, the Soviet Union and the CCP. As for the CCP's political workings, I also learned a great deal first-hand from the old cadres I met in democracy-movement circles.

In Germany's denazification, one conspicuous puzzle was this: are civilians, people who were not Nazi Party members and had no part in the party's operations, innocent?

After fierce debate and case studies, scholars arrived at a concept expressed by the German word Mitschuldigkeit. Chinese has no such formation, and neither does English. In German, mit means "with, taking part in", Schuld means "guilt", -ig is an adjectival suffix and -keit a nominal suffix. Mitschuld means shared guilt, complicity. In Nazi Germany, civilians were accomplices, not innocents.

The reference case is the Berlin Wall shootings. After German reunification, in 1992, judges tried the case of the guards who had shot dead a fugitive crossing the Wall. The soldier who fired the fatal shots was sentenced to three years' imprisonment without parole; the soldier who deliberately aimed off target and fired only warning shots was acquitted. The defence lawyers argued that the guards were merely carrying out orders, had no choice, and bore no guilt. But the judge, Theodor Seidel, did not see it that way: "As a guard, refusing to carry out an order from above is an offence, but shooting inaccurately is not. As a person of sound mind, at that moment the guard had the sovereign power to raise his muzzle by one centimetre, and that is a duty of conscience you must take upon yourself. In this world there is, beyond the law, conscience. When law and conscience conflict, conscience, not the law, is the highest standard of conduct. Respect for life is a principle that holds true everywhere." (Dining out requires scanning in to register: must restaurant staff enforce the evil anti-epidemic law? Lessons from the verdicts on the Berlin Wall shootings, FEB 18, 2021 AT 12:38 PM, https://www.patreon.com/posts/47696222)

On this basis, I oppose the way Trump's political adviser 余茂春 treats the CCP and the people of mainland China as separate things; I hold that they are one and the same. Thus, during the localist movement in 2003, when mainlanders in the early days of the Individual Visit Scheme were disrupting order in Hong Kong's streets, I wrote a short piece in Ming Pao saying that they were not alone: they dragged behind them the long shadow of the empire. 梁文道 debated me fiercely in the press over this, saying that mainlanders are also our neighbours, and the exchange became a celebrated episode of the localist movement.

There is an English political saying: "Every country has the government it deserves", or "People get the government they deserve." As the people are, so is their government. The remark is infuriating and not entirely fair, because some governments obtain foreign money, technology and even military support, and once their crack troops have killed off most of the intellectuals and the resisters, the country's civilians have no choice left. Yet if such a government survives for decades and keeps growing stronger, then its people bear the responsibility of accomplices.

"Every country has the government it deserves." I first heard this when I was active in the democracy movement in Germany. Once, around 1994, I went to Geneva in Switzerland for a meeting, and the man who met me at the railway station was Mr 華祈石, an old Kuomintang member living there and a mathematician specialising in railway signalling systems. He must be very old by now, or may have passed away, so I can write out his name in remembrance. As we walked through the streets of Geneva he patted me on the shoulder and told me I need not worry about democracy in China; the English sentence he used was "People get the government they deserve." At that moment, old Mr 華's words weighed on me more heavily than China's lack of constitutional democracy and freedom.


Source: 陳雲

https://www.patreon.com/posts/mitschuldigkeit-135943860

Monday, August 04, 2025

AI Generated Feedback vs. Human Feedback

In July 2025, Canvas LMS announced integration with ChatGPT for instructor use. Features touted by the tech company include the generation of image descriptions, rubrics, and feedback for assignments.

It’s that last one that makes me pause. Many other educators, too.

Because how ethical is it to ask students not to use AI if instructors use it for feedback? Isn’t giving feedback part of the jobs we’re paid to do?

On the other hand, some instructors are bogged down with ridiculous student loads. AI-generated comments may be the best way to give timely feedback for formative assessments.

So what’s a teacher to do?

Research into this area is limited, but here are some findings to help you make an informed decision about relying on AI for feedback.


Human Feedback > AI Feedback

Steiss et al (2024) compared AI-generated feedback with feedback provided by trained instructors in five different areas: essay criteria, directions for improvement, accuracy, supportive tone, and prioritizing important feedback comments. Instructors scored better in four out of five areas. AI only scored better in criteria-based feedback. As much as AI has improved in essay feedback, human scorers still have the advantage in most areas.

A couple of things to note in this study. One, the instructors in the study received training. How many instructors receive training and professional development in giving feedback? How many colleges provide intensive work in feedback writing to their pre-service teachers? In my 25 years in education, I’ve received none outside of my own pursuits. This makes me wonder how a random selection of teachers would perform in this situation, not to mention the need for more education geared toward writing effective feedback to students.

Two, instructors outperformed AI in four areas, but AI wasn’t far behind. This wasn’t a slam dunk for instructors, merely a slight edge. This opens the door for other considerations, such as available time and student load. Timeliness is a key factor in effective feedback, but with a load of 150–200 students, fully developed feedback can take a teacher several days (or weeks!) to deliver. Does the timeliness that AI provides outweigh the slight advantage instructors have in those other feedback categories?

Some teachers may decide the answer is yes.


Student Perceptions

We can’t forget the other key ingredient in this dilemma: students. After all, they’re relying on our expertise to guide their learning. What are their perceptions of teachers using AI feedback?

The results were mixed. According to Nazaretsky et al (2024), students generally preferred human feedback, even if those same students rated the AI feedback as higher in quality. In contrast, Zhang et al (2025) found that students preferred the feedback produced by AI or co-produced by AI with human modifications. Students in the Zhang study rated the AI feedback as less genuine after they learned that the feedback was given by ChatGPT. They didn’t lower “co-produced” (AI feedback modified by a human) ratings in the genuine category, though.

So genuineness is important. Students want human interaction.

To take advantage of both AI and human feedback, the answer may be what Zhang et al (2025) term “co-produced” feedback and Nazaretsky et al (2024) call “human-in-the-loop.” Instructors use AI to develop feedback for student work and then modify those comments by adding or deleting comments, prioritizing key suggestions, and adding encouragement.

Still, teachers need to be competent at feedback to effectively modify the comments that AI provides.

The research in these areas is limited. It was conducted with college students and instructors, not in secondary classrooms. Zhang et al (2025) also note that their findings may differ from those of Nazaretsky et al (2024) due to increased time pressures that negatively affected the quality of human feedback.

The human component is a problem for any study. Many factors can affect the quality of human feedback, including time, training, experience, and stress levels. Different students will also prefer different approaches; some appreciate more encouraging feedback, while others want brutal honesty. These preferences will also affect how students rate AI and human feedback.

Knowing what’s best when using AI in student feedback is complicated. Both Zhang et al (2025) and Nazaretsky et al (2024) agree that instructors and schools need to consider the ethics of AI use in feedback.


Transparency in AI Use in Feedback

Instructors using AI for feedback need to be honest and explain their reasons for doing so. The need for a quick turnaround in feedback may be crucial for some assignments. Or a family emergency has taken over your life, and AI would provide better feedback than you’re able to give.

Otherwise, if we pass AI feedback off as our own, we’re just as guilty of using AI unethically. Students rely on their teachers' expertise to guide their learning, and we have a professional obligation to provide that guidance.

Like everything else in the AI and education world, opinions vary across the spectrum, and the best thing you can do is stay in touch with your principles and stay updated on the latest research on the effectiveness of AI.


Source: Melissa Pilakowski

https://medium.com/educreation/ai-generated-feedback-vs-human-feedback-639321d530b8