Author name: 胡思

UK Should Develop Sodium-Ion Battery Technology for Energy Transition

Beneath the grand slogan of net zero, the UK is being pushed towards a deeper and harsher transformation: from a nation reliant on burning fossil fuels to one that operates fundamentally on electricity. Heating, transportation, industrial processes, and even national security and geopolitics will increasingly depend on the stability, affordability, and autonomy of the electricity system. In this new electric nation, batteries are no longer merely components for electric vehicles or smartphones; they are critical infrastructure on par with the power grid itself.

The problem is that the UK has almost lost its leading position in the lithium battery race. Whether it is NMC or LFP, the industry focus has shifted entirely towards China, from mineral sourcing and material processing to manufacturing technology and large-scale production. This is a reality that cannot be reversed simply by a few sentences in policy documents about ‘reviving manufacturing.’ Even though the UK has recently begun to discuss a domestic battery industry, it is largely a defensive measure rather than a genuine leadership initiative. If the UK’s energy transition remains entirely based on lithium battery systems, its reliance on external supply chains will remain structural in terms of energy security and industrial autonomy.

However, technology never pauses for any nation. Sodium-ion batteries are a rapidly evolving technology that no single country has yet monopolized, and whose value and limitations are both clearly understood. Unlike lithium, sodium is abundant in the Earth’s crust, with a dispersed supply that does not depend on highly concentrated strategic minerals such as lithium, cobalt, and nickel. This gives sodium-ion batteries a structural advantage in supply security, long-term cost, and geopolitical risk. Their chemistry also offers higher thermal stability, which translates into lower fire risk and more flexible siting for grid-scale applications. Sodium-ion batteries are not without drawbacks, however. Their energy density is still lower than that of lithium batteries, requiring more material and larger volumes for the same storage capacity, which makes them hard to substitute for lithium in weight-sensitive applications such as long-range electric vehicles. Moreover, the industry has not yet reached full scale, so short-term costs may not undercut those of highly commoditized LFP. The most competitive applications for sodium-ion batteries therefore lie not in chasing extreme range but in grid-scale energy storage, industrial backup, and balancing renewable energy systems: precisely the most vulnerable yet crucial part of the UK’s energy transition.

This is particularly important for the UK. The UK electricity system heavily relies on wind energy, while winter is precisely when solar energy is weakest and weather is most unstable. In the event of several consecutive days without wind and little sunlight, the electricity system would come under immense pressure. Sodium-ion batteries have already demonstrated practical feasibility in medium to long-term energy storage over hours to days. When combined with pumped storage, hydrogen, or other long-duration energy storage technologies, sodium-ion batteries could serve as a crucial buffer for the grid, significantly enhancing the UK’s energy resilience in winter and reducing dependence on natural gas and imported electricity.

On this path, the UK is not starting from scratch. Faradion, an early pioneer in sodium-ion batteries, was born in the UK, and the core knowledge and patents remain deeply embedded in the UK’s research system. The Faraday Institution, as a national battery research hub, has incorporated ‘post-lithium’ into its long-term research agenda, and several top universities have accumulated leading results in sodium-based materials, electrode design, and system integration. Compared to the capital-intensive, volume-driven lithium battery industry, sodium-ion technology relies more on fundamental research and engineering integration capabilities, which is a relative strength of the UK.

Thus, what the UK truly needs is a clear-eyed and pragmatic dual-track strategy. Lithium batteries will remain the mainstream for electric vehicles over the next decade, and the UK must continue to invest to ensure that its automotive industry and related supply chains are not marginalized. At the same time, it should clearly position sodium-ion batteries as a strategic technology for energy security and grid transformation, accelerating the entire chain from research and demonstration to actual deployment, particularly in grid storage and industrial applications.

In fact, this direction is not starting anew. Whether it is the national battery strategy, critical minerals strategy, or official research on long-duration storage, recent government documents have repeatedly emphasized the importance of technological diversification and supply chain resilience. Sodium-ion batteries sit at the intersection of these policies, representing an option that aligns with energy security logic and possesses industrial potential.

If the UK genuinely wishes to play a leadership role in the net-zero era, the key lies not only in installing the most wind turbines or solar panels but in mastering the core technologies that allow the electricity system to function even under the worst conditions. Sodium-ion batteries provide a realistic and rare window: a critical technology that has not yet been fully monopolized and is highly compatible with the UK’s energy structure. Missing out on lithium was a matter of structure and timing; if the UK misses out on sodium-ion technology, it will not be fate but a choice.

Rethinking the Use of Escalators

Decades ago, the MTR Corporation in Hong Kong vigorously promoted the slogan: “Stand on the right, walk on the left, for the benefit of others and oneself.” This was an era focused on efficiency and order, and the message was simple yet powerful, achieving considerable success. Even today, many Hong Kong residents instinctively stand on the right side of escalators, leaving the left side for those in a hurry. This has become not merely a habit but an internalized public ethic.

Widening the perspective, this practice is not unique to Hong Kong. Japan, the UK, and several European cities have developed an unspoken agreement to “stand on one side, walk on the other.” Interestingly, Japan itself is split: in Tokyo and most of eastern Japan, people typically stand on the left and walk on the right, while in the Kansai region around Osaka and Kyoto, the sides are reversed. London’s Underground has long displayed signs instructing passengers to “Stand on the right.” On the surface, this appears to be a globally accepted urban order, with each city merely adjusting the details.

However, the problem is that this order may not be effective.

A widely cited turning point originated in London. In 2015, Transport for London (TfL) conducted a formal field test at Holborn station, a busy interchange between the Piccadilly Line and Central Line, particularly crowded during peak hours. TfL temporarily suspended the practice of “standing on the right, walking on the left” on one of the long escalators, instead instructing passengers to stand on both sides, with staff present to guide them.

The results were quite clear: the escalator’s throughput rose by about 30%, the queue at the escalator entrance noticeably shortened, and platform congestion eased. TfL’s conclusion was straightforward: in high-traffic environments, having passengers stand on both sides moves more people than reserving one side for walking. This was not a theoretical deduction but data from an actual trial.

The reasoning is not difficult to understand. The number of people willing to walk on escalators is always a minority. Observations from various cities indicate that those who actually “walk” typically account for only 10% to 20% of passengers; the vast majority prefer to stand. The result is that one side, where people are standing, forms a long queue, while the walking side often remains empty, utilizing only half of the escalator’s designed capacity. During peak hours, this forces crowds to accumulate at the escalator entrance, exacerbating congestion on the pathways and platforms.
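The arithmetic behind this can be sketched in a few lines. The model below is a deliberately crude back-of-envelope with assumed step rates and densities (none of these figures come from TfL); it only illustrates why reserving a lane for a 10–20% minority of walkers wastes capacity.

```python
# Back-of-envelope escalator throughput model (illustrative numbers, not TfL data).

STEPS_PER_MIN = 100   # steps passing a point per minute (assumed)
STAND_DENSITY = 0.6   # persons per step per side when standing (assumed)
WALK_DENSITY = 0.3    # persons per step on the walking side (walkers leave gaps)
WALKER_SHARE = 0.15   # share of passengers willing to walk (10-20% per the text)

def throughput_stand_both():
    # Both sides standing: two full standing lanes.
    return 2 * STEPS_PER_MIN * STAND_DENSITY

def throughput_stand_walk():
    # The standing lane saturates; the walking lane carries only the
    # minority willing to walk, or its own capacity, whichever binds first.
    stand = STEPS_PER_MIN * STAND_DENSITY
    walk_capacity = STEPS_PER_MIN * WALK_DENSITY
    # Walkers arrive in proportion to standers: f walkers per (1 - f) standers.
    walker_demand = stand * WALKER_SHARE / (1 - WALKER_SHARE)
    return stand + min(walk_capacity, walker_demand)

both = throughput_stand_both()    # 120 persons/min
mixed = throughput_stand_walk()   # ~70.6 persons/min
print(f"stand both sides: {both:.0f}/min; stand+walk: {mixed:.0f}/min; "
      f"gain: {both / mixed - 1:.0%}")
```

With these toy numbers the gain from standing on both sides comes out even larger than the roughly 30% TfL measured; real escalators add boarding frictions the model ignores, but the direction of the effect is the same.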

This reflection has also emerged in Japan in recent years. Railway operators in Tokyo, including Tokyo Metro and JR East, have begun to downplay or even remove instructions to “walk/stand” at some busy stations, instead encouraging passengers not to walk on escalators. In Osaka, there have been public calls to stop walking on escalators citing safety concerns.

Safety is another long-ignored cost. Escalators are not designed for walking: the step height, the running speed, and the need to keep pace with the moving handrail all demand a steadier stride than ordinary stairs. Accident analyses in both the UK and Japan show that falls are more likely on the walking side, particularly when luggage, rushing, or looking down at a mobile phone is involved. And when an accident happens, the escalator often has to be shut down, costing the entire flow of people far more than the few seconds walking ever saved.

There is also a less obvious but equally real issue: wear and tear on machinery. The engineering assumption for escalators is that weight is evenly distributed across the steps. Long-term concentrated standing on one side leads to uneven stress on the steps, chains, and drive systems, accelerating wear and increasing maintenance frequency. What is purportedly convenient for a few actually raises the operational costs for the entire system.

If we genuinely consider efficiency and safety, the answer is clear: during busy periods, passengers should stand on both sides to ensure every step is fully utilized. Further, if passengers could alternate sides—standing one step left, one step right—this would increase stability and improve psychological comfort. This represents a rational use of limited space.

The issue is not whether the original promotion was well-intentioned, but whether we are willing to acknowledge that a once-successful practice may not always be correct. Cities change, traffic patterns evolve, and public order should be adjusted based on empirical evidence rather than maintained by memory and sentiment. The practice of standing on the right and walking on the left had its historical context; however, in today’s densely populated cities, it may have shifted from “benefiting others and oneself” to “harming both.”

A truly mature public culture does not rigidly adhere to habits but understands when it is necessary to let them go.

The Real Challenges of the Somerset Tidal Lagoon

The West Somerset Lagoon is a large tidal lagoon power generation project planned for the southwestern coast of the UK, situated between Minehead and Watchet along the Bristol Channel. The concept involves constructing a curved breakwater near the shore to enclose a body of water, utilizing the difference in water levels between the lagoon and the open sea during tidal changes to drive turbines for electricity generation. It falls under tidal range generation rather than tidal stream generation, relying on sea level fluctuations rather than water flow speed. The project’s proponents have attached an appealing label: an installed capacity of approximately 2.5 GW, an annual generation of about 6.5 TWh, zero carbon emissions, and independence from weather conditions, with a theoretical lifespan exceeding a century.
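The headline figures can be sanity-checked with one line of arithmetic: 2.5 GW running flat out all year would produce about 21.9 TWh, so the claimed 6.5 TWh implies a capacity factor of roughly 30%, which is in the range one would expect for tidal range generation.

```python
# Implied capacity factor from the headline figures (2.5 GW, ~6.5 TWh/year).
capacity_gw = 2.5
annual_twh = 6.5
hours_per_year = 8760

capacity_factor = (annual_twh * 1000) / (capacity_gw * hours_per_year)
print(f"implied capacity factor: {capacity_factor:.0%}")  # about 30%
```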

Such claims resonate because they hit the pain points of the UK’s energy transition. As the proportion of wind and solar power continues to rise, the grid is becoming increasingly unstable; the greatest strength of tidal energy is its high predictability. Tidal changes are determined by astronomical factors, allowing generation periods to be scheduled years in advance, unaffected by calm winds or cloudy days. For a grid that requires long-term planning for backup capacity, this certainty is inherently valuable.

However, when one examines the physical realities, the project’s role becomes clearer and less ideal. Tides do not follow a 24-hour cycle but a semi-diurnal one of approximately 12 hours and 25 minutes. The lagoon generates for roughly 10 to 14 hours a day, not continuously but in four separate windows, and because two tidal cycles take about 24 hours 50 minutes, those windows slip roughly 50 minutes later each day. Today it may generate in the evening; a few days later, the same window falls in the middle of the night. This means it cannot naturally align with peak electricity demand, remaining a high-quality but rhythmically fixed intermittent power source.
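The drift can be made concrete with a small sketch. The 18:00 start time below is arbitrary, and real generation windows depend on the operating scheme; the point is only how quickly an evening window migrates into the small hours.

```python
from datetime import timedelta

# Tidal timing drift: two semi-diurnal cycles take ~24 h 50 min,
# so generation windows slip ~50 minutes later each calendar day.
SEMI_DIURNAL = timedelta(hours=12, minutes=25)

def window_start(day, first_start_hours=18.0):
    """Start (hours-of-day) of a generation window that begins at
    18:00 on day 0 and drifts with the tide."""
    start = timedelta(hours=first_start_hours) + day * (2 * SEMI_DIURNAL - timedelta(days=1))
    return (start.total_seconds() / 3600) % 24

drift_per_day = 2 * SEMI_DIURNAL - timedelta(days=1)
print(f"daily drift: {drift_per_day}")           # 0:50:00
print(f"day 0 start: {window_start(0):.2f} h")   # 18.00 (evening peak)
print(f"day 9 start: {window_start(9):.2f} h")   # 1.50 (middle of the night)
```

Within about nine days, a window that covered the evening peak has slid past midnight.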

Some have suggested that by integrating battery storage, this rhythm could be ‘smoothed out’, making tidal power as stable as nuclear energy. While this is not impossible from an engineering perspective, it is economically prohibitive. To convert the lagoon’s output into nearly round-the-clock supply would require tens of GWh of storage, and batteries have a lifespan of only about ten years, far short of the lagoon’s claimed lifespan of 120 years. The result would be an already capital-intensive project compounded by another capital black hole that requires frequent replacements.
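“Tens of GWh” is easy to verify with rough arithmetic. The sketch below assumes storage must carry the lagoon’s average output through roughly a two-day neap shortfall; the 48-hour window is an illustrative assumption, not a design figure.

```python
# Rough sizing of the storage needed to turn the lagoon's tidal rhythm
# into round-the-clock supply (illustrative assumptions, not a design study).
annual_twh = 6.5
avg_output_gw = annual_twh * 1000 / 8760   # ~0.74 GW average output

# Covering a low-output neap stretch of ~2 days at average power:
neap_gap_hours = 48                        # assumed worst-case shortfall window
storage_gwh = avg_output_gw * neap_gap_hours
print(f"average output: {avg_output_gw:.2f} GW")
print(f"storage for a {neap_gap_hours} h gap: {storage_gwh:.0f} GWh")  # tens of GWh
```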

What truly weighs down this proposal is its financial structure. The characteristics of a tidal lagoon involve extremely high upfront costs, a long construction period, and indivisibility. This means it is nearly impossible to finance through competitive Contracts for Difference (CfD) and must rely on a Regulated Asset Base (RAB) model, which incorporates some construction risks into electricity prices to secure lower financing costs. In institutional terms, it resembles nuclear power rather than typical renewable energy projects. Discussions inevitably turn to Sizewell C, as both share similar risk-bearing mechanisms.

However, the key difference lies in necessity. The government’s decision to advance Sizewell C stems from the current scarcity of low-carbon power sources that do not rely on weather or storage and can provide long-term continuous supply. Nuclear power is not an ideal solution but is viewed as temporarily indispensable. In contrast, the Somerset tidal lagoon offers predictable intermittency, a function that could theoretically be replaced by a combination of offshore wind, storage, demand management, and transmission upgrades. This is not a matter of technical superiority but rather a question of policy prioritization.

Adding to this are considerations of environmental impact and irreversibility, making the issue even sharper. Once the lagoon is constructed, there is virtually no turning back. The impacts on sediment, ecology, and coastal dynamics must be rigorously verified in advance, rather than remedied afterward. The true risk of such projects has never been about whether they will generate power, but rather whether the costs of a mistaken judgment are bearable.

Therefore, the Somerset tidal lagoon is neither a castle in the air nor a panacea. It possesses unique value, but only if positioned correctly as a piece of the energy grid puzzle, rather than being expected to serve as a substitute for nuclear power. The energy transition has never been about choosing the most appealing narrative, but rather about selecting the least bad order under real-world conditions. If the tidal lagoon is to endure, it must first be viewed with a clear and rational perspective.

The Dilemma of Surname Choices on Birth Certificates

Many people are unaware that Hong Kong’s current birth certificates do not legally require infants to have a surname. The certificate contains only a ‘Name’ field, and there is no legal stipulation that this name must include a surname. In theory, a child could have only a given name without any family surname.

However, the reality is not so straightforward. Although the system does not explicitly state that a surname is mandatory, it implicitly assumes the existence of a surname, which is most often understood to be the father’s. As a result, some birth certificates display full names, with surnames and given names clearly delineated, while others contain only a given name without a clearly presented surname. Both scenarios are legally valid, yet they create confusion in practice: How should forms be filled out? How should passports be processed? How will overseas institutions interpret this?

This confusion is not coincidental; it reflects an institutional ambiguity. While the legal text deliberately downplays the importance of surnames, it nonetheless perpetuates traditional familial norms, implicitly endorsing the patrilineal path as the standard. The father’s surname becomes an unexamined option, while the mother’s surname or other choices often require justification and face social pressure. Although the law does not prohibit alternative surnames, it shifts the burden of deviating from the paternal surname onto families to navigate.

The issue is not whether one can forgo the father’s surname, but rather why the father’s surname is still treated as the default. As long as this assumption exists, choices will never be equitable.

Reform is not difficult. It simply requires the elimination of any default preference and mandates that parents jointly declare the surname, clearly informing society that a surname is an option, not an unwritten rule. A birth certificate should not be a source of confusion, nor should it surreptitiously endorse a particular family order.

True equality does not allow for exceptions; it eliminates the need for exceptions altogether.

UK’s Third Largest Indoor Venue Under Construction in Bristol

The UK’s third largest indoor performance venue is taking shape in North Filton, Bristol. This is not just a blueprint; it is a project that has been approved and is currently underway.

The YTL Arena, located in northern Bristol, is designed to accommodate 20,000 spectators. By seating capacity, that places it joint third among UK indoor venues, level with London’s O2 Arena and surpassed only by Manchester’s Co-op Live and Manchester Arena. In other words, one of the four largest indoor performance venues in the UK will be not in London or Manchester, but in Bristol.

How significant is a capacity of 20,000? The Hong Kong Coliseum, when full, accommodates only 12,500 people, which is already considered a top-tier venue in Asia, yet it is still markedly smaller than the YTL Arena. This capacity is sufficient to host major global tours, large sporting events, and comprehensive entertainment activities, allowing Bristol to finally meet the criteria to become a ‘first-stop city’ rather than merely an alternative option on tour routes.

The YTL Arena is part of the broader redevelopment of Brabazon, spearheaded by Malaysia’s YTL Corporation. This is not an isolated venue; it is integrated into a long-term urban project that includes residential, commercial, educational, and public spaces. It is currently expected to open around 2028, aligning with the maturation timeline of the entire new district.

Transportation infrastructure is also crucial. The North Filton railway station, which is being developed in tandem with the venue, along with the nearby Bristol Parkway, will connect the national rail network directly to the core of the new district, ensuring that the influx of visitors for large events can be accommodated by the regular transport system rather than relying on temporary arrangements.

The economic benefits are also quite clear. During both the construction and operational phases, the YTL Arena will create thousands of direct and indirect jobs and, through performances, tourism, hotel stays, and dining expenditures, will generate hundreds of millions of pounds in economic activity for North Filton each year. This is not a one-off event; it represents a long-term enhancement of urban functionality.

North Filton is steadily positioning itself at the forefront of the UK’s cultural landscape through its capacity, transportation, and timing.

Solar Energy and Battery Storage at Night

Many people still cling to an outdated notion: when the sun sets, solar energy disappears. This judgment is no longer valid in today’s context. The true transformation of energy reality is not solely due to solar panels, but also to batteries. When batteries become affordable, the sunlight that is not fully utilized during the day can be stored and released steadily at night.

The decline in battery prices is the starting point of this entire narrative. Since 2010, the cost of lithium batteries has plummeted by 90%, and there appears to be no end in sight. Several battery manufacturers and research institutions anticipate that, with process simplification, reduced material usage, and ongoing scale expansion, battery costs will continue to decline significantly.

As a result, solar energy combined with battery storage has become economically viable. Based on recent actual projects, the overall generation cost of such systems generally falls between $60 and $80 per MWh. In contrast, the comprehensive cost of newly built natural gas power plants, even without accounting for carbon taxes and other social costs, remains between $90 and $120 per MWh, and is entirely subject to international natural gas prices and geopolitical factors.
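Put side by side, the quoted ranges do not even overlap, and the gap can be made concrete (the per-TWh figure below simply scales the midpoint difference; it is an illustration, not a market study).

```python
# Comparing the generation-cost ranges quoted above (USD per MWh).
solar_plus_storage = (60, 80)   # recent solar + battery projects
new_gas = (90, 120)             # new-build gas plants, before carbon costs

gap = new_gas[0] - solar_plus_storage[1]   # cheapest gas vs dearest solar+storage
midpoint_saving = (sum(new_gas) - sum(solar_plus_storage)) / 2
annual_saving_per_twh = midpoint_saving * 1_000_000   # 1 TWh = 1e6 MWh

print(f"even the dearest solar+storage beats the cheapest gas by ${gap}/MWh")
print(f"midpoint saving: ${midpoint_saving:.0f}/MWh, "
      f"about ${annual_saving_per_twh/1e6:.0f}M per TWh of demand")
```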

This transition is particularly crucial for subtropical regions, where the gap between winter and summer solar output is relatively small and generation is stable: with a few hours of storage, solar can adequately meet daily electricity demand. In high-latitude countries like the UK, batteries are equally indispensable, though for a slightly different purpose. Beyond solar, they complement wind power: when strong winds produce surplus electricity in the dead of night, pushing prices negative and forcing wind curtailment, batteries become the key tool for storing that surplus for peak demand.

Many still dismiss the transition with the phrase ‘renewable energy depends on the weather,’ but this statement overlooks the existence of energy storage. As battery costs continue to decline, energy systems are no longer constrained by immediate weather conditions; rather, they depend on overall resource availability and dispatch capability. In a world abundant with wind and sunlight, the truly unstable factors are the prices and supply of fossil fuels.

The idea that ‘there is solar energy at night’ is not just a catchy phrase; it is a conclusion naturally derived from cost curves and system design. When solar and wind energy are paired with long-duration batteries, they become cheaper and more controllable than newly constructed fossil fuel power plants. The question is no longer ‘is it feasible?’ but rather ‘why resist it?’

The Dilemma of Local Governance in England

The primary issue with local governance in England is not the incompetence of local authorities, but rather that they are institutionally designed to fail. Power and resources are highly concentrated in Westminster, leaving local governments with responsibilities but no corresponding control. Over time, this systemic flaw has become evident.

One of the fundamental limitations of local councils is that they bear significant statutory responsibilities without having control over the relevant resources. Social welfare, adult care, children’s services, and special educational needs are all mandated by law, and demand continues to rise with an aging population and social changes. However, local governments lack sufficient financial tools to respond, relying instead on central government funding settlements, which have tightened over the past two decades. Consequently, local authorities are forced to ‘rob Peter to pay Paul’ among statutory services, sacrificing long-term beneficial investments—such as in transport, culture, and economic development—first.

In 2004, the Labour government attempted to establish an elected regional assembly in the North East of England, which would have been the first of its kind in England. However, it was rejected in a referendum by nearly 78% of voters. This failure is often oversimplified as ‘the English do not want local autonomy,’ but the more direct reason was that the assembly lacked real power and stable financial sources while seeking to replace existing county councils. Voters saw no tangible benefits, only an additional layer of political structure, and their rejection was unsurprising.

Subsequently, the UK shifted towards promoting combined authorities, merging multiple local governments into larger administrative units with some functions delegated by the central government, along with elected mayors. While this arrangement appears pragmatic, it still fails to address the core power structure.

For instance, the North East is currently covered by two combined authorities: the North East Combined Authority and the Tees Valley Combined Authority. On the surface, they are no different from the regional assembly that was rejected years ago; the problem remains that they are merely administrative arrangements, not political entities. They lack their own councils, independent legislative powers, and stable, predictable financial foundations.

Even more perplexing is their decision-making mechanism. Major decisions within a combined authority often require repeated negotiations between elected mayors and leaders of all member local councils. If consensus cannot be reached, decision-making stagnates. This is neither parliamentary democracy nor a single executive leadership system, but rather a highly consultative, low-accountability hybrid. When policy failures occur, voters find it difficult to determine whom to hold accountable.

London is one of the few exceptions. In addition to having a mayor, London also has the London Assembly, which provides scrutiny, oversight, and public debate, at least forming a basic structure for democratic checks and balances. However, this system has not been replicated in other regions of England.

Another structural issue is that local and regional governments must continually ‘bid’ to Westminster. Whether for transport, housing, skills training, or urban renewal, local authorities must draft proposals to compete for centrally-led funding, akin to participating in a beauty pageant, catering to the policy preferences of the current minister. Resource allocation is not based on local long-term needs but rather on the central government’s current political priorities.

If local authorities truly controlled their resources, the issues could be much simpler. They could independently determine budget allocations, balancing transport, education, public health, and economic development, rather than passively executing Whitehall’s directives. The essence of local politics should be about trade-offs and accountability, not incessantly writing bids and awaiting approvals.

Therefore, the truly reasonable direction for reform is not to further patch up combined authorities but to complete England’s long-overdue constitutional arrangements: establish eight new regional assemblies along existing English regional boundaries, placing them on an institutional level equivalent to London, Scotland, Wales, and Northern Ireland, with clearly defined legislative powers, financial rights, and areas of responsibility so that both resources and power can be devolved.

This is not radical reform but institutional catch-up. Only when England finally possesses a political structure commensurate with its scale can local governance truly mature, and Westminster disengage from the minutiae of local affairs. England’s problem has never been that localities are incapable, but that the centre controls too much, in too much detail.

Users Abandoning Gas Risks a Costly Transition

The stagnation of gas pipeline networks is not a problem unique to any one country; rather, it is a structural dilemma faced by the entire developed world. Europe, North America, Australia, and Japan—all regions that laid extensive urban gas networks in the 20th century—now find themselves at the same crossroads. The question is not whether to dismantle these systems, but rather when, how, and who will bear the costs.

In a truly decarbonized energy system, the combustion of fossil fuels has no reasonable place. This is not an ideological debate; it is a matter of physical law. Regardless of whether gas is sourced from underground or repackaged as ‘low-carbon’, its combustion inevitably results in greenhouse gas emissions. Fortunately, mature and superior alternatives already exist for residential and commercial buildings: heat pumps can amplify one unit of electricity into three to four units of heat, while induction stoves eliminate indoor pollution, offering efficiency, safety, and health benefits that far surpass those of gas. Energy transition does not mean a reduction in quality of life; rather, it signifies the obsolescence of a technically outdated system.
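The heat-pump advantage is an energy ratio, not automatically a bill saving: whether it shows up in running costs depends on the electricity-to-gas price ratio. A sketch with illustrative prices (the COP and boiler efficiency follow the text; the prices are invented for illustration):

```python
# Cost per kWh of delivered heat: heat pump vs gas boiler.
COP = 3.5            # heat pump: 1 kWh of electricity -> ~3.5 kWh of heat (text: 3-4)
BOILER_EFF = 0.90    # assumed condensing gas boiler efficiency

elec_price = 0.25    # currency units per kWh, illustrative
gas_price = 0.07     # illustrative

heat_cost_hp = elec_price / COP        # cost per kWh of delivered heat
heat_cost_gas = gas_price / BOILER_EFF
break_even_ratio = COP / BOILER_EFF    # elec/gas price ratio at which costs match

print(f"heat pump:  {heat_cost_hp:.3f} per kWh of heat")
print(f"gas boiler: {heat_cost_gas:.3f} per kWh of heat")
print(f"heat pump wins whenever electricity costs less than "
      f"{break_even_ratio:.1f}x the gas price")
```

At these assumed prices the heat pump is already slightly cheaper per unit of heat, and every fall in the electricity-to-gas price ratio widens the gap.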

Consequently, the trend of users ‘jumping ship’ is inevitable. As households and businesses gradually shift towards full electrification, they not only save on energy costs per kWh but also avoid the fixed charges embedded in their gas bills that pay for the entire network. The result is that as users decrease, the network costs per household increase; higher costs drive away even more potential users. This death spiral is not a market failure; it is the natural conclusion of infrastructure that has lost its justification for existence.
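The spiral is easy to put in numbers. The sketch below assumes a fixed annual network cost shared equally among the remaining connected households; both figures are invented for illustration.

```python
# The 'death spiral' in numbers: a fixed network cost spread over fewer users.
FIXED_NETWORK_COST = 100_000_000   # per year to maintain the network (assumed)
initial_users = 1_000_000          # connected households (assumed)

def fixed_charge(users):
    # Per-household share of the network's fixed cost.
    return FIXED_NETWORK_COST / users

for remaining in (1.0, 0.75, 0.5, 0.25):
    users = int(initial_users * remaining)
    print(f"{remaining:.0%} of users remain -> {fixed_charge(users):.0f}/household/yr")
```

When half the users leave, the per-household charge doubles; at a quarter, it quadruples, and each rise pushes more households towards electrification.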

Some may argue that if this is the case, why not delay the transition as much as possible? However, this is precisely the most dangerous choice. If gas networks are not phased out, humanity must continue to rely heavily on fossil fuels, pushing the global warming trajectory towards 3 °C or even higher. This would not merely represent a failure to meet abstract climate targets; it would lead to concrete and brutal systemic disasters: extreme heat becoming the norm, reduced agricultural yields, disrupted water resources, coastal cities forced to retreat, and the economic and social costs far exceeding the expense of decommissioning any gas pipeline. In contrast, dismantling the network is not radical; it is rational.

The truly challenging issue lies in how to transition fairly. Gas pipelines cannot be shut down overnight, as many households will still depend on them for basic heating and hot water in the short to medium term. If left entirely to market forces, the last remaining users—often the most vulnerable with the least choices—will bear the highest costs. This is why the retirement of gas infrastructure cannot be merely a commercial outcome; it must become a part of public policy. The costs of stranded assets must be paid regardless; the only difference is whether they are distributed in a planned manner or explode uncontrollably later on.

Thus, the conclusion is clear and rational. First, the expansion of gas distribution networks should be halted immediately to avoid creating assets that are bound to be scrapped. Second, a predictable and enforceable decommissioning timetable should be established, synchronizing the sealing and dismantling of pipelines with the rollout of alternatives like heat pumps and building energy efficiency measures. Third, policy tools should be employed to ensure that the costs of transition do not disproportionately burden the last remaining households still using gas.

Gas pipelines must eventually be phased out; this is not an option but a prerequisite. The real choice left is whether to dismantle them in an orderly fashion now or to pay a heavier and more inequitable price after climate chaos ensues.


Differences in Property Title Systems: Hong Kong vs. UK

In Hong Kong, the first thing buyers are often reminded of is not the duration of their mortgage, but rather the stack of “original contracts”. Lawyers solemnly advise clients to keep these documents safe, as losing them can lead to a complicated recovery process, making resale or refinancing difficult. Over time, the property title becomes not just a legal document but a family heirloom that must be carefully preserved.

In contrast, many Hong Kongers find it hard to believe that in the UK, after a transaction is completed, the lawyer typically hands over a thin printed document of just a few pages listing the property address, owner’s name, and basic rights information. There is no real “original” document, as anyone can purchase an official copy of the register entry from the land registry for just a few pounds. The pages in hand serve merely as a convenient record rather than the foundation of property ownership.

This stark contrast leads many Hong Kongers to mistakenly believe that the UK has no property titles. However, strictly speaking, the UK does have titles; it simply no longer requires a stack of documents to prove ownership. The real difference lies not in the quantity of documents but in the system itself.

Hong Kong employs a document registration system. The government is responsible for registering documents related to land, primarily determining the priority of registered documents without guaranteeing ownership itself. The validity of ownership depends on a complete and unblemished chain of documents. Thus, with each transaction, lawyers must meticulously trace back through earlier documents related to land grants, transfers, and mortgages to ensure there are no gaps or contradictions.

The consequences of this system are quite tangible in Hong Kong’s property market. Properties lacking complete original contracts, known as “no-title properties” or “copy-title properties”, do exist and are not merely theoretical issues. Banks tend to be more cautious when approving mortgages for these properties, sometimes refusing to lend altogether, which compresses the buyer pool to cash buyers and significantly reduces liquidity. As a result, these properties often sell at a discount to similar units in the same area with complete contracts; the extent of the discount varies by case, but the impact is real.

Consequently, a uniquely local arrangement has emerged in Hong Kong: even if an owner has sufficient funds to pay the full price of a property upfront, they often deliberately take out a small mortgage just so the bank will safeguard the entire set of original contracts. The bank here is not merely a lender; it functions as an institutional safe deposit box for the title documents. This practice itself underscores the weight of documents within the system.

The UK, along with most developed economies today, operates under a property registration system. The state maintains legally binding land registers that clearly record who the owner is and the scope of their rights. Once registration is completed, the law treats the register itself as conclusive evidence of ownership. Old documents may still exist but are primarily supplementary; even if individual old documents are missing, they typically do not undermine the registered ownership status.

It is important to note that not all land in the UK is fully registered; a small number of properties still hold unregistered titles that rely on historical deeds to prove ownership. However, these cases are now rare, and the system includes a crucial safeguard: a sale or mortgage of unregistered land generally triggers compulsory first registration. In other words, document issues are absorbed and resolved during the transaction process rather than lingering in the market as a common price-dampening label.

Globally, there are not many places that still rely heavily on document registration systems, and they are mostly concentrated in developing countries. India is often cited as an example, where many regions only register transaction documents rather than providing state guarantees for ownership, leading to widespread land disputes and a backlog of court cases. Similar situations exist in Pakistan and Bangladesh; in Africa, parts of Nigeria still experience a coexistence of documents, local customary law, and administrative approvals, resulting in insufficient clarity of ownership and frequent disputes. These areas continue to rely on document systems not due to their rigor but because of heavy historical burdens and high transformation costs.

In contrast, countries like the UK, Australia, Canada, and New Zealand completed their transition in the 20th century, with the state assuming responsibility for confirming ownership, significantly reducing transaction risks and costs. Documents still exist, but they are no longer the critical burden on owners.

Thus, the real question worth asking is not “why can property ownership information in the UK be obtained for just a few pounds,” but rather “why does Hong Kong still require that stack of original contracts?” The answer is not mysterious. Hong Kong is not without reform directions and has already legislated for a property registration system; the challenge lies in how to resolve the vast backlog of historical ownership issues and who will bear the risks during the transition. The task is monumental and offers little immediate political return, so the system has remained in a prolonged transitional state.

The weight of property titles often represents not security but rather a burden left by history.


The Real Reason Christmas is on December 25

Each December, as cities are adorned with lights and carols fill the air, many assume that Christmas is celebrated on December 25 because it marks the birth of Jesus Christ. Others suggest that the date is significant as it is close to the winter solstice, symbolizing the retreat of darkness and the return of light. While these interpretations hold some merit, a closer examination of church history reveals that the origin of December 25 resembles a gradually formed narrative of faith rather than a precisely recorded historical date.

In the worldview of the early church, time was not seen as fragmented or random. Jewish tradition and early Christian belief commonly held that God’s actions in history possess inherent harmony and symmetry. One belief that is less frequently mentioned today is the concept of the ‘integral age’: significant figures chosen by God would have their earthly missions begin and end on the same day of the year. Conception and crucifixion, beginnings and completions, resonate with one another in God’s design.

Thus, the early church’s primary focus was not on determining the date of Jesus’ birth but on pinpointing the moment of His crucifixion. All four Gospels record that Jesus was sentenced to crucifixion by the Roman governor Pontius Pilate around Passover. Historical records place Pilate’s tenure roughly from AD 26 to 36, while Passover, in the Jewish calendar, always falls on a full moon. For Christians of the time, this provided a rare and valuable chronological clue.

By the second and third centuries, the Western church gradually adopted a traditional view that Jesus was crucified on March 25. This date was not precise enough to serve as historical evidence but was seen as a complete, solemn, and theologically coherent day within the narrative of salvation. Following the logic of the ‘integral age’, it was also concluded that Jesus must have been conceived on that same date. Adding nine months leads naturally to December 25 as the commemorative date of His birth.

If we delve further into the question of the exact year of Jesus’ birth, history provides a clearer outline. The Gospel of Matthew states that Jesus was born during the reign of King Herod; historians generally agree that Herod died in 4 BC. Therefore, Jesus could not have been born in AD 1 but was likely born between 6 BC and 4 BC, with some studies even suggesting as early as 7 BC. This implies that the ‘AD’ dating system we use today is already several years out of sync with the actual timing of Jesus’ birth.

The ‘star’ mentioned in the Gospel of Matthew, often referred to as the ‘Star of Bethlehem’, has sparked considerable imagination and speculation over the centuries. Some scholars note that in 7 BC, Jupiter and Saturn had a rare triple conjunction in Pisces; in the context of ancient astrology, such celestial events were easily interpreted as symbols of kingship and the Israelite nation. Other studies mention that Chinese historical texts recorded a possible nova or comet in 5 BC, which aligns closely with the estimated years of Jesus’ birth. While these speculations are intriguing, they remain retrospective attempts by later generations to explain the account, and they were never the basis on which the church fixed the date of Christmas.

For early Christians, celestial bodies served more as a narrative language than as tools for calculating years. What truly mattered was how God entered the world through history, not the precision of a particular night. Consequently, the Eastern church employed the same theological reasoning, interpreting the dates of crucifixion and conception as April 6, which naturally leads to January 6, celebrated today as Epiphany. The methodology is the same, the dates differ, but the focus remains on meaning rather than precision.

Therefore, December 25 has never been Jesus’ ‘birth certificate’. It is a day that gradually took shape through prayer, contemplation, and theological understanding, later fortuitously aligning with the winter solstice, enhancing the symbolism of ‘light entering the world’. It serves as a reminder not of historical certainty but of how faith perceives time and discerns the rhythm of God’s presence throughout the ages. In this sense, Christmas transcends the date itself, becoming a celebration of deeper significance.
