Author name: 胡思

Why Does Trump Desire Greenland So Much?

From a resource perspective, Greenland is not particularly appealing. While it does contain rare earth elements, uranium, zinc, and nickel, the question has never been about availability but rather about viability. The polar climate results in a very short construction season, and the costs of transporting equipment, fuel, and labor across half the globe far exceed those of other mining regions. Furthermore, local political sensitivities regarding environmental issues have led to the shelving of several mining projects amid controversy. Even as the world seeks to diversify its rare earth supply, the market generally believes that Greenland will struggle to become a significant player in the foreseeable future.

The military aspect has also been exaggerated. Under a 1951 defense agreement between the United States and Denmark, the U.S. has long been permitted to deploy any necessary military facilities in Greenland. While Thule Air Base (renamed Pituffik Space Base in 2023) is indeed important, it is often overlooked that the U.S. has significantly reduced its troop presence there over the years. At the height of the Cold War, the base was staffed by roughly ten thousand personnel; today, only a few hundred remain, with their roles primarily focused on missile warning, space surveillance, and radar operations. If Greenland were truly a critical military asset, the U.S. would have ramped up its presence rather than continuously downsizing it.

Given that resources are not economically viable and military urgency is lacking, the focus must shift to politics.

With Arctic warming, the strategic value of Greenland is indeed increasing; however, making sovereignty a public issue carries significant costs. Denmark is a NATO member, and although Greenland enjoys a high degree of autonomy, it remains part of the Kingdom of Denmark. Publicly discussing the ‘purchase’ of allied territory undermines the foundation of post-war alliance trust. If this logic is accepted, NATO would cease to be a partnership of equals and devolve into a stark power dynamic, potentially costing the U.S. the political trust of the entire continent of Europe.

This high-risk posture aligns perfectly with Trump’s political instincts. He favors simple, tangible narratives that can be promoted immediately, reducing complex geopolitical issues to ‘America wants a piece of land.’ This strongman image is particularly appealing to voters who lack a basic grounding in international politics but yearn to see displays of power: there is no need to understand the costs or calculate the consequences, because the mere perception of strength is enough to mobilize their emotions.

Moreover, one cannot overlook the role of distraction. In the U.S., the ongoing controversy surrounding the Epstein case has been contentious. Even though the court has ordered the release of related documents, the actual content released by the Justice Department has been widely criticized as representing only a tiny fraction—reportedly less than 1%—of the total. In this context, shifting public attention to a grand external issue—Arctic sovereignty, territory, national power—naturally feels safer than confronting thorny and sensitive domestic issues. By occupying news space, the original problems become diluted.

Thus, the Greenland issue is less about a strategic blueprint and more about a political performance: crafting a strongman image while deflecting domestic pressures. Greenland may not hold treasures, but it reflects a reality—when politics appeals to emotion and posture, it often attracts those least willing to understand the complexities of the world; yet, it is allies, institutions, and the already fragile international trust that will ultimately bear the consequences.


The Paradox of Population Decline and Immigration Policy in the UK

As we enter 2026, the population issue in the UK has shifted from a long-term concern to an immediate risk. The Daily Telegraph reports that net immigration is heading towards a two-decade low, with a reduction of over 100,000 foreign workers and students within a year. Applications for nursing and health visas have nearly halved, while skilled worker visas have plummeted by more than 30%, and family visas have also contracted. James Bowes, a researcher at the University of Warwick, warns that under the current policy trajectory, net immigration could realistically fall to zero this year, or even turn negative. This is not a case of ‘successful management’; rather, the immigration pool is being actively drained by policy.

As immigration declines, another pillar of population support is also beginning to collapse. Sky News cites an analysis by the Resolution Foundation, indicating that the UK is approaching a tipping point where natural population growth will turn negative, meaning that deaths will consistently outnumber births. The persistently low birth rate has become a structural problem that will not automatically rebound due to tightened immigration. When negative net immigration coincides with negative natural growth, the UK will not merely experience a slowdown in population growth but a substantive contraction. This point has received almost no positive response from government policy.

The first impact of population contraction is felt in the labour force. A decreasing working-age population makes it more difficult for businesses to hire, naturally slowing economic growth; simultaneously, the pace of population aging accelerates, with a rising proportion of retirees, meaning that pension and NHS healthcare expenditures will balloon more rapidly. The problem lies in the fact that there are increasingly fewer taxpayers, while the number of beneficiaries continues to rise, placing ever-greater structural pressure on public finances.

The Times cites the Resolution Foundation, stating that if net immigration remains roughly 200,000 lower than originally forecast over the long term, the resulting gap in public finances would be equivalent to raising the basic rate of income tax by about 2p to compensate. In other words, tightening immigration does not ‘alleviate burdens’; it merely shifts financial pressure onto existing taxpayers, ultimately manifesting as a tax increase borne by workers.

Ironically, there exists a significant gap between public opinion and reality. A poll by The Guardian shows that despite a substantial drop in net immigration, around two-thirds of voters mistakenly believe that immigration is on the rise, and three-quarters lack confidence in the government’s immigration policy. The issue of small boat crossings has become a highly visible political focal point, yet it constitutes only a tiny fraction of total immigration numbers. By catering to these misconceptions and continuing to tighten legal immigration channels, the government will only further weaken the labour force and tax base.

If the UK simultaneously faces negative net immigration, negative natural growth, labour shortages, and rapid aging, yet still chooses to narrow the immigration pool, this is no longer prudent but shortsighted. The population issue is structural and requires a stable, predictable, and attractive system to retain people. Continuing to tighten immigration policy at this juncture will not only fail to resolve the problem but will also resemble a self-defeating cycle, pushing the UK towards a vicious circle of ‘fewer people, higher taxes, and weaker economy.’


The Truth About Electric Vehicle Fires

In recent years, news reports have frequently featured images of “electric vehicle fires” and “lithium battery explosions.” As these stories accumulate, many people instinctively wonder: Are electric vehicles particularly prone to catching fire? This intuition, however, is itself questionable. What you are witnessing is not the frequency of incidents, but rather the frequency of their exposure.

Let us return to the numbers. Different countries have slightly varying statistical methodologies, but the overall direction is highly consistent. For every 100,000 electric vehicles, approximately 20 to 30 cases of fire occur annually; in comparison, gasoline or diesel vehicles see about 1,300 to 1,600 cases under the same metric. Even when comparing based on mileage, the conclusion remains unchanged: the incidence of fire in fuel vehicles is significantly higher than in electric vehicles. When looking at statistics with a clear denominator, electric vehicles are not the “more fire-prone” category.
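The comparison above can be made explicit with a back-of-envelope sketch using the cited ranges; the variable names are illustrative, and the output is simply the ratio implied by the figures in the text.

```python
# Fire incidence per 100,000 vehicles per year, using the ranges cited above.
EV_FIRES_PER_100K = (20, 30)        # electric vehicles
ICE_FIRES_PER_100K = (1300, 1600)   # gasoline/diesel vehicles

def incidence_ratio(ev_range, ice_range):
    """Return the (most conservative, most generous) ratio of ICE to EV fires."""
    low = ice_range[0] / ev_range[1]   # lowest ICE figure vs highest EV figure
    high = ice_range[1] / ev_range[0]  # highest ICE figure vs lowest EV figure
    return low, high

low, high = incidence_ratio(EV_FIRES_PER_100K, ICE_FIRES_PER_100K)
print(f"Fuel vehicles catch fire {low:.0f}x to {high:.0f}x as often per vehicle")
```

Even under the comparison least favorable to electric vehicles, the gap is tens of times, which is why a clear denominator matters more than any individual headline.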

So why does the public perception seem entirely the opposite? The reason is fundamentally human. Electric vehicles are still a relatively new phenomenon, and with a smaller fleet size, each incident appears rare. When lithium batteries enter thermal runaway, the resulting smoke, flames, and potential for reignition create a dramatic spectacle, making for compelling headlines and videos. In contrast, fuel vehicle fires have become mundane occurrences—engine overheating, aging fuel lines, short-circuited wiring, and post-accident fuel leaks happen daily, yet most remain confined to local fire department records or insurance claims, seldom making the news. The result is that exposure rates are mistakenly perceived as incidence rates.

Some have raised another concern: even if fires are infrequent, are lithium battery fires more difficult to extinguish, thus making them more dangerous? This assertion has some truth but is often exaggerated. Lithium-ion batteries can indeed enter thermal runaway under extreme conditions, and their reactions do not depend on external oxygen. The focus in managing such incidents is on prolonged cooling rather than merely isolating oxygen, which means firefighters often require more water and time, along with monitoring for reignition risks afterward. This speaks to a difference in handling rather than an inability to extinguish. Electric vehicle fires can be controlled; the tactics simply differ from those used for fuel vehicles.

More critically, difficulty in extinguishing fires does not equate to a higher frequency of occurrence. When a fuel vehicle ignites, the flames often spread more rapidly and violently; once fuel leaks, the risk to passengers can be significant. However, because such incidents are so common, their handling has become institutionalized, and the public has become accustomed to them, leading to a false sense of security based on familiarity. Exaggerating the relatively few, low-frequency, but more complicated electric vehicle incidents into a widespread high-risk narrative is logically flawed.

Another frequently conflated source of concern has emerged in recent years: fires involving small lithium battery products such as electric bicycles, scooters, and power banks, which have indeed increased and often result in casualties within residential settings. Many of these incidents involve substandard battery cells, modifications, or improper charging. These accidents are often “conveniently” attributed to electric vehicles, blurring the lines between different levels and specifications of risk and amplifying fear.

In summary: if you say you “often see” electric vehicle fires, that is correct; but if you conclude that they “occur frequently,” that is incorrect. What truly matters is not how shocking the images are, but how large the denominator is. Mature risk discussions rely not on headlines and videos, but on statistics and systems.


How Secondary Legislation Replaces Parliamentary Decision-Making

Secondary legislation in the UK has never been particularly complex in its original intent. It was meant to serve merely as an administrative lubricant for handling technical details: updating fees, adjusting procedures, revising forms. If every minor issue required restarting the full legislative process, it would be both inefficient and disproportionate. The problem lies not in the existence of secondary legislation, but in how its use has been progressively expanded, ultimately replacing policy choices that should be addressed directly by Parliament.

This slippery slope did not occur suddenly; it is a natural result of institutional incentives. When Parliament passes primary legislation, it often grants ministers the authority to ‘regulate details,’ citing the need for flexibility. The broader the delegated powers, the less political resistance there is at the moment; controversies are postponed to be dealt with later through statutory instruments. A successful instance becomes a precedent; as precedents accumulate, they become the norm.

During the COVID-19 pandemic, this model was pushed to its limits. Lockdowns, business restrictions, and bans on gatherings—measures that profoundly impacted personal freedoms and economic activities—were not debated as individual bills but were rapidly enacted through secondary legislation. Often, statutory instruments were submitted to Parliament only after the measures had already been implemented, rendering the debate a mere formality. While theoretically subject to rejection, in practice, it is nearly impossible to overturn a policy that is already in operation.

The welfare system follows a similar pattern. Eligibility thresholds, sanction mechanisms, and adjustments to amounts are often introduced under the guise of technical amendments, yet for those affected, they represent a critical juncture for maintaining their livelihoods. In terms of procedure, the time allocated for debate on these changes is disproportionate to their substantive impact.

It is immigration policy that exposes the systemic problem most clearly. The UK’s immigration rules are not legislated by Parliament on a case-by-case basis; rather, they are formulated by the Home Secretary under existing delegations and laid before Parliament via ‘Statements of Changes’ before taking effect. These documents are neither bills nor statutory instruments in the conventional sense, yet they carry full legal force; Parliament cannot amend them line by line, nor is there an inherent mechanism for debate.

Upcoming changes to immigration policy will follow the same path. Residency thresholds, family reunion conditions, language requirements, and arrangements affecting the rights of BN(O) applicants can all be rewritten without comprehensive parliamentary scrutiny. The process is formally legal, yet in substance it hands highly political, personally consequential decisions to the executive to settle unilaterally.

This arrangement is even more regressive in terms of oversight than typical secondary legislation. It is not bound by affirmative or negative procedures, and the political cost of rejection is exceedingly high, resulting in Parliament’s role being reduced to that of a bystander. The system has not been explicitly dismantled, but in practice, it has been hollowed out.

Proponents often defend this by citing efficiency, arguing that the government needs to respond quickly. However, efficiency has never been a justification for undermining democratic oversight. The real issue lies in the boundary: what constitutes execution details, and what are actual policy choices? When the latter is long packaged as the former, Parliament’s legislative function is supplanted by executive power.

The UK’s system has not collapsed overnight; rather, it has gradually morphed through repeated ‘reasonable arrangements.’ Secondary legislation was meant to be an auxiliary tool but has become a political shortcut; Statements of Changes were intended as technical pathways but now bear the weight of life-and-death decisions, including those of BN(O) applicants. When significant choices no longer require genuine discussion in Parliament, what remains of democracy is merely procedural legitimacy, devoid of substantive accountability.


The Future of High-Speed Rail and Its Alternatives

Every few years, someone declares that high-speed rail is on the verge of becoming obsolete. The arguments are often compelling: autonomous driving will make roads smarter, flying cars could alleviate traffic from above, and Hyperloop might propel people through vacuum tubes at incredible speeds. The question, however, is not whether these technologies can be developed, but whether they can operate sustainably, reliably, and affordably on a civilizational scale.

Let us first consider autonomous driving. Removing the driver does not widen the roads. The true limitation of transportation systems has never been response speed, but rather throughput. A dual-track high-speed rail can transport 10,000 people per hour in one direction during peak times, a standard performance for a mature system. To accommodate the same flow of people on highways, assuming autonomous driving is highly developed and each vehicle averages 1.5 passengers, a single lane of a highway would only support about 3,000 people per hour, all while ignoring the space taken by heavy vehicles, merging from side roads, deceleration at exits, speed differentials, and accident risks.

In other words, matching the capacity of high-speed rail would require building ten or more additional highway lanes to do the work of a single pair of tracks. This is not merely a technical issue but also a matter of cost and environmental impact. The extensive land acquisition, bridges and elevated structures, sound barriers and drainage systems, along with long-term maintenance and management, all represent significant expenses and ecological damage. In contrast, high-speed rail requires only a controlled corridor, which occupies far less land and causes less environmental fragmentation than an equivalent highway network. If roads were to replace high-speed rail, the costs would not increase linearly but would spiral out of control.
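The lane arithmetic can be sketched as follows. The 2,000 vehicles-per-hour lane capacity implied above and the 1.5 occupancy are idealized assumptions; the ideal result understates the real requirement, since actual lanes carry far less once heavy vehicles, merging, exits, and incidents are counted.

```python
import math

RAIL_PAX_PER_HR_PER_DIR = 10_000   # dual-track high-speed rail, one direction, peak
VEHICLES_PER_LANE_HR = 2_000       # idealized freeway lane capacity (assumption)
OCCUPANCY = 1.5                    # average passengers per autonomous car

pax_per_lane_hr = VEHICLES_PER_LANE_HR * OCCUPANCY            # 3,000 people/hour
ideal_lanes_per_dir = math.ceil(RAIL_PAX_PER_HR_PER_DIR / pax_per_lane_hr)

print(f"Each lane moves at most {pax_per_lane_hr:.0f} people/hour")
print(f"Ideal lanes needed per direction: {ideal_lanes_per_dir}")
# Real-world lane throughput is well below the ideal 2,000 veh/h, which is how
# the total number of lanes required climbs past ten once both directions and
# derating factors are included.
```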

The concept of flying cars requires a reality check. They are not competitors to high-speed rail but could only serve as ‘air taxis.’ This is evident when examining energy consumption. High-speed rail, relying on steel wheels on steel tracks with centralized traction, has an extremely low energy consumption of about 0.05 kWh per passenger per kilometer. In contrast, flying taxis must continuously counteract gravity, and vertical takeoff and landing are inherently energy-intensive activities. Based on existing eVTOL prototypes and public estimates, even at ideal passenger loads, their energy consumption is approximately 1.5 to 2 kWh per passenger per kilometer, which is 30 to 40 times that of high-speed rail. Such energy levels dictate that they can only be used for urgent needs or high-value transport, and cannot serve as the backbone of mass transit. Treating flying taxis as mainstream is merely institutionalizing energy waste.
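The energy comparison reduces to a single division; this sketch uses only the per-passenger-kilometer figures quoted above.

```python
RAIL_KWH_PER_PAX_KM = 0.05          # high-speed rail: steel wheel on steel rail
EVTOL_KWH_PER_PAX_KM = (1.5, 2.0)   # eVTOL air-taxi estimates at ideal load

ratios = tuple(e / RAIL_KWH_PER_PAX_KM for e in EVTOL_KWH_PER_PAX_KM)
print(f"Flying taxis use {ratios[0]:.0f}x to {ratios[1]:.0f}x "
      f"the energy of high-speed rail per passenger-km")
```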

As for Hyperloop, which seems the most advanced, it is actually the least viable. The issues lie not only in the high costs of vacuum tubes but also in the structural disadvantages regarding capacity and energy consumption. A high-speed train can carry between 800 and 1,200 passengers with trains running every few minutes, resulting in naturally high throughput. Most Hyperloop designs utilize small capsules that carry about 20 to 30 people. Even if they could run every two minutes, they would only transport 600 to 900 passengers per hour in one direction. To replace a high-speed rail line, one would need to construct over ten parallel tubes.
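The throughput gap can be checked directly from the capsule sizes and headway given above; the two-minute headway is itself an optimistic assumption.

```python
import math

RAIL_PAX_PER_HR = 10_000           # one direction, from the rail figure above
CAPSULE_SEATS = (20, 30)           # typical Hyperloop capsule design range
HEADWAY_MIN = 2                    # optimistic: one capsule every two minutes

capsules_per_hr = 60 // HEADWAY_MIN                              # 30 departures/hour
throughput = tuple(s * capsules_per_hr for s in CAPSULE_SEATS)   # pax/hour per tube
tubes_needed = math.ceil(RAIL_PAX_PER_HR / throughput[1])        # best case per tube

print(f"One tube: {throughput[0]}-{throughput[1]} passengers/hour")
print(f"Tubes needed to match one rail line: at least {tubes_needed}")
```

Even granting the largest capsules and the shortest headway, matching a single high-speed line takes a dozen parallel tubes, which is where the maintenance argument in the next paragraph begins.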

Moreover, each tube must maintain near-vacuum conditions over the long term. Considering a tube several hundred kilometers long and a few meters in diameter, the volume would be in the millions of cubic meters, meaning any minor leak necessitates continuous pumping to compensate. The more tubes there are, the more seams there are, making thermal expansion and contraction, ground subsidence, and material fatigue increasingly difficult to manage. Energy and maintenance costs will only accumulate, not offset. The result is that to allow a few people to travel faster, one would incur higher construction and operational costs than high-speed rail, yet still fail to match its capacity and reliability.

When these three ‘alternative solutions’ are assessed together, the conclusion is quite clear. High-speed rail will not become obsolete not because it is conservative, but because it has achieved an irreplaceable balance among cost, energy, capacity, and safety that remains unmatched. Autonomous driving is suitable for urban and last-mile transport, flying taxis are only appropriate for emergencies and high-value scenarios, and Hyperloop remains at a stage where engineering calculations do not add up. A truly mature transportation system does not replace infrastructure with fantasies but allows each technology to play its role. What will become outdated is not high-speed rail, but those future visions that refuse to confront scale and reality.


Optimal Timing for Using Heat Pumps

In recent years, many new homes in the UK have come with heat pumps as standard, and numerous households have replaced their existing gas boilers with the help of government subsidies. The trouble is that many people, after installation, stay on standard electricity tariffs and keep the usage habits formed in the gas era. Their electricity bills consequently look more expensive than gas ever was, prompting doubts about whether heat pumps are cost-effective at all. This is not a problem inherent to the heat pumps themselves, but a result of the wrong tariff and the wrong operating pattern. By selecting an appropriate time-of-use tariff and shifting when the heat pump runs, the cost structure can be completely rewritten.

The reasoning is quite straightforward. The actual energy efficiency of a heat pump during heating can often reach three to four times that of a gas boiler; in other words, the same unit of electricity yields three to four units of heat. Combined with a time-of-use tariff, the price advantage becomes significant. For instance, with Octopus Intelligent Go, off-peak hours typically run from 23:30 to 05:30, with rates as low as 7p/kWh, while other periods can reach 29.5p/kWh, a difference of over four times. By scheduling the bulk of consumption during off-peak hours, many users can bring their average electricity price below roughly 15p/kWh, making the actual running cost of a heat pump roughly half that of a gas boiler.
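A back-of-envelope version of this comparison follows. The tariff rates and COP range come from the text; the gas unit price and boiler efficiency are illustrative assumptions, not figures from any specific bill.

```python
# All prices in pence per kWh.
OFF_PEAK_ELEC = 7.0        # Intelligent Go off-peak rate (23:30-05:30)
BLENDED_ELEC = 15.0        # achievable average with off-peak scheduling
HEAT_PUMP_COP = 3.5        # midpoint of the 3-4x efficiency cited above
GAS_PRICE = 6.5            # assumed gas unit price (illustrative)
BOILER_EFFICIENCY = 0.90   # assumed condensing-boiler efficiency (illustrative)

gas_heat = GAS_PRICE / BOILER_EFFICIENCY      # pence per kWh of delivered heat
blended_heat = BLENDED_ELEC / HEAT_PUMP_COP
off_peak_heat = OFF_PEAK_ELEC / HEAT_PUMP_COP

print(f"Gas boiler:           {gas_heat:.1f}p/kWh of heat")
print(f"Heat pump (blended):  {blended_heat:.1f}p/kWh of heat")
print(f"Heat pump (off-peak): {off_peak_heat:.1f}p/kWh of heat")
```

Under these assumptions, the more of the load that lands in the off-peak window, the further the per-kWh heat cost falls below gas, which is exactly what the scheduling advice below aims at.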

Starting with hot water settings, during off-peak tariff periods, the target temperature for hot water can be set to 52°C, allowing the heat pump to heat an entire tank of water at the cheapest rate. During normal or peak periods, the target temperature can be lowered to 48°C, serving merely to maintain warmth rather than reheat. This ensures that the most energy-intensive part of the process almost entirely avoids high electricity price periods.

For heating, the ground floor living and dining areas can be regarded as the main heat storage zones for the entire house, as these spaces typically have larger volumes and the highest thermal capacity in their walls and floors. The settings can be quite simple: set the TRVs of all radiators on the ground floor to 5, allowing these areas to absorb heat fully when needed; while bedrooms and other rooms can be set to 2 or 3 to limit temperature rises at night, avoiding sleep disturbances. Simultaneously, during off-peak hours, the target room temperature for the entire house can be set to 23°C, compelling the heat pump to operate intensively and ‘charge’ the house with heat; during other periods, the target temperature can be lowered to 19°C, allowing the stored heat to gradually release, thus reducing the need to restart the heat pump during the day or morning.
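The schedule described above amounts to a simple setpoint function of the time of day; the window and temperatures are the ones given in the text, and the function name is illustrative.

```python
from datetime import time

# Off-peak window from the tariff described above; it spans midnight.
OFF_PEAK_START, OFF_PEAK_END = time(23, 30), time(5, 30)

def setpoints(now: time):
    """Return (hot_water_C, room_C) targets for the given time of day."""
    in_off_peak = now >= OFF_PEAK_START or now < OFF_PEAK_END
    if in_off_peak:
        return 52, 23   # heat the tank and 'charge' the house at the cheap rate
    return 48, 19       # hold temperature; avoid reheating at peak prices

print(setpoints(time(2, 0)))    # off-peak: (52, 23)
print(setpoints(time(18, 0)))   # peak: (48, 19)
```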

Under this usage, there is no need to fixate on COP or SCOP. While these metrics matter when comparing equipment, the difference in electricity prices is often more decisive on the actual bill. When off-peak rates are far below those of other periods, running the heat pump as much as possible during the cheap window, even at the cost of a slight efficiency loss, still lowers overall costs. As long as users pick the right tariff and the right timings, a heat pump will usually be a far better deal than a gas boiler.


The Economic Benefits of Replacing Printers Early

Many households are aware that inkjet printers become increasingly expensive over time, yet they continue to endure the costs, primarily because the machine is still functioning, making replacement seem wasteful. However, a thorough cost analysis reveals that the real waste often lies in prolonging usage.

To provide some context, ink tank printers have only recently become mainstream. Over a decade ago, the home and small office market predominantly offered cartridge options. In recent years, with improved designs, enhanced reliability, and simplified refill systems, ink tanks have rapidly gained popularity, particularly in multifunction devices that combine printing, scanning, and automatic document feeding. Many households today are still using products based on the previous generation’s logic.

For a clearer comparison, let’s examine the differences using specific examples. Take two all-in-one inkjet printers of the same brand and class: a traditional cartridge model typically costs around $900 in Hong Kong, while the corresponding ink tank version is priced at approximately $2,000, with nearly identical functionalities. At first glance, the ink tank model appears to be $1,100 more expensive, but the real distinction lies in the consumables.

In the case of the cartridge printer, a black ink cartridge costs about $120 and has a nominal yield of around 300 pages; the three color cartridges also cost about $120 each, with the same yield. The cost per black-and-white page is nearly $0.40, and when printing in color, with both black and color inks being consumed simultaneously, the cost per page rises to about $1.60. Moreover, if any one color runs out, many models will refuse to print, often resulting in even higher actual costs.

Conversely, the ink tank printer operates quite differently. A bottle of black ink costs about $120 and can print 6,000 pages, resulting in a cost of just $0.02 per page. The three color inks together cost around $300 and can also print 6,000 pages, making the cost per color page approximately $0.05. Even when accounting for black ink, the total cost for color pages is only about $0.07. This is not merely a slight reduction; it represents a significant difference in cost structure.
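The per-page figures for both systems can be reproduced from the prices and nominal yields quoted above (all in HK$); the variable names are illustrative.

```python
# (price, nominal page yield) for each consumable, from the examples above.
CART_BLACK = (120, 300)        # one black cartridge
CART_COLOR_EACH = (120, 300)   # each of the three color cartridges
TANK_BLACK = (120, 6000)       # one bottle of black ink
TANK_COLOR_SET = (300, 6000)   # the three color bottles together

def per_page(price, pages):
    return price / pages

cart_bw = per_page(*CART_BLACK)                         # black-and-white page
cart_color = cart_bw + 3 * per_page(*CART_COLOR_EACH)   # color page uses all four
tank_bw = per_page(*TANK_BLACK)
tank_color = tank_bw + per_page(*TANK_COLOR_SET)

print(f"Cartridge: B/W ${cart_bw:.2f}, color ${cart_color:.2f} per page")
print(f"Ink tank:  B/W ${tank_bw:.2f}, color ${tank_color:.2f} per page")
```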

Many may wonder why ink tanks can be so much cheaper. The key lies not in the quality of the ink but in the operational design. Cartridges are not just simple ink containers; they are highly engineered consumables that incorporate nozzles, sensors, and chips to monitor usage, restrict substitutes, and even halt printing before the ink is entirely depleted. Each time a cartridge is replaced, it is essentially a purchase of a set of precision components, not just ink. The ink itself is a minor part of the cost; the rest comprises plastic, electronic components, packaging, and brand premiums.

In contrast, ink tank systems separate these components. The print head and control system are fixed within the printer itself, representing a one-time investment; only pure ink is replenished in simple bottles, devoid of chips, nozzles, or complex packaging. This separation of hardware and consumables naturally reduces costs to their most basic form. This design difference is the fundamental reason for the cost disparity of over tenfold per page between the two systems.

The break-even point becomes quite clear. The price difference between the two machines is $1,100. If primarily printing black-and-white documents, savings per page amount to about $0.38, requiring approximately 2,900 pages to break even. For frequent color printing, savings per page are around $1.53, leading to a break-even point of only about 720 pages. Considering a more realistic mixed usage scenario of 80% black-and-white and 20% color, the average savings per page is about $0.61, resulting in a break-even of approximately 1,800 pages.

Translating page numbers into time makes the conclusion even more intuitive. Printing 100 pages a month would yield a break-even period of about 18 months; for households with students or those working from home, monthly printing of 200 pages is not uncommon, reducing the break-even period to around 9 months. Remarkably, all of this often occurs before the old machine has even broken down.
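The break-even arithmetic above can be reproduced directly; every figure comes from the examples in the text, and the 80/20 usage mix is the assumption stated there.

```python
import math

PRICE_GAP = 1100                 # ink-tank model premium over cartridge model (HK$)
SAVE_BW = 0.40 - 0.02            # saving per black-and-white page
SAVE_COLOR = 1.60 - 0.07         # saving per color page
MIX_BW, MIX_COLOR = 0.8, 0.2     # assumed realistic usage mix

def breakeven_pages(saving_per_page):
    """Pages needed before the per-page savings repay the price gap."""
    return math.ceil(PRICE_GAP / saving_per_page)

mixed_saving = MIX_BW * SAVE_BW + MIX_COLOR * SAVE_COLOR   # about $0.61/page
for label, s in [("B/W only", SAVE_BW),
                 ("color only", SAVE_COLOR),
                 ("80/20 mix", mixed_saving)]:
    print(f"{label}: break even after {breakeven_pages(s)} pages")
```

At 100 pages a month, the mixed-use break-even of roughly 1,800 pages works out to about 18 months; at 200 pages a month, about 9 months.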

The key factor is not the brand or a specific model but rather the system in place. Cartridges represent a design that packages high-priced hardware as consumables, compelling repeated purchases; ink tanks shift costs forward, resulting in long-term low consumption. When the market has completed this transition, remaining within the old system and continuing to pay is, in itself, a form of invisible waste.

Thus, replacing a printer early is not a blind chase for the new but rather a means of stemming financial losses. The real question should not be whether the machine is still functional, but rather how much longer one intends to pay for an outdated cost structure.


The Cost of the Lower Thames Crossing Project

The Lower Thames Crossing project is ostensibly a straightforward infrastructure initiative: to construct a new road crossing between Kent and Essex to relieve the already overloaded Dartford crossing. This is not a grand vision project, but remedial construction intended to fix a long-failing transport hub. Yet the UK is notorious for its inability to deliver quickly even tasks long acknowledged as necessary.

The project itself is not complex. Designed with six lanes in total, three in each direction, it falls under the national trunk road network and is operated to motorway standards; it is not intended for urban commuting but serves as a backbone route primarily for freight and long-distance transport. Its function is unequivocal: not to induce new traffic, but to clear the existing gridlock. This is a tunnel designed for logistics, not for political posturing.

Consequently, its economic impact is quite direct. The Dartford crossing has long exceeded its design capacity; even a minor incident can trigger cascading paralysis across the entire southeastern road network. Delayed freight, unreliable delivery times, and wasted driver hours force businesses either to absorb the costs or to pass them on. For industries reliant on ports, warehousing, and road transport, this is not merely inconvenient; it is a daily structural loss. The Lower Thames Crossing promises more stable and predictable journey times, which is precisely what modern supply chains value most.

UK politicians frequently discuss the need to rebuild manufacturing and enhance export competitiveness, but if the most critical logistics bottleneck in the southeast remains in disrepair, even the most attractive industrial policies will remain mere words on paper. This tunnel may lack symbolic grandeur, but it represents a vital segment of the economic bloodstream.

What is truly striking is the cost incurred even before construction has officially begun. Public records reveal that the planning and consultation phases alone have consumed an astonishing amount of public funds. The cost of planning applications and related documentation approaches £300 million; preparations for the development consent order account for approximately £267 million; multiple rounds of public consultation, environmental assessments, and studies have consumed around £27 million. In total, just for preliminary documentation, research, and procedures, over £450 million has already been spent.

To date, before full-scale construction has commenced, the project has accumulated costs exceeding £1.2 billion. For comparison, its overall estimated construction cost is around £9 to £10 billion. In other words, before any tunnelling has begun, the UK has already spent more than one-tenth of the entire project budget on procedures and documentation. This money has not paved a single meter of road or moved a single shovel of earth, yet it starkly illustrates how the system consumes both time and resources.
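The "more than one-tenth" claim is easy to check. A minimal Python sketch, using only the figures quoted above (the variable names and the range endpoints used for the comparison are mine):

```python
# Figures reported above, in GBP
pre_construction_cost = 1.2e9      # accumulated before full-scale construction
estimate_low = 9e9                 # low end of construction estimate
estimate_high = 10e9               # high end of construction estimate

# Most conservative case divides by the largest budget estimate
share_low = pre_construction_cost / estimate_high
share_high = pre_construction_cost / estimate_low

print(f"Share of budget already spent: {share_low:.1%} to {share_high:.1%}")
# prints "Share of budget already spent: 12.0% to 13.3%"

# Even against the high-end budget, spending exceeds one-tenth
assert share_low > 0.10
```

Even under the most generous reading of the budget, pre-construction spending is above the one-tenth mark the article cites.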

Environmental assessments and public engagement are undoubtedly important, but when the system demands over a decade and hundreds of millions of pounds to repeatedly demonstrate that an already overloaded crossing needs to be alleviated, the issue transcends caution and veers into indecision. Ironically, during this period of delay, traffic jams, idling, and detours occur daily, with environmental costs never ceasing, merely dispersed across each silent moment of waiting.

Ultimately, the Lower Thames Crossing will likely be completed. The UK does not lack engineering capability, nor is it truly short of funds. What is genuinely unsettling is a governance model that requires even such a pragmatic six-lane tunnel to be ensnared by documentation for over a decade. While the tunnel may pierce the riverbed, if decision-making remains perpetually mired in procedures, it will not just be transport that suffers.

The Cost of the Lower Thames Crossing Project

The Lessons from Investing in Venezuelan Oil

A chronological examination of investments in Venezuelan oil reveals a recurring blind spot. The resources were always there, and the risks were always visible; yet investors repeatedly chose to ignore them, trusting that everything would eventually turn out well.

The first to pay the price were American oil companies. In 2007, amid a wave of nationalization, ConocoPhillips was forced to withdraw from the Orinoco heavy oil project and subsequently sought international arbitration. The tribunal ultimately ruled that Venezuela must compensate approximately $8.7 billion, one of the largest investment arbitration awards in the history of the energy sector. However, a ruling does not equate to cash. Given Venezuela’s long-standing debts and uncertain restructuring prospects, the recovery of this compensation is highly uncertain and can only be pursued through piecemeal methods such as seizing overseas assets, resulting in actual recoveries far below the book figure.

Chevron’s choice reflects another investment mentality. It did not fully withdraw but accepted a passive minority stake, choosing to remain. The outcome was capital lock-up and restricted operational control, with cash flow entirely dependent on sanctions waivers and political winds rather than market performance. Even though it has recently obtained limited operational permits due to diplomatic considerations, it has only managed to maintain production at a minimal level, falling short of normal investment returns. This situation has ceased to be a commercial calculation and has become a political gamble.

After the retreat of Western capital, Chinese investment entered the fray. Beginning in 2008, China provided over $60 billion in loans and investments to Venezuela under a ‘loans-for-oil’ model, in exchange for long-term crude oil supplies and engineering contracts. While this arrangement appeared to hedge against systemic risks, it failed to guard against declining production, aging equipment, and managerial failures. Oil deliveries have consistently fallen short of expectations, and some debts have required extensions or renegotiation. Even after years of debt repayment through oil, estimates still indicate that Venezuela’s unpaid debts to China amount to tens of billions of dollars.

When Donald Trump declared that the U.S. would take over Venezuela and intervene in transitional governance, the uncertainty surrounding Chinese investments grew further. Whether existing contracts would be recognized, whether loan arrangements would need to be rewritten, and whether repayment mechanisms would change all now depend on a new round of great power competition. Unrecovered assets were once again exposed to political risk.

The entire timeline reveals a recurring error: overseas fossil fuel investors systematically underestimate geopolitical risks while naively believing that the worst-case scenarios will not materialize, or that even if they do, they can be mitigated through arbitration, diplomacy, or the passage of time. However, in non-free, non-democratic systems, law is a tool, contracts are merely temporary arrangements, and capital lacks genuine protection.

The structure of the industry further amplifies these risks. Fossil fuels are highly concentrated assets that can be controlled at a single point. Oil fields, mines, and transportation facilities are clearly visible and are the easiest to seize or disrupt. Once a regime shifts or international conflicts escalate, investments have almost no buffer space.

In contrast, domestic clean energy and storage infrastructure present a completely different risk structure. First, they are located within national borders, protected by local rule of law, regulatory frameworks, and defense systems. For external forces to directly disrupt them would require crossing sovereign red lines, which comes at a high cost. Second, and more crucially, they are decentralized. Solar panels are spread across rooftops and sites, onshore wind farms are dispersed over vast areas, and battery storage is deployed at multiple nodes and levels. To inflict substantial damage on these facilities without triggering a full-scale conflict would often require costs and time that far exceed any potential strategic gains.

Bringing the perspective back to national security, the conclusion is quite clear. In an era where energy transition and geopolitical tensions are accelerating, continuing to invest in overseas fossil fuel infrastructure poses not only investment risks but also strategic risks. It ties energy supply, capital security, and diplomatic maneuvering together; once the situation reverses, the costs will not only be borne by companies but will also return to the national level.

The truly rational choice is to gradually cease taking risks with overseas fossil fuels and to concentrate resources on domestic clean energy, grid systems, and storage infrastructure, or to collaborate only with allies that have similar systems and stable relations. This is not idealism but a pragmatic calculation of national security. The lessons from Venezuelan oil serve as a reminder to investors and decision-makers: the most dangerous aspect is not the risk itself, but the illusions surrounding it.


The Future of Agrivoltaics in British Agriculture

British agriculture is facing an unavoidable dilemma: hard work no longer yields returns. Recent data reveal that approximately one-third of farms in the UK recorded no profit over the past year. This is not merely individual mismanagement but a structural problem. Rising costs for energy, fertilizer, and labor, coupled with increasingly erratic weather, meet farm-gate prices that remain dictated by market forces and large retailers, leaving farmers at the most vulnerable point in the supply chain.

In this context, the notion that “if you focus on farming, everything will naturally improve” has lost its persuasive power. The issue lies not in the farmers’ diligence but in the outdated agricultural model itself, which can no longer provide a stable livelihood. Agrivoltaics is being seriously discussed not because it is trendy, but because it addresses a direct and urgent question: how can farms survive in a highly uncertain environment?

Agrivoltaics refers to the practice of farming while simultaneously generating stable income from solar energy on the same piece of land. There are various ways to implement this. Panels can be installed in a conventional solar farm layout with the land beneath used for grazing; bifacial panels can be mounted vertically, facing east and west, with livestock moving between the rows; or panels can be elevated on raised structures that allow farming to continue underneath, with agricultural machinery moving freely. Some experimental projects combine several approaches, depending on terrain, crop types, and business models. The essence of agrivoltaics lies not in its appearance but in whether the land can simultaneously generate agricultural and energy value.

This effectively dismantles the myth of “land grabbing.” For many farms in the UK, the real scarcity is not land but predictable income. The role of agrivoltaics is to introduce a revenue stream that is not entirely synchronized with weather, harvests, or market prices. Solar power generation is based on long-term contracts, providing relatively stable cash flow that can support operations during poor harvests or price downturns.

In practical terms, photovoltaics and agriculture need not be in conflict. For crops, moderate shading can help reduce water evaporation and alleviate stress from extreme heat; for livestock, grazing under panels can simultaneously address weed management and land utilization issues. These are not abstract theories but experiences gradually accumulated in various regions across Europe.

More importantly, there is a transformation in the structure of agricultural risk. Traditional agriculture often places all variables on a single line; if weather patterns deviate or prices drop, the entire year’s earnings can be wiped out. Agrivoltaics provides farms with an additional revenue curve, allowing operations to be less entirely dependent on natural conditions and market sentiment. For farmers who have long struggled on the edge of break-even, this capacity to diversify risk is often more practical than any subsidy.
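The diversification argument can be made concrete with a toy simulation. Everything below is a hypothetical illustration, not real farm data: annual farm income is drawn from a uniform distribution to stand in for weather and price volatility, the contracted solar revenue is assumed fixed, and the break-even threshold is invented for the example.

```python
import random

random.seed(42)

# Toy model: every figure here is hypothetical, for illustration only.
YEARS = 100_000          # simulated farm-years
BREAK_EVEN = 45_000      # assumed annual cost base (GBP)
SOLAR_INCOME = 20_000    # assumed fixed contracted solar revenue per year

def share_of_loss_years(with_solar: bool) -> float:
    """Fraction of simulated years in which income falls below break-even."""
    losses = 0
    for _ in range(YEARS):
        farm = random.uniform(20_000, 80_000)  # volatile farm income
        total = farm + (SOLAR_INCOME if with_solar else 0)
        if total < BREAK_EVEN:
            losses += 1
    return losses / YEARS

# Under these assumptions, roughly 42% of years are loss-making on farm
# income alone, versus roughly 8% once the contracted revenue is added.
print(f"Loss years, farm only:  {share_of_loss_years(False):.0%}")
print(f"Loss years, with solar: {share_of_loss_years(True):.0%}")
```

The point is not the particular numbers but the shape of the result: a fixed revenue floor sharply reduces the share of loss-making years, which is exactly the risk transformation described above.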

Looking at the UK as a whole, the energy transition also faces the reality of limited land and significant resistance. If agriculture and energy are pitted against each other, both will suffer. Agrivoltaics offers a way to reconfigure resources: it does not require choosing between food and energy but allows the same piece of land to serve multiple functions.

When one-third of farms are already unprofitable, the question is no longer whether to try new models, but whether they can afford to remain unchanged. The significance of agrivoltaics lies not in its perfection but in its indication of a direction—if British agriculture is to endure, the land itself must begin to learn to do more than one thing.

