What the former UK prime minister’s double appointment means for Big Tech, AI safety—and Westminster

Former UK prime minister Rishi Sunak in discussion with colleagues, with Big Ben in the background.

LONDON — Microsoft and Anthropic have appointed former UK prime minister Rishi Sunak as a senior adviser, a twin move that crystallises the increasingly porous boundary between political leadership and the companies shaping artificial intelligence. The part‑time roles, cleared by the UK’s Advisory Committee on Business Appointments (Acoba) with restrictions on lobbying and use of privileged information, will be internally focused and strategic rather than policy‑facing, according to people familiar with the appointments.

The timing is notable. Two years after Sunak convened the UK’s first AI Safety Summit at Bletchley Park and launched the AI Safety Institute, the global agenda he championed—responsible deployment, model evaluations, and cross‑border co‑ordination—has matured from vague aspiration to practical checklists for companies and regulators. Microsoft, a dominant cloud and AI platform provider, and Anthropic, a fast‑scaling model developer, now find themselves at the centre of that agenda as governments push for testing regimes, incident reporting and compute transparency. In that context, Sunak’s Rolodex and read on geopolitics are features, not bugs.

Both companies emphasise that Sunak’s remit is advisory and international in scope. At Anthropic, he is expected to contribute to discussions on macroeconomic and geopolitical trends that could shape model development, market access and standards alignment. At Microsoft, the focus will be on strategic insight and external engagement such as keynote appearances and high‑level briefings for enterprise clients. Acoba’s letters approving the jobs set out familiar guardrails: no lobbying of ministers or officials, no use of privileged information, a cooling‑off period, and transparency around the nature of the appointments.

The former prime minister remains a Member of Parliament and has said he will donate earnings from the roles to The Richmond Project, a charity focused on numeracy and social mobility in his constituency. That pledge is designed to blunt criticism that he is monetising his time in office. But the optics are complicated. As chancellor and then prime minister, Sunak courted the technology sector, wooing cloud providers with data‑centre investments and positioning Britain as a convening power on AI safety. To sceptics, his swift pivot into advising two leading AI players fuels a sense that the Westminster–tech revolving door is spinning faster than ever.

For Microsoft, the hire extends a strategy of enlisting statesmen and former regulators as guides through a patchwork of AI rules. The company is navigating scrutiny over its AI partnerships, cloud market share and the downstream risks of foundation models. It has backed industry frameworks around ‘safety by design’ and incident response, while urging governments to shoulder the heaviest compliance burdens for so‑called systemic models. Having a former G7 leader who has sat across from his peers at summits is a subtle—but real—asset when stress‑testing positions before they collide with politics.

Anthropic’s motivations are different but complementary. The company has staked its brand on Constitutional AI and risk‑sensitive scaling. As rival model labs push to expand capability and context windows, Anthropic is also campaigning for global baselines on evaluation, red‑teaming and compute thresholds. Government knowledge, especially of how standards travel through the OECD, G7 and UN, can compress the distance between technical proposals and adoptable norms. An adviser steeped in those processes offers signal amid the noise of proliferating AI taskforces.

Still, these post‑No.10 jobs pose real questions about governance on both sides. For public officials, they sharpen the case for consistent, enforceable cooling‑off rules across Westminster and Whitehall—not only for ministers but also for special advisers and senior civil servants. For companies, they raise the bar on transparency: what exactly do these advisers do, how are conflicts handled when their former departments are stakeholders, and when, if ever, is their advice decisive in product or policy decisions? The answer, ideally, is documented processes rather than handshakes and good intentions.

The politics are equally delicate. Labour, now in government, has signalled it will judge Big Tech by outcomes: consumer prices, innovation, labour productivity and safety. To the extent Sunak’s counsel helps companies meet those tests—say, by aligning model assurance with regulator capacity or by hardening incident reporting pipelines—the appointments may look prescient. If they are perceived as a workaround for privileged access, expect calls for tighter rules and more disclosures to grow louder.

There is also a global backdrop that makes high‑level advice unusually valuable. Trade routes for AI are being redrawn by export controls on advanced chips, investment screening, and emerging rules on cross‑border data flows. Markets from the EU to India are finalising accountability frameworks that will determine the cost of doing AI business for the next decade. Meanwhile, the economics of model training—from energy prices to grid access to water usage—now sit on the same briefing slide as go‑to‑market plans. Advisers who can connect those dots across ministries and markets are in short supply.

None of this guarantees impact. Advisory roles are most useful when they deliver unglamorous work: scenario planning that forces executives to consider second‑order effects; red‑team memos that pick holes in a product narrative; and briefing notes that translate political risk into engineering choices. The test for Microsoft and Anthropic will be whether Sunak’s input helps them operationalise responsible AI at scale—without appearing to short‑circuit the democratic process that must ultimately govern it.

For Sunak, the twin appointments are an extension of a post‑Downing Street portfolio that includes finance and paid speaking. His allies argue that few British politicians have a deeper understanding of the intersection between macroeconomics, security and frontier technology. Critics counter that the UK’s AI policy—like everyone’s—is still a work in progress, and that private counsel from a recent prime minister risks entangling public interest with corporate strategy. Both can be true. The measure will be whether the boundaries hold, the benefits are tangible, and the transparency is real.

In the end, the significance of Sunak’s move is less about celebrity hiring than about institutional maturity. As AI systems scale in power and reach, democracies will need more structured contact between industry expertise and public accountability. That contact should happen in the open, with guardrails that command public trust. If Sunak’s advisory roles accelerate that convergence—rather than erode it—this tech move could age well. If not, it will become another case study in why the revolving door keeps spinning, and why voters keep asking who, exactly, is in the room when the next big decisions get made.
