Boardroom Talk – Why some Big Tech stocks are high risk in the new world AI is creating

By Alec Hogg

Two things for you to consider today.

First is this week’s letter to shareholders from Palantir CEO Alex Karp. I was first exposed to the Malcolm Gladwell lookalike in Davos last year and was hugely impressed.


Karp is forthright and smart. His perspectives on Artificial Intelligence are a warning not just for those who hold Big Tech stocks, but also about the way AI is likely to be weaponised by undemocratic regimes. Key excerpts from his letter are republished below.

Second is the webinar I’m hosting at lunchtime today. It’s on a tax incentive opportunity called 12B, the closest thing investors have had since 12J. It’s driven by the government’s need to get renewable energy into the system. Fast.

Click here to register for the webinar. Catch you later.    


Excerpts from Palantir CEO Alex Karp’s letter to shareholders…

The arrival of the latest large language models, which have provided the world with the first real hints of more generalizable forms of artificial intelligence, will transform enterprise software. 

It has now become increasingly apparent that a platform that combines the capabilities of a foundational software architecture with those of the latest large language models is essential in order for the models to evolve into something of transformational value for large organizations.

The risks presented by an embrace of the latest and most advanced forms of generative artificial intelligence are real.

We have intentionally designed our software around the involvement and oversight of human operators before action in the real world, including, for example, targeting operations in the military context, can be taken. 

The machine must remain subordinate to its creator.

We understand that the extent to which democratic regimes ultimately attempt to curtail the development and more widespread deployment of these capabilities will be determined by the willingness and ability of the software industry to construct the guardrails that both constrain their use and harness their power.

We must also remember that undemocratic regimes will not allow their development of artificial intelligence to be restrained by such moral considerations.

Some of the largest incumbents in the technology industry, who are presently witnessing their businesses being disrupted by large language models, similarly failed to recognize the importance of building software with a moral imperative in mind.

The business models of such companies at their core required and depended on the monetization of our most intimate and private information.

To constrain the use of such data would be to weaken the competitiveness of their underlying interests. The access controls and auditing capabilities that many eventually developed were only grafted onto their offerings after years of systematic exploitation of our personal information.

We understood from the start, however, that the protection of privacy was essential to our success. As a result, the systems we have constructed, including AIP, incorporate protections at their core. 

The purported incumbents in the technology industry have now not only been outpaced in the technical development of these novel artificial intelligence capabilities but have also become marginalized in the ethical discourse about their use.

