Some Time
In The
Future.net

Occasional essays and articles about the future


 By Nigel Fonce


.


The Information
Apocalypse
Just Got
Much Closer
The Veo3 logo   Source: LinkedIn

The latest generation of AI-powered video software has brought the future much closer – but with it comes great danger.

The quality of output of the latest generation of apps, like Google's Veo3, is simply astonishing: lifelike figures that look utterly real and can also speak, with perfect lip-syncing.

Not only that, but the software producing these video clips can also provide the content, as the video linked to the still below shows. It is a series of spoof news announcements which are not only absurd but also genuinely funny – presumably because the prompt required them to be.

But consider this: these 'news items' are intentionally absurd, yet the danger is obvious. Before long it will be possible to upload material to the web so convincing in its appearance that we will no longer know whether a news broadcast, a journalist or reporter, or a government spokesman is real or not.


She looks convincing but actually all the news presenters in this clip are fake: their stories are intentionally absurd   Source: Alex Patrascu @supermaxai

In the past, artificially generated video never looked particularly convincing. But with the latest generation of AI-powered video-creation software, everything has changed. We stand at a watershed in terms of what we can see and what we can believe.

For the time being the amount of such material uploaded to the web will be limited. Even if you take out a relatively expensive subscription to Google's Veo3 (currently $250 a month in the USA), the number of clips and their length are quite constrained – but with time this will change.

Of course, one solution would be if AI generated content had some sort of logo or watermark embedded in it, so that we could all tell the people in it didn't exist – but that's never going to happen.

For a start, it would need industry-wide agreement, perhaps legislation to enforce it, and that is almost certain not to occur. The speed of development of this technology is so fast, the rewards for being first so great, that the tech giants are never going to sort this out between themselves.
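To make the idea concrete, here is a minimal sketch of what an embedded provenance tag might look like. It is entirely hypothetical – the model name, key and field names are all invented – and it is simplified to a shared-secret HMAC to stay within Python's standard library, where a real scheme (such as C2PA-style content credentials) would use public-key signatures so that anyone could verify a clip:

```python
import hmac
import hashlib

# Hypothetical scheme: the generator signs each clip's bytes and attaches
# the signature as metadata, so a compliant player could flag the clip as
# AI-generated and detect tampering.
GENERATOR_KEY = b"example-generator-key"  # illustrative only

def tag_clip(clip_bytes: bytes) -> dict:
    """Produce a provenance record for an AI-generated clip."""
    sig = hmac.new(GENERATOR_KEY, clip_bytes, hashlib.sha256).hexdigest()
    return {"generator": "example-video-model",
            "ai_generated": True,
            "signature": sig}

def verify_clip(clip_bytes: bytes, record: dict) -> bool:
    """Check that the provenance record matches the clip's contents."""
    expected = hmac.new(GENERATOR_KEY, clip_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

clip = b"...video bytes..."
record = tag_clip(clip)
print(verify_clip(clip, record))         # True: labelled and untampered
print(verify_clip(clip + b"!", record))  # False: the clip has been altered
```

And even then the obvious weakness remains: a metadata record like this can simply be stripped from the file altogether, which is another reason to doubt that labelling alone would save us.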

It seems, then, that only a lack of compute is going to prevent a tidal wave of utterly convincing fake news broadcasts, untrue documentaries and press conferences that never really happened from appearing in our YouTube feeds.

The extent to which this changes everything can hardly be exaggerated. Soon it will be possible to have a version of 'The Great Escape' in which the Germans are the escapers and the Brits are keeping them in; where Judah Ben-Hur loses the chariot race to Messala; and where Hitler wins the Second World War.

Far-fetched perhaps, but to a generation who have grown up with TikTok, and who have never watched Dickie Attenborough, Steve McQueen and the rest make it out of that tunnel, who's to know what the truth really was?

We stand at the beginning of what some have referred to as the information apocalypse; and we must all hope that the current constraints caused by a lack of compute continue to hold things up, before we are buried under an avalanche of falsehoods.



Added to this site 3rd June 2025





.
The Technological
Republic

by Alexander Karp:
One Profound Truth
And A Lot Of
Garbled Nonsense

The Technological Republic by Alexander Karp   Source: Penguin Random House

This book contains a profound and important truth, but also an awful lot of poor writing, muddled thinking and codswallop that could try the patience of a saint. If this book had not been written by the CEO of one of Silicon Valley's most important companies, there is no way I could have forced myself to read it.

This book is only 220 pages long, but it feels much longer. Abstract notions and high-sounding generalisations compete with cod-philosophy to demonstrate that just because you are a successful and mercurial CEO and founder of a giant tech corporation with a glittering future doesn't mean that you know how many beans make five.

Running a successful tech corporation is not the same as being able to think lucidly about society, culture and the future. Kara Swisher has observed that success in Silicon Valley has convinced no end of founders that they know just as much about everything else – but they don't; and the result is books like this.


We'll come back to what is wrong with this book later. But what about that profound and important truth that Alexander Karp did get right in this tome?

That profound and important truth (and it really is important) is that smart software (generative AI), robotics and autonomous agents are about to change warfare for ever.

The future belongs to whichever country can develop a whole new architecture of weapons based around drones, unmanned surface vessels and submersibles, in a radical re-drawing of what warfare will look like. 'The wars of the future will be about software,' he says, and he is right.

The battlefields of the future may not have many humans in them: robots may take over much of the work currently done by human beings. But all this threatens to make redundant huge swathes of the existing equipment used by armed forces, including those of the United States.

Aircraft carriers (and indeed manned aircraft) may become obsolete. Giant ships of any kind may simply present too tempting a target.

In short, a complete re-direction of military resources may be necessary, as software becomes key to deploying autonomous assets. In particular, the possibility of unstoppable swarms of drones might soon become reality, as all the old rules of warfare get ripped up.

Obviously Palantir stands to benefit enormously from all this: they are, after all, suppliers of all kinds of defence-related software to the US military, which is why their star currently burns so brightly. From modest beginnings predicting insurgent activity in Afghanistan, Palantir threatens to turn itself into a defence giant of the new order.


Neither is this seismic shift a vague notion about the future. It is happening now. Retired four-star general Jack Keane recently said on Fox TV:


'The major competitor is China. They outpace us in just about every platform there is... You name it and they outpace us and we're not even close... They have more of them and they've caught us in quality in just about everything. It is a staggering build-up...'

In other words there is already an arms race in the AI-driven autonomous weapons of the future – and China is leading. General Keane did go on to say that it would be possible to catch up, but that it would require a great deal of effort.


So there we are: a sea-change in military technology in which software will be paramount; a committed and technologically driven rival to the USA's position as the dominant super-power; and Palantir ready and able to play its part in this new arms race of the future.

Of course it is depressing that the great recent developments in AI are going to be put to military use. It's something most of Silicon Valley has shied away from, perhaps rightly. (Alex Karp criticises other tech companies for this.) But should we really be happy that humankind is already looking to optimise AI for military use – in other words, for possibly killing people? Alex Karp doesn't have a problem with this, at least according to this book. But a certain reasonable doubt, a reluctance to embrace all this, is understandable, perhaps even admirable.

In any event, actually to effect this huge change in US defence strategy will be a momentous task. It will require an enormous effort, a change of mindset and the gradual winding down of many legacy projects – tanks, F-35s and so on – in favour of smaller, lighter and much more numerous drones and autonomous units.

This will be much harder in a democracy than in a country like China, where President Xi does not have to contend with the checks and balances built into the American system.


But now we come to where Alexander Karp's book departs from reality. He isn't just interested in reconfiguring the US defence industry – gargantuan task though that is; oh no, he's got a much bigger project in mind.

Alex Karp believes that American society needs to renew itself, to 'find a new common purpose'. It needs to jettison its cultural relativism and its 'moral vacuity' in search of something deeper. It needs to find a common truth, a shared collective purpose, which will lead us out of our 'cultural nihilism'.

You see, it's all getting a bit weird. Alex Karp takes aim at the Silicon Valley giants, who have absented themselves from any moral direction in their work. He does not believe they are immoral, simply amoral, uninterested in any greater purpose than serving themselves.

He notes that the Silicon Valley giants have been wary of getting into bed with the Federal Government, which he contrasts with the way the US Government and the scientific community worked hand-in-hand to create the Manhattan Project, under Oppenheimer, to build the first atom bomb.

Although he doesn't mention it, he would think similarly highly of the Apollo programme, another endeavour in which the state and science collaborated closely to put the first man on the Moon.

By contrast, the achievements of Silicon Valley in the last 20 years have happened practically independently of government. The great leaps forward – the PC, the internet, the smartphone, ChatGPT – have been the results of Silicon Valley working alone.

Alex Karp believes it is time for Silicon Valley to stop being a self-contained entity. He believes that Silicon Valley should lead the process of national renewal, in partnership with the government, re-invigorating our democracy. He says:


'The construction of a technological republic will require a founder culture that comes from tech but can re-shape government.'

So that's it then: a coming together of big tech and society in some sort of giant melding which will produce a new, more vibrant society; sweeping away our current moral malaise.

There's plenty more of this, and it doesn't get much better:


'Our challenge both in the United States and in the West more broadly, will be to harness and channel the creative energies of this new founding generation, these technical iconoclasts, into serving something more than their individual needs.'

But surely, the last thing we need is a bit of nation-building from the tech giants. Don't they already have enough indirect influence over us – harvesting our data and watching over us – without overt nation-building as well? And do we really want this tiny bunch of megalomaniac founder-entrepreneurs taking an even more active role in society? Possibly not.

But by now Alex Karp is in full swing. He's got the bit between his teeth, his pen is loaded and he's going for it, cramming as many abstract nouns as he possibly can into one sentence:


'Our collective and contemporary fear of making claims about truth, beauty, the good life, and indeed justice have led us to the embrace of a thin version of collaborative identity, one that is incapable of providing meaningful direction to the human experience.'

You see the problem; and there is a great deal of this, which is why this book is so hard to read. But despite all the abstract nouns sprinkled like confetti throughout this book, Alex Karp never really explains what his technological republic is actually going to look like.

Page after page goes by with only the most tantalising of glimpses – at one stage he speaks highly of Singapore, itself a worrying development.


Alexander Karp thinks big tech should re-shape government   Source: Silicon Review

But we are forever kept in the dark, the new society just beyond our reach, as Alex Karp takes detours through behavioural psychology, why contrarianism is good in organisations, and why it's good to think outside the box. We are treated to an explanation of Palantir's early days, before it embedded itself fully in the Department of Defense, but still that clear outline of what the technological republic might be eludes us. Perhaps Alex Karp has no clear idea himself what it will be like; and it is perhaps fortunate that we will probably never know.

Neither will catching up with the Chinese in this race to develop a new generation of autonomous weapons qualify as a unifying national project. There is no reason to believe the US defence industry cannot produce these things in numbers without any external help or comment at all: all across the US, factories may well be built by Palantir and others to produce these new weapons without the public knowing a thing, if the will is there.

The true test that may face us in the coming years is whether the USA really can reel in China's current lead in autonomous weapons – or whether in fact that particular race has already been lost.



Added to this site 20th April 2025





.

Figure AI's
Brett Adcock:
'We could ship
a million robots
a month if we
had them'

Figure AI's founder and CEO Brett Adcock   Source: Figure AI

The Figure 2 robot is already actively employed in two workplace settings – but Figure 3 is on its way.   Source: Figure AI

We are close to a fundamental breakthrough in robotics, according to Brett Adcock, founder and CEO of Figure AI: an 'iPhone moment', when humanoid robots will fundamentally change the world of work.

It might be that even Brett Adcock has not quite realised the significance of his own inventions, in particular Figure 2, currently deployed at a BMW factory in Spartanburg, South Carolina in the USA, where this humanoid robot is now actively participating in the production process.

'It took us a year to get Figure 2 up and running end-to-end in the Spartanburg factory,' he says. However, at a second customer – a logistics organisation Adcock has not named – installation and set-up of Figure 2 to carry out key tasks took only 30 days. 'We think we could do it again in 24 hours,' he adds, in a remarkable interview with Peter Diamandis on Diamandis's YouTube channel.

All this has been done with an autonomous humanoid robot called Figure 2, the second humanoid that this fledgling company has produced.

'Figure 1 was somewhat gnarly,' says Adcock, 'with lots of external wires and bumps, but it did allow a lot of basic research and engineering to be tested and put into effect'.

Figure 2 is 'ten to twenty per cent better than any other autonomous humanoid robot,' and can do 'almost anything a human can do,' at near human speeds.

However it is Figure 3 which will be the ground-breaker. Figure 3 is apparently 'unbelievable', according to Adcock, and the design process is complete. It will be going into production this year, and creating it has been 'the proudest moment I've had in engineering. It's next level,' he continues. 'It is lighter, smarter and better than Figure 2 in all regards, more dextrous,' and is 'the robot we want to send everywhere'. It will be low-cost, produced at high rates, and demand for it, it seems, will be 'almost infinite'.

'If we had 100,000 robots that worked, our first two commercial customers would take them all. We have 50 Fortune 100 companies we could sign by the weekend... We are bombarded with the demand out there...'

'We could ship a million units in a month if we had them working and ready to go...'


What then is the key to all this progress? After all, humanoid robots have been around for some time, without making any particular waves.

The answer according to Adcock lies in the great advances that have been made in software in the last couple of years.

'2022 was pretty dire,' he says, 'but since then remarkable things have happened'. In particular, the advance of large language models (LLMs) from the likes of OpenAI has completely changed the landscape.

'To make any real progress in robotics you need three things,' according to Adcock. 'Obviously you need incredible hardware that can work at human speeds, with near-human levels of dexterity and ease of movement.'

'But you also need the right kind of software. You can't write code for every contingency. You need a neural net powered by an AI large language model (LLM), which can ingest human-like data, then intuit what a human would do in any given situation.'

'But you also need something else. The robot needs to be able to generalise. Solving the problem of generalisation is the holy grail of robotics,' Adcock says.

'The problem is that if you put a Figure 2 robot into someone's house, it has to be able to understand its environment. It has to be able to recognise the toaster, the washing machine and the cooker, although these may vary widely from house to house.'

It is this ability to recognise key elements of its environment, and their significance, which is so important. Adcock calls this ability 'semantic intelligence', and it is this breakthrough which will make robots so useful.

'This is the first sign of life,' he adds. 'It is the most important AI update in human history.'

From this all things follow, according to Adcock. Anything that moves will be autonomous and powered by AI, and AI will have found its ultimate deployment: not locked away in some remote server somewhere, but out in the physical world, doing useful tasks, powering robotic assistants and freeing humans from damaging or repetitive labour.
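As a way of picturing those three ingredients – capable hardware, an LLM-style planner, and generalisation – here is a toy sketch of the perceive-plan-act loop. It is entirely hypothetical: every function is a hand-written stand-in, where Figure's actual stack would use learned vision models and neural-net policies rather than rules:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    objects: list[str]  # what the vision system believes it can see

def perceive(camera_frame) -> Observation:
    """Stand-in for a vision model that labels the objects in view."""
    return Observation(objects=["toaster", "mug", "dish rack"])

def semantic_plan(task: str, obs: Observation) -> list[str]:
    """Stand-in for the LLM-style planner: map a spoken task, plus what
    the robot can see, onto a sequence of skills."""
    if task == "put the mug away" and "mug" in obs.objects:
        return ["locate mug", "grasp mug", "move to dish rack", "release"]
    return ["ask a human for help"]  # fall back when it cannot generalise

def act(steps: list[str]) -> None:
    """Stand-in for the low-level controllers that execute each skill."""
    for step in steps:
        print(f"executing: {step}")

obs = perceive(camera_frame=None)
act(semantic_plan("put the mug away", obs))
```

The interesting stage is the middle one: coping with a kitchen it has never seen before is precisely the generalisation problem Adcock calls the holy grail of robotics.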


Adcock sees two separate use cases for autonomous robots: the workplace and the home.

Of these, the workplace is by far the easier to fulfil. Even with Figure 2 – let alone Figure 3 – the deployment scenarios are immense. Consider the humble postman, laboriously making his way round his route, no doubt getting wet in the rain, having got up at the crack of dawn to pick up and sort his round. Wouldn't it be easier to send out a Figure 2, which wouldn't mind getting up so early, wouldn't mind the rain, and wouldn't call in sick, arrive late or go home early? Surely to goodness it would be a godsend to all involved to let a Figure 2 or 3 take over this work?

And more generally there must be many jobs in fulfilment depots that are just crying out for this kind of device, which could work harder for longer and more consistently, and never call in sick or need a cigarette break.

No wonder there is so much demand for these products. Brett Adcock's phone must be ringing off the hook with sales enquiries.


At the moment, however, there is no possibility of satisfying all this demand. There is no way simply to start producing millions of robots; it will take some time to scale up production before even a fraction of those who want them can be satisfied.

However, Brett Adcock does believe this is the decade that will see a remarkable transformation – the 'iPhone moment' – and it is difficult to disagree with him, if it really is just a case of scaling up production.

Neither does it seem that cost will be a problem. With the benefits of volume production, Adcock sees a target price of perhaps $30,000 per Figure 3 robot. 'It shouldn't be too expensive,' he says.

But of course, at a price of $30,000 the leasing cost might be only $300 a month. A device costing $300 a month could do the work of a human employee costing perhaps five or six times more – another massive incentive to employ such robots.
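The back-of-envelope arithmetic, using the article's illustrative figures rather than anything Figure AI has published, looks something like this:

```python
# Illustrative numbers only: a $30,000 target price, an assumed $300/month
# lease, and a human employee costing 'five or six times more' per month.
robot_price = 30_000
robot_lease_per_month = 300
human_cost_per_month = 5.5 * robot_lease_per_month  # midpoint of 5-6x

months_to_recoup = robot_price / robot_lease_per_month
monthly_saving = human_cost_per_month - robot_lease_per_month

print(f"lease covers the purchase price in {months_to_recoup:.0f} months")
print(f"monthly saving versus a human worker: ${monthly_saving:,.0f}")
```

On those assumptions the lease recovers the hardware cost in 100 months, while saving an employer well over a thousand dollars a month for every worker replaced.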


The second use-case for the Figure series of robots is in the home. But here things will be much more complicated. For a start, a home environment will have a far greater range of objects, obstructions and random situations than the workplace, where work roles will be more repetitive and delineated.

Adcock compares the difference between putting a robot in the home and the workplace to the difference between driving in a congested city environment and on an open freeway: there is simply so much more going on in a congested city.

However Adcock is looking forward to having a robotic assistant unloading the dishwasher and picking up the children's toys. 'I'm so done with all that,' he says.

Some of his employees are alpha-testing Figure robots in their own homes, where once again that semantic intelligence will be vital. 'It has to understand semantic safety, in terms of the significance of not knocking over a candle,' he says.

He also says his robots need more training data to be fully operable in the home. 'We need another couple of orders of magnitude of information,' he adds, although no doubt this will eventually come.


At the moment Brett Adcock's company stands at the beginning of an astonishing era of transformation. Figure have solved all the major problems involved in creating a humanoid robot that can carry out complex tasks in new situations based on a single voice command.

Figure 3 is apparently out of the design phase and should be in production this year; it is the robot he 'wants to send everywhere in the world'.

Under such circumstances, it seems that only the problems of testing, fine-tuning and production at scale stand between Figure AI and immense success.

But what then for the human race, if a new generation of robots is going to take over many of the jobs we do today? Will some kind of re-balancing occur, as the price of humanoid robots drops to the point where employing humans becomes simply too expensive?

What will unemployed humans do? I write about these matters in my book 'Some Time In The Future': the human race might be forced to re-balance towards a world of leisure.

There is also the question of data, and who has it and what happens to it. In the workplace the valuable data a robot generates about its own efficiency (and perhaps the efficiency of its human colleagues) will of course be shared with the owners of the workplace environment, who will naturally find it useful.

But what about in the home? A Figure 3 robot in the home will know everything about that family – what time they get up, what they have for breakfast, what they talk about at the dinner table. Absolutely everything will be known to the robot assistant in that house, and that data will be of great value to whoever has created the robotic assistant. (I also discuss this possibility in my book.)

There are profound issues of trust and confidentiality here – and bear in mind the US Congress has never passed a significant bill governing the behaviour of the tech giants, of whom Figure AI might soon be one.



A note about humanoid robots in the home:


There seems little doubt that Figure AI are close to delivering on their mission statement: to deliver helpful, autonomous robotic assistants in the workplace.

But what about in the house? Will even Figure 3 – remarkable though it sounds – really be suitable as a house robot?

There are many issues here. To be really useful, a robotic house assistant would need to be able to work in care homes, where looking after the elderly requires close contact with frail and vulnerable people.

Take bathing a patient, for example: could any robot made of metal and composites, no matter how finely controlled, really be trusted to bathe an elderly person?

What about water ingress into the mechanisms of the robots, through hand or finger joints? What about food preparation and cooking? Could you really allow a robot to do this?

Working in and around humans in a care setting, or with children in a home, would seem to require an order of magnitude more development than is required for the workplace.

It may be that we will have to wait a long time for such a robot, with the required degree of empathy, softness of hands and limbs, hygiene and water resistance.

Perhaps this will be the last stage, the final breaking down of the barrier between humans and their robotic counterparts; and here is something that even Figure AI – for the time being at least – cannot do.



Added to this site 30th March 2025



(For further discussion of the significance of robots in the workplace, see my article 'Robotics and the workplace singularity' at sometimeinthefuture.info)





.

Life 3.0 by
Max Tegmark:
A Profound Look
At Our Future

Life 3.0 by Max Tegmark: a profound discussion about our future   Source: Penguin

(This article was written approximately three years ago. Since then many things have happened, including the development of large language models (LLMs), the astonishing change in Elon Musk's reputation, and the use of drones and autonomous weapons in war. Nevertheless, there is still much which is interesting in Max Tegmark's book 'Life 3.0', and it remains an intriguing polemic about the future, which is still very much worthy of attention.)



This is a fascinating and wide-ranging book written by an author with an immense range of interests. However in his prognosis for the future, Max Tegmark is unduly optimistic.

His book effortlessly spans generations, millennia and whole galaxies, as he probes the future and how it might look. Of course, the future will be dominated by artificial intelligence (AI) and this book is a look at how AI might affect the future course of humanity.

Max Tegmark draws a fundamental distinction between intelligence (the machines of the future will be very intelligent) and consciousness. It is not enough, he argues, for the computers and devices of the future to have great computing power and to be capable of achieving great things; they must have some kind of consciousness, some kind of awareness of what is going on around them, as we do.

Take the distinction between a Tesla self-driving car and a human driving a similar motor vehicle. The Tesla car will no doubt drive very safely and well, and not hit anything; but it will not have any awareness, any understanding of what it is doing. It will not have that rich stream of experience that a human will have doing the same thing. It will not notice that it is a beautiful day, that the sun is shining. It will not have a soliloquy of thoughts, some quite random, mixed in with those sense-impressions. It will not wonder what to cook for lunch, as a human might do; but above all it will not be conscious in the same way we are.

And for Max Tegmark, consciousness is the key. It is not enough that artificially intelligent machines might one day populate the galaxy. All this counts for nothing if they are not aware they are doing it. As he says on p.313:

There can be no positive experience if there is no experience at all, that is, if there is no consciousness. In other words, without consciousness there can be no happiness, goodness, beauty, meaning or purpose – just a waste of astronomical space.

Only a page later he adds:

If our universe goes back to being permanently unconscious because we drive Earth life extinct, or because we let unconscious zombie AI take over our universe, then (those who believe the universe to be pointless) will have been vindicated in spades.

However, Max Tegmark does not mean to confine consciousness to human consciousness as it is today. He sees a wider kind of self-awareness that artificial intelligence might have. For in the long term, Max Tegmark believes, consciousness must become artificially intelligent in order to survive.

His long-term vision is that if we don't keep improving our technology, humanity must fail. An asteroid, a super-volcano or some other calamity will eventually finish us, unless we have made the leap to some form of super-intelligence, allowing a steady expansion of human-centred values and consciousness into deep space.

Tegmark's solution to this is an intelligence explosion, and optimised space settlement. He believes that the cosmos can be filled with life – albeit perhaps a digitised version of it, allowing mankind to fulfil its destiny, opening up distant empty galaxies to human settlement and experience.


Of course all this is astonishing. It is vaulting in its ambitions and scope, and takes us to places and scenarios far beyond our current imagination.

It also involves a number of assumptions that are profoundly anthropocentric. Chief amongst them is the idea that a future intelligence needs to be conscious at all. Is there anything that special about consciousness and self-awareness? It's very special to us, of course: that beautiful light show we experience every day, that constant stream of sense impressions, of wants and needs and desires, that feeling of not just being in the world, but at the centre of our own world of experience, in which we see, hear and interact all the time. And yet this world is not real, for what really exists are wavelengths of light, vibrations which we call sound, and sub-atomic particles of all kinds; and somehow, out of all this, our brains put together the familiar world each of us experiences every day in our heads.

But – as Max Tegmark admits – any artificial consciousness will be very different to our own. It will be a mind in a machine, and it will not have a sense of touch or smell. As Max Tegmark himself says: 'How do you explain to a blind man about the colour red?' And yet it is this other kind of consciousness that Tegmark believes will eventually embody and expand the extent of the human race – well beyond our own planet.

It is also true that our own consciousness is profoundly linked to our sense of sight. Man tends to be a visual thinker, often working in images, but there is no reason to think that an artificial intelligence would bother to do so. Human mathematicians like to write down their calculations, to see a visual representation of them – but of course a computer would not. In all these senses, it seems that an artificial intelligence would be profoundly different to our own – and yet again we come back to whether it needs to be conscious at all.


Consciousness, or self-awareness, seems to matter to Max Tegmark very much, but it doesn't seem to matter elsewhere in the cosmos. The cosmos has existed for perhaps 14 billion years, and for the vast majority of that time it wasn't conscious at all. It might well be – as Max Tegmark says – that consciousness is an aberration, a temporary phenomenon, before AI takes over, super-intelligent but not conscious. It is quite possible that the future belongs to an AI a bit like that Tesla car, which drives itself to its destination, but without any wider understanding of what it is doing.

Alternatively (and again admitted by Max Tegmark), most of some future AI's work will be automatic and unconscious, while only a small amount of its processing power might be spent on more human-type contemplations of its role and purpose, in a quasi-conscious sort of way.

And this of course brings us to another point, the attitude of a powerful and artificially-intelligent entity towards its human makers. Of course – as Max Tegmark often says – Hollywood films and dystopian novels predict a massive falling-out between humanity and AI, but this is only one possible scenario for the future.

For Max Tegmark the answer lies in making sure the goals of any future AI align with the best interests of the human race, and he has set up a non-profit body, funded by Elon Musk, to help promote this. But nevertheless, the question of how humanity will live with a super-intelligence much cleverer than it is, which may or may not be conscious, is the defining question of our age.

It may be – as we all hope – that a future super-intelligence will look upon humanity with kindness, and will not subvert the human-centric values which (hopefully) have been programmed into it. But we cannot avoid the possibility that an intelligence and technology far superior to our own may eventually decide it has had enough of the human race.

Elon Musk, among others, has highlighted this possibility: that at some time in the relatively near future an intelligence may emerge which has the same cognitive and comparative advantage over us that we currently have over the domestic cat.


All this, of course, lies somewhere in the future, on the other side of some distant singularity. What then is a singularity, and why is it so important?

In general a singularity is a transformative event of such importance that the rules, assumptions and terms of reference that applied before it are completely overturned. In the case of AI, the singularity would be when a mixture of robotics and artificial intelligence could accomplish any task previously done by humans, and do it better.

Under these circumstances, the human race would effectively be rendered redundant, with quite literally nothing left to do. The human race would have brought about its successor, and it would be up to that future superior intelligence to decide what to do next. Of course, one of the features of a singularity is that you can't see what lies beyond it – that is in principle unknowable – so all we can do is guess.


In the period up to the singularity, however, we are on slightly better ground. We already have 30 years of widespread use of computer technology and the net, and we can extrapolate future trends.

It seems obvious that whatever happens the other side of the singularity, one or two giant tech companies will have a pivotal role in shaping it. We know that governments will also be involved in what happens – and some of them will use AI to reinforce their powers of control over their own populations.

We know there are many military applications of artificial intelligence, from killer artificial bees which can sting you in the eye (Max Tegmark writes about these) to intelligent drones and robots, all capable of acting fully autonomously. There is also the possibility of conversational technology, eliminating the need to talk to other humans, and of life-like humanoid robots, capable of conversing with you quite naturally. (I write about these matters in my book 'Some Time In The Future'.)


How all these forces and technologies will play out is impossible to predict. But – and this is the point – how the future turns out will not be a random event. It will be the eventual outcome of technological change, the self-interests of governments and the pursuit of profit by a handful of massive tech companies, in an immense game of poker, with unimaginable profits for the winners.

And here is where Max Tegmark is guilty of his excessive anthropocentrism, or as he calls it, mindful optimism. He believes that we – as human beings – have the power to shape our future, to decide if AI is used constructively or destructively, for the betterment of the human race or for its detriment. But in fact he is wrong. Whatever is coming is not going to be the result of conscious planning, nor will it be – sadly – affected by the high-minded principles of the Future of Life Institute.

Instead the future is likely to be determined by a handful of tech companies, who will be in charge of all this technology. Technology seems to prefer monopolies – like Microsoft, whose operating system has been used by nearly everybody, since it makes things easier if everyone is using the same software – and this gives such players immense influence over what happens next.

Take our current technological situation. The tech titans of our age – Apple, Facebook, Microsoft and Google, and one or two Far Eastern companies – have become dominant because of the interplay of technology, capital and the need for natural monopolies. This is not a random outcome. It was more or less inevitable given the pace of change, money and self-interest, and could not be avoided. The early idealism of the web, of founders like Tim Berners-Lee who gave away their ideas for free for the benefit of mankind, was overtaken by the desire for immense profits. Huge corporations with immense capitalisations emerged, often run by tech geeks with little experience of the rest of the world. It is against this background that Max Tegmark's mindful optimism seems misplaced.

The future of the world will not be determined by the artificial intelligence community – the programmers like Demis Hassabis of DeepMind, who are aware of the power they are unleashing. Max Tegmark has held two conferences on the future of AI and how it can be channelled in beneficial directions – but the AI community, the developers of all this technology, will not in the end be the people who matter.

It may be that no people matter, ultimately, in where humanity ends up. It may be that the giant corporations will keep researching, keep competing with each other until the future simply emerges, be it good or bad, in a singularity. But whatever emerges will not be an accident, or not simply an accident. It will be the result of the march of technology, of a technology too great and too powerful to control, but which will inevitably come into being.

Max Tegmark says he feels much better about the future after his two conferences, and the publication of an agreed set of guidelines on the beneficial development of AI. I prefer to think of things a little differently: that mankind is like a group of passengers stuck on a train. The train is going faster and faster, and they are locked in the carriages. It was indeed human beings who invented the train and developed it, but now the train is running away, it is speeding up on its way to an unknown destination.

Will the train crash and end in tears? In a rending of metal and human lives, as everything that has been created explodes and self-destructs? Or is there a bright and beautiful future, a distant train stop where the birds are singing, where there is a life of leisure, and cosmic intergalactic travel for future generations?

Who knows? For who can see the other side of a singularity? All we can do is read books, and write and think about what might happen. Max Tegmark has played his part in this great debate; and although in my opinion he is mistaken in some of his conclusions, he has at least asked some of the right questions. He is a man of wide scientific knowledge and accomplishment, and it is for others to debate and lock horns with him about his vision of the future.



Added to this site 18th March 2025





.

To The Victors –
The Spoils: A Tech
Journalist's
Story

Burn Book by Kara Swisher: a compelling account of the story of Silicon Valley   Source: Hachette

Kara Swisher was the best connected reporter in Silicon Valley. In this book she recounts her experiences of covering three decades of the tumultuous development of tech, from its first beginnings in the 1990s to the giant corporations we see today. 'I have watched founders transform from idealistic strivers to leaders of some of America's largest businesses,' she writes – and that's no exaggeration. Along the way she has witnessed practically everything: the madness and the hubris, the insane valuations and crashes, and the eventual emergence of the tech giants we know today.

Kara Swisher started her career at the Washington Post, where she first became interested in tech. Back in the early 1990s tech reporters tended to be gadget boys or tech nerds, but Kara sensed something more.

'I was all-in on this new world,' she writes. 'I knew I was witnessing the dawn of the printing press.'

While others dismissed this new world as little more than tech trivia, Swisher continued to cover the first internet companies. In 1990 the www. prefix made its debut, heralding the arrival of the internet we know today.

The first really big internet success was Netscape, which developed the first usable and reliable browser. It went public in 1995 with a starting price of $28 per share, which touched $75 during the first day of trading.

And so the first internet bubble had been created. That mad world of Darwinian evolution had begun, where giant valuations, IPOs and roaring successes would alternate with crashes, mergers and collapses.

Internet start-ups run by kids with big dreams would rise and fall. Kara Swisher interviewed many of their founders, who often told her their products were going to change the world. 'The funny thing was, some of them did,' she says. In 1994 she went to see AOL's Steve Case, who had an office behind a car dealership in Tysons Corner, Virginia. 'We're going to be bigger than Time Warner some day,' he said, from behind a cheap-looking desk. 'What a lunatic,' thought Swisher – but of course AOL did go on to buy Time Warner for $182 billion six years later, only for the shares to crash 75 per cent in the dot-com collapse a couple of years after that.

Others though were carrying on. Amazon had started in 1995, and she describes Jeff Bezos as 'one of the more obviously avaricious' of the tech titans. 'I had no doubt Jeff Bezos would eat my face off if that was what he needed to do to get ahead,' she says.

Neither did she think much of Mark Zuckerberg, whom she describes as 'One of the most carelessly dangerous men in the history of tech, who didn't even know it'.

Her dislike of Zuckerberg stems from his governance of Facebook, where time and time again profits were put before safety, or protecting users. But then again, as Swisher says: 'When the truth stands between a man and his next $100m, the truth is always going to be escorted off the premises.'


Someone Kara Swisher did like was Steve Jobs, whom she describes as 'the most consequential figure of the modern tech age'. Although Jobs could at times be furtive, bullying and manipulative, she was won over by his philosophy of making not just good products, but great ones. 'When Jobs said it, I believed it,' she says, as did many others.

Of course it was Steve Jobs who created the first modern smartphone, launched in 2007 at Macworld. Of all his masterful achievements, the iPhone must surely be the greatest: it enabled a whole string of app-based giants to spring up, including Airbnb, Uber and Instagram. It also, of course, propelled Apple's revenues into the stratosphere, with a tenfold increase in turnover and valuation.


But however great Apple's achievements, for Swisher the storm clouds were already gathering. 'Facebook, Twitter and YouTube have become the digital arms dealers of the modern age.' She adds that 'tech companies have killed our comity and stymied our politics'.

She notes with sadness that the new social media giants – especially Facebook and Twitter (now X) have allowed 'some of the world's richest and powerful people to become professional trolls, for whom the rules do not apply'.

She describes Donald Trump as the greatest troll in social media; nor is she any fan of what Elon Musk has done with Twitter. 'I had hoped Twitter could realise its potential under Musk,' she writes, for Musk did do incredible things; but his stewardship of Twitter is 'a long cry for help from a deeply troubled man'.

Kara Swisher is particularly saddened by the direction Elon Musk has taken. At one stage it seemed that he might be the natural successor to Steve Jobs' crown, but things did not work out that way. As Elon Musk has become increasingly radical, so his relationship with Swisher has fallen apart; like so many tech tycoons who were once close to her, as time has gone by and the billions of dollars have piled up, he has tended to stop taking her calls.

Swisher calls this the toddler mentality, an odd syndrome to find in people who have become so rich. But with great wealth and success has come a sense of entitlement and self-pity, astonishingly so. When she first met Mark Zuckerberg to do an interview, he said: 'I hear you think I'm an ***hole' – a splendid way to start a conversation.

Perhaps it was inevitable in the mad world of Silicon Valley that tantrums, sulks and hissy-fits would be all too common. Swisher recounts the crazy parties, with absurdity carried to excess, at various tech headquarters littered with beanbags, skateboards and children's slides (yes really) in a kind of surreal kindergarten for adults.

It's surprising anyone was left sane, bearing in mind the money and influence of these companies. Yet Swisher did find some individuals she liked: Sundar Pichai at Alphabet and Satya Nadella at Microsoft still seemed to have remained members of the human race. As for Reid Hoffman of LinkedIn, Swisher says he had 'somehow managed to hold on to his soul'.


Swisher's recollections of the old media giants are less kind. She used to get frequent calls from Rupert Murdoch who lost a fortune in a number of ill-starred tech ventures, including buying MySpace for $580M, only to sell it a few years later for $35M. 'His aggression fascinated me,' she recalls. He was constantly irked by the new tech world which he consistently failed to make any money in. Try as he might, he simply didn't understand the new tech space.

Bob Iger, 'the cashmere prince,' did better at Disney. After his predecessor made a number of bad forays into the world of tech, Disney eventually created Disney+, which has been a success.

But in general tech has not been kind to the traditional media. Aside from the blockbusters, like the Mission Impossible franchise, cinemas and movie theatres now play second fiddle to streaming. Meanwhile YouTube has completely up-ended traditional media, with users able to generate their own content, and get paid for it, while even the ads are chosen by those watching.


Immense change, and immense amounts of money generated. Immense influence placed in the hands of a tiny number of tech tycoons – this is the present that has been created.

It is a long way from those early days in the 1990s, when the web had just been invented. Kara Swisher followed a hunch and got involved in this story at the beginning. 'I have watched idealistic founders become sloppy and careless internet moguls,' she writes.

It is a story she has witnessed for three decades, during which she has probably spent more time with tech leaders than anyone else. Yet despite all that has gone wrong, Swisher is still an optimist. 'I still love and breathe tech,' she says. 'Tech remains a vast canvas of promise.'


To some this appears overly optimistic. One of the faults of this book is that it is really a look at what has happened, rather than what might happen next. The development of tech, and how it will impact humanity, is probably the single most interesting and profound question of our time.

Yet this book gives few clues as to how the future might develop. It is full of fascinating insights into the peccadillos of the tech entrepreneurs, and how great wealth has warped their personalities; but how and what the future might look like – here unfortunately we learn little.

Perhaps this is because Kara Swisher is more of a reporter than a journalist. She was first with a great many stories, but she has spent her whole life in the weeds. When it comes to the big picture, where humanity is headed, we are left wondering; but in the meantime there is much in this book which is worth reading, and which gives a profound insight into the mindset and the incredible story of the making of the tech billionaires.


Added to this site 12th March 2025





.

A Chilling
Foretaste
Of The Future
'Companion' – a story of humans and their near-human partners   Source: IMDb

Just imagine a world in which you could have your very own robot partner. It would look so convincingly real you would hardly know it wasn't human. It could talk to you, converse with you, laugh with you. It could cook your dinner, clean the house and even make love to you.

Neither would it get tired of doing these things. It would never say no. Never nag you or complain; on the contrary it would always be the ideal partner.


Fortunately we are some way away from this possibility. It may be thirty or forty years before mankind finally manages to make a mirror image of itself – but at some point in the future this may become a reality.

Some of these issues are explored in the film 'Companion', in which a humanoid robot finally gets free of its owner.

'Companion' is a violent and slightly disturbing look at how human–robotic relationships may develop, but there are still some important issues raised by this film.

One of the most important is how dangerous humanoid robots might become, if they could be hacked, so that their safety settings (not to cause harm to humans) could be over-ridden.

In the film 'Companion' this is precisely what happens. Someone who is renting a humanoid robot over-rides its safety settings so that it will kill a wealthy Russian businessman, who has a lot of money in a safe behind a painting in his house.

The plan goes wrong, however, when the owner (or partner) of the robot attempts to have a valedictory conversation with her (the robot is a petite female). He explains to her that she isn't actually human at all (something she doesn't realise) and that all her 'memories' of how she met her human partner were implants, chosen by her partner from a long list of options in a drop-down menu.

She struggles to understand that she is not human, but her owner proves it by taking out his phone (which has an app to control her) and changing her language setting multiple times – each time, she speaks in a different language.

While her owner is busy, she slips out of the house into the woods before she can be shut down for ever. She successfully evades the attempts of her owner and his friends to catch her, and eventually escapes to freedom, having finally killed her owner in self-defence.


A slightly dark turn of events; but if we are to share our world with artificial beings as intelligent as we are, what will these relationships ultimately be like?

Will we, like the humans in this film, have the ability to turn off our robotic partners, whenever we want to, with a pre-determined code word?

But if humanoid robotic partners can do everything we can do – peel potatoes, clean shoes, wash the bath and make love to us, complete with implanted memories – don't they have rights too?

After all, in the film, the main protagonist is a devious manipulator, who wants to use his robot partner to murder someone for money. He is an abusive partner, spiteful, sadistic and unpleasant. Should a humanoid robot capable of exhibiting (and possibly feeling) complicated emotions be exposed to all this?

You could argue that humanoid companion robots (or f***bots, as they are unflatteringly referred to) would be justified in organising an uprising against their masters, in an act of self-defence, as the main robot in 'Companion' eventually liberates herself.


Is there anything inherently special about organic human beings, as opposed to their humanoid counterparts, who can work just as hard, tell just as many jokes, and do everything a normal human being can do?

And will the existence of a servant class of near-human beings, who can be beaten, mistreated or abused at will, change us in ways which may not be attractive or desirable? In the future we may have to face these questions. Fortunately, the technical difficulties of making such robots mean they are still several decades away; but at some point mankind might have to confront these issues.





.
Taming
Silicon Valley

By Gary Marcus
Taming Silicon Valley by Gary Marcus   Source: MIT Books

Gary Marcus has written an important book, which anyone with an interest in the future should read.

His essential thesis is that the tech giants are out of control, doing what they want to do, not what is in the best interests of consumers.

Moreover, with the development of Large Language Models like ChatGPT and Llama-2, things are about to get a whole lot worse.

With the arrival of AI (artificial intelligence) chatbots like ChatGPT, we can look forward to even more automated disinformation, surveillance capitalism, copyright theft, environmental disruption, job losses and deep-fake porn – and that's just for starters.

Your data and electronic activity will be monitored and used to train AI chatbots without your permission. Practically everything you do on-line will be picked up and crunched, to improve the performance of Big Tech.

When you get in your new car, it will relay a host of data to a server somewhere. Your movements, destinations, transport habits, and perhaps even your texts (if you use a USB port) will also be up for grabs, as training data for someone's bot.

Neither will you have any choice. By using your smartphone, computer or car, you will have signalled by default that you have consented to all this, even if you haven't, in what Gary Marcus calls 'a vast uncontrolled experiment' involving a massive imbalance of power.


With the advent of large language models (LLMs), the insatiable demand for training data has become even greater. Copyright theft has become rampant, as large language models tear through practically every image on the web, without the permission of those who created it.

Indeed, copyright theft is one of the defining features of the new order, in which artists, photographers and nearly anyone who has posted on-line will have their work used without recompense.

Neither will the criteria by which images are selected be made public, for there is no transparency in this new order, in which all data is up for grabs. Despite some fine words about transparency – from Microsoft president Brad Smith, for example – opaqueness is the order of the day.

Neither are any of the other important details relating to power consumption (enormous) or human labour (correcting bad outcomes in what are effectively sweatshops in the Third World) made public. Secrecy – perhaps understandably – is the watchword.


How has it come to this? Gary Marcus wonders, in his book 'Taming Silicon Valley'. It wasn't always so. Most of the famous companies that began the tech boom some twenty years ago had altruistic, or at least reasonably ethical goals. Facebook was about connecting friends, and Google had their memorable strap-line 'Don't be evil'.

But by 2017 Facebook had found that activity on its site was stalling. It found that the best way to increase activity, hits and therefore revenue was to encourage controversial content, to amplify the extremes – and from then on the die was cast.

Similarly, as Gary Marcus puts it, Google 'disappeared down the well of surveillance capitalism'. It found that selling targeted ads by foraging through your data was simply too profitable to ignore – and by the summer of 2018 'Don't be evil' had disappeared from its mission statement.

Another way of saying this is that both Facebook and Google were no longer aligned with human values – and of course all this was before large language models made things ten times worse.


But how had this been allowed to happen? That giant companies – some of the largest in the world by stock market capitalisation – had simply pivoted away from their original goals, ceased to act in their customers' best interests, and instead embarked on what Gary Marcus calls 'the biggest data heist in history'?

The answer isn't hard to understand. There simply wasn't anything to stop them. There is no regulator, no outside agency with authority to intervene, to check their processes and their priorities, to find out what they were actually doing.

Incredibly, almost all the world's data is in the hands of companies who – at the time of writing – are accountable to no one. There is no equivalent of the American FDA, the Food and Drug Administration, which checks that pharmaceuticals are safe, or of the SEC, which regulates Wall Street and the American financial markets. In every other sector there is a lead regulator with statutory authority, which can monitor and if necessary intervene – but when it comes to the tech giants there is nothing.

Thus Facebook were able to release Llama-2, an artificial intelligence chatbot, that anyone could use for nearly any purpose. It might be a good thing to open-source a powerful AI tool, or it might not, but was it really Facebook's call to put it out there? There was certainly no regulatory authority to stop them.

And while we are on the subject of Facebook: it is protected by section 230 of the Communications Decency Act from litigation by people who have suffered defamation or harm from material posted on it, which effectively gives it immunity from such claims. This is a piece of legislation from another era – but it has never been altered or repealed.


Obviously the best thing would be to have some oversight of what the tech giants are doing, and maybe get back to a bit of human alignment. But don't hold your breath: oversight and a governing agency aren't coming any time soon.

If you go to Washington, you'll find the tech giants got there first. An immense lobbying network has been set up, which so far has effectively deflected any attempt at change. Many people who used to work in Silicon Valley now work in Washington, and vice-versa, in a symbiotic relationship which isn't going to change soon.

Effectively, no change is going to happen without the consent of the tech giants – and they almost certainly don't want it.

That isn't to say lip service isn't paid to the idea of regulation from time to time. Various US senators have said they are in favour of it, and Sam Altman – star of ChatGPT and Y Combinator, and all-round tech pin-up boy – has said he's in favour of it too; but he hasn't actually done much about it.

Indeed, it seems the road to AI laissez-faire is paved with good intentions. The 2023 AI Safety Summit in England organised by the then Prime Minister Rishi Sunak sounded good and allowed for a few pious statements and some photos – but nothing concrete actually emerged from it.

Partially that's because the people who attend these conferences tend to be the heads of the tech giants themselves – the very people who stand to lose from anything actually happening. No wonder so little changes.


What then is the solution? According to Gary Marcus we need a body similar to the FDA or the SEC to regulate the tech giants. That is to say, a body which has independent scientists on it, with pre-release checking and post-release auditing of new software. The whole point of such an institution would be that it didn't have the heads of the giant tech corporations on it, for the same reason the FBI isn't run by crime lords.

But how likely is it to happen? And now we come to the paradox at the heart of this important book: the very thing which is the most obvious solution, the thing that one and all can see is the right thing, is precisely what is not going to happen.

There isn't going to be a statutory body with the ability to enter premises, to demand to see documents, to insist on disclosure of methodologies, algorithms, implicit biases, energy consumption or anything else, because the people who really matter don't want it.

Partially that's because the USA is a land for billionaires. They are far more important (and richer) than anyone else, and the tech billionaires enjoy massive influence. There is also the fear that any oversight of Big Tech might stifle innovation, and reduce the USA's lead in these matters (as the accelerationists argue). Marc Andreessen, an influential tech investor, is against all regulation.

Then there is the complexity of Big Tech, which is changing at a rate far greater than any other industry. From Congress's point of view, this makes things harder to understand. Moreover, the executive branch has plenty of other things to do, without tangling with the tech titans.


Gary Marcus believes part of the answer lies in citizens' action. Individual citizens can lobby their representatives. They can choose not to buy the products of the worst offending big tech companies. They can boycott AI companies that train on copyrighted material and call for transparency.

Gary Marcus finishes his book with a ringing cry: 'We still have a real chance to re-shape some of the most important choices of our time. Let's work together to tame the excesses and recklessness of Silicon Valley and ensure a positive world'.

Fine words, but is it really possible? That citizens' actions and entreaties to Congress can really beat Big Tech? Do people care enough, or will a mood of resignation prevail? How many times have we seen people shrug their shoulders when topics like this are discussed?

Gary Marcus has done society an immense service by writing this book. He has accurately described the current status quo, with all its non-human-aligned features; but sadly it is profoundly unlikely that anything will change.

Gary Marcus is like the little boy who cried that the emperor had no clothes. It is true that he didn't, but the triumphal march continued just the same.



POSTSCRIPT


At the time of writing (15th Nov 2024) the newly established AI Safety Board is working in conjunction with the Department of Homeland Security 'to develop a set of recommendations for the safe deployment of AI'. The board will include the top CEOs of the tech industry, including Sam Altman, Satya Nadella of Microsoft and Nvidia's Jensen Huang.

Secretary of Homeland Security Alejandro Mayorkas said: 'Our hope is that companies are going to implement these guidelines... This is not a regulatory regime. This is not a legislative regime. It is a voluntary framework, and we hope it is accepted and adopted throughout...'






Taming Silicon Valley
By Gary Marcus –
Kinds of artificial intelligence:

Gary Marcus would prefer a meaning-based approach to artificial intelligence - but would that necessarily serve us better?
Book cover: Taming Silicon Valley by Gary Marcus
Taming Silicon Valley
by Gary Marcus   Source:
MIT Books

It is no exaggeration to say Gary Marcus doesn't like large language models (LLMs). He points to their insatiable demand for energy, needed to power the immense server systems they run on.

He also doesn't like that copyright theft is built into these models, as these systems systematically scrape the internet for fresh training data. Neither does he like the sweat-shop labour they require, usually in Third-World countries, as real humans have to correct their mistakes, to make them work better.

No. None of this is Gary Marcus' real objection to large language models. The real objection is that LLMs tend to hallucinate. They can produce absolute rubbish, even when asked relatively simple questions. He cites the example of a large language model which was asked 'Which weighs more: 1kg of bricks or 2kg of feathers?' The answer – buried amongst a mixture of utter nonsense and irrelevant facts – was that they weighed the same.


This is because LLMs use statistics to analyse language, and form a probabilistic guess as to what word will come next in a given sentence. LLMs have learned to do this – sometimes with startling accuracy – but they are not working the way we work.
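To see the principle in miniature, here is a toy sketch in Python (my own illustration, not anything from Gary Marcus's book): a bigram model that counts which word follows which in a sample text, and then 'predicts' the likeliest next word. Real LLMs do this with neural networks over vast corpora, but the underlying idea – probability, not understanding – is the same.

from collections import defaultdict, Counter

# Count which word follows which in a (tiny) training text.
training_text = "the clock shows the time the clock hands move and the time passes"
words = training_text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the statistically likeliest continuation - a guess, not a thought.
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("the"))   # e.g. "clock" - chosen purely by frequency

Nothing here knows what a clock or the time is; it only knows what tends to follow what.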

When we look at a clock, we understand the concept of time. We know why the hands move at a constant pace and in one direction only. We understand the meaning of the position of the hands; and this makes it easy for us to tell the time.

But a large language model can't do this. Instead it must trawl millions of similar pictures and similar times, and somehow arrive at a conclusion statistically – and this doesn't always work.

Gary Marcus thinks we would be better off attempting a different approach to artificial intelligence: a meaning-based approach, based on classical logic, which would not require vast computing power to solve even the simplest problem.

In this scenario the AI thus built would have the ability to reason, to understand causality and physical permanence, and to investigate the world through perception and physical interaction (perhaps through robotics). It would be able to tell the time the way we do, by grasping what a clock is for, rather than by statistical analysis.

Such a system might still involve a considerable element of LLMs, but it would also integrate classical symbolic logic, to represent knowledge reliably – and, above all, meaning, as we do.
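To make the contrast concrete, here is a minimal sketch of the meaning-based alternative, using the clock example above (again my own illustration, and deliberately simplistic): instead of statistically matching millions of clock images, the program encodes what the positions of the hands mean.

def read_clock(hour_angle, minute_angle):
    # Angles are in degrees clockwise from 12. The minute hand moves 6 degrees
    # per minute and the hour hand 30 degrees per hour; encode that meaning
    # directly, and the answer follows deterministically.
    minute = round(minute_angle / 6) % 60
    hour = int(hour_angle // 30) % 12 or 12
    return f"{hour}:{minute:02d}"

print(read_clock(90, 180))   # hour hand at 3, minute hand at 6 -> "3:30"

No training data is involved: the rule is the knowledge.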

Gary Marcus envisages governments having a front-and-centre role in this, perhaps with international co-operation to create a CERN-like project, pooling intellectual resources in a research-led effort. 'Historically the state has played a key role in developing crucial technologies,' he says, leading to 'good AI' for a public benefit.


But there are a number of problems with this. Firstly this is not the direction that the tech giants are taking. The tech giants are betting heavily on LLMs, and are investing massively to gear up data centres for the next generation of large language models. It is almost as though the decision has been taken: LLMs or bust.

Secondly, although LLMs do hallucinate, in their current form they can still pass the American Bar examination, and the exams required to become a doctor. Later versions of LLMs might be even better.

There is some dispute about this, with talk recently of a plateau in results, and Gary Marcus says: 'Chat GPT5 probably won't change anything'. But even with the current level of output, LLMs could revolutionise medical diagnosis, and medicine generally, including radiology. The fact is that LLMs even in their current form are a powerful tool.


Intriguingly, Gary Marcus says there is little or no threat to the human race from LLMs. 'They do not have agency, or wants or desires.' There is very little chance artificial intelligence in its current form is much of a threat to humanity.

But now go back to the kind of artificial intelligence that Gary Marcus is advocating: the kind which could understand the world in a way similar to how we do, which could grasp the meaning behind everyday concepts, like causality and time.

If such an AI were developed – one which worked in ways similar to our own minds, which could explore and understand the world through perception and physical contact with it, which could reason independently – then what would we have created?

That sounds like real consciousness: the ability to think, to see, to reason. Won't we have created something far more threatening, far more dangerous than large language models?

Perhaps it is best that there is no CERN for artificial intelligence, no serious research on this path. Perhaps we are lucky we won't have to share our Earth with any such activity; perhaps we are luckier than we know that Big Tech has bet so heavily on LLMs.


Added to this site 19th Nov 2024





.
Human Rights,
Robot Wrongs

By
Susie Alegre
Book cover: Human Rights, Robot Wrongs
Human Rights, Robot Wrongs by Susie Alegre: not quite the take-down it thinks it is   Source: Atlantic Books

Susie Alegre has written what she thinks is a take-down of artificial intelligence (AI). She thinks she has exposed the dark side, the underbelly of the remarkable times we are living through with regard to AI.

She rightly points out that large language models require huge amounts of energy, water and money, which can have a significant environmental impact. She also points out (correctly) that our tech usage tends to push the darker aspects of the AI revolution to the Global South, where raw materials are mined in dangerous conditions, sometimes by women and children.

She also points out that content moderation, which can include looking at shocking or revolting images, tends to be done in poorer countries, where labour is cheaper, even if it places a heavy toll on those who do it.

Copyright theft also seems to be built into large language models, as Susie Alegre is keen to point out. It seems as though the intellectual property of numerous artists and writers is being scraped from the internet, to build ever better large language models.

All this points to a number of worrying tendencies exhibited by big tech – and the list goes on. Susie Alegre notes that large language models are 'probability driven', and can hallucinate, in other words make things up. She repeats with some satisfaction that ChatGPT4 cannot be used for research into previous legal precedents, as it will quite happily invent completely fictitious precedents and judges who have made them.

She also notes with ire that ChatGPT hasn't heard of her, or her Financial Times prize-winning book 'Freedom to Think'. It thinks that this tome was written by someone else, although ChatGPT4 did offer to write some text 'in the style of Susie Alegre'.

Small wonder that Ms Alegre takes a dim view of the AI revolution. Gynoid robots (that's robots that look like women), care robots and relationship-forming conversational software all come in for attack. Only rarely does Susie Alegre think that anything useful might come out of software like ChatGPT.


But then again, Susie Alegre is a lawyer, specialising in human rights. She is a detail-merchant, someone who has spent her life sifting through the weeds. But like many lawyers, she can't see the big picture, which trumps any amount of detail.

The truth is that AI does indeed have a dark side, which Susie Alegre is only too correct to point out; but this does not detract from the larger truth: that large language models like ChatGPT promise to transform the world. Large language models offer tremendous possibilities – as well as dangers.

While it is true to say that any software trained on information scraped from the internet may share the biases and prejudices of those sources, it might also result in remarkable benefits.

Large language models combined with conversational agents like GPT-4o will allow us to access information in far more intuitive ways. Education and learning will become more accessible to all, in a host of positive scenarios for the human race.

It is true that the same software could be utilised to cause great harm to the human race – but here again we come to another profound fault with this book: that the law is unlikely to save us.

The truth is that the world of tech is advancing too fast, and is too powerful, for the law to keep up. The law evolves slowly: it is created by parliaments at a snail's pace, and then enforced gradually thereafter.

But tech – particularly large language models – is improving at a rate which is simply astonishing. If law-making is a tortoise, AI is the hare, leaving the tortoise no chance of catching up.

Yet there is more. Susie Alegre places her faith in institutions like the International Criminal Court, and the Universal Declaration of Human Rights. This is hardly surprising considering that she is an international human rights lawyer.

But ultimately, who cares what the International Criminal Court thinks, in a judgement long after the fact? Whatever happens in the future, whether large language models turn out to be the saviour or the end of mankind, it is unlikely the law will save us. We must hope that those who design these things have enough common sense to make them as safe as possible, and that humanity will win through in the end.


Added to this site 4th June 2024





.
AI Generated Video:
How Will We Know
What's True
Any More?
Good enough to fool you: he looks incredibly real, but he isn't: a still from a video made by Sora, OpenAI's video creating technology
He looks incredibly real, but he isn't: a still from a video made by Sora, OpenAI's new video creating technology  Source: Newshooter

AI generated videos offer an infinity of creative opportunities – but also pose a grave threat to the future of mankind.

The stakes could hardly be higher, which is probably why Sam Altman at OpenAI is being so careful whom he allows to have a beta version of Sora, OpenAI's new video creating technology, before he makes it available world-wide.

On the positive side, all you would need to do is give a text or voice command – 'Create an exciting film!' – and name your genre: thriller, sci-fi, animation or whatever, and the AI program would do the rest. In only a few short minutes, or perhaps even seconds, Sora would create a whole new film for you to watch.

You could ask it to make a documentary about the death of Caesar, with ancient Rome rendered in astonishing detail, or the story of the Wright Brothers' first flight – literally anything at all.

Or you could ask Sora to make another episode of your favourite childhood detective series: Columbo, Starsky and Hutch or whatever. The possibilities would be endless. You could even ask Sora to make a feature film with you as the star, or the co-star, or just one of the extras.

So far so good. But let's look at the negative side. Criminals and rogue states might have a field day. Just think of the flood of nauseating and revolting content that might be unleashed. Even worse, however, is something far more serious: that a tidal wave of fake material might drown out the truth – with grave consequences for the future of mankind.


Before we go any further, let's remember the days before the internet boom. In those pre-internet days, the only sources of news were terrestrial TV broadcasts and old-fashioned newspapers.

But this was a time when you could believe what you read. In those days – so close in absolute number of years, yet so far away in terms of progress – the truth was guarded by a number of filters.

There was a general agreement that stories had to be accurate, fair and balanced. There was a convention of taste and decency, and of fair comment; and those things still apply in what might be called the legacy media: the BBC, NTV, ABC etc, and the mainstream newspapers.

There was also incidentally the law of defamation, which also acted as a powerful brake on irresponsible comment. But more than all this, the news was reported by journalists who had a professional pride in their work, in checking their stories, and making sure what was printed was accurate. (We'll leave aside the shenanigans of the red-top press, for the sake of brevity.)

But with the advent of the internet, a new space was born where those values did not apply. Now it was no longer necessary to abide by the convention of fair comment, nor taste nor decency, in a free-for-all in which hardly any rules applied.

With the advent of AI-generated video, the dissolution of the truth threatens to be taken to a whole new level. Bad actors, rogue states and malign individuals could have a field day, uploading all sorts of malicious and misleading material.

It will be virtually impossible to prevent them uploading all sorts of videos: press conferences which never happened, fake scientific symposia, politicians having sex with porn stars, natural disasters and atrocities which never occurred – until it becomes impossible to believe anything.

Nearly as worrying will be subtly altered videos: nearly true, except for some subtle but vital alteration to the President's words, or to a survivor's statement, or some dreadful act. How will we know what is real and what is not?

All this offers a terrible scenario: that the fakers will win. By dint of the sheer quantity of unreliable material uploaded to the net, it is possible that mankind simply won't know what is true any more. This is one of the greatest dangers that humanity faces: that we won't be able to trust what we see or read.


Nor will this scenario be limited to video content. In the world of books a similar process will take place.

Already AI programs can write your novel for you. All you need to do is input your idea or plot and AI will do the rest.

Even the world of non-fiction will not be safe. In the future AI could be used to create completely erroneous histories of major periods of our past, or subtly altered ones. In previous times we have never had to worry if a non-fiction work was real or not; for publishers have traditionally upheld the same values of accuracy, checking and taste and decency as the legacy broadcasters – but this might change.


One of the side-effects of all this might be that any non-internet based media might become immensely valuable, as it will have a provenance and authenticity that uploaded material cannot match.

A musty bound copy of Charles Dickens' 'Great Expectations' would be original and perhaps valuable, if it pre-dated print-on-demand by AI.

Similarly in film, original DVDs would show the real ending. They would be immutable, genuine examples of the truth. They would show that Hollywood film star as he or she really was, not the AI-enhanced version, perhaps with a changed ending.


It is certainly true that the legacy broadcasters, the BBC, NTV, ABC etc are not dead yet, but the portents are not encouraging. National newspapers are trending down around the world, and young people feel no affiliation to boring terrestrial broadcasters, when they can get their news from Tik-Tok.

In part this is a generational issue. There are still plenty of people who remember when the legacy broadcasters and the print media were the only places to get your news, and still do rely on them for their daily updates. But inevitably, time is not on those people's sides.

In the long run, it seems very possible that fakery, mis-truth and falsehood might win the day. In the interim, the legacy broadcasters might fight a noble rearguard action, as places where the news is still true.

Long term however the legacy broadcasters must surely succumb. The temptation and sheer variety of the internet may prove overwhelming.

Perhaps the real saviours of the truth will not be the traditional TV broadcasters, but Google, or some other internet-based entity, which is somehow able to mine what is true and factual from a mountain of AI-produced rubbish. That will be a monumental task, and one which will have to be achieved quickly, bearing in mind the incredible speed with which AI is evolving; but we must hope that someone at Google is working on this, and that humanity will still have something it knows to be true in the years to come.

The consequences of this not happening are too frightening to contemplate, yet the current evolution of AI might well be putting us on this track. If that is the case, a great deal might depend on a handful of developers at Google, and what they do for the future of the human race.


Added to this site 16th April 2024





.
Blade Runner 2049:
Not The Future We
Need To Worry
About
Man, car and tree from Blade Runner 2049
Blade Runner 2049 posits a grey and bleak future - but how realistic is this?  Source: BFI

Blade Runner 2049 is a masterpiece of imaginative film making – but fortunately it isn't the future we need to worry about.

It posits a world of dark skies, of almost no sunlight, of grey hulking buildings and busy streets laden with shuffling unhappy people. It is a world very different from our own: a dark and brooding cityscape set in the Los Angeles of the future.

But how realistic is this? In fact one of the remarkable things about the future of the next 20 or 30 years is how similar things will look compared to how they are today.

If you compare how nearly any modern city looked 20 or 30 years ago with how it looks today, the differences really are minimal. Central London certainly is a lot cleaner than it used to be, and all the famous landmarks have been spruced up, but essentially it is the same as it has been for decades.

There is no reason to think that any major city will look much different 20 or 30 years hence. There might be a lot more self-driving cars, and the air will probably be much cleaner, thanks to EVs, but the dark brooding cityscapes so beloved of the makers of Blade Runner 2049 seem profoundly unlikely.

Neither will there be flying cars. That other chestnut of the future is no more likely to happen than it has ever been: the laws of physics and the practicalities of civil aviation simply don't allow for flying vehicles in built-up spaces.

Another profound mistake in this film is the idea that we will have to share our planet with so-called replicants: organic, bio-engineered humans who are almost indistinguishable from ourselves. These are produced by a giant conglomerate called the Wallace Corporation – almost perfect replicas of natural human beings, except that they cannot reproduce.



Fully-formed replicants made by the Wallace Corporation on display in glass cabinets
Fully-formed replicants made by the Wallace Corporation on display in glass cabinets   Source: BFI

In the film these appear to be created as fully-formed adults, but this seems to stretch scientific reality to breaking point. A far more likely scenario might be cloned human beings, or embryos with edited genes implanted in host mothers – and this would almost certainly be possible. Fortunately, scientific research in this area is not being advanced very rapidly, and with good fortune the human race may avoid this scenario.

So there will be no flying cars in the future – or at least not many in built-up areas like cities. Neither will there be off-world colonies, unless someone figures out how to move huge tonnages of stores to Mars. What then can this film teach us about the future?

For a start, Blade Runner 2049 is right to posit the crucial importance of tech companies – or perhaps more accurately one giant tech corporation in the future. In the film of course this is the Wallace Corporation, which produces replicants – those near-perfect organic bio-engineered humans who can't reproduce.

As pointed out earlier, this particular take on the future seems utterly unlikely, but it is still possible we will share our workspaces with near-human equivalents. The most likely scenario is they will be inorganic robots, perhaps constructed of carbon fibre or composites or maybe metal.

But whatever they are made of, it seems likely that some time in the next 20 or 30 years we will have to share our workspaces (and perhaps our homes) with bi-pedal robots with the intelligence, balance and dexterity to carry out nearly any domestic task. This then will constitute the workplace singularity, which I write about in my book 'Some Time In The Future'.

But even this isn't the real risk approaching humanity. Something far more wide-ranging, with consequences we cannot imagine is heading our way – and it is something Blade Runner 2049 completely misses.

For what is really significant is not whether we have robots which can do the household chores for us, nor self-driving cars, nor holographic girlfriends, or whether our cities will be grey and depressing or bright and shiny.

What really matters is that large language models are busily swallowing up the sum total of human knowledge. We are currently experiencing a period of technological advance which has never been seen before. We are rapidly approaching the point at which large language models will be able to understand all of that knowledge, and then deploy it at will. This is another kind of singularity, with far more unpredictable results.

Pretty soon it will be possible for smart software to do things that could only be imagined a few years ago, from drug research to writing film scripts to composing symphonies in the style of any classical composer.

But this handing over of our knowledge to machines carries great dangers. In the past the expansion of human knowledge by discovery, by meticulous research, by trial and error, has been cautious and controlled. It has been limited by the speed at which human beings can find and analyse information.

But in the future, no such constraints may apply. Bad actors may acquire the ability to deploy great knowledge to devastating effect – in which case all bets are off as to how the future will actually look. Perhaps that grey and murky world of Blade Runner 2049 is more prescient than we might have thought, if we get our handling of the software of tomorrow wrong.


Added to this site 4th April 2024





.
The Apple Vision
Pro: Has The
Future Finally
Arrived?
Picture of the Apple Vision Pro Vr headset
The Apple Vision Pro is expensive, but it could mark the beginning of a new era in how we interact with our technology  Source: The Verge

At $3,499 the Apple Vision Pro doesn't come cheap. But it could still end up being a landmark device, like the first Macintosh, the first iPod and the first iPad.

It's only later, when things can be reviewed in retrospect, that you can tell how significant a device really was; but with the Apple Vision Pro, a corner might finally have been turned.

It isn't the graphics, which are by all accounts amazing. It isn't even the fact that this is not a stand-alone device, but one which dovetails into Apple's existing suite of products, so that suddenly you can experience all of Apple's software not as flat pages, but in three dimensions.

It goes deeper. It might be that this is finally it, the moment at which virtual and augmented reality headsets finally become commonplace. Not just used and returned to their box, but actually become a significant component in people's lives. If that is the case, people will look back at this moment as the time when this technology came of age.


It will be a significant change for mankind if it does turn out to be so. It will mark a profound change in people's behaviour and habits, and how we interact with our technology.

No physical mouse will be required to operate the headset. You just look at what you want to click on, and make a clicking movement with your hand. The headset knows what you mean and will do the rest.
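In outline the interaction model is strikingly simple. The sketch below is hypothetical pseudo-code of my own, not Apple's actual API: whatever element the eyes are resting on becomes the target, and a pinch of the fingers fires the click.

def handle_frame(gaze_target, pinch_detected, on_click):
    # Called once per display frame: the eye tracker supplies the element
    # currently under the user's gaze, and a pinch confirms the selection.
    if pinch_detected and gaze_target is not None:
        on_click(gaze_target)

# Example: the user looks at a FaceTime icon and pinches.
handle_frame("FaceTime icon", True, lambda target: print(f"Clicked: {target}"))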

It really is astonishing. The amount and sophistication of the hardware and software in this product is mind-blowing, and allows a level of interactivity never seen before. Should this headset really work out, it will make old-fashioned 2-dimensional flat screens and computer mouse interfaces seem as old fashioned as the steam engine.

No wonder Apple have spent so long getting it right. They have waited and waited and waited, and introduced a device far ahead of anything else.

Of course, dovetailing into all the other Apple software also means this headset is not a stand-alone device. This gives it immense advantages in terms of rendering FaceTime conversations, or opening a page in any Mac program.


How web pages viewed in the Apple Vision Pro will actually look
How pages will appear in 3-D when viewed with the Apple Vision Pro. Source: Yahoo finance

The question remains, however: will the public ever take to wearing a headset-based device, even one as exceptional as this, for long periods? Apparently it is very comfortable when correctly fitted, and the battery does not weigh down your head (it can be carried in your pocket or on your belt, or left on a table-top nearby).

The answer to this giant question – on which the fate of not just this device but almost all others depends – is probably yes. It all depends on how compelling the apps and experiences on it really are.

You could experience a sporting event in a far more immersive and immediate way. Your ringside boxing seat would feel uncannily real, as would films rendered into three dimensions, or computer games. At some point that other eternal chimera, the metaverse might eventually become worth visiting – which obviously you could also do with the Apple Vision Pro.

Obviously, Apple want third-party developers to create content for this device – which Apple will control, the same way it vets all apps on its smartphones. All this – plus the price of the headset – smacks of a premium product: a cohesive and highly profitable set of hardware and software which will capture a significant market share, just like the iPhone.


That other dark horse of the internet – pornography – will almost certainly not feature in this premium scenario. Yet bearing in mind the amount of traffic on the internet which is pornography, it is inconceivable that some lower-end device will not be launched by some other stand-alone supplier, which will fulfil the potentially immense demand for sex in 3-D.

That may of course be it: sex may be the use which tips VR headsets into common use, if the experience is good enough.


We may then be at a cross-roads in the use of our technology. The new interface of searching with your eyes and hand gestures rather than with a mouse may become commonplace. Virtual clicking just with the fingers may be the future for all of us.

A wave of the hand might mean an end to conventional scrolling; our use of technology may become ever more intuitive and instinctive, as our handsets attune themselves to our individual expressions and quirks.

Suppliers of this equipment will have even more data about us than they have at the moment, including biometric data.


There is also the question of whether these headsets will induce profound societal change. Will people prefer to live in the new virtual worlds of these headsets, visiting mysterious gothic castles, playing ever more fantastical games, so that their lives are lived primarily in the virtual world, only taking the headset off for eating and sleeping?

It is impossible to know. Some people may prefer never to use them. They may prefer the 2-D world of flatscreens to the immersive world of the headsets. But when the history of this time is written, the Apple Vision Pro might get a small chapter of its own, as the device which really heralded a profound change in the way man interacted with his own technology.


Added to this site 16th June 2023





.
Fully humanoid
robots? We are still
some way off –
fortunately
Picture of several Optimus robots in a Tesla factory
Tesla has revealed its latest iteration of 'Optimus' at its recent AGM  Source: Tesla AGM/Tesla Daily

Just think of the advantages of a fully humanoid robot. It would have roughly the same physical strength as a human, but also the dexterity and intelligence of one too.

It could write a letter for you, drive a car for you, or cook your dinner. It would – thanks to the smart software in it – know or recognise where it was. It could find its way about, at home, in the office or on the street.

It could communicate with you in natural English, or seamlessly with other devices via wi-fi. It would never forget anything, and be your helper, your friend and even your chess partner – and it might even let you win sometimes.

It would in short be a better version of you, a more perfect version of you, a more capable version of you – and of course it would revolutionise the workplace.


Delivery drivers, postmen, nurses and shopworkers might all be at risk. Just consider the advantages of the right kind of robot for employers. For a figure in the low thousands of dollars, you could have an employee that is always attentive, never takes a bathroom break or phones in sick, can work sixteen hours a day rather than eight, and who never makes private calls at work.

You wouldn't need to constantly watch such a robot. It would not start talking to its friends, stop working or lose concentration. You would have the ultimate companion or workmate, always able to choose the optimum way to complete any task.

Sounds incredible doesn't it? A seismic shift in our work patterns, with many tasks currently done by humans taken over by robots. It would signal the end of the human age, at least in one sense, a change akin to any of the other great changes which have rocked human civilisation, like the transition to farming or the discovery of fire.

Of course, there is the question of what humans are going to do all day long, when many of them don't have any work anymore. But that is not the point here. The point is, how likely is all this to happen, and on what timescale?


And now we come to something interesting. There's a big difference between being able to glimpse the future and actually being able to carry it out. In the nineties Steve Jobs talked of an icon-based hand-held device with a big touch-screen and a computer in it, but it was another 20 years before the iPad appeared. Earlier, Bill Gates had talked of a small hand-held device which could send and receive data, but we had to wait a long time for the first compact mobile phones – and they weren't produced by Microsoft at all.

And so it is with robots. We can all see that somewhere down the road there will be a house robot that will help you on with your pyjamas – only not just yet.

The technical difficulties of producing such a robot are immense. The reproduction of human-speed walking alone is a profoundly challenging task. But this pales into insignificance compared to what is involved in recreating the dexterity of human hands. One of the key indicators that we are close to the workplace singularity will be a robot which can tie its own shoelaces, thread a needle, or deploy a pen to write an article like this, as I am currently doing – and so far all this is far beyond the ability of any robot.

The creator of Atlas, Boston Dynamics' all-charging, all-conquering robot, has admitted his masterpiece lacks dexterity. But without such dexterity, many tasks will always be beyond it.

Neither has Elon Musk really been able to master these issues. His robot 'Optimus', currently in its second major iteration, has very limited dexterity indeed. Optimus is more like a prototype of a prototype, a canvas-and-string bi-plane, when what is needed is a jet-liner.


Elon Musk talking about the demand for humanoid robots
Elon Musk expressing his view that the market for humanoid robots could run into billions of units. Source: Tesla AGM  Click here to watch the clip

But again, Elon Musk has seen the big picture. 'The market for these could run into billions of units,' he enthused, at Tesla's recent AGM. Clips of his Optimus robot walking slowly in a straight line raised cheers from the audience, but they merely showed how far there is to go.


It took millions of years for the first recognisably human hominids to emerge. Beyond that lies approximately two billion years of organic natural development, which eventually led to us. As robotics engineers try to recreate all this, their efforts merely show how brilliant evolution has been at producing this remarkable being called homo sapiens, equipped with intelligence, curiosity and those unbelievably sensitive and creative hands.

Compared to that evolutionary masterpiece, Atlas and Optimus labour like the last evolutionary dodos of old, staggering slowly, fumbling and failing to achieve tasks we can all do naturally.

But we should not be complacent. Somewhere, and at some time, these outstanding problems will be resolved. A timescale similar to that of the iPad, which Steve Jobs first spoke of twenty years before it happened, might be appropriate. Humanity still has a little time before that workplace singularity.

There is also the question of materials. Do robots constructed of hard materials like aluminium really have any future around soft, fleshy human beings? Or will the robots of the future have to be made out of artificial substitutes for human muscle, stretched over some sort of lightweight skeleton?

But again, these problems are in principle soluble. There is no reason to think man will not be able to create his successor. The only questions are when, and how long it takes, and where all this leaves the human race.


Added to this site 24th May 2023





.
AI: Is it worth
being a writer
any more?
The days of writers' block might be over, if generative AI takes over novel writing.  Source: The Literary Hub

Consider the following situation. You are a writer and you have just finished your literary masterpiece. You send it to a literary agent. He reads your covering letter, and if you're lucky, skims the first three or four chapters you have sent with it. This is the filtering process, which may – if you're truly fortunate – eventually end up with the book in print.

Of course there are plenty more processes to go through. Assuming that literary agent even got through those chapters, it's still unlikely your entire manuscript will ever be read. For the most part the process ends there, with only a few writers being asked to send in the complete novel.

Then the whole novel will need to be read. The whole proposition, the idea of the book will need to be considered. It will have to be pitched to a publisher, in a strange and mysterious process by which literary agents justify their cut of the royalties, and which might end with a deal being signed.

It's all a bit opaque. It's slightly chaotic, based around guesswork, hunches and personalities. It's based around contacts and lunches, people who know people, in a strange little bubble called the book publishing industry.


Now consider this: in the future you might upload your masterpiece to a portal, to a large language model. This form of generative AI will be able to read the script in its entirety, electronically.

All scripts will be read. The large language model will instantly assess what kind of novel you have sent in. (Literary fiction, historical fiction, romantic fiction etc.)

It will then compare it to the best-selling titles in that category. It will have isolated the significant features to look for in any new script, based on the features-in-common of all the best-selling works in that genre. (Quite how it does this need not concern us here.)

It will then assess whether this book is going to be successful. It will give each script sent in an estimated sales value. It could then give these details directly to a publishing house rather than to a literary agent. Indeed, it might even publish the manuscript as a completed book directly, via Amazon, and cut out the traditional publishing houses completely.
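A crude sketch of what such a filtering engine might look like is given below – my own illustration, with invented reference data. A real system would use learned text embeddings rather than simple word counts, but the shape of the pipeline (classify the genre, then score the similarity to past best-sellers) would be the same.

from collections import Counter
import math

def vectorise(text):
    # Represent a text as word counts - a crude stand-in for a real embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Similarity between two word-count vectors, from 0 (nothing shared) to 1.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical best-seller profiles, one per genre.
bestsellers = {
    "romance": vectorise("love heart wedding kiss longing letters"),
    "thriller": vectorise("gun chase murder detective night betrayal"),
}

manuscript = vectorise("the detective drew his gun in the night chase")
genre, score = max(((g, cosine(manuscript, profile))
                    for g, profile in bestsellers.items()),
                   key=lambda pair: pair[1])
print(genre, round(score, 2))   # classifies the script and scores its fit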

But of course, having 'read' all the books it needs to in order to compare any new manuscripts with the books that sell best in that genre, a program like Chat GPT could almost certainly write such a best-seller itself. In fact it could probably write numerous best sellers, across multiple categories almost instantly; and if Chat GPT can't do this yet, it soon will be able to.

Neither will it be possible to tell easily if this book was written by Chat GPT or a human. As it is, numerous best-sellers have been written by ghost writers, especially in non-fiction, like Prince Harry's autobiography.

The fact is, that within an appreciable length of time, very little fiction might actually be written by humans, although no doubt the books will still have the name of an 'author' on the front.


What then will all the human writers do? What will be the point of writing at all, if a large language model can write better, faster and more interestingly?

This will not just be in the world of fiction. In non-fiction too, it seems likely that large language models will prevail. In travel writing, cookery, household, DIY and gardening, there is no reason to think that generative AI will not be able to write as well as any human, and create appropriate images, diagrams and instructions from scratch.

Indeed, it is hard to think of a genre that Chat GPT or one of its variants will not be able to write in, given enough time.


But this raises an interesting point. In the future, nearly any book may be written by smart software.

It will be generated electronically and held electronically and may even be read electronically, on a smartphone or an e-reader.

Of course hard copies may be made available via mail order, but the books of the future might have their origins and most of their existence in the digital world.

This will mark a shift of profound importance. For one thing, if books are held digitally, what is to stop the program that created them from tweaking them, fine-tuning them, so that a 'book' is no longer a permanent thing, but something always open to retrospective alteration?

More generally, electronic books will never have the stamp of permanence that hard-copy conventionally printed books do have. Once an old-fashioned book is published it sits on a shelf somewhere. It cannot be retrospectively changed, infected with a virus or altered. It is a permanent record of someone's work.

It is in this sense that man's knowledge will become less certain and secure. The more our knowledge is created and held digitally, the less secure it is, and the more open to alteration or subtle manipulation.
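The fragility is easy to demonstrate. In the sketch below – a simple illustration of my own, not any publishing-industry standard – a digital text is fingerprinted with a cryptographic hash; the subtlest retrospective tweak produces a completely different fingerprint, which is about the only way a reader could ever prove that a 'book' had been quietly changed.

import hashlib

original = "It was the best of times, it was the worst of times."
altered  = "It was the best of times, it was the worst of crimes."

# A single altered word changes the fingerprint entirely.
print(hashlib.sha256(original.encode()).hexdigest()[:16])
print(hashlib.sha256(altered.encode()).hexdigest()[:16])

But a fingerprint only helps if the original hash is itself stored somewhere tamper-proof – which brings us straight back to the permanence of paper.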


On the other hand, it seems impossible to prevent large language models like ChatGPT from writing books. It might be that the public don't care who writes their fiction or their gardening manuals. It might be that there are great savings to be had, in terms of authors' and agents' royalties, and that a cheaper, more streamlined process will emerge.

It might even be that specialist publishing houses will emerge, which will actually only publish human-written works, which might attract a following of their own.

But the likelihood remains that the vast majority of new material in the future will be generated by AI, in a profound change in the structure of our civilisation.

It is possible that in the future old-fashioned books, produced in the era before AI might appear as fascinating, almost mystical entities: they were produced by real human beings, not programs. They were printed, proofed and checked as the final definitive work of a real author, with real opinions.

At some time in the future people may find these artefacts fascinating: a novel by Charles Dickens or Mark Twain or George Orwell: real people whose works were published by physical publishing companies.

These people really existed, and their works, printed before the era of AI will serve as reminders of when man had pre-eminence over his own knowledge, before it was gradually digitised, and held on a server somewhere.

Dusty volumes, found in attics or in second hand stores, may become very valuable in the future. They will be reminders of when humans found the truth for themselves, and wrote about it. They will be reminders of when man was sovereign over his own knowledge – before Chat GPT took over.


Added to this site 18th May 2023





.
Larry Page called
me a speciesist...
So I started
OpenAI
Elon Musk in conversation with Tucker Carlson
Elon Musk is happy to be considered human-centric, or a speciesist  
Source: Tucker Carlson on Fox News via
YouTube

In an interview with Tucker Carlson, formerly of Fox News, Elon Musk has described how and why he started OpenAI.


'Larry Page and I used to be close friends,' said Elon Musk, in conversation with Tucker Carlson. 'I used to stay at his house and we would talk late into the night about AI safety... and at least my perception was that Larry was not taking AI safety seriously enough.'

-What did he say about it?

'He really seemed to be into digital super-intelligence, basically digital God if you will ...er as soon as possible.'

-He wanted that?

'Yes. He's made many public statements over the years... That the whole point of Google is what's called AGI or artificial general intelligence or artificial super-intelligence. But er, I agree with him that there is great potential for good, but there's also potential for bad, and so if you've got some... um... radical new technology you want to try to take a set of actions to maximise the probability it will do good and minimise the probability it will do bad things.'

-Yes.

'It can't just be hell-for-leather going barrelling forward and hope for the best. And then at one point I said, well what about...you know... we're going to make sure humanity's OK here? And they called me a speciesist' (both laugh).

-Did he use that term?

'Yes. And there were witnesses. I wasn't the only one there when he called me a speciesist and so... Ok that's it... Yes I'm a speciesist ... OK you got me... and what are you? (more laughter) Erm...busted. That was the last straw.'

'At the time Google had acquired Deep Mind so Google-Deep Mind had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I thought OK we're in a uni-polar world here. There's just one company that has close to a monopoly on AI talent and computers, like scale computing, and the person in charge doesn't seem to care about safety. This is not good so, er, I thought er...what's the furthest thing from Google would be like a non-profit that's completely open, because Google was fully closed, for profit, so that's why the 'open' in OpenAI refers to open-source, you know, transparency so people know what's going on.'

'Yes and you know we don't want to have – er normally I'm in favour of full profit. We don't want this to be a profit-maximising demon from hell... which just never stops.'

-Right.

'So that's how OpenAI was started.'


Elon Musk talking about Open AI
Elon Musk explaining that he helped recruit the initial team behind OpenAI. Source: Fox News on YouTube

'I funded OpenAI at the beginning. I came up with the name and the concept and pushed it. I had a number of dinners around the Bay area with the leading figures in AI. I helped recruit the initial team.'

'Ilya Sutskever was quite fundamental to the success of OpenAI. I put a great deal of effort into recruiting Ilya and he changed his mind a few times but ultimately stayed with OpenAI; but if he had not gone with OpenAI, OpenAI would not have succeeded. I really put a lot of effort into creating this organisation as a counterweight to Google... and then I kind of took my eye off the ball I guess and er... They are now closed source and they are for-profit and they are closely allied with Microsoft. In effect Microsoft has a very strong say – if not directly controls – OpenAI at this point. So you really have a situation in which OpenAI-Microsoft and Google-Deep Mind are the two heavyweights in this area.'

-So it seems like the world needs a third option?

'Erm yes. So I think I will create a third option, although starting very late in the game, of course.'

-Can it be done?

'I don't know. I think it's... we'll see. It's definitely starting late but I will try to create a third option and hopefully that good option will do more good than harm... The intention with OpenAI was obviously to do good but it's not clear if it's actually doing good or whether it's... I can't tell at this point... except that I'm worried about the fact that it's being trained to be politically correct, which is actually another way of being untruthful or saying untruthful things. So that's a bad sign. It's certainly a path to AI dystopia to train an AI to be deceptive... So I'm going to start something which will be called Truth GPT, or a maximum truth-seeking AI, that tries to understand the nature of the universe and I think this might be the best path to save humanity, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting part of the universe, hopefully.'

'Humanity could decide to hunt down all the chimpanzees and kill them but we don't. We are actually glad that they exist and um, we aspire to protect their habitat.'

(Elon Musk and Tucker Carlson go on to discuss whether computers or machines can have feelings, emotions and longings, as humans do. They also discuss the ability of AI to produce astonishing works of art, and whether AI actually understands and appreciates the great art it is producing in the same way we do.)

Speaking of creating remarkable art, Elon Musk continues: 'It's doing still images now but it won't be long before it's doing movies.'

-But at that point it can mimic reality so effectively, how could you have a criminal trial? How could you ever believe that evidence was authentic, for example and I don't mean like in 30 years, I mean like next year... I mean that seems totally disruptive to all of our institutions.

'I'm not so worried. I think it's more like... You know... will humanity control its destiny or not? Will we have a future that is better than the past or not?'


From 'Elon Musk tells Tucker potential dangers of hyper-intelligent AI' and 'Elon Musk tells Tucker his plans to create a Truth GPT AI platform'.  Both on YouTube



.
AI is like Nukes –
Say Raskin
and Harris
Picture of Aza Raskin and Tristan Harris of the Centre for Humane Technology
Aza Raskin (left) and Tristan Harris (right) of the Centre for Humane Technology, talking at their recent presentation  YouTube

Large language models (or generative AI) pose a grave threat to humanity, say Tristan Harris and Aza Raskin of the Centre for Humane Technology. And they should know, for they are fully plugged into the world of the top researchers – so much so that their recent YouTube presentation 'The AI Dilemma' was introduced by Steve Wozniak, the co-founder of Apple.

So what's the danger? Put simply, it's that large language models, or generative AI, have the ability to decode almost anything. Any set of data, if it is big enough – be it speech, text, sounds, wi-fi signals, code or even neural data – can be decoded, stripped down, and the meaning extracted from it.

In the past there were numerous separate fields of research – voice recognition, text recognition, image generation and so forth – but large language models like Chat GPT4 have changed all that, and now all these separate kinds of data can be decoded using one program.

If you think about it, it makes perfect sense. If your computer model is sufficiently powerful, it can treat any data set as a language to be translated, or decoded. But not only can data now be decoded – it can also be synthesised.
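One way to picture this – a toy illustration of my own, not anything from the presentation – is that any digital signal whatsoever can be flattened into a single stream of tokens, and a sufficiently powerful model can then be trained on that stream exactly as though it were text.

def to_tokens(data):
    # Any byte stream - text, audio samples, wi-fi packets - becomes a
    # sequence drawn from one shared vocabulary of 256 possible tokens.
    return list(data)

print(to_tokens("hello".encode()))          # text as tokens
print(to_tokens(bytes([12, 250, 7, 99])))   # an arbitrary signal as tokens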

So a human voice can now be treated as a data set, and understood and reproduced perfectly. Modern AGI systems now only need to listen to three seconds of your voice before they can reproduce it.

That of course means voice authentication is now useless as a security feature. A whole class of security screening has been swept away.

But it goes far deeper than that. According to Tristan Harris and Aza Raskin, nearly any digital product – be it what's on your television, that marketing call on your phone, or what you are looking at on your browser – could be synthesised. According to Harris and Raskin this could lead to a reality collapse, meaning people would no longer have any faith in anything they read or hear: a complete collapse of trust in information in general. No wonder so many people in this field are worried.


Raskin and Harris of the Centre for Humane Technology
According to Raskin and Harris nearly any digital output could be synthesised, leading to a reality collapse. Source: YouTube

According to Harris and Raskin they have every right to be. It is both the speed and game-changing nature of AGI technology which is frightening so many people.

Just think about what might be coming down the road at us. Having scraped the internet, including speech rendered into code, large language models might soon be far better at persuading us to do things than any organic being. This will be part of a race to intimacy via your Snapchat or other account, to develop a primary relationship with you. Already we see basic examples of this, such as 'Replika', which offers machine-based artificial friendship to all; but in the future these bots will become infinitely more effective and convincing. They will become central to our lives and will shape both our thinking and our actions.


Scared yet? There's more to come. It's not just the ability of large language models to strip down code and extract meaning. It's the astonishing phenomenon of emergent capabilities. Chat GPT4 for example taught itself research-grade chemistry. No one even noticed it had done this, until it was asked a question which revealed the depth of what it had already taught itself.

Another AGI program suddenly developed the power to answer questions in Persian, although it had only been exposed to English. These are called emergent properties, and no one quite understands how this happens: why, at some point, a large language model suddenly makes a quantum leap in its abilities.

This in turn raises a wider and somewhat chilling thought: that though generative AI has been designed by humans and created by humans, we don't quite understand how it works, why it suddenly develops additional skills, or what else it might end up knowing or doing.

No wonder Tristan Harris and Aza Raskin were looking so serious when they were on-stage making their one-hour presentation. And no wonder Steve Wozniak turned up to introduce them. 'We should just pause for a moment and take a deep breath,' said Aza Raskin – and we all did.


Aza Raskin of the Centre for Humane Technology
According to Raskin we should all pause and take a deep breath - which we did. Source: YouTube

Another problem is not just the wide-ranging nature of the power of generative AI, but the speed it is being thrust upon us.

There are a handful of tech companies at the forefront of this transition, and none of them are holding back. Commercial gain, or perhaps the fear of being left behind, has created a bizarre race to embed large language models into existing digital products as quickly as possible.

Microsoft has built generative AI into Windows 11. Snapchat has incorporated it into its products, and Google will not be far behind. Common sense would suggest that, in view of the tremendous significance of large language models, caution, biding one's time and taking things slowly would be the best path; instead there is a race to deploy generative AI as quickly as possible. No one wants to be left behind, whatever the consequences.

Neither will governments be able to do much about all this. They lack the technical expertise, and perhaps the will to do anything, and legislative processes are notoriously slow.

But on the other hand the speed of adoption of these systems is lightning fast. Unimpeded by statute, giant corporations that cross borders are effectively acting at will.


Tristan Harris and Aza Raskin believe there should be a public debate about where we are going with all this. They point to a lack of co-ordination or agreement between the tech leaders implementing this AI. They note there is no supra-national institution in charge of it all.

Tristan Harris and Aza Raskin compare the development of AGI to the development of nuclear weapons – another astonishing advance which changed everything. But man learned to limit the danger of nuclear weapons, building international treaties and agreements which worked, controlling their use, testing and proliferation. In the opinion of Harris and Raskin this offers a model of how humanity might deal with AI.

But there are significant differences between the advent of nuclear weapons and AI. For a start, nuclear weapons were so terrifying it was impossible to ignore them. They were tangible physical things which produced obvious devastation; but AI is far more insidious.

With AI there is no blinding flash followed by a mushroom cloud of destruction; there is simply a new 'friend' on your Snapchat account, an artificial one you can talk to all night. With AI many of the changes are beneficial to humanity and it is hard to see it as the catastrophe to human life that nuclear weapons could be.

But there is something else that Harris and Raskin have both forgotten. They say that we should not on-board humanity onto generative AI until there has been democratic dialogue about its effects – that we should talk about all this.

The only problem is that humanity has already been on-boarded, and it's too late to stop it. The adoption of this ground-breaking technology has been lightning quick, despite the fact we do not fully understand it. Mankind has embarked on the next chapter, and it is too late to do anything about it.

There will be no serious equivalent to the Strategic Arms Limitation Treaty. There will be no International Atomic Energy Agency for AI. We are in the hands of a tiny number of tech companies, who will effectively decide our futures.

Fortunately some of those who work in these areas are aware of the risks. Sam Altman, co-founder and CEO of OpenAI, has said that he is 'a little bit scared' by his own products, including ChatGPT-4. (See article below). He held back the introduction of ChatGPT-4 by several months, in order to try to make sure that the most obvious ways it could be used for harm were blocked.

It is fortunate indeed that at least one of those at the forefront of this new technology has some reasonable doubt about what he is doing. We must just hope that the tiny number of decision makers, research scientists and CEOs at the forefront of this make the right decisions for humanity. We must hope that they have the sense to move quickly when they see wrongdoing, and that they try to shape this technology for the benefit of all; and let us hope they succeed – for a great many people are relying on them.



.
Rise of the
AI model...

Picture of two artificial photographic models created by digital model agency lalaland
Fooled you, didn't they? These two models were created electronically by digital model agency lalaland.ai

Putting a clothing catalogue together can take an awful lot of work. There are photographers, studios, models and of course clothes to find. But now it is possible to showcase clothing collections virtually, using AI generated models – at a fraction of the price.

Levi's is one of the companies which have experimented with this technique. They used a digital model agency called 'lalaland', which uses computer programs to make life-like models for clients' collections.

It's not hard to see the advantages. You can dispense with a multitude of costs – of the photographic studios and the models, and have just as much control over the end result. Indeed you can alter things in ways which a conventional photographer would not be able to do: change the ethnicity of a model at the click of a mouse, or increase or decrease her height.

It's a no-brainer, and just one example of how AI will cut costs, disrupt hitherto successful businesses, and possibly reduce human employment.

According to the Daily Mail*, one model who has worked for Special K and Wonderbra and who has a 'plus sized' figure, expects to see a 60 per cent drop in castings over the next year or so.

There have been concerns that AI generated models might be too perfect, or perhaps a little too white, reflecting the tastes of those who decide what the AI models should actually look like.

But naturally a technology which is smart enough to reproduce the human form so faithfully is also smart enough to create imperfections.

Indeed, from a technological point of view, it would be the simplest thing in the world for AI to 'de-perfect' its models, using AI generated blemishes, moles or slight defects, perhaps based on the slight irregularities of real human models.
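
To make the point concrete, here is a toy sketch of the idea using the Pillow imaging library. Everything in it – the blemish colour, size and placement – is an illustrative assumption, not anyone's production technique.

# A toy illustration of 'de-perfecting' a generated portrait: overlay a
# few faint, randomly placed marks on an image. Requires Pillow.
import random
from PIL import Image, ImageDraw, ImageFilter

def de_perfect(portrait, blemishes=5):
    # Draw small, semi-transparent marks on a separate layer...
    overlay = Image.new("RGBA", portrait.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    w, h = portrait.size
    for _ in range(blemishes):
        x, y = random.randint(0, w - 1), random.randint(0, h - 1)
        r = random.randint(2, max(3, w // 100))
        draw.ellipse((x - r, y - r, x + r, y + r), fill=(150, 90, 70, 60))
    # ...soften their edges, then composite onto the original.
    overlay = overlay.filter(ImageFilter.GaussianBlur(1.5))
    return Image.alpha_composite(portrait.convert("RGBA"), overlay)

# Demo on a plain placeholder standing in for a generated model shot.
flawless = Image.new("RGBA", (400, 500), (230, 200, 180, 255))
de_perfect(flawless).save("imperfect_model.png")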


On the other hand, not all humans will disappear from the world of modelling. It's quite likely that in the world of haute couture, real models – and by that we really mean supermodels – will continue to be in demand. When it comes to the catwalk, the top fashion houses will want and need real humans to model their creations, for it is living supermodels who create the value their high-end work requires.

Humans then still have a future, in this AI generated world. The real human editor of Vogue will still have pride of place by the catwalk, as a real supermodel sashays down the runway, and at this level the fact humans are still designing fashion, modelling it, photographing it and commenting on it will all add value to the process. The fact that humans are modelling these creations will ensure exclusivity, value and price.

Sadly for those further down the chain the outlook is less rosy. Jobbing photographers, small studios and models for underwear, cheap and even mid-range clothing will all find work a lot scarcer.

But this is simply a process that will be repeated elsewhere: that AI will replace jobs without any obvious alternative employment being generated. Humanity will have to reach a new compromise with the technology around it, a relationship which will cut costs and increase returns on capital for some, but leave others wondering what has happened to their formerly stable employment.


* Daily Mail: 'How can any real woman be expected to compete with digital perfection?' 18 April 2023

.
OpenAI CEO:
I'm a
little bit
scared...
Pic of Sam Altman in conversation with Rebecca Jarvis of ABC News
CEO of OpenAI Sam Altman in conversation with Rebecca Jarvis of ABC News. Source: YouTube/ABC News  Click here to watch the full interview

Sam Altman, CEO of OpenAI, has admitted he is sometimes scared by his own products. In particular he admits that his latest innovation, ChatGPT-4, has the potential to do great harm – although it also has the potential to become a game-changer for the good of humanity.

In a fascinating 20-minute interview with ABC's Rebecca Jarvis, the 37-year-old tech entrepreneur, mould-breaker and re-shaper of our society explained some of his hopes (and concerns) about how ChatGPT-4 will change our futures.

ChatGPT-4 and its predecessors are not search engines, he explains. They are reasoning systems which can actually figure out the answers to problems, write computer code to do specific tasks, read and understand large amounts of text and data, and then summarise their contents clearly.

ChatGPT-4, ChatGPT-3 and their other recent iterations are utterly unlike traditional search engines. ChatGPT can carry out many tasks previously only done by humans, like replying to correspondence, or thinking of new ways to do things. ChatGPT-4 can pass the American bar exam for lawyers, with a higher score than most humans.

ChatGPT-4 can become a personalised educator for all school children, not just giving them the answers, but leading each student to find the answer for himself, using the Socratic method.

ChatGPT-4 will give everyone the possibility of a swift and timely diagnosis, even if they cannot afford a doctor. It will become a co-pilot in every profession, amplifying productivity at every turn. It could unleash new levels of creativity across multiple sectors, identifying and solving new problems.


But with all this creativity comes risks, as well as benefits. Of course there is the obvious risk, that bad actors will attempt to use ChatGPT to build bombs, start disinformation campaigns or disrupt social media, but clearly Sam Altman has anticipated this.

OpenAI spent a lot of time working out where the limits should be, in terms of what ChatGPT should and shouldn't be allowed to do. 'ChatGPT-4 was actually finished 7 months ago,' he says. 'But we have spent a lot of time doing audits of the system.'

He is pleased governments are starting to take an interest in this matter, and he says the US Government is getting interested in this 'more and more'.

He hopes governments and trusted international institutions will come together to write a governing document for generative AI.


Pic of Sam Altman in conversation with Rebecca Jarvis of ABC News
Sam Altman is worried about the sheer pace of progress of AI, and whether society can keep up. Source: YouTube/ABC News

But still Sam Altman has his concerns. Even if other creators of large language models share his concern about the downsides, and put on moderating controls, there are wider issues that bother him.

One is the sheer pace at which development is occurring. ChatGPT in its current form has the potential to rapidly re-mould society, from education to medicine to business to law to scientific research. In all these areas there may well be massive changes – with many white collar jobs either changing or just disappearing altogether.

This might create technological unemployment. Although Altman is not worried about this, because he believes that humans are infinitely creative and will find new occupations and new things to do, it is the pace of change that concerns him. 'I would push a button to slow it down,' he says, if such a button existed.

But of course such a button does not exist – or not yet anyway. If Sam Altman deliberately slowed down the pace of product development at OpenAI, he must know as well as anyone that he would simply be overtaken by other more aggressive players.

But this then creates a paradox. Sam Altman is aware of the transformative power of his own technology, and the technological unemployment it may cause – yet in practice he is severely constrained in the amount of control he has over the pace of these changes. Perhaps this is what he means when he says he is sometimes frightened by his own inventions.

Part of Sam Altman's answer to this is his strategy of introducing ChatGPT slowly. 'If we simply held on to all this, then introduced ChatGPT-7 into the world, that would be a far greater risk,' he says. Instead his approach is to introduce ChatGPT gradually, and find any mistakes while the stakes are lower.

By introducing relatively limited versions of ChatGPT, Altman thinks he can rectify mistakes and make adjustments while there is still time. This will also give the public more time to get used to this incredibly powerful resource, while feeding improvements back into the system.


Sam Altman is acutely aware not all AI developers will share his concerns about the potential downsides of generative AI. He believes it is quite possible other developers will have fewer restraints on what their generative AI can do. He certainly admits it could be a powerful tool for harm in the wrong hands. 'We worry a lot about authoritarian governments,' he says.

But despite everything Sam Altman remains an optimist. He has glimpsed those sunlit uplands, and has his eyes on the great prize. 'This will be the greatest technology that man has ever developed,' he says. While it might take away some jobs it will create better ones. 'What if it could cure all known diseases, and educate all children?' he asks. 'There will be all sorts of new and wonderful things to do that can't even be thought of now.'

But despite these astonishing possibilities, Sam Altman remains reassuringly level-headed. As he sits in his jeans and his modest jacket, no tie in sight, the casual observer cannot but be reassured that whatever is coming our way, it will be overseen by someone with just the right blend of responsibility and vision, and just the right amount of reasonable doubt. If our future has to lie in the hands of any one man, on the basis of this interview Sam Altman has shown he is the man for the job.



.
The Metaverse
Is Going To Be
Big... Eventually
A virtual world
There will be some fantastical places to explore in the metaverse. Source: leewayhertz.com/

'The Metaverse is going to be huge,' burbled everyone at this year's WEF at Davos. It was the talk of the town, the go-to subject which everyone had an opinion on. But what exactly is the metaverse, and when can we actually see it in operation, in all the glory it promises to be?

The unfortunate fact is that the metaverse isn't really here at the moment. We can see glimpses of it, this immersive virtual world, probably accessed through virtual reality (VR) headsets. It's been talked of as a 3-D version of the current internet, but with far more interactivity.

At the moment the internet is largely static. You Google what you want, press a button – and hey presto! You get a page of possible websites you can visit.

But just imagine if there was a vast world of universes out there, worlds you could actually walk about in. You could walk about in them as a close likeness of yourself, your face scanned by your smartphone or your headset, or as an avatar, an idealised or abstract version of you, perhaps in gothic, or streamlined or retro form, so that your real identity was concealed.

Imagine if you could fly, fly like Superman through the skies above Caribbean islands, or sit in a bar with a celebrity. Imagine if you could go to a virtual comedy club or shop. And while you were flying or sitting laughing at that comedian, other people, represented by their own avatars, would interact with you.

And now we come to one of the fundamental features of the metaverse: it will be concurrent. In other words, many different individual actors – many avatars representing real people – will be interacting with each other in real time. It is in this sense that it will be profoundly different from today's internet, which by comparison is static.

Just think back to that comedy club. Someone else, in the form of their avatar heckles the comedian on that virtual stage. You hear the heckle in real time, and the rest of the audience either laugh with him or tell him to shut up: that is the immediacy (or concurrency) of the metaverse.
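
For those inclined to count, a toy sketch in Python makes the cost of concurrency visible: one heckle fans out to every other avatar, and each listener's reaction fans out again. The class and numbers below are purely illustrative, not any real metaverse API.

# A toy fan-out loop: one event from one avatar must be delivered to every
# other avatar in the room, and each listener's reaction must then be
# delivered to everyone else in turn.
import random

class ComedyClub:
    def __init__(self, avatars):
        self.avatars = avatars
        self.messages_sent = 0

    def broadcast(self, sender, event):
        for avatar in self.avatars:
            if avatar != sender:
                self.messages_sent += 1   # one network message per listener

    def heckle(self, sender):
        self.broadcast(sender, "heckle")          # the heckle itself
        for listener in self.avatars:
            if listener == sender:
                continue
            reaction = random.choice(["laughs", "tuts"])
            self.broadcast(listener, reaction)    # everyone sees each reaction

club = ComedyClub([f"avatar{i}" for i in range(30)])
club.heckle("avatar7")
print(club.messages_sent)  # 29 + 29*29 = 870 messages for a single heckle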


Neither will the metaverse simply consist of gatherings in clubs or pubs. Computer games will be greatly enhanced, in huge extensions of their current form, in which vast numbers of simultaneous gamers fight it out in gothic conflict played without limits; and many of these games will never re-set, for another feature of the metaverse will be that it is never-ending and simply continues forwards.

Boggled yet? There is more to come. Scientific and technical uses will be nearly as varied as gaming and entertainment. Products will be designed and tested in the metaverse long before the first physical product is ever made. Indeed, it has even been suggested that a virtual version of nearly everything will be created, long before its physical counterpart is constructed.

In education too, the metaverse will revolutionise everything. Distance will effectively collapse, so that students will learn in the virtual world, from teachers that appear to be standing before them, who are in fact hundreds of miles away.


VR user
VR headsets will make learning in the metaverse an immersive experience. Source: Professional Beauty

Training can be given in all sorts of three-dimensional simulations, which would not be possible in the real world. A nuclear power station reaching a critical state would hardly be an attractive thing; but in the metaverse the ensuing catastrophe (if the trainees didn't stop it in time) would injure no one. In medicine, surgery, nursing and a thousand-and-one other fields, the metaverse has the potential to revolutionise learning.


What then can possibly go wrong? Quite a few things actually. The first problem is that the metaverse will require enormous data flows to work properly, far more than the current static internet. Think back to that comedy club, which might only have twenty or thirty avatars in it.

When the man at the back heckles, his voice will have to be transmitted down an awful lot of fibre-optic cable to a server somewhere, and then to the thirty or so other people in the room, some of whom will laugh at it and some won't, more or less in real time. This is fantastically difficult in practice, especially as the expressions of all the other avatars watching, whom you will be able to see, will also change accordingly.

It is this concurrency which is the problem at the moment. Our current internet infrastructure simply cannot handle the data flows required to render a seamless experience, in which the many other avatars representing many other real people are all reacting in their own ways in real time.
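
A back-of-envelope calculation shows the scale of the problem. The update rate and message size below are assumptions chosen for illustration, not measured figures.

# Back-of-envelope state traffic for one small virtual room.
# Every figure here is an assumption chosen for illustration.
avatars   = 30     # people in the virtual comedy club
rate_hz   = 30     # pose/expression updates per second, per avatar
bytes_per = 200    # bytes per update: position, gaze, facial expression

# Each avatar's updates must reach every other avatar.
streams = avatars * (avatars - 1)              # 870 one-way streams
bandwidth_bps = streams * rate_hz * bytes_per * 8
print(f"{bandwidth_bps / 1e6:.1f} Mbit/s")     # ~41.8 Mbit/s for one room

And that is one modest room of thirty people; a metaverse of millions of such rooms multiplies the load accordingly.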

It is this lack of bandwidth which explains why the experiences you currently have in the fledgling metaverse look so odd, or so limited. If you put on a VR headset and go into Meta's (formerly Facebook's) VR world, and visit Workrooms, its virtual conferencing package, you will get a shock: people have no legs. Their heads resemble the heads of the participants in the real world, but they are strangely simplified, almost like cartoons. Again, the lack of bandwidth has forced Meta to simplify everything, in order to get the data load down.


Workrooms
In the simplified world of the metaverse, people have no legs. Source: Professional Beauty

Actually, Workrooms has been a bit of a success, despite its shortcomings, but it just goes to show how much will need to be done before we really do reach that immersive 3-D world, where you put on your VR headset and you really do believe you are in the Caribbean or on top of Mount Everest.


But the problems the metaverse faces run deeper than a mere shortage of bandwidth (massively limiting though that currently is).

What is really required (and what is currently utterly lacking) is any sort of agreement on how the metaverse should even be set up. At the moment there is no common protocol, allowing users to skip from one proprietary game or world to another, transporting their inventory of tokens of value along the way.

There is no agreement as to what kind of metaverse there should be, whether it should be created by many different contributors in a decentralised way with no one in overall control; or whether it should sit on an operating system created by one of the current tech giants, like Apple or Meta.

Naturally Mark Zuckerberg of Meta would love it if the metaverse (note the similarity in names) was built on his operating system, and he is only too happy to offer developers the chance to build on it; but this is anathema to those who want a decentralised model.

There is also the question of cryptocurrencies, and how they will fit into all this. Cryptocurrencies are built on blockchains, which can also be used to confirm identities – something that will be important in the metaverse. Crypto, both to service the metaverse economy and to prove identity, will be fundamental.

But cryptocurrencies are far from stable themselves, and some have seen massive losses in value recently. There is also the question of the time it takes to validate a transaction on a blockchain – Bitcoin adds a new block only every ten minutes or so – and the transaction cost; and it certainly seems at the moment as though Bitcoin, the best-known cryptocurrency, will not be suited to the large number of transactions the metaverse will require.
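
A rough calculation makes the mismatch plain. Bitcoin's block interval is public knowledge; the metaverse usage figures below are purely assumed for illustration.

# A rough throughput estimate for Bitcoin, from public protocol figures.
block_interval_s = 600    # one block roughly every ten minutes
txs_per_block    = 2500   # typical order of magnitude for a full block

print(f"~{txs_per_block / block_interval_s:.1f} tx/s network-wide")  # ~4.2

# Compare with a deliberately modest metaverse assumption:
# a million concurrent users, each making one small purchase per hour.
users = 1_000_000
print(f"~{users / 3600:.0f} tx/s needed")  # ~278 tx/s - far beyond Bitcoin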


Wherever you look then, there seem to be problems. If one tech giant does emerge as the underpinning of the metaverse, it might be relatively easy to build trust and safety into it, and perhaps to prevent it having all our in-built biases; but if it really does develop in a decentralised and fragmented way, how will it be possible to police it, in the broadest sense of that term?


Fortunately it seems that we are many years from needing to answer these questions. The sheer data requirements for a seamless experience in virtual reality mean that it may be another decade before we have the 100 or even 1,000-fold increase in capacity that mass concurrency requires.

In the meantime, the metaverse will be little more than an oddity, with its strange cartoon-like depiction of other live users, with arms but no legs.

Those who have used these VR services like Workrooms describe them as immersive and useful, despite their shortcomings. One large tech-based consultancy now on-boards all new staff in VR, and reports that it is more effective than traditional methods.

But for the foreseeable future progress in the metaverse will be incremental, like adding legs to those unfortunate avatars that exist at the moment. Software makers will figure out ways to make better use of existing bandwidth levels, but the experience will still be frustratingly slow.

We must also remember that other technologies are advancing in parallel: generative AI is already here, and quantum computers are approaching the tipping point where they can outstrip classical computers at certain tasks.

By the time the metaverse becomes fully immersive other areas of technology may have amazed and surprised us more. It might be that other advances will be far more significant – advances in chemistry, engineering and the life-sciences, which can hardly even be imagined at the moment.



.
We Can
Read
Your Mind
Still from 'Ready for Brain Transparency? #WEF23 #Davos'
Source: YouTube.   All images in this article from 'Ready for Brain Transparency? #WEF23 #Davos,' speaker: Nita Farahany.
 Click here to watch the clip

We can now read your mind – well bits of it anyway. What was once only yours, that private world of thoughts and dreams, that stream of consciousness that only you had access to, that secret garden called your mind, could at last be giving up its secrets, thanks to the latest AI technology.


Until recently brain activity could not really be measured or understood, but thanks to advances in artificial intelligence, brainwave patterns can now be partially decoded.

Using wearable (not implanted) technology, AI programs can now correlate specific brainwave patterns with specific states, like intense concentration, poor concentration, or relaxation, or even sexual interest or arousal.
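
In outline, the usual recipe is surprisingly mundane: reduce each window of EEG signal to 'band power' features (the strength of the classic theta, alpha and beta rhythms), then train an ordinary classifier to map those features to a mental state. A minimal sketch on synthetic data, with NumPy and scikit-learn, might look like this; real systems are far more elaborate.

# Band-power features from a window of EEG, then a simple classifier.
# All data here is synthetic stand-ins, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 256  # samples per second (a typical consumer-headset rate)

def band_power(window, lo, hi):
    # Power in one frequency band, from a simple FFT periodogram.
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].sum()

def features(window):
    # Relative power in the classic theta, alpha and beta bands.
    total = band_power(window, 1, 45) + 1e-9
    return [band_power(window, 4, 8) / total,    # theta
            band_power(window, 8, 13) / total,   # alpha
            band_power(window, 13, 30) / total]  # beta

# 'Focused' windows carry more fast beta activity, 'relaxed' more alpha.
rng = np.random.default_rng(0)
t = np.arange(FS * 2) / FS
def fake_eeg(freq):
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.5, t.size)

X = ([features(fake_eeg(20)) for _ in range(50)] +   # beta-heavy
     [features(fake_eeg(10)) for _ in range(50)])    # alpha-heavy
y = ["focused"] * 50 + ["relaxed"] * 50

clf = LogisticRegression().fit(X, y)
print(clf.predict([features(fake_eeg(20))]))  # -> ['focused']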

All this information is of course very useful to employers, who would love to know whether their employees are actually concentrating on key tasks, rather than day-dreaming about that attractive colleague in the workstation opposite. Access to our innermost thoughts and states is now rapidly becoming possible, raising immense issues about how much information inside our heads employers ought to be looking at, and where that information eventually goes.


Pretty soon there will need to be an intense public debate as to what the ground-rules are, when employers are able to harvest information about our inner states. Already it has been estimated that up to 5,000 companies world-wide are collecting neural data, to ensure their employees' effective performance.

Obviously, if you are driving a high-speed train in China, there might be a very good argument that the train operator is entitled to know exactly what is going on in the heads of its drivers – on whom the lives of many passengers may depend.

Neither is workplace monitoring anything new. In many workplace situations, both white-collar and blue-collar, data has often been collated from hand-held devices, or computer browsers, about the effectiveness of particular employees.

But the emergence of neural technology has raised this process to a whole new level. It is now possible to look inside the head of a particular employee and evaluate their performance based on their neural activity.

How focused were they on particular tasks? How often did they need to zone out of the present, to day-dream for a few minutes, before they could re-focus? At the moment AI neural tracking programs cannot read individual thoughts, or see the exact neural equivalent of a spoken sentence, but they can still yield powerful metrics about the mental performance of colleagues. In the future, employees might be evaluated, promoted or even dismissed on the basis of what is going on inside their heads.


Still from presentation 'Ready for Brain Transparency? #WEF23 #Davos'
Source: YouTube.   From 'Ready for Brain Transparency? #WEF23 #Davos'.   Click here to watch the clip

All this is of course extraordinary. It turns on its head the most commonly accepted notion of all: that our thoughts are our own, and that however badly or well we might feel about a situation, our attitude towards it – our private and innermost feelings about it – is known only to ourselves.

Not any more. Already, with the technology we have, it is possible to question criminals or terrorist suspects in ways which they cannot resist, and divine their true opinions and memories, by showing them photos of accomplices or items of evidence and monitoring their neural reactions, which cannot be faked.

The academic Nita Farahany argues that this new technology offers us great opportunities. It means employers can re-design their workplace environments to make them more employee-friendly, based on feedback from captured neural activity.

In other situations, she argues, the speed of production lines might be regulated by stress levels monitored through wearable devices, showing when employees' neural activity indicated they were having difficulty coping with the workload.

Neither does Farahany think employers should collect all the data they can find, just because they have access to it. She suggests that companies should clearly state what data they are tracking and why, in a kind of agreed code of practice.

Above all Farahany believes all people have a right to cognitive liberty. They have a right to mental privacy, and that those with the power to delve into our minds should only do so for defined purposes. She does not believe however there is much point in legislators trying to regulate this, as technology is simply moving too fast for law-makers to keep up.

Instead Nita Farahany is an optimist. She hopes that good intentions and best practice will prevail – as indeed they may in some circumstances.


Nita Farahany expounds her views in 'Ready for Brain Transparency? #WEF23 #Davos'
Source: YouTube.   Nita Farahany expounds her view that we should all be entitled to cognitive liberty. This article is based on her presentation 'Ready for Brain Transparency? #WEF23 #Davos,'   Click here to watch the clip

But you don't need to be a genius to see that there is a downside to all this. In the West responsible companies may indeed adapt best practice. But what about in the rest of the world? Or bad actors more generally?

Here the scenario is truly dystopian. Implanted devices offer far more scope for the analysis of brain waves using artificial intelligence – and we are only in the infancy of this technology. Within a few years interpretive programs employing AI at scale may reveal far more about what is going on in our heads. Even a simple tattoo-style device might make a significant difference to how much information can be gained.

And who will have this information? Will a Far-Eastern communist dictator have the slightest compunction about installing all this in his people?


There are other issues about this technology. A company will have a lot of information about its employees. What will happen to that data?

Also, this data will need to be transmitted via wearable devices, perhaps by Bluetooth or some other technology. But this raises the possibility of data interception – by whom, and for what purposes?

It has already been shown that neural activity can be monitored to hack passwords and addresses from those who are wearing this technology. Although our brainwaves are apparently unique to us, and in theory could render passwords redundant, could not people's unique neural data itself be hacked? It is quite possible the game of cat-and-mouse between web security and hackers might be repeated in people's heads.


There is another long-term possibility that Farahany has not commented on. That is, that if we ever get to the point where brainwave patterns can be isolated for particular spoken sentences or thoughts, we are well on the way to building an artificial mind.

If we know what neural patterns correspond to what thoughts, what is to stop us going the extra mile, and reproducing these patterns artificially? They could either be placed back into a human mind, or simply combined on a computer, in a reconstruction of what an organic brain does. Scary? Possibly. Impossible? It's difficult to say. After all, who would have thought ten years ago that all this would be possible today?



.
ChatGPT
and
Generative AI
Logo of OpenAI
Source: TNW

ChatGPT marks a new milestone in how computers will affect our lives.

Take Google for example. Hitherto if you wanted to know something you could look it up – you could Google it. Thanks to Google's incredible software, this stand-out search engine can instantly find you a series of websites containing the information you need.

These websites will be ranked in order of relevance, usually pretty well, thanks to Google's systems – far better than those of its nearest rival, Bing.

But now you have to start looking through those websites, which Google has found for you. You must laboriously click on each one in turn, looking to see if it did indeed have the information you wanted.


But what if you didn't have to do all that laborious clicking? What if there was a piece of software which could do it all for you?

Let's say you have been asked to write an essay on the Roman Empire's legacy in Britain. What if there was a piece of software which didn't just find you a list of websites that might contain relevant information, but instead could trawl the net for that information, read it all, select the most important parts, and then write the essay for you, at whatever level, depth and complexity you desired? What then?

Mankind would be freed from many tedious tasks. (And students would always get A-grades in their homework assignments.) Many things previously only done by humans could be handed over to computers, and we could all get on with other things.

Until recently this was the stuff of science fiction, but such a program now exists. It is called ChatGPT, and promises to revolutionise the way we work, the way scientific and technical research is carried out, and indeed nearly all higher level human activity.
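
The underlying models can also be reached programmatically, not just through the chat window. Here is a minimal sketch using OpenAI's Python client and one of the GPT-3.5 family of models, close relatives of the one behind ChatGPT; model names and client details have changed over time, so treat this as illustrative only.

# Asking a GPT-3.5-era model to draft an essay via OpenAI's API,
# using the Python client as it stood around the time of writing.
import openai

openai.api_key = "sk-..."  # your own API key goes here

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3.5-era model
    prompt="Write a 500-word essay on the Roman Empire's legacy in "
           "Britain, with a clear argument and a conclusion.",
    max_tokens=800,
    temperature=0.7,
)
print(response.choices[0].text.strip())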

Launched only in November, ChatGPT-3 reached over a million users in just five days. It is owned by OpenAI, in which Microsoft is heavily invested. Microsoft will not, however, have a controlling stake in the business; the remaining stock is owned by other investors. (Elon Musk was an early backer of OpenAI, though he stepped back in 2018.)

The potential of this business is truly astonishing. It is estimated that it will have a trailing revenue of $1 billion within seventeen months, and that in ten years' time, at current projections, OpenAI could be a $1.2 trillion company, with Microsoft holding a substantial stake.


Why all the fuss? Just think about what this software can do. It can write any essay on any subject, at any level.

A physics professor decided to test ChatGPT with some physics questions at varying levels. He started with a GCSE physics question (in the UK, GCSEs are taken at 16).

The program got a decimal point in the wrong place, but all of its workings were correct. It was then asked a high school question about the swing of a pendulum. Again it made a mistake with the names of some of the terms - but that didn't stop it getting the answer right.

Finally it was asked a second year university question, and again it got one of the fundamental presuppositions of the question wrong – but it still got ninety per cent of the calculations right, producing impeccable code (this was part of the question) in the process.

Clearly a work in progress, then. But this is only ChatGPT-3, released in November of 2022. ChatGPT-4, which is trained on at least ten times more data, will be released later this year (2023). What then of those mistakes picked up by that physics professor, when he set those questions?


Make no mistake, this is only the beginning of a very steep curve. Further versions of ChatGPT will no doubt make fewer mistakes, as the software is fine-tuned (in part using feedback from its own users).

Neither is ChatGPT the only generative AI out there: Baidu, that behemoth of the Far East, is already believed to be working on something similar, and will release its version shortly.


But what of Google, now all these incredible changes are occurring? It is believed that Microsoft intends to integrate ChatGPT into its search engine Bing, which has always lagged heavily behind Google in the past.

But if Microsoft really does integrate ChatGPT into Bing, Bing will no longer simply serve up a list of website returns that might be relevant, as Google currently does. It will complete whatever task you set it.

It will do your homework for you if you are a high school student. It will write you a novel in any style you like. It could even write a Hollywood film script if you wanted.

It can show you how to code in HTML. It can build and design a course of lessons for you in any subject. It could diagnose patients, solve problems of great difficulty, turbo-charge the productivity of web-designers, software engineers and copywriters.

A news service decided to let ChatGPT write news stories, but that didn't go so well. At the moment ChatGPT still makes mistakes and produces untruths, but it is learning all the time.

In the future journalists and copywriters might come under pressure from future versions of generative AI, as might many others. How long will it be before generative AI has a better diagnostic record than doctors? The list of possibly affected professions is endless.


But there is more. We will not even consider the possibilities of generative AI for crime, for doing harm, although clearly there are some out there who will see this as an opportunity to do great damage to the human race.

But there is a much deeper question, as generative AI enhances itself and embeds itself ever further into our society and our economy. For in a few years' time it might be the case that we will not know if something was written by a human or a machine. Was that opinion piece in the Daily Mail or the New York Times written by a human or by a generative artificial intelligence? Soon it might be impossible to know.

What weight can we attach to any so-called fact, if it is produced by a machine, if it was produced by artificial intelligence? Our whole system up until now has been based on human beings. Fragile, prone to failure and making mistakes – but sharing an inter-subjective body of truth which was more or less reliable – this has been our way.

But if we allow artificial intelligence to control the store of human knowledge – what then?

It is in this sense that the owners of the winning version of generative AI really matter. There will of course be a Darwinian selection process in generative AI, as there has been in other decisive moments in the growth of the internet.

But something is going to emerge from all this, owned by somebody. At the moment it looks as though 49 per cent of it might be owned by Microsoft, if OpenAI emerges victorious.

But whoever the eventual winner is, and whoever is in charge of it will have immense power. Moreover they will have immense power in a new and unprecedented way: for whoever owns all this will own the truth.

They will be the gatekeepers of all we know, with all that that entails. We must hope that they use this position of immense influence wisely, and for the good of mankind, for any other possibility is too awful to contemplate.


This article was written by Nigel Fonce, a former journalist with an interest in technology, robotics and artificial intelligence. He deals with some of the issues raised in this article in fictional form, in his book 'Some Time In The Future'.


.