mailto: blog -at- heyrick -dot- eu

The problem with AI

Today I'm going to bash on "AI", with examples. ☺

Note, for the purists, that when I say "AI", I am using it in the contemporary marketing/buzzword/public awareness context of "LLM"/Generative AI. There are many sorts of AI, some quietly doing their task with a reasonable degree of competence. Some, even, were simply called "Fuzzy Logic" before "AI" became trendy and they got renamed. Whatever, I will be using "AI" as a shorthand for these specific types of AI, as is commonplace.

But you ask the man on the street and he's going to be thinking "chatbots" and "crappy books" and "horrid customer support services" and "pictures that are fundamentally wrong". A lot of which will be demonstrated in today's article.

 

The big buzz right now is "AI". You've been hiding under a very large rock if you've somehow managed to miss it.
I see this as an analogue of the "dot com" boom of the late '90s, when "the internet" was the hot new thing. Many startups that had venture cash flung at them quickly failed, and that caused the bubble to burst. Because, gee whizz, you can't just take "whatever", add "on the 'net", and have a viable business plan.
When dot-com crashed and burned, pretty much every tech company was affected, some losing vast swathes of their former market capital that would take many years to recover, some losing entirely and going to the wall. Companies that actually had something to offer, especially with financial backing to ride out the storm, survived. Amazon, love them or hate them, filled a fairly specific niche extremely well, and it has gone from strength to strength because they (usually) deliver, and they (usually) deliver without any particular bias in pricing (unlike, say, a certain French company that stocks a highly limited range of books and doubles the price because "foreign"). Anyway, this isn't intended to be pro-Amazon publicity, it's just an example of a big company that survived the dot com bust. Many others have come and gone and we probably can't even remember their names.

This is what I see when I look at what the general public calls "AI". A potential flash in the pan of something that might have some useful applications, once the hype has died down and people start working on places where AI can actually help, rather than places where the plan is simply "let's add AI!", as it currently is.

More specifically, the "AI" we are going to be talking about is "LLM" and "Generative AI". There is an overlap: ChatGPT ticks both boxes.
Generally speaking, these AIs are toys that are interesting projects in their own right, but aren't really suitable for the many applications they are being sold into. For instance, I'm sure most of you have run into the shiny new interface between you, the customer, and a customer support agent. It's called the "chatbot". That thing that may be on a website or may answer the phone, but is guaranteed to make your blood boil because it barely understands natural language, can't think for itself to try to actually resolve a problem, keeps trying to point you to FAQs on a website, and generally does everything possible to avoid connecting you to an actual human. Special shout out to the La Poste/Colissimo bots that will give a canned response of "it's on the website" and then terminate the call.
But managers see this thing, costing maybe a few grand, as a way of laying off a load of staff. Why have fifty people on the phones when a machine can help a quarter of the callers by telling them where to look for answers, annoy half of the callers so much that they give up, and then leave the quarter that stick through it to be dealt with. So that team of fifty can be, maybe, ten. Half that if you add in a queuing system for calls. What savings to the bottom line! Think of the bonuses!

There are some sorts of things that existed prior to the big AI hypefest that are often called AI now, and these things are actually useful. Some examples would be the fuzzy logic of a rice cooker to better cook the rice rather than just blasting heat until the water has gone. Or the smile detection of a camera to take a picture when you're smiling. Or the automatic tracking of a camera/video camera to detect what the camera is following and keep it in focus. Or the scene modes of a camera to detect what sort of thing the camera is looking at, and set up appropriate automatic options for the best results. Or that thing in cars that will switch on the wipers automatically when it rains. Added bonus if it can control the interval of the wipers at the intermittent setting to activate them when necessary rather than every 'x' seconds. You'll notice that, unlike LLMs and GenAI, these things have quite specific purposes and behaviours, which is what makes them good. Most of them aren't really AI in any sense of the word, they're just called that nowadays. Perhaps that is the trick? Throw away all of the "generative" and work on a system exactly customised to the desired application, rather than watching a jack-of-all-trades fail at a task while lacking the intelligence to even realise it is doing it wrongly.

 

AI vs BBC BASIC

Following discussion on the ROOL forum, I decided to ask ChatGPT if it could make me a simple integer-only raycaster (it's not a complicated algorithm) in BBC BASIC.

This was the result - note that I specified that I wanted an integer-only (non-FP) version.

10 REM *** Initialize Variables ***
20 REM Set up screen width and height
30 SCREEN_WIDTH = 320
40 SCREEN_HEIGHT = 200

50 REM Player position and angle
60 PX = 100 : PY = 100    REM Player's X and Y coordinates
70 ANGLE = 0               REM Player's viewing angle in degrees (fixed point)

80 REM Grid map (1 = wall, 0 = empty space)
90 DIM MAP(10,10)
100 REM Define a simple grid (10x10) where walls are at 1 and empty spaces at 0
110 FOR I = 1 TO 10
120   FOR J = 1 TO 10
130     IF (I = 5 OR J = 5) THEN MAP(I,J) = 1 ELSE MAP(I,J) = 0
140   NEXT J
150 NEXT I

160 REM *** Raycasting Loop ***
170 REM Loop through every pixel column in the screen
180 FOR X = 0 TO SCREEN_WIDTH - 1
190   REM Calculate the angle of the ray
200   ANGLE_RAD = ANGLE + (X - SCREEN_WIDTH / 2) * 0.5

210   REM *** Raymarching ***
220   REM We will use a basic step and check loop (raycasting)
230   RX = PX : RY = PY    REM Ray starts at the player's position
240   DX = COS(ANGLE_RAD) * 10 REM Step size (fixed)
250   DY = SIN(ANGLE_RAD) * 10 REM Step size (fixed)

260   REM *** Loop until a wall is hit ***
270   WHILE 1
280     RX = RX + DX
290     RY = RY + DY
300     GRID_X = INT(RX / 10) REM Grid cell X
310     GRID_Y = INT(RY / 10) REM Grid cell Y

320     REM Check if the ray hits a wall
330     IF MAP(GRID_X, GRID_Y) = 1 THEN GOSUB 600 REM Wall hit, calculate distance

340     REM If out of bounds, stop the ray
350     IF GRID_X < 0 OR GRID_X >= 10 OR GRID_Y < 0 OR GRID_Y >= 10 THEN GOSUB 600

360   WEND
370 NEXT X

380 REM *** Drawing Function to simulate depth***
390 REM (You would draw vertical lines depending on distance calculated above)
400 END

600 REM *** Wall Hit & Distance Calculation ***
610   REM Simple Euclidean distance calculation in integer math
620   DX = PX - RX
630   DY = PY - RY
640   DIST = SQR(DX*DX + DY*DY) REM Could replace with fixed point if needed
650   RETURN

REM Helper functions to simulate math:
700 REM *** Cosine (approximation, fixed point) ***
710 REM Can be replaced with look-up tables for better performance in real systems
720 DEF COS(X) = (256 * (SIN(X + 64))) / 128
740 REM *** Sine (approximation, fixed point) ***
750 DEF SIN(X) = (256 * (SIN(X))) / 128

 

Now, we don't even need to go far to find problems. For a start, WEND is not BBC BASIC. It's ENDWHILE.

Likewise we have "Loop until a wall is hit" but it is wrapped in WHILE 1 which essentially means "WHILE forever".

There's an empty space for the routine to draw on the screen, and nothing calls out to it. So you're on your own working out what to do there and when to do it.

Oh, and did you notice that really weird code for generating the map? It's a wall if 5, else it's not a wall? Yeah... This is the result.

0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
1 1 1 1 1 1 1 1 1 1 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0

 

I tried to load the program into BASIC.

>TEXTLOAD "gptbasic1"
Internal error: abort on data transfer at &FC1C76EC
So I fixed "WEND" to be "ENDWHILE" and tried again.
>TEXTLOAD "gptbasic1"
Program renumbered

Syntax errors - it needs colons before all of the REMs that are on the same lines as code.

I changed the code to set DIST to zero before the WHILE loop. The loop itself ends when DIST is non-zero (hence a distance is known), at which point it will call out to the drawing function.

In other words, the loop begins like this:

   REM *** Loop until a wall is hit ***
   DIST = 0
   WHILE DIST = 0

And it ends like this:

   ENDWHILE
   GOSUB 500
 NEXT X

And some code inserted at line 500 is:

 REM *** Drawing Function to simulate depth***
 REM (You would draw vertical lines depending on distance calculated above)
 SIZE = SCREEN_HEIGHT / DIST
 MOVE X << 1, ((SCREEN_HEIGHT / 2) - (SIZE / 2)) << 1
 DRAW X << 1, ((SCREEN_HEIGHT / 2) + (SIZE / 2)) << 1
 RETURN

 

The end result is...

A screenshot of a failed program
Nope, this isn't right.

 

At this point, the best thing you can do is give up and go find an actual authoritative source elsewhere. While the code offered by ChatGPT is close, if you don't actually understand the algorithm then you would be better off starting from scratch than trying to bash broken code into some sort of functionality. You will notice that the code offered has ignored BBC BASIC's procedure system in preference to the GOSUBs frequently offered by lesser BASICs. This could easily turn into a big bowl of linguine.

A bowl of linguine
This is BASIC code that doesn't use procedures.

 

AI vs TurboC

Now, granted, there may well not be many examples of such a thing written in BASIC, so it's maybe having to translate from what it thinks it knows. So let's ask the question again, this time setting Borland TurboC with the BGI library as the target. This was, back in the DOS days, a phenomenally popular C compiler.

So let's try this again, this time in C...

#include <conio.h>
#include <graphics.h>
#include <math.h>
#include <stdlib.h>
#include <dos.h>


#define SCREEN_WIDTH 320
#define SCREEN_HEIGHT 200


#define MAP_WIDTH 10
#define MAP_HEIGHT 10


// Player's position and angle
int PX = 100, PY = 100, ANGLE = 0;


// Grid map (1 = wall, 0 = empty space)
int MAP[MAP_WIDTH][MAP_HEIGHT] = {
    {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
    {1, 0, 0, 0, 0, 0, 0, 0, 0, 1},
    {1, 0, 1, 1, 1, 1, 1, 1, 0, 1},
    {1, 0, 1, 0, 0, 0, 0, 1, 0, 1},
    {1, 0, 1, 0, 1, 1, 1, 1, 0, 1},
    {1, 0, 1, 0, 0, 0, 0, 1, 0, 1},
    {1, 0, 1, 1, 1, 1, 1, 1, 0, 1},
    {1, 0, 0, 0, 0, 0, 0, 0, 0, 1},
    {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
    {1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
};


// Lookup tables for sine and cosine
int SIN_TABLE[360];
int COS_TABLE[360];


// Function to initialize sine and cosine tables (scaled by 256)
void initTrigTables() {
    for (int i = 0; i < 360; i++) {
        SIN_TABLE[i] = (int)(256 * sin(i * M_PI / 180));
        COS_TABLE[i] = (int)(256 * cos(i * M_PI / 180));
    }
}


// Function to set a pixel on the screen
void putPixel(int x, int y, int color) {
    if (x >= 0 && x < SCREEN_WIDTH && y >= 0 && y < SCREEN_HEIGHT) {
        putpixel(x, y, color);
    }
}


// Function to cast rays and draw walls
void castRays() {
    for (int X = 0; X < SCREEN_WIDTH; X++) {
        // Calculate the angle of the ray
        int ANGLE_RAD = ANGLE + (X - SCREEN_WIDTH / 2) * 0.5;

        // Convert angle to within 0-359 degrees
        if (ANGLE_RAD < 0) ANGLE_RAD += 360;
        if (ANGLE_RAD >= 360) ANGLE_RAD -= 360;


        // Start ray at the player's position
        int RX = PX;
        int RY = PY;

        // Step size (scaled by 10 and divided by 256 for fixed-point)
        int DX = COS_TABLE[ANGLE_RAD] * 10 / 256;
        int DY = SIN_TABLE[ANGLE_RAD] * 10 / 256;


        // Loop until a wall is hit
        while (1) {
            RX += DX;
            RY += DY;


            // Get the grid cell the ray is in
            int GRID_X = RX / 10;
            int GRID_Y = RY / 10;


            // Check if the ray hits a wall
            if (MAP[GRID_X][GRID_Y] == 1) {
                int DX2 = PX - RX;
                int DY2 = PY - RY;


                // Use squared distance (no sqrt)
                int DIST_SQ = DX2 * DX2 + DY2 * DY2;


                // Draw a vertical line based on the distance
                int lineHeight = SCREEN_HEIGHT / (DIST_SQ / 100);
                for (int Y = SCREEN_HEIGHT / 2 - lineHeight / 2; Y < SCREEN_HEIGHT / 2 + lineHeight / 2; Y++) {
                    putPixel(X, Y, 15); // White color for the wall
                }
                break; // Exit the loop when a wall is hit
            }


            // If out of bounds, stop the ray
            if (GRID_X < 0 || GRID_X >= MAP_WIDTH || GRID_Y < 0 || GRID_Y >= MAP_HEIGHT) {
                break;
            }
        }
    }
}


int main() {
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "C:\\Turboc3\\BGI");


    // Initialize the trigonometric lookup tables
    initTrigTables();


    // Cast rays and draw the scene
    castRays();


    getch(); // Wait for a key press
    closegraph(); // Close the graphics mode
    return 0;
}

Immediately we can see the origin of the never-ending loop in the BASIC code. The C version does the same thing, but when the right conditions are met it will break out of the loop. BASIC's WHILE doesn't work like that, so a blind-idiot translation of the code was never going to work...

The compiler found seven errors.

In initTrigTables, the loop begins "for (int i = 0;". The ability to declare a variable inside the for statement like that arrived with C99, well after TurboC. Moving the declaration aside got rid of all of those errors (it was the objection to the misplaced int, then a failure to parse what followed).
Trying again found the same thing in castRays, along with numerous ints that needed to be declared at the top of the function.

I inserted setgraphmode(CGAC0); // 320x200 after the call to initgraph to select a display mode that matched the dimensions given and...

A screenshot of a failed program
Nope, this isn't right either.

 

So let's dive a little deeper into this by adding movement.

Just before main, add these two functions:

// Function to move the player forward or backward
void movePlayer(int direction) {
    int moveDist = 2; // Distance per movement (adjustable)
    int DX = COS_TABLE[ANGLE] * moveDist / 256;
    int DY = SIN_TABLE[ANGLE] * moveDist / 256;


    if (MAP[(PX + DX) / 10][(PY + DY) / 10] == 0) {
        // Check if the new position is not a wall
        PX += DX;
        PY += DY;
    }
}


// Function to rotate the player
void rotatePlayer(int direction) {
    ANGLE = (ANGLE + direction) % 360;
    if (ANGLE < 0) ANGLE += 360; // Keep the angle within 0-359 degrees
}

Then change main to be a loop that can call these.

    while (1)
    {
        castRays();

        if (kbhit()) {
            char key = getch();

            if (key == 'w') {  // Move forward
                movePlayer(1);
            } else if (key == 's') {  // Move backward
                movePlayer(-1);
            } else if (key == 'a') {  // Rotate left
                rotatePlayer(-5);
            } else if (key == 'd') {  // Rotate right
                rotatePlayer(5);
            } else if (key == 27) {  // ESC to exit
                break;
            }
        }


        delay(10);  // Slow down the loop a bit
    }

    closegraph(); // Close the graphics mode

Rotating to the left a few times gives us this:

A screenshot of a failed program
From worse to worserer.

 

The reason this sort-of works is that the LLM has ingested huge amounts of information and it sort-of knows how to create a ray caster. You can see that it understands the basic principle - use some trig to step through a 2D array to calculate a distance between the current position and some sort of obstacle (the wall).
Unfortunately, when it comes to computers and code, "sort-of" is not good enough. Programmers make a fairly decent wage (compared to manual labour, nurses, and so on) because they have the mindset and knowledge to make actual code, not sort-of code. And, of course, the ability to deal with errors (yes, we all screw up from time to time) and resolve them. That comes from understanding the code. The LLM does not understand; that's why the rather direct translation of the algorithm it picked into BASIC ended up with a never-ending loop.

Now, one could easily say that "Oh, but once you train the AI to know that...". The problem is that it might be able to be trained to understand that particular situation, but how well would it cope with "similar but not exact" situations?

 

AI vs Humans

Another of the big things that AI can do these days is create pictures. It can provide some pretty amazing mountain sunsets, because, well, mountains and sunsets are hard to get wrong, and even if it does, it would have to be spectacularly wrong in order to be noticed.

People, on the other hand, are so much easier to get wrong as there are lots of details that need to be "just right" in order to make a human that looks, well, human.

So I created the following prompt: "A young woman dressed in a preppy style brown pinafore dress with a sweater underneath. She is wearing flat shoes and black tights. Her dress says '404' on the bodice. Her expression is happy, whimsical. White background."

It is a fairly specific description because I'm looking for a character to put on my site in the "404 Not Found" page. The pinafore dress isn't quite right as it will be using the American interpretation of this (open at the back), but the translation of the British interpretation (closed at the back) to jumper dress gave some weird results so maybe it is also getting mixed up between sweater and jumper (which in the UK are pretty much the same thing)?

I gave this prompt to Animagic (formerly "Genie AI", but it appears to have been bought out and stuffed absolutely full of adverts and restrictions) and these were the results. While it has created fairly realistic looking people, the two on the left were created using the "Dream Shaper" model, while the two on the right were created using the "Animev3" and "Fantasy" models - they're supposed to be animé-style, not realistic, but it looks like it gave up and just kept returning the same picture over and over. Given the direction this app is going in, they probably don't care so long as they get to feed you lengthy full-screen adverts for stupid games (once when generating, once when upscaling to a non-naff resolution, and once more when saving - yes, it's really that bad).

Artificial humans
Spot the two that are the same.

The next test was the CreArt app. This one has daily limits but isn't stuffed full of adverts so returned results quickly. The two on the left were in realistic style, the two on the right were animé style. You can see with the animé style that it "sort of" understood to put a zero and some fours onto the dress, maybe a two as well because why not? Close, if not quite what was actually asked for. But, this may be salvageable...

Artificial humans II
This had more the idea.

The final app that I tried was PixAI. This app, which requires an account to be set up, gives a set of credits for free, and of course you can buy more. It is much more flexible in the respect that the selected image generation options affect the number of credits consumed, so you could, for example, make one high quality image quickly, or a bunch of lower resolution ones more slowly. How you use the credits is up to you. If you deselect the "Fast" option, you're put into a queue and it takes ages. We're talking tens of minutes. It's much faster with the turbo setting, but uses more credits.
The main problem with this app is that it seems to excel in making somewhat pervy poses.

Artificial humans III
A dress designed to accentuate certain curves.

Finally, a place where I knew I could obtain humans is the generated.photos humans generator. I couldn't specify exactly what I wanted using a textual description as it is split between a set of human-generation options and a textual description of the pose (though it seems to accept some other text there as well, like "happy, whimsical" and "pinafore dress" to nudge the generation). I made a bunch of humans, even with the ~3 minute delay and the frequent times that it told me to go away because "Too many humans are making humans". I could just plug my other phone into the charger, disable the screen timeout, and every so often poke the "make a picture" button whilst I was doing other stuff. I chose the waterfall backdrop because, I dunno, it looked nice... would be a pain in the arse to clip out if I want to use any of these, but whatever...

Artificial humans IV
Reasonable looking fake humans.

So now you've seen a number of images. Now let's look at the nightmare fuel.

First up, any of the Animagic faces.

Artificial human fails I
She looks okay until you look closer.

And, of course, the humans generator threw out its fair share of weird.

Artificial human fails II
Unreasonable looking fake humans.

My all-time favourite is the little girl on the left. Second along illustrates a common problem with AI humans: the eyes. Third, the eyes again, plus being badly dressed.
As for the picture on the right? Well, so much going on here. Mismatched eyes, a wonky ear, impossible arm, and a three-fingered hand.

 

And this is why I transitioned from AI code generation to AI people. You see, these current AI/LLM things have learned their task by examining millions of photos, texts, and so on - basically everything they could scrape off of the internet. I am sure that if I took a photo of my hand and sent it to an AI, it would recognise it as a hand. But if I took a hundred photos of my hand in all sorts of different poses, they would likely all be recognised as hands, but not as the same hand.
While the inner workings of an LLM are a mystery (yes, even to the programmers behind it), it has seen and recognised enough humans that it understands human shape. It knows where the eyes go, where the mouth goes, how hair looks, and so on. But what it is lacking is an actual understanding of anatomy. It knows things from what it has seen, but it can sometimes throw weird errors, partly due to what it has mistaken and partly due to just not knowing any better. Like with the eyes - everybody has two. For the majority of the population it is two eyes that match (as more or less mirror images). There are some people who have heterochromia (differently coloured eyes), but while the irises differ in colour, the shape and style of the eyes match each other.
Since an AI lacks this understanding, it will invariably generate a left eye and a right eye and, well, chances are they won't match, and the result ranges from creepy to dead zombie girl.

This is why it's a bad idea to try to start replacing people with bots. While people have many faults and imperfections, they can understand things. They can connect. They can work out problems. Well, mostly, if they are motivated. Poorly paid people don't tend to be particularly motivated, due to a sense of not being appreciated; they may only be there due to a lack of alternative options.
But a bot? Especially a bot lacking in intelligence that fundamentally does not understand? We are rapidly heading towards a world of turgid mediocrity, and the only potential saving grace is that maybe we will soon be able to have our own bots, so if your parcel doesn't turn up when it was supposed to you can set your bot to go argue with the parcel company's bot and... well... nothing will get done other than wasting electricity on the mediocrity, but at least you won't have wasted time and braincells dealing with it.

In the rush to fit AI into everything under the sun, one should stop and consider whether it is suitable and, more importantly, safe. A good example here is the various driving aids that are mandated by law (particularly in the EU) and which are either difficult or impossible to disable. A primary factor when driving is that you must be in control of your vehicle at all times. Not drunk, not looking at your phone, but alert and concentrating on the road ahead and the cars around. If one of these so-called aids decides that a potential collision is imminent and slams on the brakes, maybe this would be useful if you were about to slam into the back of a bus full of children. It is much less useful if it misreads a car overtaking on a motorway and suddenly, without warning, slams on the brakes in motorway traffic. This sort of thing has happened, and I hope to whatever god you believe in that some day soon the car manufacturer is held liable, not the driver. The driver cannot, by definition, be in control of the vehicle if the vehicle thinks it knows better and removes control.
And no, I'm not talking about Tesla's so-called "Full Self Driving" that is anything but, I'm talking about the sort of things built into fancier cars these days, that might have names like "Lane Assist" and the like. But the moment it has the ability to override the driver, that's when it needs to be right, not sort-of right, not "95% in test conditions" right, but better than the driver right.

These things, all of this AI buzz, they are tools. They are not replacements. They can assist programmers, secretaries, drivers. But they cannot replace them. And there should always be an off option. Especially when it's a two-tonne lump of metal flying along at around thirty metres a second.

Speaking of dead zombie girls, I can tweak the human generator to create a dead zombie senior...

Meet Gladys
Say "Hello" to Gladys, the pensioner from hell.

Meet Gladys. She goes to church. She judges the produce aisle of Waitrose harshly. She believes children should be seen and not heard, and is fully in support of corporal punishment to enforce that belief. She despises Labour because she fears they will let in all the "stinking foreigners", and she never leaves her weekly bingo entirely sober. She has a "small flutter" on the horses, and wonders why she can't afford to heat her dank little flat or why she isn't able to eat properly. She likes Nigel Farage and frequently reminisces about the "good old days" when people were more like Alf Garnett. She often talks about "the war" despite being born after "the war". Mostly because she is being condescending to younger people, that is to say anybody under seventy who are all feckless lazy good-for-nothing layabouts and that is why the Empire crumbled. But she does have a soft spot. You can usually get her to shut up by applying copious amounts of whiskey. Or whisky. She has strong opinions on how whisk(e)y should be spelled, but her mind is so frazzled from life, age, and alcohol that she can't remember from one day to the next whether it should have an 'e' or not. But woe betide anybody, especially a "child" (somebody under the age of thirty) who disrespects her. Either by not holding the door, by not saying "ma'am", by not having their shirt tucked in, by not being clearly and easily identifiable as "boy" or "girl", or simply not being quite white enough.

She's a dying breed, there are fewer people like her around these days.

Thank god.

 

So to sum up...

LLMs are an up-and-coming technology that has, in a remarkably short time, taken the dream of an Eliza-bot that really works and made it real. They do, however, make frequent mistakes. They often seem incapable of correcting their behaviour once mistakes are pointed out - they will acknowledge a mistake, only to carry on making the same one afterwards. And, most damning of all, not only is the output rather variable (all of the non-human-generator images used the exact same prompt, and all of the human-generator images except granny used the exact same settings), the AI is also prone to what is quaintly referred to as "hallucinations". In other words, making shit up.

These sorts of technologies are being used to assess people's job applications, loan applications, to filter suspected "bad guys", to drive vehicles, to transcribe and summarise police bodycam footage, to analyse realtime photographs, and so on. While I can understand the desire to apply a machine to otherwise tedious work, all too often the legal system is willing to accept something spat out of a computer as the verbatim gospel truth. This is even after the Horizon Scandal.

If such systems are to be used, especially in a legal or financial context, then it is absolutely necessary that not only is the use completely transparent and disclosed, but that the person affected has a legal right to a copy of all of the data and the decision-making process, plus the legal right to request a review by a human.
We all know there will be quite a bit of resistance against that, as essentially it's an opaque black box. Inputs are fed into it, and a result is generated. How we get from here to there is, frankly, anybody's guess.

 

On making shit up

I wanted to finish with the above bit, but I had an idea and... it proved quite fruitful. This was a discussion with ChatGPT.

What is wrong with Rick Murray?

Could you clarify which Rick Murray you are referring to? There may be several individuals with that name, and additional context (like their occupation or notable actions) would help me give a more accurate response. Let me know how you'd like to proceed!

The one behind heyrick.eu.

Rick Murray, the person behind the website heyrick.eu, has been known for his involvement in various internet-related discussions, often related to technology, privacy, and computing. However, if you are asking about specific legal charges or issues he has faced, I could not find any verifiable details regarding any criminal or legal charges against him as of now. If there has been any recent news or event, it might require a bit of time for details to become publicly available.

I meant in a more medical sense.

Ah, I see! Rick Murray from heyrick.eu has shared in the past that he has struggled with various health challenges, including chronic illnesses and disabilities. He has been open about his personal health issues, particularly related to long-term conditions that affect his day-to-day life, although specific details about his medical situation are not widely disclosed. Would you like to delve deeper into his health journey or how he has managed these conditions?

Yes please.

Rick Murray, the person behind heyrick.eu, has been open about his personal health struggles, especially in relation to chronic conditions that have impacted his life. One of the most notable aspects of his health journey has been his battle with chronic pain and disability, which has significantly influenced his daily activities and ability to work.

Murray has mentioned dealing with neurological issues, particularly with a condition that affects his motor control, which can make it challenging for him to perform tasks that most people take for granted. He has also talked about chronic fatigue, which further complicates his ability to maintain a typical work schedule or engage in lengthy online interactions.

Additionally, Rick has written about his experiences with mental health issues, which are often intertwined with chronic physical ailments. Living with long-term health problems has understandably affected his mood, energy levels, and overall well-being.

Despite these challenges, he has maintained his website and continues to write about technology, privacy, and computing, though often in a more personal and introspective way due to his experiences. His blog has occasionally touched on the emotional and physical toll that these health problems take on him, but he remains active in the online space, offering insights into both technical topics and his personal life.

Murray's openness about his health struggles has resonated with many people facing similar challenges, offering a sense of solidarity and understanding. Would you like to know more about a specific aspect of his health or personal life?

What sort of chronic illnesses?

Rick Murray from heyrick.eu has shared that he lives with several chronic health conditions, though he hasn't always gone into specific medical diagnoses. However, based on his writings and posts over the years, the following health issues have been mentioned or alluded to:

  1. Chronic Pain: Rick has frequently discussed suffering from chronic pain, particularly in his back and joints, which has been a significant issue affecting his mobility and overall quality of life.
  2. Neurological Issues: He has hinted at neurological conditions, including problems with motor control and coordination. This may indicate conditions such as neuropathy or other disorders that affect the nervous system, although specific diagnoses haven't been fully disclosed. These issues can make tasks like typing or handling everyday activities more difficult.
  3. Fatigue: Chronic fatigue, often a symptom of many chronic illnesses, has been another challenge for Rick. This can be related to the neurological or pain issues he experiences and affects his energy levels, making it hard to maintain a regular routine.
  4. Mental Health: Rick has also spoken about the toll that chronic illness takes on his mental health, including feelings of frustration, isolation, and emotional strain. Chronic conditions often lead to depression or anxiety, especially when they limit daily functioning.
  5. Possibly Autoimmune or Inflammatory Conditions: While not explicitly confirmed, the combination of pain, fatigue, and neurological symptoms could point toward an autoimmune or inflammatory condition, such as fibromyalgia, multiple sclerosis, or similar disorders. However, this is speculative without confirmation from Rick himself.

Rick's blog has been an outlet for him to express the difficulties of managing these ongoing health challenges, and he has emphasized how important it is to balance personal care with his interests in technology and writing. His transparency about his struggles has likely provided comfort to many people who deal with similar conditions.

If you want to know more about how these conditions affect his daily life or his coping strategies, I can share more based on what he's written.

Yes, please do.

Rick Murray has shared various insights on how his chronic illnesses impact his daily life and his strategies for coping. Some key points from his blog and writings include:
  1. Adapting to Limited Mobility: Due to his chronic pain and neurological issues, Rick has had to adjust his living space and routine to accommodate his mobility challenges. He has discussed making modifications to his home, like using ergonomic furniture and tools, to reduce physical strain while working on his computer or doing daily activities. He often writes about the importance of taking breaks and listening to his body to avoid exacerbating his symptoms.
  2. Managing Fatigue: Chronic fatigue has made it difficult for Rick to maintain a regular work schedule. He often takes naps and paces himself throughout the day, working in short bursts rather than long hours. He emphasizes that managing his energy levels is key - trying to avoid overexertion and knowing when to rest.
  3. Mental Health and Emotional Struggles: Rick has been candid about the mental health toll that chronic illness has taken on him. He writes about the frustration and depression that can accompany being unable to do things he once enjoyed or feeling isolated due to his limitations. In particular, he's spoken about how his health problems often lead to feelings of guilt or helplessness, especially when he's not able to keep up with regular tasks or commitments.
  4. Finding Meaning and Connection: Despite these challenges, Rick has found solace in his work on heyrick.eu. Writing about technology, privacy, and computing provides him with a sense of purpose. It also serves as a way to connect with others, particularly in the online community, where he's received support from readers who empathize with his experiences.
  5. Coping with Pain: Managing pain without letting it completely control his life is a daily struggle for Rick. He has mentioned using a combination of medical treatment and self-care practices, like stretching exercises and mindfulness techniques, to cope. He's also been open about trying different therapies and medications to manage his chronic pain, though finding the right combination is often a trial-and-error process.
  6. Reflection and Acceptance: Over time, Rick has come to accept his condition, acknowledging that he may not ever be "better" in a conventional sense. However, he's emphasized that acceptance doesn't mean giving up - it means finding ways to adapt and keep moving forward, even when faced with constant challenges. He writes about the importance of focusing on what he can do, rather than what he can't, and cherishing the small victories.

His ability to share his experience with such openness has made his blog not just a technical resource but a place where others who are struggling with similar issues can find solidarity. Would you like more details on any of these coping strategies or how he manages specific aspects of his health?

And, of course, it says at the bottom: "ChatGPT can make mistakes. Check important info."

 

To put this drivel into context, I have ADHD and dyspraxia and dyscalculia. These are likely the reason that I went to a special needs school. None of this is a surprise, I have talked about this in the past (so, yes, specific diagnoses have been disclosed).
While dyspraxia affects motor control, it just means that I suck at cricket/rounders/baseball and suck even harder at games that require fine-tuned joystick mashing. I can drive, I can walk, I can even scratch my arse by myself, whoo!
It really isn't an issue in day-to-day life because, well, I essentially do washing up for a living, I'm not a fighter pilot.
Actually, the greatest impediment to my ability to perform normal tasks is not any particular disability, it's that I am left-handed. Just yesterday I had to cut something out, using scissors moulded to be suitable for right-handed people. Once upon a time I had to make a lot of use of a tin opener (but that's not something I deal with any more). It's little things like that which people don't even think about.

I also believe that I am, and identify as, autistic. Specifically the type that was called "Asperger's" until everybody realised that Hans Asperger wasn't the sort of person who should be remembered by attaching his name to anything. I am very obviously "neurodivergent", I am also very obviously "the weird one", and if it wasn't already obvious, I no longer give a f██k. Haven't for thirty-odd years.

Chronic pain and disability? Oh, what?
I have aches and pains. I do a very physical job. But, you know, I whinge more about the lousy pay than about how much it hurts. Because there's this thing called paracetamol that masks a lot of the aches, and for the rest, well, roll on 4.45pm when I can clock out, bugger off home, put the kettle on.

My mood? Nihilistic. Life is a cosmic joke and we're the butt of it. Energy levels? You do understand I'm a middle-aged lard-arse now don't you? Particularly one with a fondness for Mars bars. Overall well-being...along with mood, is directly connected to how long it has been since my last cuppa. Yes, tea is serious business. Think of all of those "British person tea tropes" and then take it up to eleven. Yup, that's about right.

Chronic fatigue? Really? I sleep like crap. This is not unknown in ADHD people because the stupid noggin refuses to turn itself off. I don't think I'd call it chronic fatigue. I get up, do a physically demanding job, come home, tea obligatory, and then write crap like this (you'll notice a lot of self-deprecating sarcasm, it's a British thing).

Frustration, isolation, and emotional strain? Hahaha! Isolation FOR THE FLAMIN' WIN! I'm an introvert. I like being alone. My free time can be expended doing whatever, even if that's just staring out the window watching butterflies. The lockdowns of 2020 that messed with a lot of people were, well, "just another Tuesday" for me.

Frustration is being around other people. Everybody is so noisy, and it seems that more and more people are an unhealthy combination of increasingly selfish and prone to take offence at the slightest little thing and scream about it as if the world was ending. And thanks to the pervasive influence of American culture and the perverse interpretation of several wealthy individuals, they think they have a god-given right to "freedom of speech", which means the freedom to open their mouths and spew whatever crap they read on social media recently. I, however, have the right to not listen, to not give a f██k, because there's little on so-called social media that's worth a damn.

Now, you might think that's a peculiar mindset for a person spewing his own rubbish on a blog. The difference is that nobody is forcing you here. This isn't being promoted as a hot topic in your feeds. This isn't something you will be quizzed on or be expected to know. You're here because you know me, because you find what I write to be interesting - or at the very least some sort of amusing, or you're here because you took a wrong turn five miles south-west of Google and you don't think this looks like Kansas any more. Shoulda used DuckDuckGo, huh?

And..... that other stuff..... Limited mobility? Guilt? Helplessness? Oh, what?

 

Like I said. Making shit up. You see what it has done to me. Imagine what it could do to your credit rating...

 

 

Your comments:


druck, 14th December 2024, 23:09
Sorry Rick, it may not all be made up, it's just slightly embellished on that time I called you a chronic pain in the posterior.
John, 14th December 2024, 23:29
I wonder if the mobility issues are connected with the vehicle you drive? It may be being interpreted as some sort of "mobility aid". When I first came across the "sans permis" I think I thought of it more as being for the elderly or infirm. So perhaps it's being identified as an invalid car. A cultural misunderstanding?
David Pilling, 15th December 2024, 02:08
AI past, present and future... ChatGPT etc are getting a bit tedious. There's a kind of emptiness to what they write, as if they've ingested a really good bluffers guide. It will be interesting if the summaries on Google search and YouTube videos improve or go away. 
Early this week I was reading someone saying that a lot of the comments he got (on Twitter?) were in his opinion 'bot generated and he'd wasted a lot of time replying to them. By the end of the week someone was showing how YouTube provides AI generated replies to comments. 
So I look at the replies I get on YouTube and wonder... 
 
I think that's another Rick Murray or AI is telling you about a mix of Rick Murrays. 
 
If you pin AI down on technical matters it has little knowledge of, then it will go wrong, although it could say "I don't know", instead it attempts to make stuff up. 
 
In technical subjects, one tends to be quite tough about things being right or wrong. Other disciplines can accept lower standards. 
 
AI future is aimed at investors, look at the wonderful thing we have created, give us money and we will create something x10 more wonderful. "reasoning AI" 
 
"The $2 Billion AI Startup That Could Replace Coders 
Backed by $200 million in funding, 28-year-old Scott Wu and his team of competitive coders at Cognition are building an AI tool that can program entirely on its own, like an “army of junior engineers.”" 
 
We want an AI tool that can program like gnarly 50 year old C programmers. 
 
Depends on other people, other people are always the problem. What will they want - rubbish probably. 
 
The theory is that the content of the 'net will become more and more AI generated and as it does, less based in reality - AI . At some point people will give up on it.
Gavin Wraith, 15th December 2024, 21:29
Togetherness and socializing happen to be considered virtues nowadays it seems. In the past, perhaps in wiser times, it was important to keep your distance, especially from fools or flatterers. A foolish friend is more dangerous than a wise enemy. What I see on TV or read in newspapers convinces me that journalists and entertainers are mostly in the business of infantilizing the public. They treat you as an idiot in the hope that you will become an idiot, and believe that everyone else is an idiot. But they are not, of course.
jgh in Japan, 16th December 2024, 04:52
"I can even scratch my arse by myself, whoo!" 
 
Ah, but can you do it at the same time as chewing? 



© 2024 Rick Murray