AI is the wrong hammer to crack a nut with
The problem with AI is that there's no 'I'. It is hype. It is snake oil. It is bullshit. But it is not intelligent.
There is, perhaps, no better example of this than to note that LLMs (one specific corner of the 'AI' bandwagon) cannot play chess. They sort-of know how, but they have a tendency to make illegal moves.
Chess is not a difficult game to learn. There are only six different sorts of pieces, with very well defined behaviours. There are a few quirky rules like castling, en passant, and pawn promotion, but again these rules are well defined.
The hard part, and what separates the men from the boys, is the strategy. You need to find a way to trap the opposing king, while your opponent will be doing the same to you. So you need to maintain the state of the game in your head to determine the moves most beneficial to you, whilst also being aware of the potential moves your opponent may make. There is no hidden information in chess, no unturned cards. Everything is right there on the board. But simply evaluating all possible potential game state positions for just a few moves ahead would easily stack up to a ridiculous number...
Look, on the very first move there are only 20 options (16 pawn positions and four knight positions). By the second move, around 400 options. After three moves, something like nine thousand options. After four moves, about two hundred thousand options. And after five moves, five million options. And after six moves, nearly a hundred and twenty million options.
Oh, and that's not six moves for you, that's six moves total. Three white, three black.
This rapid blow-up is related to the Shannon Number - Claude Shannon's estimate of around 10^120 possible games of chess - and as you can clearly see, looking ahead in a game of chess gets very complicated very quickly. That's why it takes a certain sort of person to be a Chess Master.
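Those counts aren't guesses, by the way - they are the well-known "perft" results from exhaustively generating every legal move sequence from the starting position. A few lines of Python (using the published values, not computing them from scratch) make the explosion easy to see:

```python
# Published "perft" results for chess: the number of distinct legal
# move sequences after each half-move (ply) from the starting position.
perft = [20, 400, 8902, 197281, 4865609, 119060324]

for ply, total in enumerate(perft, start=1):
    print(f"after {ply} half-moves: {total:>11,} sequences")

# The effective branching factor: each extra half-move multiplies
# the count by somewhere in the low-to-mid twenties.
ratios = [b / a for a, b in zip(perft, perft[1:])]
print("growth per half-move:", [round(r, 1) for r in ratios])
```

That twenty-odd multiplier per half-move is exactly why brute-force lookahead gets expensive so quickly.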
But this, this is recording a load of game state and testing all the potential moves to see which one would result in the most promising outcome. Isn't this - churning through loads of numbers - exactly the sort of thing that computers are supposed to be good at?
So why can't an AI play a good game of chess? Or, better still, why can a simple smallish program running on a computer from the late '80s provide a much better, correct, and more challenging game than a 'bot backed by resources astonishingly grander in scope?
Simple. These so-called AIs are great at inferring things. They can absorb tens of thousands of games in a mere heartbeat, and follow all of the moves that each piece made. This does not mean that they actually understand the rules of chess. Only that they have sort of learned by watching. Which is possibly why these illegal moves happen. It's like a child learning chess - why does the horsey move in such a weird way? Can't I just plonk it three places ahead? - only the AI doesn't have a parent to say "no".
This, also, underpins why my animé style girls were sometimes drawn with additional limbs, why they never get the eyes quite right, and why six fingers (or other horrors) sometimes crop up. The machine doesn't know human anatomy. It has simply looked at a vast array of images to determine what "people" generally look like, and maybe also various attributes such as jumping, kneeling, and so on. But it doesn't understand the underlying anatomy of a person any more than it understands the rules of chess. It has only inferred these things from observation.
From writing code to drawing people and playing chess, the current LLM style of AI is really quite awful at coming up with anything unique. It is, however, quite good at highlighting patterns that are already present in the data but might have been missed by a human observer. For example, how many times do you think the word "magic" appears in the Harry Potter series? ChatGPT doesn't directly answer (probably too risky to admit to having ingested the text!) but points out that estimates put the number at around 1,200 times.
One could, maybe, use the current level of so-called AI to infer patterns from input data. Being able to process much faster than an entire room full of people, it may spot things that have so far remained elusive. But this isn't creating something, this is simply noticing what was always there waiting to be found.
But, with a caveat...
Apophenia
I asked ChatGPT...
Here are some numbers: 13, 42, 404, and 1984. What is the next number and why?
Given the numbers 13, 42, 404, and 1984, the sequence isn't immediately obvious through common numerical patterns like arithmetic or geometric progressions. Another method to explore is finding the underlying relationship between these numbers, often hidden in some non-linear or context-specific manner.
The second-level differences are 333 and 1218, which still don't form a recognizable arithmetic or geometric pattern.
Another possible approach is to consider the numbers themselves and see if they are related by some contextual or less straightforward pattern:
3. Check for a pattern in the sequence:
· Look for relationships or properties: Consider if the numbers could be forming a sequence based on a less obvious mathematical relationship or contextual pattern.
· Given the significant increase, it might suggest exponential growth or a rapid increase influenced by a non-linear factor.
Let's explore a simpler hypothesis that might fit:
One interesting observation is that the sequence could relate to powers of numbers or combinations thereof:
· 13 (13 = 13)
· 42 (42 ≈ 6^2 + 6)
· 404 (404 ≈ 20^2 + 4)
· 1984 (1984 ≈ 44^2 + 40)
Identify a potential pattern:
If we hypothesize a pattern, consider squares or powers of numbers plus a linear increment:
4. Continue the pattern:
Given 13, 42, 404, 1984, let's consider the next number following a rough pattern of n^2 + k where n is the base number and k is a small addition.
1984 ≈ 44^2 + 40
· We see that the numbers involve significant quadratic elements.
Let's hypothesize the next number might involve the next step in a quadratic progression:
· We can test 1984 being a rough quadratic progression.
Thus, we can estimate the next logical number in the progression could be:
1984 + (44^2) ≈ 1984 + 1936 ≈ 3920
The next number might be roughly around 3920 based on rough quadratic extrapolation.
Conclusion:
The next number in the sequence could be estimated as 3920, fitting a pattern of quadratic progression plus a linear component.
So ChatGPT took my selection of numbers, tried to find a valid pattern in them, and then arrived at an estimated result.
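Its "quadratic" pattern doesn't even survive a quick check. Here's a little Python sketch (mine, not ChatGPT's) that splits each number into the largest square below it plus a remainder - the very decomposition it claimed to see:

```python
import math

# For each number, split it as (largest square <= n) plus a remainder,
# which is the pattern ChatGPT claimed to have spotted.
for n in [13, 42, 404, 1984]:
    base = math.isqrt(n)          # largest integer whose square <= n
    remainder = n - base * base
    print(f"{n} = {base}^2 + {remainder}")
```

6² + 6 and 20² + 4 do check out, but 44² + 40 is 1976, not 1984 - the real remainder is 48. So even the "pattern" it reported contained a miscalculation.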
Even better, I gave it the exact same question again and it tried something different this time.
To determine the next number in the sequence 13, 42, 404, and 1984, we need to identify a pattern. One possible approach is to check for differences or ratios, but given these specific numbers, another interesting approach involves the lengths of the English words for these numbers:
1. Thirteen (8 letters)
2. Forty-two (8 letters)
3. Four hundred four (15 letters)
4. One thousand nine hundred eighty-four (28 letters)
Here are the numbers written out:
· Thirteen: 8 letters
· Forty-two: 8 letters
· Four hundred four: 15 letters
· One thousand nine hundred eighty-four: 28 letters
The sequence of the letter counts is 8, 8, 15, 28. Now, let's see if there's a recognizable pattern in the letter counts.
The differences between the letter counts:
1. 8 - 8 = 0
2. 15 - 8 = 7
3. 28 - 15 = 13
These differences (0, 7, 13) don’t immediately suggest a common arithmetic progression, but the numbers themselves might still follow a different mathematical relationship. Let's consider another approach.
An interesting observation: the numbers 13, 42, 404, and 1984 might not form a typical arithmetic or geometric sequence, but they could be part of a sequence derived from a more complex or less straightforward relationship.
If we analyze the sequence from a broader perspective, one might recognize the Fibonacci-like growth in the letter count differences, suggesting a more additive growth model rather than a purely arithmetic one.
Let’s consider the sum of the previous differences:
0 + 7 + 13 = 20
Adding this difference (20) to the last known difference (13), we could predict the next difference might be 33 (as a continuation of increasing the gap):
28 + 33 = 61
Thus, we could hypothesize that the next number, when written out in words, has around 61 letters. Calculating the exact number that fits this criterion would be complex, but based on letter count alone, a potential next number in the sequence could be quite large.
If we simplify and consider numbers growing in a recognizable numeric pattern instead of focusing on letters:
· If we look at exponential growth or another kind of rapid increase, and given the non-linear jumps from 13 to 42, to 404, to 1984, it might indicate the next number could be in the thousands range.
Considering the pattern, one plausible next number, considering a broad non-linear growth, might be a significantly larger number like 3977 or more.
Conclusion:
Given the complexity, let's settle on a potential growth pattern and suggest:
· If the sequence is 13, 42, 404, 1984, the next number might follow an exponential or significant growth pattern.
Thus, a possible next number could be around 3977, doubling or increasing significantly from 1984.
Without a clear deterministic pattern, 3977 could be a reasonable next estimate based on previous rapid growth.
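Before even getting to the estimate, it's worth checking ChatGPT's letter counts. A few lines of Python, counting only the letters in the spellings ChatGPT itself used:

```python
# Count the letters (ignoring spaces and hyphens) in the number
# names ChatGPT listed, using its own spellings.
names = [
    "Thirteen",
    "Forty-two",
    "Four hundred four",
    "One thousand nine hundred eighty-four",
]

for name in names:
    letters = sum(ch.isalpha() for ch in name)
    print(f"{name}: {letters} letters")
```

Three of the four match, but "One thousand nine hundred eighty-four" has 32 letters, not 28 - so the differences ChatGPT went on to analyse were built on a miscount in the first place.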
Personally, I disagree with the estimate. If the exponential growth is from 42 to 404 and 404 to 1984, then the estimate should be quite a bit larger.
Look, try this. 1,984 ÷ 404 = 4.9 (and some change). Therefore a better estimate might be 1,984 × 4.9, which is 9,721.6. So we're looking at something around ten thousand. BUT 404 ÷ 42 is 9.6, whilst 42 ÷ 13 is 3.2. There isn't really any stand-out pattern here other than to suggest that, since the ratios seem to run 3.2 to 9.6 to 4.9, it is possible that the sequence alternates.
The difference between 3.2 and 4.9 is 1.7. So if we add 1.7 to 9.6, this makes 11.3. Therefore we can further refine our next number estimate to be 1,984 × 11.3 which is 22,419.2. Damn that got big fast. ☺
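The same back-of-envelope sums as a few lines of Python, just to show the working (this is my guess-refinement, nothing rigorous):

```python
seq = [13, 42, 404, 1984]

# Ratios between successive terms.
ratios = [round(b / a, 1) for a, b in zip(seq, seq[1:])]
print("ratios:", ratios)            # 3.2, 9.6, 4.9

# If the ratios alternate low/high, the low ratio grew by
# 4.9 - 3.2 = 1.7, so guess the next high ratio as 9.6 + 1.7.
next_ratio = ratios[1] + (ratios[2] - ratios[0])
print("guessed next ratio:", round(next_ratio, 1))             # 11.3
print("guessed next number:", round(seq[-1] * next_ratio, 1))  # 22419.2
```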
That's the thing about numbers. It's way too easy to read patterns into them where there aren't any. ChatGPT came up with two intriguing, but incorrect, possibilities. I added another, also wrong.
The relationship between the numbers? The correct answer is "Rick".
13 - My lucky number.
42 - The Answer to the Ultimate Question.
404 - Not Found
1984 - Was not intended as a user manual.
Which means a likely next number could be, oh, I dunno... 6502, perhaps? ☺
Your comments:
Gavin Wraith, 5th June 2024, 22:50
I asked ChatGPT how many athematic verbs were listed in Liddell and Scott's Greek lexicon. It replied
Athematic verbs, also known as "mi-verbs," are a specific category of verbs in Ancient Greek that do not follow the standard thematic conjugation patterns. They are relatively fewer in number compared to thematic verbs.
To find the exact number of athematic verbs listed in Liddell and Scott's Greek Lexicon would require a detailed analysis of the lexicon itself, which is a substantial task given the size and scope of the dictionary. This information is not typically provided in summary form in secondary sources, and there is no readily available count in the lexicon's indexes or prefaces.
I thought this quite an intelligent reply. But of course that does not mean ChatGPT is intelligent.
David Pilling, 6th June 2024, 15:02
Chess is interesting, any false move would be an hallucination - can't imagine there are lots of games online to train on that are wrong.
No doubt Chat GPT "knows" the rules of chess.
Sequences - 101, there's never a right next number, pity it didn't point that out.
What it had to say sounds intelligent, but what we don't know is if it is just parroting someone's guide to solving sequences.
I don't know what AI is actually doing. Probabilities, crunching lots of data, what else. I read that people have been paid to categorise data - so it's like the old-time automata, bloke in the machine.
Neural processor for Pi came out this week - I am wondering what could I do with one.
David Pilling, 6th June 2024, 15:24
We had an example recently...
Why is rosy saxifrage so called?
AI: red flowers mate
But all the photos show white flowers?
AI: You're right, followed by an essay on how plants get their names.
The interest is in why AI is so good at this second step: it's been found wrong, and now it's getting out of it.
Gemini put up an answer for a fraction of a second and then removed it - no idea what that was about.
Turns out there was a wikipedia article (which I only found from the Latin name) that explained the name comes from the shoots - a bit like bloody cranesbill.
Might be worth always asking "you're wrong, explain?".
David Pilling, 6th June 2024, 15:25
Geranium sanguineum - bloody cranesbill
Gavin Wraith, 6th June 2024, 15:38
Saxifrage = Rock-breaker. 50, 42, 34, 23, 14 .. what comes next? Answer: Washington Square. Numbers are little use unless you know what they are counting, or else you are a number theorist.