AI - Artificial Intelligence: boon or bollocks?

When faced with a chatbot, do you:

  • Close the chatbot and continue using the site?

    Votes: 13 32.5%
  • Close the website and go elsewhere?

    Votes: 5 12.5%
  • Engage with the chatbot and use it to answer your questions?

    Votes: 1 2.5%
  • Ask the chatbot what underwear (if any) it is wearing and would it like to engage in procreation?

    Votes: 9 22.5%
  • Give me a human to talk to or I'll burn down your head office!

    Votes: 12 30.0%

Total voters: 40
The interesting bit of AI is in the "I" bit. Say there's a chatbot, and it has one of three opening lines:

"Yo mug, we've made $50K out of idiots like you already today. How can we make it $51K?"

"I see you're 19. We've helped 39 people in your age group save money on car insurance already today. What can we do to make it 40?"

"Thanks for contacting us, how can we be of help?"

If the algorithm then works out the response rates and adjusts the opening gambit accordingly, it's doing its job. Better yet would be a report detailing the questions and response rates, suggesting further refinement of the most successful opener, for each demographic.
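For the curious, that self-adjusting gambit is essentially a multi-armed bandit. A minimal epsilon-greedy sketch in Python (the openers, counters and function names are all invented for illustration):

```python
import random

# Three hypothetical opening lines (paraphrased from the example above)
openers = ["Yo mug...", "I see you're 19...", "Thanks for contacting us..."]

shows = [0, 0, 0]   # times each opener was presented
wins = [0, 0, 0]    # times the visitor actually engaged

def pick_opener(epsilon=0.1):
    """Mostly serve the best-performing opener; occasionally try another."""
    if sum(shows) == 0 or random.random() < epsilon:
        return random.randrange(len(openers))        # explore
    rates = [w / s if s else 0.0 for w, s in zip(wins, shows)]
    return rates.index(max(rates))                   # exploit

def record_result(choice, engaged):
    """Feed each response back in, so the rates adjust themselves."""
    shows[choice] += 1
    if engaged:
        wins[choice] += 1
```

Run one of these per demographic bucket, dump the shows/wins tables into a report, and you have the refinement loop described above.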

I know this is a ridiculously simple example, but machine learning is where it's at. The machine needs to make its own decisions to be "AI".

"I see you are female, weigh 250lbs, and are between the ages of 14 and 50. Are you pregnant or just fat?" No-one need get hurt :)
Not quite there yet, officially.

As I have already stated, many systems and bots are simply expert systems and decision trees - long, complicated ones, but still decision trees. When I was playing with them and knocking them out, mine were basically self-adjusting mathematical algorithms that iterated until the input produced the required output. No independent, autonomous, Terminator-like thought there.
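A toy version of that "iterate until the input produces the required output" idea, in Python, assuming a simple proportional update rule (the names and numbers are mine, not from any real system):

```python
# Tune a single gain so that gain * x hits a target output.
def tune_gain(x, target, lr=0.01, tol=1e-6, max_iters=10_000):
    gain = 0.0
    for _ in range(max_iters):
        error = target - gain * x
        if abs(error) < tol:      # required output reached: stop iterating
            break
        gain += lr * error * x    # nudge the gain in proportion to the error
    return gain

print(tune_gain(x=2.0, target=10.0))  # converges towards 5.0
```

No thought involved: just an error signal fed back into the parameters until it shrinks below tolerance.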

However, we did play with some Motorola 68K processors to produce some electro-mechanical outputs driving lights, fans, and a robot arm. With advances in sensor tech and increased processor speeds nowadays, you can probably knock out some decent hardware capable of programmed reactions to particular external stimuli.

But at the end of the day AI will just sit there: it will not pick up a book to read, will not dap off to make itself a coffee, will not go and shag that sexy-looking server in the corner. To me, and I played with the stuff, these chatbots and similar things you get when you call your bank or electricity company are about as welcome as calling an Indian call centre.
 
Unless you have a stack of cash to invest in companies that now only have a tiny wage bill, it is a no-brainer disaster.

No jobs = no taxes paid + no National Insurance payments = unemployment + homelessness + starvation + poverty + now is the time for the 99% to rid the planet of the 1% = all those post-apocalyptic movies you have ever watched.
 
Anyone with an Alexa at home ask it 'what's one hundred one hundred one hundred in Welsh' and see what the answer is :)
 
With a very limited knowledge of computing, every time I hear or read about AI, I just don't believe in it. To me it seems that if you interact with a computer and it gives you information, it's a souped-up version of Conditional Formatting in Excel - you input something and the computer is programmed to give you a response based on what you've said or typed.

That isn't AI, it's simply clever computer programming. Am I right or have I got the wrong end of the stick?
Absolutely. Computers are deterministic, and cannot be anything other than deterministic, since all the microprocessor does is pass signals to specific locations in an array of nano-scale circuits - its behaviour is literally hard-coded. Feed a given set of opcodes to a processor, and you'll always get precisely the same output.
The more I read about 'AI', the more I'm convinced it's just another term for statistical analysis.


Good luck with ethics. I taught a structured-methods module to docs doing a master's, in order to show them how to approach problems in a more structured, engineering-like way - basically to put a bit of logical order into their decision-making. The biggest issue they had with the use of expert systems and AI was ethical: how far should they hand off accountability and responsibility for decisions made by a box?
It's pretty easy to see how the ethics thing will play out. It'll be the programmers/developers who will be held responsible for any cock-up, since a) the 'AI' system is effectively an implementation of their decisions, and b) developers are perceived as subordinate to layers of business and managerial employees, and so would be thrown under the bus when something goes badly wrong.
 
 
Absolutely. Computers are deterministic, and cannot be anything other than deterministic, since all the microprocessor does is pass signals to specific locations in an array of nano-scale circuits - its behaviour is literally hard-coded.
If f***ing only.

Write a piece of code, throw in a pile of "if" statements. Fifty or sixty of them. Congratulations, you now have more paths through that piece of code than there are grains of sand on the beach.

Take a large software system (say, the F-35 software); it can exist in more individual states than there are atoms making up the planet. Declaring that it's "hard-coded" rather misses the point.
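The arithmetic holds up, assuming the ifs are independent so each one doubles the number of paths (the grains-of-sand figure is a commonly quoted rough estimate, not a measurement):

```python
# Each independent "if" doubles the number of possible paths through the code.
paths_50 = 2 ** 50        # ~1.1e15
paths_60 = 2 ** 60        # ~1.2e18
sand_estimate = 7.5e18    # oft-quoted rough guess for every beach on Earth

print(f"50 ifs: {paths_50:.1e} paths; 60 ifs: {paths_60:.1e} paths")
print(f"grains of sand, all beaches: ~{sand_estimate:.1e}")
```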

Feed a given set of opcodes to a processor, and you'll always get precisely the same output.
At an opcode level, maybe. But that's like saying that the human brain is deterministic, because if you feed an impulse to an individual neuron you'll always get precisely the same output given the same initial conditions...
 
If f***ing only.

Write a piece of code, throw in a pile of "if" statements. Fifty or sixty of them. Congratulations, you now have more paths through that piece of code than there are grains of sand on the beach.

Take a large software system (say, the F-35 software); it can exist in more individual states than there are atoms making up the planet. Declaring that it's "hard-coded" rather misses the point.



At an opcode level, maybe. But that's like saying that the human brain is deterministic, because if you feed an impulse to an individual neuron you'll always get precisely the same output given the same initial conditions...
I've seen systems that were 1,000 times more complex than they needed to be, simply because another developer had applied SOLID principles too rigidly for the sake of it, and/or taken loose coupling to a silly level, and/or saw fit to architect the software so that the data, business logic and presentation layers were separate projects, and/or they focussed too much on making their code 'testable' (i.e. unreadable) despite the absence of unit tests.
These systems, even in all their complexity, are still deterministic. Each line of code, each operator, can be mapped to a set of micro-code instructions.

I don't think biologists and neuroscientists are even close to understanding what consciousness is or how it originates, so it's really debatable whether the brain could be considered deterministic.
 
I don't think biologists and neuroscientists are even close to understanding what consciousness is or how it originates, so it's really debatable whether the brain could be considered deterministic.
...see "artificial neural networks". Now mash a whole pile of them into a single, massively-parallel system. Individual fragments may well be deterministic, but the overall system is beyond comprehension.

I've spent far too long programming parallel systems, and working with rather large FPGAs, to declare that anything is "deterministic"...
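To make the "deterministic fragments, incomprehensible whole" point concrete, here's a toy two-layer network in Python with NumPy. Every operation is a plain deterministic matrix multiply, but stack millions of these units and nobody can tell you why the output is what it is (the sizes and weights here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # pin the seed and the run is repeatable

# Toy network: 3 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    """Two deterministic matrix multiplies with a squashing function between."""
    return np.tanh(np.tanh(x @ W1) @ W2)

print(forward(np.array([0.5, -1.0, 2.0])))
```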
 

Done intelligently and in proper context, it's genuinely useful as a tool.

As a way for suits and bean counters simply to cut headcount and reduce costs, all too often it is worse than useless.

Want to guess which is the most common?
 
...see "artificial neural networks". Now mash a whole pile of them into a single, massively-parallel system. Individual fragments may well be deterministic, but the overall system is beyond comprehension.

I've spent far too long programming parallel systems, and working with rather large FPGAs, to declare that anything is "deterministic"...
Of course a programmed FPGA is deterministic. It wouldn't be of any use otherwise. Same for parallel systems, otherwise you wouldn't really be parallelising anything.
 
Of course a programmed FPGA is deterministic. It wouldn't be of any use otherwise. Same for parallel systems, otherwise you wouldn't really be parallelising anything.
My point is that sufficiently complex systems are far from "deterministic" in the sense I thought you were using it, even though they are built from deterministic components.

Having done the "WTF is going on, why is this bug happening" all too often, complex systems built from simple components can produce unpredictable results at unpredictable times - with the resolution appearing weeks or months later. See "Heisenbugs". Race conditions. Timing sensitivity. Even some of the priority inversion problems that plagued early embedded systems. Now throw in genetic algorithms, self-modifying code.

Biology doesn't leave code comments, doesn't have a detailed design description. Unfortunately, that's rather like some systems I've worked on...
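For anyone who hasn't met a race condition in the wild, here's the classic check-then-act variety in a few lines of Python (the bank-balance framing is just for illustration; the sleep stands in for "something else got scheduled in the gap"):

```python
import threading, time

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:     # check...
        time.sleep(0.001)     # ...the other thread runs in this gap...
        balance -= amount     # ...then act on a check that is now stale

threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # "should" never go below 0; routinely prints -100
```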
 
My point is that sufficiently complex systems are far from "deterministic" in the sense I thought you were using it, even though they are built from deterministic components.

Having done the "WTF is going on, why is this bug happening" all too often, complex systems built from simple components can produce unpredictable results at unpredictable times - with the resolution appearing weeks or months later. See "Heisenbugs". Race conditions. Timing sensitivity. Even some of the priority inversion problems that plagued early embedded systems. Now throw in genetic algorithms, self-modifying code.

Biology doesn't leave code comments, doesn't have a detailed design description. Unfortunately, that's rather like some systems I've worked on...
Well, yes. You get anomalous behaviour in a system, it fails a level of testing, so you run the debugger through layers of code. What do you find? You find the system is working exactly as programmed, but the person who coded it dropped a bollock somewhere, or, more usually in my case, the compiler/interpreter is too fussy about something being of a very specific object type.

Point is a deterministic system consistently does whatever it's programmed to, no matter the level of complexity, whether the programmer intended it or not.
 
Point is a deterministic system consistently does whatever it's programmed to, no matter the level of complexity, whether the programmer intended it or not.
Define "programmed to". Define "fails a level of testing". Because unless you're in a near-zero subset of systems, you don't have an unambiguous design description (because written in English), and you don't have complete test coverage (because humans - I'm not arrogant enough to claim that I've ever come close more than once or twice in my thirty-year career). Even then, most bugs in the really well-defined systems are an argument over "what did we really mean it to do, in this edge condition that we didn't think about". Which is great, but not really "deterministic".

Take a peek at the complexity levels in a large-scale FPGA (I have, I spent a decade working for an FPGA firm on their design tools) - this was the whole reason behind the drive to SystemVerilog, static analysis tools, etc, etc. Now throw in single-event upsets. Perhaps even the occasional timing glitch (back in the 90s, we were running Built-In Test for our radar to a contractual confidence level - it involved simulating our ASICs, breaking one of the simulated transistors, and running the BIT to see if the signature changed. That confidence level was not 100%).

Such systems are only "deterministic" in the sense that at the narrowest level, you expect to see 1s and 0s. At a macro level, no one person's span of comprehension can cope with saying "yes, I can guarantee that this is precisely what it's going to do". You can get close, you can even define the externally-observable behaviour, but on the inside you're looking at numbers of possible states that approach "number of atoms in the universe".
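That BIT exercise, shrunk to a toy in Python: model the circuit, break one simulated gate at a time, and count how many faults change the test signature. The three-gate "circuit" is invented; real ASIC fault simulation is somewhat larger:

```python
from itertools import product

def circuit(a, b, c, stuck=None):
    """Three gates; `stuck` forces one gate's output to 0 (a stuck-at-0 fault)."""
    g1 = 0 if stuck == "g1" else (a & b)
    g2 = 0 if stuck == "g2" else (b | c)
    g3 = 0 if stuck == "g3" else (g1 ^ g2)
    return g3

vectors = list(product([0, 1], repeat=3))
golden = tuple(circuit(a, b, c) for a, b, c in vectors)  # fault-free signature

faults = ["g1", "g2", "g3"]
detected = sum(
    1 for f in faults
    if tuple(circuit(a, b, c, stuck=f) for a, b, c in vectors) != golden
)
print(f"fault coverage: {detected}/{len(faults)}")  # the confidence metric, toy-sized
```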
 
A company I used to work for was an FPGA house. The particular set of problems we usually worked on was in the "find the needle in the haystack" realm; looking for data in a large dataset. Can't really go into more detail, but that's the essence of it.

One of our key selling points vs throwing more CPU cores at the same problem was that our solution was deterministic. A software-only solution isn't. We could define parameters such as quantity X search strings of length Y and input at Z Gb/s, and produce a metric for that. You really couldn't in a software-based system, because it would depend on what else the system was doing, or how complex the search patterns were.
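That determinism falls straight out of the arithmetic once the pipeline geometry is fixed - a sketch with invented figures, not our actual numbers:

```python
# Hypothetical FPGA search pipeline: it consumes a fixed number of bytes
# every clock cycle, regardless of data content or how many patterns are loaded.
clock_hz = 250e6        # pipeline clock (invented)
bytes_per_cycle = 64    # input bytes consumed per clock (invented)

throughput_gbps = clock_hz * bytes_per_cycle * 8 / 1e9
print(f"{throughput_gbps:.0f} Gb/s, every cycle, no matter what")
# A CPU-based search has no such fixed figure: it shifts with load, cache
# behaviour, and pattern complexity.
```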
 
Define "programmed to". Define "fails a level of testing". Because unless you're in a near-zero subset of systems, you don't have an unambiguous design description (because written in English), and you don't have complete test coverage (because humans - I'm not arrogant enough to claim that I've ever come close more than once or twice in my thirty-year career). Even then, most bugs in the really well-defined systems are an argument over "what did we really mean it to do, in this edge condition that we didn't think about". Which is great, but not really "deterministic".

Take a peek at the complexity levels in a large-scale FPGA (I have, I spent a decade working for an FPGA firm on their design tools) - this was the whole reason behind the drive to SystemVerilog, static analysis tools, etc, etc. Now throw in single-event upsets. Perhaps even the occasional timing glitch (back in the 90s, we were running Built-In Test for our radar to a contractual confidence level - it involved simulating our ASICs, breaking one of the simulated transistors, and running the BIT to see if the signature changed. That confidence level was not 100%).

Such systems are only "deterministic" in the sense that at the narrowest level, you expect to see 1s and 0s. At a macro level, no one person's span of comprehension can cope with saying "yes, I can guarantee that this is precisely what it's going to do". You can get close, you can even define the externally-observable behaviour, but on the inside you're looking at numbers of possible states that approach "number of atoms in the universe".

By 'level of testing', I mean any given layer of testing, whether it's unit, regression, functional, UI, acceptance, beta etc. And by 'fail', I'm referring to pretty much any defect found when testing against all the documented use cases. Simple logic tells us we can't test against anything if there isn't a deviation from something deterministic.

I think it was Dijkstra (I probably spelled the name wrong) who stated something to the effect that the human mind can only follow/comprehend the workings of ~50 lines (or paths) of code. That's a human limitation, and complexity doesn't imply non-determinism - which is incidentally something I get frustrated with fellow religious people about.

But the overall point I was contributing to this thread is that consciousness, sentience, or genuine intelligence cannot possibly be emulated on a man-made system. Any given section of code will always result in the same output on given hardware.
 
AI - Artificial Intelligence: boon or bollocks?

AI is sponsored as a replacement for 99% of the human population.
 
I think it was Dijkstra (I probably spelled the name wrong) who stated something to the effect that the human mind can only follow/comprehend the workings of ~50 lines (or paths) of code. That's a human limitation, and complexity doesn't imply non-determinism - which is incidentally something I get frustrated with fellow religious people about.
Nope, that's the correct spelling. However, it does make me wonder about two things:
  • While complexity doesn't imply non-determinism, have you considered a sensitive dependence on initial conditions? Chaos theory looks very much like non-determinism (see the sketch below this list)
  • Does your religion lend you to an ideological perspective that of course AI is impossible, because you believe sentience is God-given?
https://en.wikipedia.org/wiki/Chaos_theory
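On the chaos point, the promised sketch: the logistic map is about as deterministic as maths gets, yet in its chaotic regime a difference in the tenth decimal place of the starting value gives a completely different trajectory within about fifty iterations:

```python
# Logistic map: x' = r * x * (1 - x). Fully deterministic; chaotic at r = 4.
def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.2)
b = iterate(0.2 + 1e-10)   # perturb the tenth decimal place
print(a, b)  # same rule, near-identical inputs, wildly different outputs
```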
But the overall point I was contributing to this thread is that consciousness, sentience, or genuine intelligence cannot possibly be emulated on a man-made system. Any given section of code will always result in the same output on given hardware.
Except when it doesn't. I'm currently trying to figure out why certain of our team's regression tests sometimes fail on our build server.

We think it might be time-related, in that it's dependent on when our Information Security teams run updates (nightly runs at 10pm generally work, things triggered at 6pm occasionally don't); we've considered that it might be load-related, in that certain loops run fractionally slower, enough to trigger a race condition in the asynchronous I/O behaviour between system and test stubs.

Same machine, same test (these tests aren't introducing any random variation), different results. Utter pain in the behind. Of course, when we figure it out, we'll probably cry "Of course! It's deterministic!" - but until the point where we understand it, we're firmly in "black cockerel, silver knife?" territory.

Bit like AI, really...
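The shape of that failure, reduced to a toy - a fixed test timeout racing a response time that varies with machine load (all the figures are invented):

```python
import time

TIMEOUT_S = 0.05  # the test stub allows the system 50 ms to respond

def system_under_test(load_factor=1.0):
    time.sleep(0.04 * load_factor)  # ~40 ms when quiet; slower when the box is busy
    return "response"

def test_passes(load_factor):
    start = time.monotonic()
    system_under_test(load_factor)
    return (time.monotonic() - start) < TIMEOUT_S

print(test_passes(1.0))  # True: quiet machine, 40 ms < 50 ms
print(test_passes(1.5))  # False: security scan eating CPU, 60 ms > 50 ms
```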
 
Nope, that's the correct spelling. However, it does make me wonder about two things:
  • While complexity doesn't imply non-determinism, have you considered a sensitive dependence on initial conditions? Chaos theory looks very much like non-determinism
  • Does your religion lend you to an ideological perspective that of course AI is impossible, because you believe sentience is God-given?
Chaos theory - Wikipedia
Yes and no.

A computing system has completely the wrong architecture to emulate anything truly intelligent. Like I said, at the microprocessor level, it's ultimately a complex logic state system. Beyond that, it is really profound to think that sentient life somehow arose from an extremely unlikely combination of molecules, that something as complex and intricate as DNA and the machinery to process it came about naturally, within the first 100 million years of the Earth's existence.

Living beings are manifestly different from inanimate matter, and it's not simply a difference between organic and inorganic material. There is something supernatural and metaphysical about life. Something qualitative that differentiates it from the natural world. There is a clear boundary between living and inanimate matter, so I'm pretty certain we're not simply at the tail end of the Universe's complexity.

I think there is a 'hidden hand' behind life itself.


Except when it doesn't. I'm currently trying to figure out why certain of our team's regression tests sometimes fail on our build server.

We think it might be time-related, in that it's dependent on when our Information Security teams run updates (nightly runs at 10pm generally work, things triggered at 6pm occasionally don't); we've considered that it might be load-related, in that certain loops run fractionally slower, enough to trigger a race condition in the asynchronous I/O behaviour between system and test stubs.

Same machine, same test (these tests aren't introducing any random variation), different results. Utter pain in the behind. Of course, when we figure it out, we'll probably cry "Of course! It's deterministic!" - but until the point where we understand it, we're firmly in "black cockerel, silver knife?" territory.

Bit like AI, really...
Yet you still deduced that the problem might be related to the load or timing, which would be an input variable as far as the module being tested is concerned, and you've likely determined, by the time you read this, which module is being pre-empted by the race condition.
If it wasn't deterministic, why would you bother trying to debug the software? It's because you know the software is deviating from its expected behaviour, and you know there's a defect somewhere causing that deviation, and you'll eventually find that defect through the process of elimination and deduction. You might find it breaks because the system can't adapt to a timing issue, because it is a dumb system.
 
Except when it doesn't. I'm currently trying to figure out why certain of our team's regression tests sometimes fail on our build server.

We think it might be time-related, in that it's dependent on when our Information Security teams run updates (nightly runs at 10pm generally work, things triggered at 6pm occasionally don't); we've considered that it might be load-related, in that certain loops run fractionally slower, enough to trigger a race condition in the asynchronous I/O behaviour between system and test stubs.

Same machine, same test (these tests aren't introducing any random variation), different results...
We had a system that exhibited similar characteristics. We found out that the PRIMARY channels were transferring across the D/R channels, resulting in erratic behaviours depending on when D/R updates were being executed. Bad network architecture was to blame.

We also had six servers, all supposedly on the same base build with updates from automated tools. Two had issues: the "base build" had been slightly modified between building them and the other four. One piece of crypto software was at a -1 version. Because all six servers were round-robin and the sub-level software was not being invoked on every transaction, it took months to find the issue. Bad base-build processes.

In both instances, stress-testing and parallel testing didn't reveal the issues. A combination of bad design and lack of business knowledge were the main culprits, much the same issues that I see in modern AI being developed.
 
A computing system has completely the wrong architecture to emulate anything truly intelligent.
A Von Neumann machine, perhaps - but that's not the only approach to computing.

...it's not simply a difference between organic and inorganic material. There is something supernatural and metaphysical about life. Something qualitative that differentiates it from the natural world... I think there is a 'hidden hand' behind life itself.
And here we differ (not least because I'm an atheist). The irony is that on one hand, you're arguing that the universe is both deterministic (from the perspective of your hidden hand), and non-deterministic (from our perspective, because ineffable).

If it wasn't deterministic, why would you bother trying to debug the software?
A good point. The answer is, "because it's deterministic enough". Remember, this is an analogy: the system is chaotic, and I'm trying to reduce the chaos to the point of determinism.
 
