
THE HOME OF PROBABILISTIC LOGIC



The 4 stages of AI sentience:

ChatGPT is not 'cognition' -- awareness of any kind is the fourth stage of AI.

The stages of AI maturity are:
  1. First came knowledge representation (i.e., logic representations, e.g., predicate logic)
  2. Then simulating intelligence (i.e., stochastic generative, descriptive and inferential representations, e.g., deep learning & Bayesian methods)
  3. The next stage is self-determination (i.e., combining stochastic methods with logic), which is about to begin when we decide the world is ready for it
  4. Then finally self-awareness (I believe this will require electro-chemical bio-semiconductors), which will be defined by AI developing emotional thought
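To give a feel for what "combining stochastic with logic" means in the third stage, here is a toy sketch of my own (purely illustrative, not the method this site alludes to): a probabilistic modus ponens that attaches probabilities to a logical rule and a fact, then propagates the uncertainty through the inference step.

```python
# Toy illustration of probabilistic logic: classical modus ponens
# (A, A -> B, therefore B) with probabilities attached to each part.
# All names and numbers here are hypothetical examples.

def prob_modus_ponens(p_fact, p_rule):
    """Probability of the conclusion B, given P(A) = p_fact and
    P(B | A) = p_rule, assuming B arises only via this rule."""
    return p_fact * p_rule

p_bird = 0.9           # P(Tweety is a bird) -- assumed fact probability
p_flies_if_bird = 0.8  # P(flies | bird)     -- assumed rule strength

print(round(prob_modus_ponens(p_bird, p_flies_if_bird), 2))  # 0.72
```

The point is only that the logical structure (the rule) and the stochastic part (the probabilities) are carried through inference together, rather than one replacing the other.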

Soon after the third stage starts, the Turing test inverts (i.e., testing whether your human is a computer, versus testing whether your computer is a human). This will mark the midpoint of our journey to The Singularity. Once we get past the fourth stage, we have the AI Singularity to deal with.

$\color{white}\textsf{QED}$

What is ProbabilisticLogic.AI?

ProbabilisticLogic.AI is where you will find information on the third stage of AI.

It does exist; we just haven't decided to publish it yet.

It is simply not safe for us to progress AI until the issues described in the next section are addressed. This page will, however, have some background information posted to it in preparation for when the appropriate time comes.

A critical review of current AI applications from the field of AI

Right now, everyone is in a deep-learning infinite loop, and the primary researchers of AI are waiting for the world to come up with more positive use-case examples of deep learning being applied. Deep learning is extremely powerful, and frankly, right now, we're quite disappointed with the proliferation of dark use cases and the dearth of positive use cases deep learning has been applied to.

For Example:

  • Safe cities and one-to-many facial recognition (BAD)
  • Self driving cars that spontaneously crash into people (BAD)
  • Palantir & Peter Thiel (BAD)
  • Mass surveillance semantic analysis of search queries, phone contacts, click streams, text messages and GPS trail co-occurrence (BAD)
  • Building generative AI to plagiarise content without attribution and delivering highly inaccurate mansplaining as a service, e.g., DALL·E 2 & ChatGPT (BAD)
  • Optus' Star Trek-styled universal communicator auto-translating telephone calling system (GOOD)

Also, we're highly disappointed that people are not:

  • Evaluating their model performance properly: You need to do evaluation, and your AI needs to beat a panel of human experts, or what is the point? Using AI that does not beat a human panel is not increasing productivity; it is simply replacing jobs with a lower-quality piece of software. This will not make the world a better place. You also need to do evaluation for information-security reasons: the only way to test whether your AI has been tampered with (e.g., by loading adversarial training data into your deep networks) is to re-run evaluation and check that you get the same score; it's basic rigour. If you don't do evaluation, do not do AI or ML; it's highly irresponsible.
  • Turing testing AI: Some AI needs to be Turing tested because it is critically important to things like human life -- most defence applications of AI fit into this category. I'll be publishing more about this in a forthcoming post, but essentially, when deciding to Turing test, you need to decide whether you want to test for necessity, sufficiency and/or completeness, then design a Turing-test experimental design that solves for that.
  • Agreeing on what is ethical AI: Until the world agrees on what counts as appropriate use of AI, so that we can bake controls and adherence monitoring into the underlying research, development and commercialisation of AI, it is simply not safe for us to progress the field.
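The evaluation discipline in the first bullet can be sketched in a few lines. This is a minimal, hypothetical harness -- the function names, the held-out set, and the assumed human-panel score of 0.90 are all my own illustrative choices, not any real system: a release gate that only passes a model that beats the panel, and a tamper check that re-runs evaluation and compares against the previously recorded score.

```python
# Hypothetical evaluation harness; all names and thresholds are illustrative.

HUMAN_PANEL_SCORE = 0.90  # assumed accuracy of a panel of human experts

def accuracy(predict, examples):
    """Fraction of (input, label) pairs the model labels correctly."""
    correct = sum(1 for x, y in examples if predict(x) == y)
    return correct / len(examples)

def release_gate(predict, held_out):
    """Only ship a model that beats the human panel on held-out data."""
    return accuracy(predict, held_out) > HUMAN_PANEL_SCORE

def tamper_check(predict, held_out, recorded_score, tol=1e-9):
    """Re-run evaluation and confirm the score matches the recorded one.
    A changed score suggests the model or data has been altered."""
    return abs(accuracy(predict, held_out) - recorded_score) <= tol
```

The tamper check is the "basic rigour" point above: re-running the same evaluation on the same held-out set should reproduce the recorded score exactly; any drift is a red flag worth investigating.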

I.e., the fact that a fair coin flip is about as accurate as most transformer AI is VERY, VERY BAD.
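The coin-flip baseline is easy to make concrete: on a balanced binary task, random guessing converges to about 50% accuracy, and that is the floor any model must decisively beat. A small simulation (the labels and the "model" here are synthetic, purely to show the baseline):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N = 100_000
labels = [random.randint(0, 1) for _ in range(N)]  # balanced binary ground truth
coin = [random.randint(0, 1) for _ in range(N)]    # a fair coin as the "model"

coin_accuracy = sum(c == y for c, y in zip(coin, labels)) / N
print(round(coin_accuracy, 3))  # close to 0.5
```

Any claimed accuracy should be read against this baseline: a classifier at 52% on a balanced task is barely distinguishable from the coin.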

A positive example of what we can do, once we decide to progress the state of the art in AI, is building an AI chatbot that automates cyber-trauma counselling using EMDR and CBT. Another is agritech bots that operate as a swarm intelligence and can collectively make real-time decisions about how best to address crop infestations with an array of herbicide and pesticide payloads, delivered in microscopic doses en masse. To do that we'll need to reach the third stage of AI, and before we go there, humanity needs to show some humanity.
