
Your Dog Is Smarter Than AI

There are many domains where AI still falls short of man's best friend.

Key points

  • AI has trouble generalizing what it has learned for new situations and contexts.
  • AI has trouble understanding humans' motives and goals, interacting only with the "visible" world.
  • Dogs excel at many tasks where AI fails, particularly where social interactions with humans are involved.
  • The future of AI and human cooperation will depend on AI making valid inferences about our minds.

With the introduction of ChatGPT and the proliferation of effective artificial intelligence (AI) in our everyday lives, it’s easy to feel overwhelmed by how “smart” AI can be. After all, if AI can drive, type, pick movies and TV shows, give directions, identify objects, recognize songs, and complete an obstacle course, then it must be progressing pretty quickly. But what makes humans intelligent, what makes AI intelligent, and what makes other animals intelligent can be very different.

Computers have been better than us at a lot of tasks for a long time. In particular, their proclivity for math has made them very good at solving specific, quantifiable, concrete problems in a fraction of the time humans take. Building up blocks of computations has allowed AI to make rapid progress on tasks like “identify what is in this image” (ResNet) or “predict the next thing a person might say in response to this prompt” (ChatGPT). To the extent that a problem can be reduced to one of prediction (what will come next?) or classification (what type of thing is this?), modern AI is quite effective. Yet there still seem to be problems that AI can’t solve, and the most vexing are the ones that come easily to you and me — and often to your dog.

Many recent advances in AI have been inspired by the structure of the human brain. Deep learning, for example, borrows the idea that the brain processes information (e.g., visual input) through many “layers” of neurons, each extracting important features and progressively transforming them into an inference (e.g., about the identity of an object). Other advances, like transformer models, depart from the brain’s structure and instead rely on vast computing power in place of the biological machinery that is known to be able to solve the problems they are aimed at.
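For readers who want to see the layered idea in code, here is a minimal sketch in Python. It is not a trained network; the layer sizes and weights are invented for illustration, and the point is only to show raw input being transformed, stage by stage, into an inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

def softmax(x):
    # Turn final-layer scores into probabilities over classes
    e = np.exp(x - x.max())
    return e / e.sum()

# A fake 8x8 grayscale "image", flattened into 64 numbers
image = rng.random(64)

# Layer 1: extract low-level features (edges, in a real vision network)
w1 = 0.1 * rng.normal(size=(32, 64))
h1 = relu(w1 @ image)

# Layer 2: combine them into higher-level features (shapes, parts of objects)
w2 = 0.1 * rng.normal(size=(16, 32))
h2 = relu(w2 @ h1)

# Final layer: turn those features into an inference over, say, 3 object classes
w3 = 0.1 * rng.normal(size=(3, 16))
print(softmax(w3 @ h2))  # e.g., [0.34, 0.31, 0.35] -- a guess about the object's identity
```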

For all its endeavors to emulate human intelligence, AI is still terrible at several simple yet crucial problems. One of these is generalization, which is the idea that something we learn in one context (e.g., at home) can be applied elsewhere (e.g., at work or outside). Even though many of us have never seen a wallaby hop around in person, we could use our experiences of other animals hopping (like rabbits), our knowledge of the dynamics of hopping, and our ideas of what a wallaby looks like to imagine one hopping or to recognize it if we saw it happening. Your dog can consistently recognize a squirrel, no matter how large it is, what angle it’s facing, how bright it is outside, or where it sees the furry little creature.

Conversely, generalization is hugely problematic for most AI. These systems are often referred to as “brittle,” in that they have trouble extending their training to even very small changes in context. Weather conditions, lighting, or even tweaks to a couple of pixels in an image can have dramatic effects on image recognition, and GPT can be fooled by simple puzzles that aren’t presented in a format it’s used to seeing. The problem is that AI is very good at interpolation — or making inferences about things that are within the set of things it’s seen before — but not so good at extrapolation beyond the boundaries of its past experiences.
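A toy example makes the interpolation/extrapolation distinction concrete. In the sketch below (a simple polynomial fit standing in for a learner; the function and numbers are arbitrary), the model is trained only on inputs between 0 and 5, then asked about a point inside that range and a point far outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Experience": noisy observations of a process, but only for inputs in [0, 5]
x_train = np.linspace(0, 5, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.size)

# A high-degree polynomial stands in for a flexible, overparameterized learner
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Interpolation: a new point inside the training range -- the prediction is close
print("x = 2.5:", model(2.5), "vs. true value", np.sin(2.5))

# Extrapolation: a point outside the training range -- typically wildly wrong,
# because nothing constrains the model beyond its past experience
print("x = 10:", model(10.0), "vs. true value", np.sin(10.0))
```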

The other problem AI has with generalization is that an individual system is typically good at only one thing at a time. An AI that is good at recognizing speech won’t be any good at producing it, and a system trained to classify plants might be useless at classifying animals. In many cases, training an AI on a second, similar task (e.g., learning to play chess and then checkers) causes “catastrophic forgetting,” where its previous abilities are lost as its internal parameters adapt to the new task. Animal and human development work in the opposite direction: We learn to run by first learning to walk, learn words by first identifying sounds, and learn where to use the bathroom by first learning to classify different locations (inside, outside, bathroom).
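The catastrophic-forgetting pattern is easy to reproduce in miniature. The sketch below (using scikit-learn’s digits dataset; the task split and training lengths are arbitrary choices for illustration) trains a small network to tell 0s from 1s, then keeps training it only on 2s versus 3s, and re-checks its skill on the first task.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

task_a = np.isin(y, [0, 1])  # Task A: distinguish 0s from 1s
task_b = np.isin(y, [2, 3])  # Task B: distinguish 2s from 3s

net = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)

# Phase 1: train only on task A
for _ in range(200):
    net.partial_fit(X[task_a], y[task_a], classes=[0, 1, 2, 3])
print("Task A accuracy after learning A:", net.score(X[task_a], y[task_a]))

# Phase 2: keep training the same network, but only on task B
for _ in range(200):
    net.partial_fit(X[task_b], y[task_b])
print("Task A accuracy after learning B:", net.score(X[task_a], y[task_a]))
# The second number typically collapses: adapting the same weights to the new
# task overwrites what was learned for the old one.
```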

There is at least one more area where your dog has AI beaten: understanding your latent goals and intentions. The most powerful AI is based on creating an abstract representation of the world (or at least, of what it is programmed to see) and using that representation to predict the next thing that will happen. For example, AlphaGo uses the current board configuration to make inferences about what the next move “should” be, or which moves are most likely to lead to victory. It does not make any inferences about the other player — who they are, what they are trying to do, and why they made a particular move are all irrelevant. In doing so, it forgoes the social aspects of the game entirely. It fails to make inferences about others' "hidden states" — and its own — leaving AI much further from consciousness than most dogs.

Not being able to make social inferences may seem like a minor antisocial quirk of AI to some, but it creates a huge problem for AI’s integration into many domains: if self-driving cars cannot make inferences about the motives of other drivers (Are they trying to pass me? Are they trying to merge? Can they see me?), then they will have a difficult time safely navigating roads currently occupied by human drivers. Self-preservation requires a robot to understand when a person is trying to harm it — and, conversely, a robot that decided to act aggressively toward an innocent bystander would cause massive repercussions. Building trust between AI and humans requires not only transparency on the part of the AI, but also an AI that understands what trust is and how people will react to its actions.

Your dog trusts you and knows you have earned its trust. It knows when you’re coming in for a pet, how you are feeling, and how to react. It knows when other people have malevolent intentions, when to play gently and when to play rough, and whom to trust based on how you interact with others. It can run and judge the flight of a ball at the same time; it can learn new commands it has never heard before, from comparatively little training by AI standards; and it can generalize readily to new situations without forgetting how to do the things it already knows. So, in my opinion, your dog *is* smarter than an AI, and it will continue to be for many years.

