A big, hairy problem in cognitive science and philosophy of mind. The basic idea is this: if we try to create a robot that learns from and interacts with its environment, then we need to teach it how to pay attention only to the appropriate parts of that environment.

If we don't do this, then the first thing the robot would have to do is sit up, and notice the colour of the room, and the size of the room, and the number of walls in the room, and the floor texture of the room, and the fact that the room does not (presumably) contain any elephants at the moment, and so on, ad nauseam. Clearly this is intractable - the robot needs a way of deciding that some things are relevant and that everything else can be ignored.
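
To make the intractability concrete, here's a toy Python sketch. Every name in it is invented for illustration (no real robot or library works this way); the point is just that the naive robot's cost of perceiving scales with everything it COULD represent, not with the handful of facts that matter:

```python
# A toy sketch, not anyone's real proposal: all class and property names
# are invented for illustration. The naive robot checks every fact it can
# represent before doing anything at all.

class World:
    """A stand-in environment holding every representable fact."""
    def __init__(self):
        self.facts = {
            "room_colour": "beige",
            "room_size_m2": 12,
            "wall_count": 4,
            "floor_texture": "carpet",
            "elephants_present": 0,
            # ...and so on, ad nauseam: the real list never ends
        }

    def observe(self, prop):
        return self.facts[prop]

def naive_perceive(world):
    """Check every representable fact before acting: O(everything)."""
    return {prop: world.observe(prop) for prop in world.facts}

print(naive_perceive(World()))  # fine for five facts, hopeless for all of them
```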

The frame problem, then, can be loosely thought of as the problem of framing, or constraining, a problem space - especially the universal problem space of interacting with the world around you. It's the problem of relevance: how do we figure out which parts of our universe are relevant to what we're doing, and which parts can be ignored?

Now, in principle this ought to be easy - humans do it every day. In fact, slugs do it every day. But despite doing it ourselves, constantly, we can't seem to say HOW exactly we do it. There are a bunch of bad attempts at an answer - two popular ones are:

  • "Run a simulation in your brain of the activity you're going to perform, and notice which things matter in doing it." This doesn't solve anything, though, since we would need to have already solved the frame problem in order to know which parts of the simulation to notice.
  • "Don't notice anything initially, and when something changes, tag it as important for the future." This sounds nice, except that now the robot could sit up, but would then have to go down the list: Did sitting up change the colour of the room? No - okay, don't tag that. Did sitting up change the number of walls in the room? No - okay, don't tag that... and so on, again ad infinitum (the sketch after this list shows where the work hides).
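
For concreteness, here's that second proposal as a toy Python sketch (again, every name is invented for illustration). It "works", but only by hiding the exhaustive scan inside the diff:

```python
# A toy sketch of the "tag what changed" proposal, with invented names.
# The catch is in the loop: to find out WHAT changed, the robot must diff
# every representable fact, which is exactly the exhaustive scan we were
# trying to avoid.

def tag_changes(before, after):
    """Compare snapshots fact by fact; this loop IS the frame problem."""
    relevant = set()
    for prop in before:                      # still O(all representable facts)
        if before[prop] != after.get(prop):
            relevant.add(prop)
    return relevant

before = {"room_colour": "beige", "wall_count": 4, "robot_posture": "lying"}
after  = {"room_colour": "beige", "wall_count": 4, "robot_posture": "sitting"}
print(tag_changes(before, after))            # {'robot_posture'}
```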

So really, we don't have a solution; we don't even have a promising guess. And this is not a trivial gap: until we have one, we can't produce theories for a bunch of supposedly important things, like how human memory works (which seems to involve relating relevant things to each other), or human creativity, or pretty much everything else our minds do that's more complicated than peeing. We're stuck - and AI and psychology and philosophy and cognitive science and linguistics and neuroscience are, to varying degrees and in various capacities, stuck with us.

One last interesting note: the frame problem is pretty new. It's only really been on the table for about 50 years - it was first named by the AI researchers John McCarthy and Patrick Hayes in 1969 - or maybe 200, since there's a hint that David Hume may have almost gotten there. But still, given the eternal nature of most philosophical problems, this is cool: it tells us that cognitive science is really doing new things and finding new problems.