Any entity in the world can be seen as an agent. In fact, there are infinitely many ways to describe a given entity as an agent.
The agentic frame, similar to the “intentional stance”, involves conceptualizing an entity as having (a) a set of goals or values, (b) a set of available actions, and (c) a decision procedure for choosing actions, such that the entity is in some sense “trying” to achieve its goals or maximize its values by choosing among its actions. Sometimes we also take the decision procedure to involve (d) methods of “sensing” the world, which might be used to update (e) a set of “beliefs” about the world.
Any entity can be seen as an agent: Tell me how some entity behaves, and I can always conceptualize that behavior as perfectly enacting the entity’s goals. But there are other ways of seeing entities: A common alternative is to see an entity reductively, as a composition of simpler causally-connected pieces. And another is to see an entity as a simple stimulus/response interface (a “policy”): “If X happens, the entity will do Y”. For example, you could see a dog as an agent with goals and actions, or as a collection of organs, cells, molecules, or atoms, or as a thing that will yelp if you accidentally step on its paw.
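To make the contrast between frames concrete, here is a minimal sketch (in Python, with all names and thresholds invented for illustration) of one entity, a thermostat, described two ways: agentically, as goals plus actions plus a decision procedure, and as a bare stimulus/response policy. Both descriptions predict the same behavior; they just factor it differently.

```python
# Agentic frame: the thermostat "wants" the room at its goal
# temperature and chooses among its actions to get there.
class AgenticThermostat:
    def __init__(self, goal_temp):
        self.goal = goal_temp            # (a) a goal or value
        self.actions = ["heat", "idle"]  # (b) available actions
        self.belief = None               # (e) a belief about the world

    def sense(self, room_temp):          # (d) sensing the world
        self.belief = room_temp

    def decide(self):                    # (c) decision procedure
        # Choose whichever available action better serves the goal.
        return "heat" if self.belief < self.goal else "idle"

# Policy frame: a bare stimulus/response mapping, with no goals
# imputed: "If X happens, the entity will do Y."
def policy_thermostat(room_temp):
    return "heat" if room_temp < 20 else "idle"

# The two frames agree on what the thermostat does:
agent = AgenticThermostat(goal_temp=20)
agent.sense(18)
assert agent.decide() == policy_thermostat(18) == "heat"
```

For something this simple, the policy description is arguably just as good as the agentic one, which is part of why the thermostat is a borderline case.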
The main reason we often factor entities as agents is that many important entities in the world are much easier to understand and predict by reference to a set of goals and actions than by thinking about (say) the behavior of their components. Under most ordinary circumstances, if you want to predict the behavior of a dog reasonably well, you’ll have a much easier time understanding it as an agent than as a collection of organs. On the other hand, in exceptional circumstances, like if your dog is sick and you want to treat it, the reductive understanding becomes much more effective, and you might turn to someone who specializes in understanding and interacting with dogs reductively (i.e. a veterinarian).
People and dogs are fairly extreme cases, though: They are especially amenable to being predicted from agentic descriptions, compared to trying to predict them from their components. The behavior of a banana in the grocery store, by contrast, is usually easier to understand as a relatively inert composition of parts (its peel and its flesh). To think about the banana as having goals and actions (or, more often, as being part of a system with goals and actions) is useful in some cases, but if you’re trying to predict the banana’s near-term behavior and decide how to interact with it, the purely reductive frame is just as good, and simpler.
There’s a spectrum between humans and fundamental particles, here. In order of (roughly) decreasing “agency”, i.e. relative usefulness of applying the agentic frame, we might list adult humans, dogs, fish, beetles, jellyfish, trees, amoebae, viruses, rocks, water molecules, and electrons. There are lots of things that are confusing to try to place in this list, though: Where does the McDonald’s corporation rank? The United States? What about AlphaGo? Google Chrome? Your thermostat? A laptop? A dreamed version of a friend? A character in a novel?
It depends on what kinds of predictions you want to make, and what kind of knowledge about the entity you start with. As one learns more about an entity using alternative frames, the agentic frame may become relatively less useful. If you perfectly understand how all the parts of a tree work, and can easily think about how they act, there’s no need to factor the tree into goals and actions. As in the case of the banana, the tree’s goals and actions become superfluous as descriptions.
It’s interesting that the most fundamental things in the universe, the stuff of quantum field theory, at least do not appear to be especially well-described via the agentic frame. So, why does the agentic frame crop up so much?
I think the basic reason is selection pressure. Some properties of systems result in those systems being comparatively more common than others. A relatively inert example is the solar system: You might ask why everything in the solar system is in a relatively stable and periodic orbit. One reason is essentially that, nearly by definition, anything unstable tends to decay, either into a stable orbit or off to infinity. Stable things are self-perpetuating; in some sense “stable” is just another word for self-perpetuating. Life is a more interesting example: Earth is teeming with systems that have various proliferative properties, self-perpetuating properties, world-modeling properties, and so on. It’s commonly accepted that these exist, in the way they do, because they were selected for.
There are lots of agentic systems on Earth, and I think this is because agentic systems, i.e. those that are easier to describe and predict as goals + actions + trying, are selected for in the sense that they (at least in some cases that are common ’round these parts) proliferate more than similar but less-agentic systems.
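The selection argument above can be caricatured in a few lines of Python. In this toy model (all types and rates are invented; only the qualitative outcome matters), one kind of system perpetuates itself slightly more reliably than another, and after enough generations it dominates the population, not because anything “prefers” it, but simply because proliferation compounds.

```python
from collections import Counter

# Toy selection: each type of system copies itself at some rate.
# "agentic" replicates a bit faster than "inert"; the exact
# numbers are arbitrary stand-ins.
REPLICATION_RATE = {"agentic": 1.2, "inert": 0.9}

def step(population):
    # Each type's (expected) count is scaled by its replication rate.
    return Counter({kind: count * REPLICATION_RATE[kind]
                    for kind, count in population.items()})

population = Counter({"agentic": 10.0, "inert": 10.0})
for _ in range(20):
    population = step(population)

# Starting from equal numbers, the more self-perpetuating type
# now vastly outnumbers the other.
assert population["agentic"] > 100 * population["inert"]
```

This is of course far too crude to say anything about *why* agentic systems in particular out-replicate their neighbors; it only illustrates the general mechanism by which any self-perpetuating property becomes common.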