When we examine more carefully the relationships among our purposive influences, we find nothing so straightforward. For example, our desire to survive is certainly strong, but rarely is it so dominant that we are willing to forgo all but the safest activities. Those who claim to have a manifest purpose are sometimes observed not to be focused at all on that purpose. Most people have a set of important values whose prominence in their lives continuously ebbs and flows. They have career, family, friends, and avocations, all of which vie for attention and time. Indeed, the people who genuinely hold a single purpose - such as world-class athletes, musicians, or inventors - stand out precisely because they are so unusual. Interestingly, it is common for such individuals to experience burnout and depression in the midst of their obsessive pursuit.
Even establishing a general criterion such as “leading a happy and satisfying life” provides only limited guidance. We must discover which actual activities produce such a life, and they are likely to change over time. Further, it is often necessary to invest in activities that are not intrinsically satisfying in order to derive greater benefit later. How do we decide the extent to which we ought to make such investments at the expense of immediate gratification?
The point here is not that we lack substantive purposes or values with which to make decisions about our activities and directions. Quite the opposite - there are many of them, and the relationships among them are uncertain, variable, and often unstructured. One could argue that this is merely how we typically behave, but that we ought to operate in a more structured way. Such a claim of course assumes some prior or higher purpose against which to judge a purposive structure. More importantly, though, we can look at those who approximate such behavior and see outcomes we would typically consider unhealthy or even sociopathic.
In the field of artificial general intelligence, efforts are underway to model motivation and purpose. To the extent this aims at capturing something like human purpose, it is likely not comparable to a rules engine or even a static pattern-matching process. It may not even be possible to model it as a Markov process, because it depends on the path by which we arrived at our present state. Perhaps we could come close with some sort of time-varying, constrained optimization model. Instead, though, it is often assumed that an artificial agent will be imbued with an explicit and singular final purpose, and the logical consequence is usually that the AI endeavors to kill all humans to achieve that purpose. In some accounts, this is even the case if the purpose is to make humans happy. Conclusions like these could be taken as further evidence that the inscrutable organization of human purpose has a certain stability and desirability that is lost when it is oversimplified.
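To make the contrast concrete, here is a minimal, purely illustrative sketch of the kind of time-varying, constrained optimization just mentioned. The activities, caps, daily budget, and drifting weights are all invented for the example, and it deliberately ignores the path dependence noted above; it is meant only to show several competing values sharing a fixed budget, rather than a single fixed objective being maximized without limit.

```python
import math

# Illustrative sketch only: "purpose" as a time-varying, constrained allocation.
# Activities, caps, budget, and weight schedules are invented for the example.
ACTIVITIES = ["career", "family", "friends", "avocation", "rest"]
CAPS = {"career": 10.0, "family": 6.0, "friends": 4.0, "avocation": 4.0, "rest": 9.0}
BUDGET = 24.0  # total hours available in a day

def weights(day: int) -> dict[str, float]:
    """Values ebb and flow: weights drift sinusoidally over time (arbitrary phases)."""
    phases = {"career": 0.0, "family": 1.3, "friends": 2.1, "avocation": 3.4, "rest": 4.8}
    return {a: 1.0 + 0.5 * math.sin(0.2 * day + phases[a]) for a in ACTIVITIES}

def allocate(day: int) -> dict[str, float]:
    """Greedy fill by descending weight: optimal for this linear objective
    under per-activity caps and a shared time budget (fractional knapsack)."""
    w = weights(day)
    remaining = BUDGET
    plan = {a: 0.0 for a in ACTIVITIES}
    for a in sorted(ACTIVITIES, key=lambda a: w[a], reverse=True):
        spend = min(CAPS[a], remaining)
        plan[a] = spend
        remaining -= spend
    return plan

if __name__ == "__main__":
    for day in (0, 30, 60):
        print(day, {a: round(h, 1) for a, h in allocate(day).items()})
```

Even in this toy form, the “solution” is a shifting mixture of activities rather than all available time collapsing onto a single pursuit, which is precisely the structure a singular final purpose discards.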
Purpose is and probably ought to be a hot mess of desires, values, goals, and visions that constrain and contradict one another so as not only to keep life interesting and fresh, but also to keep us from going off the rails on a crazy train.