My Goal

I decided some months ago that I wanted to “solve” intelligence. At the time, I had no concrete idea of what I meant by “solve”. I knew I wanted to develop Human Level Machine Intelligence (HLMI), but not much beyond that. After some meditation (and learning a little more about AI), I settled on what exactly it is I want to do: resolve the theoretical stumbling blocks limiting progress in AGI. In particular, I want to take a first principles approach: formulate intelligence (and intelligent agents) from first principles, then refine the theory that approach generates. I do not expect to be in a position to make progress on these goals for the next 4–6 years (depending on how my education progresses), so this post will be edited in the future to better reflect my current position on my goals. It should be read as a statement of ambitions I intend to pursue in a postgraduate program, and then (if all goes well) as postdoctoral research.

My goal can be summarised as developing a satisfactory model of intelligence: doing for intelligence what has been done for computation. An example of a good model is the Turing machine (formally defined after the list below). Some criteria which the Turing machine model satisfies, and/or which I would want in my model (in no particular order), are:

  1. Timelessness: New practical advancements in the field should not render the model obsolete.
  2. Explanatory power: The model should explain the phenomenon being modelled. It should serve as a framework through which we understand and can reason about what we’re modelling. Using the model to reason about the phenomenon should take less mental bandwidth than reasoning about the phenomenon in the abstract. The model should reduce the inferential distance between us and whatever it is we’re trying to learn, and it should reduce (not increase) the complexity of our mental map of the phenomenon.
  3. Accuracy: The model should be accurate. It should cut reality at its joints, and correspond to whatever it is we’re trying to model.
  4. Predictive power: We should be able to make (falsifiable) predictions about the phenomenon we’re trying to model. A good model helps constrain our anticipations of observations regarding the phenomenon. This ties back into the accuracy of the model: if we discover new relationships in our model, they should correspond to relationships in the real world. Turing machines wouldn’t be a very good (universal) model of computation if super-Turing computation were feasible.
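
To make “doing for intelligence what has been done for computation” concrete, here is the standard formal definition of a Turing machine. The compactness of this definition is the kind of crispness I would want from a model of intelligence:

```latex
% A deterministic Turing machine is a 7-tuple
M = (Q, \Gamma, b, \Sigma, \delta, q_0, F)
% where Q is a finite set of states, \Gamma the tape alphabet,
% b \in \Gamma the blank symbol, \Sigma \subseteq \Gamma \setminus \{b\}
% the input alphabet, \delta : (Q \setminus F) \times \Gamma \to
% Q \times \Gamma \times \{L, R\} the transition function,
% q_0 \in Q the start state, and F \subseteq Q the set of halting states.
```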

The above is by no means a complete list. If the model is not useful, then the goal was not achieved. The principal aim is an implementable model of intelligence: a model that would enable the construction of a provably optimal intelligent agent. (I expect my analysis of intelligence to be asymptotic and resource-independent, so “provably optimal” here means “there does not exist a more efficient and/or effective algorithm”; one way this could be stated formally is sketched below.) If theoretical research doesn’t lead to HLMI, then it’s not a victory.
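
As a hedged sketch of what “provably optimal” could mean (my own restatement, not an established definition): fix an agent space Π and a performance functional V on it; an agent is optimal if no agent in the space strictly beats it.

```latex
% Hedged sketch: \Pi is the agent space, V a performance functional on it.
% An agent \pi^* \in \Pi is (weakly) optimal iff no agent does strictly better:
\forall \pi \in \Pi : \; V(\pi) \le V(\pi^*)
```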

In order to develop a model of intelligence, I expect I’ll take the following research path.

Goals

Foundations of Intelligence

  • Define “Intelligence”.
  • Develop a model of intelligence.
  • Develop a method for quantifying and measuring the intelligence of arbitrary agents in agent space (see the example measure after this list).
  • Understand intelligence and what makes certain agent designs produce more intelligent agents.
  • Develop a hierarchy of intelligent agents over all of agent space.
  • Answer: “Is there a limit to intelligence?”
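
For the quantification item above, one existing reference point is Legg and Hutter’s universal intelligence measure, which scores an agent π by its expected performance across all computable environments μ, weighted by simplicity:

```latex
% Legg–Hutter universal intelligence of an agent \pi:
% V_\mu^\pi is the agent's expected cumulative reward in environment \mu,
% K(\mu) the Kolmogorov complexity of \mu, E the set of computable environments.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Whether this particular measure cuts reality at its joints is debatable (K is incomputable, so Υ cannot be evaluated directly), but it illustrates the shape a quantitative answer could take.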

Formalise learning

  • Develop a model of learning.
  • Answer: What does it mean for a learning algorithm to be better than another?
  • Develop a method for analysing (asymptotically, at least for now; all analysis I currently plan to do is asymptotic) and comparing the performance of learning algorithms: on a particular problem, across a particular problem class, and across problem space; using a particular knowledge representation system (KRS), using various KRS, and across the space of possible KRS (a toy empirical version is sketched after this list).
  • Understand what causes the difference in performance between learning algorithms.
  • Determine the scope/extent of knowledge a given learning algorithm can learn.
  • Develop a hierarchy of learning algorithms capturing the entire space of learning algorithms.
  • Synthesise the results into a rigorous theory of learning (“learning theory”).

    Bonus

  • Develop a provably optimal (for some sensible definition of “optimal”) learning algorithm.
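
As a toy illustration of the comparison item above, here is a minimal empirical harness. All names in it (Learner, MeanLearner, NearestNeighbour, compare, linear_problems) are hypothetical illustrations, not a proposed formalism, and the real analysis I have in mind would be asymptotic rather than empirical:

```python
"""A toy harness for comparing learning algorithms across a problem class.

Everything here is an illustrative sketch, not a proposed formalism.
"""
import random
from typing import Callable, Protocol


class Learner(Protocol):
    def fit(self, xs: list, ys: list) -> None: ...
    def predict(self, x: float) -> float: ...


class MeanLearner:
    """Baseline: always predict the mean of the training targets."""
    def fit(self, xs, ys):
        self.mean = sum(ys) / len(ys)

    def predict(self, x):
        return self.mean


class NearestNeighbour:
    """Predict the target of the nearest training point."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))

    def predict(self, x):
        return min(self.data, key=lambda pair: abs(pair[0] - x))[1]


def compare(learners: list, sample_problem: Callable, trials: int = 1000) -> dict:
    """Mean absolute test error of each learner over random draws from a
    problem class (an empirical stand-in for asymptotic analysis)."""
    errors = {type(l).__name__: 0.0 for l in learners}
    for _ in range(trials):
        xs, ys, x_test, y_test = sample_problem()
        for learner in learners:
            learner.fit(xs, ys)
            errors[type(learner).__name__] += abs(learner.predict(x_test) - y_test) / trials
    return errors


def linear_problems() -> tuple:
    """One problem class: noisy linear functions on [0, 1]."""
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    xs = [random.random() for _ in range(20)]
    ys = [a * x + b + random.gauss(0, 0.05) for x in xs]
    x_test = random.random()
    return xs, ys, x_test, a * x_test + b  # test target is noise-free


if __name__ == "__main__":
    print(compare([MeanLearner(), NearestNeighbour()], linear_problems))
```

The asymptotic version of this would replace the empirical averaging with bounds on error as a function of sample size, holding uniformly over the problem class.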

Formalise knowledge

  • Develop a model of knowledge and of KRS.
  • Develop a method for quantifying and measuring “knowledge” (for example, we might consider the utility of the information contained, the complexity of that body of knowledge, and its form: structure, relationships, etc.); one candidate shape for such a measure is sketched after this list.
  • Develop a method for analysing and comparing KRS: using a particular learning algorithm, using various types of learning algorithms, and across the space of learning algorithms; on a particular problem, across a particular problem class, and across problem space.
  • Determine the scope/extent of knowledge a given KRS can represent.
  • Develop a theory for transfer of knowledge among similar (for some sensible notion of “similarity”) KRS, and among dissimilar KRS.
  • Understand what makes certain KRS “better” (according to however we measure KRS) than other KRS.
  • Develop a hierarchy of KRS capturing the entire space of KRS.
  • Synthesise the above results, together with learning theory, into a (rigorous) theory of knowledge (“knowledge theory”).

    Bonus

  • Develop a provably optimal (for some sensible definition of “optimal”) KRS.
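
For the quantification item in the list above, here is one hedged candidate shape for a knowledge measure, combining the two ingredients mentioned there (utility and complexity). The functionals U and C and the trade-off λ are placeholders rather than a proposal; C could, for instance, be instantiated by Kolmogorov complexity:

```latex
% Placeholder shape for a measure of a body of knowledge k held in a KRS R:
% utility U minus a complexity penalty C, with trade-off parameter \lambda.
\mathrm{Knowledge}(k; R) = U(k; R) - \lambda \, C(k; R)
```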

“Solve” Intelligence

  • Synthesise all of the above into a useful theory of intelligent agents.

    Bonus

  • Develop a provably optimal (for some sensible definition of “optimal”) intelligent agent.

Motivations

While my goal itself may be commendable, the same cannot necessarily be said of my motivations. I do not merely desire the advent of human level machine intelligence; it is important to me that I am the one responsible for solving AI. That said, here are my motivations (in no particular order):

  • To bring about the glorious transhumanist future.
  • Intelligence hasn’t yet been solved in theory, and someone needs to do it.
  • Desire for intellectual status (my heroes are all mathematicians: John von Neumann, Alan Mathison Turing, Carl Friedrich Gauss, Sir Isaac Newton, Pierre-Simon Laplace, Kurt Gödel, Blaise Pascal, etc.); I want to surpass them.