Wednesday 6 July 2011

Overcoming the Challenges With AI

Computer systems ought to be an aid to the human, but as we grow more ambitious, we delegate as much as we can to them. When systems aid the decision maker, great decisions can be made. When systems actually make the decisions, we have seen many "silly" decisions made, simply because systems follow their decision algorithms rather than instincts and feelings. Artificial Intelligence is quite often understood as the computerized simulation of human thinking; in its current form, however, it is built on formal logic. We will find that the customer does not always relate to the most logical decision - as illogical as that sounds! But this is the human way, so we must find ways to overcome this deficiency in our Artificial Intelligence (AI) systems.

It may be that we haven't quite turned the corner to the most sophisticated forms of AI, and we believe we can achieve much more. Technology has not been growing linearly, as we once believed; based on historical growth, we now have evidence that it grows exponentially. This means we can expect to see very human-like systems in the very near future. AI systems may not take the form we expect, either. For example, while consumers picture a neural network of blinking lights and microscopic transistors, researchers have built a system with a functioning animal brain installed as its CPU, and a lot of progress has been made with this method of artificial decision making. With this research and other breakthroughs, attributes we thought could never be handled by an AI suddenly are. We do not have enough time to discuss every aspect of an AI system, but the least we can do is look at how these systems are developed and at some ways to overcome the obstacles associated with them.

Too much complexity: Think of an AI system as being split into three parts. The first part is the input, where it is fed the information used to make decisions. The second part is what we call the "hidden node" layer, where the algorithm lives that processes the information and decides what the outcome will be. The third part is the output, where it organizes and presents its decisions to the end user or a connecting system.
Concerning the hidden node section, you can have multiple nodes; in fact, the more hidden nodes you have, the more complex your AI system will be. It follows that if you have too many hidden nodes, the complexity can become too high, and there are too many patterns for the neural network (NN) to learn. Who would think a computer could have too many patterns to learn? It is not so much a problem of CPU efficiency as it is an introduction of inconsistencies into your algorithms, causing multiple system failures, as well as an overload of specifics, such that the system no longer recognizes outliers. So use fewer hidden nodes for a simpler or more basic problem.
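The three-part structure and the cost of extra hidden nodes can be sketched in a few lines. This is an illustrative toy in plain NumPy with invented sizes, not a production network; the point is how each additional hidden node brings a full set of new weights with it.

```python
import numpy as np

def make_network(n_in, n_hidden, n_out, seed=0):
    """Build weights for a one-hidden-layer network:
    input -> hidden node(s) -> output, as described above."""
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((n_in, n_hidden))   # input -> hidden
    w2 = rng.standard_normal((n_hidden, n_out))  # hidden -> output
    return w1, w2

def forward(x, w1, w2):
    """Feed inputs through the hidden layer to the output."""
    hidden = np.tanh(x @ w1)   # hidden node activations
    return hidden @ w2         # output: the system's "decision"

def n_parameters(n_in, n_hidden, n_out):
    """Every extra hidden node adds (n_in + n_out) weights,
    so complexity grows quickly with hidden-layer size."""
    return n_in * n_hidden + n_hidden * n_out
```

With 10 inputs and 1 output, 5 hidden nodes give 55 weights, while 100 hidden nodes give 1,100 - twenty times the patterns the network can (and must) fit.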

Memorization: You want your system to actually "learn", not just "memorize". When it merely memorizes, it is unable to properly analyze new problems or outliers, and there is no need to create overly complex systems to compensate for memorization deficiencies. You can mitigate this problem by testing the system with brand new information to see whether it has learned anything yet. Instruct your analysts and programmers to develop algorithms that find ways to categorize brand new information and store it, but that also receive feedback from the output and store that too (effectively learning from mistakes, as humans do).
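A toy sketch of why testing with brand new information exposes memorization: the "memorizer" below (an invented stand-in, not a real learning algorithm) scores perfectly on data it has stored but fails on anything it has never seen. The task (learn y = 2x) is made up for illustration.

```python
def fit_memorizer(examples):
    """A 'memorizer': stores every training example verbatim
    instead of learning the underlying rule."""
    return dict(examples)

def predict(model, x, default=0.0):
    """Answers perfectly on seen inputs, but has no idea
    what to do with new ones."""
    return model.get(x, default)

# Hypothetical task: learn y = 2*x from a handful of points.
train = [(x, 2 * x) for x in range(0, 10)]
test  = [(x, 2 * x) for x in range(10, 15)]   # brand new information

model = fit_memorizer(train)
train_err = sum(abs(predict(model, x) - y) for x, y in train)
test_err  = sum(abs(predict(model, x) - y) for x, y in test)
```

Training error is zero - it looks like the system "learned" - yet the held-out error is large, which is exactly the gap the brand-new-information test is designed to reveal.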

Over Learning: If you "teach" the system for too long, it can "over learn". This can cause it to get "paranoid" and see patterns that do not truly exist, and from that point onwards errors start increasing. This would render your system useless, or leave it vulnerable to a hacker's manipulative tactics. You can mitigate this by stopping training periodically and testing to see at what point the errors stop decreasing. This is why you should spend ample time developing ways to test your system.
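The "stop training when errors stop decreasing" idea is what practitioners call early stopping. A minimal sketch, assuming we already have a validation error measured after each training epoch (the sample error curve below is made up to show a fall-then-rise):

```python
def train_with_early_stopping(error_curve, patience=2):
    """Return the epoch where validation error bottomed out,
    stopping once it has failed to improve `patience` times
    in a row, rather than training for a fixed (long) time."""
    best_err = float("inf")
    best_epoch = 0
    waited = 0
    for epoch, err in enumerate(error_curve):
        if err < best_err:
            best_err, best_epoch = err, epoch
            waited = 0          # improvement: keep training
        else:
            waited += 1         # no improvement this epoch
            if waited >= patience:
                break           # errors stopped decreasing: stop
    return best_epoch, best_err

# Validation error falls, then rises as the system "over learns".
curve = [0.9, 0.6, 0.4, 0.35, 0.38, 0.45, 0.60]
stop_epoch, lowest_err = train_with_early_stopping(curve)
```

Here training would be cut off after epoch 3, just before the "paranoid" phase where error climbs back up.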

Local Solutions: You may receive a solution that is merely a local one. So although the solution works, it may not be the optimal solution. You can mitigate this problem by testing the solutions you receive against one another to determine the best one.
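A minimal sketch of comparing candidate solutions, using a hypothetical objective with two peaks: a greedy local search started near the lower peak gets stuck there, while restarting from several points and keeping the best result recovers the better solution.

```python
import math

def f(x):
    """Illustrative objective: a local peak near x = 1 and a
    higher, global peak near x = 5 (invented for this sketch)."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

def hill_climb(f, x, step=0.1, iters=200):
    """Greedy local search: only ever moves to a better
    neighbour, so it can get stuck on a local peak."""
    for _ in range(iters):
        for nxt in (x - step, x + step):
            if f(nxt) > f(x):
                x = nxt
    return x

def best_of_restarts(f, starts, **kw):
    """Run the search from several starting points and keep the
    best solution found - testing the solutions received to
    determine the best one, rather than trusting the first."""
    return max((hill_climb(f, s, **kw) for s in starts), key=f)
```

Started from 0, the climber settles near the local peak at x = 1; restarting from a second point and comparing finds the global peak near x = 5.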

Limited Capabilities: The hard truth is that your AI system can only solve the problems it was made to solve. As a result, some problems cannot be solved efficiently by the system. The good news is that it can solve the ones it should!

The takeaway message is that the AI system should not completely replace the human being. If it does, the process must be basic or straightforward in nature. A system must be in place to monitor how the AI system treats outliers. Also, human review and intervention are absolutely necessary to the effectiveness of the system.

Article Source: http://EzineArticles.com/6340165
