Jamie Lee Preece


Journal


#technology

8th October, 2018

Machine Learning: Introduction and Techniques

Within traditional programming, input data is commonly processed using a collection of functions and linear instructions. It is the programmer’s objective to assign the correct logical paths and processes for the machine to follow. These instructions are most often executed in sequential order until the desired result is met.

Applied refinement of modern software usually results in deceptively smart systems and applications. The fundamentals of computing and software operation, though, are inherently dumb. Because software is mainly developed for specific tasks and operations, any ambiguous changes to input data outside of its programming will result in errors and/or exceptions. This becomes a substantial downfall of scripted programming, as it requires long-term updates and support to exist in a dynamic environment.

A direct answer to this issue is conceptualised within machine learning (ML). The architecture itself utilises artificial neural networks (ANN), which allow the machine to build an internal model and understanding of the data it collects. The program, or artificial intelligence (AI), can then build dynamic functions to cope within a changing environment, essentially granting the ability to learn. The need for auxiliary human programming then becomes redundant, as the machine becomes capable of assisting its own evolution and ongoing optimisation.

Birth of a Neuron

Unlike traditional approaches to software development, every AI based on an ANN has to undergo development and training in order to become proficient in its task. Each new instance of machine learning begins with both input data and a ruling algorithm. The machine has to make sense of the input data, from which it constructs neurons and pathways in a bid to progress. The end-goal algorithm is used as a set of instructions, so the AI can decide what task each neuron will perform. The result is an interconnected web of neurons which handles the processing of data, and which can then be used as a dynamic interface that aims to achieve the end goal.

An example of this can be illustrated by training a machine to play the classic Nintendo video game, Super Mario. Because the in-game progression is mainly linear in its side-scrolling nature, it is a suitable candidate for demonstrating a neural network. For an overall goal, the AI need only understand that the distance travelled east directly correlates with overall progression. Slight additional parameters can then be introduced to refine the end goal, such as the time taken and the number of deaths experienced. These parameters are useful for gauging the overall development, or ‘fitness level’, of the AI.

Due to the dynamic development of neural networks, each new instance of ML will also most likely produce independent and contrasting results. To improve the efficiency of global development, multiple simultaneous instances of the same AI can be run in order to increase the chance of developing higher-performing sets of neurons. Some instances will learn faster than others, and some will approach the task in different ways. This also allows for insightful and interesting results between developing instances… The initial application and development of this process is referred to as the first generation.
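To make the ‘fitness level’ concrete, here is a minimal Python sketch of how those parameters might be combined into a single score and used to rank a generation. The weightings, and the fitness helper itself, are illustrative assumptions rather than part of any particular ML framework.

def fitness(distance_east, time_taken, deaths):
    """Score one play-through: reward rightward progress, penalise
    slow runs and deaths. The weights are assumed for illustration."""
    return distance_east - 0.1 * time_taken - 50.0 * deaths

# Rank a generation of instances by fitness, best first.
generation = [
    {"id": 1, "distance": 350.0, "time": 90.0, "deaths": 2},
    {"id": 2, "distance": 420.0, "time": 120.0, "deaths": 1},
]
ranked = sorted(
    generation,
    key=lambda g: fitness(g["distance"], g["time"], g["deaths"]),
    reverse=True,
)
print(ranked[0]["id"])  # the strongest candidate of this generation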
Within this generation of grouped instances, the in-game character would mostly struggle to make any significant progress, mostly standing still or making sporadic movements. This is the AI trying to propagate neurons and map the available controls to logical functions. One neuron, for example, may be used to move the character further right, whilst another makes the character jump. The more complex a network becomes, the more likely it is to succeed in the overall task. Further complexity is then left to develop until a progressional bottleneck is reached.
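As a rough illustration of that control mapping, the sketch below wires a handful of randomly initialised neurons to button presses. The Neuron class, the control set and the four-value observation are all assumptions made for this example, not a real emulator interface.

import random

BUTTONS = ["left", "right", "jump", "run"]  # assumed control set

class Neuron:
    """One artificial neuron: random weights over the observed inputs,
    pressing its button when the weighted sum clears its threshold."""
    def __init__(self, n_inputs, button):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.threshold = random.uniform(0, 1)
        self.button = button

    def fires(self, observation):
        total = sum(w * x for w, x in zip(self.weights, observation))
        return total > self.threshold

def act(neurons, observation):
    """The buttons pressed this frame: whichever neurons fire."""
    return [n.button for n in neurons if n.fires(observation)]

# A first-generation network is randomly wired, hence the sporadic,
# mostly useless movements described above.
network = [Neuron(n_inputs=4, button=b) for b in BUTTONS]
print(act(network, [0.2, 0.9, 0.1, 0.5]))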
It is at this point that the performance of each instance is analysed and compared, allowing the most desirable sets of neurons to be selected for merging. The succeeding candidates have their neural networks merged, allowing specific strengths to be unified into one entity. Not only does this help in excluding unwanted behaviour and developments, but it also acts as a form of natural selection. The task of merging is usually performed by humans in low-end and developing instances of machine learning. However, additional AIs can be assigned to this task, allowing the process to become completely automated. The merging of these networks also incorporates some small mutations, much like natural evolution. This allows for some experimental deviations and guaranteed changes between ongoing evolutions.

This first evolution, or second generation, will begin learning with all the best neural paths inherited from its parents. It is this merging of neurons which almost always breaks through bottlenecks and allows a generation to excel past the successes of its parents. This is a direct example of the machine becoming more intelligent. Unlike natural evolution, where only genes are passed to subsequent generations, machine learning directly allows the next generation to extend and build upon fully developed neural networks… The technical term for this process is neuroevolution.

In our example of Super Mario, subsequent generations will progress through the level with increasing ease. Once the machine is able to complete a level without deaths, the AI could be left either to optimise its current processes, or be assigned additional algorithms from which it can progress further. As the machine is able to quickly master such simple games, the point of advanced execution is reached somewhat exponentially. AIs then have the ability to far exceed the skill and efficiency of humans, which makes them a very desirable technology for automating tasks.

“Our intuition about the future is linear. But the reality of information technology is exponential, and that makes a profound difference. If I take 30 steps linearly, I get to 30. If I take 30 steps exponentially, I get to a billion.” - Ray Kurzweil
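A minimal sketch of that merge-and-mutate step, assuming each parent network can be flattened into a plain vector of weights. Real neuroevolution systems (NEAT, for example) also merge topology, so this is a simplification.

import random

def crossover(parent_a, parent_b):
    """Merge two parents' weight vectors, choosing each gene from
    one parent or the other at random."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(weights, rate=0.05):
    """Perturb a small fraction of weights, guaranteeing variation
    between generations, much like natural mutation."""
    return [
        w + random.gauss(0.0, 0.1) if random.random() < rate else w
        for w in weights
    ]

def next_generation(survivors, size):
    """Breed the selected candidates into a fresh population."""
    children = []
    while len(children) < size:
        a, b = random.sample(survivors, 2)
        children.append(mutate(crossover(a, b)))
    return children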
Further Learning

Another method used in training ML-based AI is reinforcement learning. As the term suggests, the AI builds up an understanding of the input data through boolean (true or false) feedback. This is achieved by returning the final output back into the AI, where the data is stored within the internal model for future reference and comparison. Over time, the machine can become more efficient at handling dynamic inputs, as the internal model continues to build a wider understanding of the data.

To present a digestible example, an image identifier application can be used to showcase the logic involved. For this in particular, the machine will only be asked to identify pictures of either a cat or a dog. Using two photos of the respective animals in the base model, the machine can operate with less need for advanced algorithms, as it can use this data as an initial learning reference. Within the dynamic learning process, additional images of both cats and dogs are then run through the input stream. The machine can immediately start producing probabilities based on the initial images, which are then assessed and returned to the machine as reinforcement.
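Returning to the identifier itself, the toy loop below shows the shape of that feedback cycle. A single number stands in for an image and the ‘internal model’ is just one stored reference value per class; both are deliberate simplifications for illustration.

import random

# Assumed starting references, standing in for the two base photos.
model = {"cat": 0.3, "dog": 0.7}

def predict(feature):
    """Classify by whichever stored reference is closest."""
    return min(model, key=lambda label: abs(model[label] - feature))

def reinforce(feature, true_label):
    """One feedback step: predict, receive the boolean outcome, and
    fold it back into the internal model."""
    guess = predict(feature)
    correct = guess == true_label
    model[true_label] += 0.1 * (feature - model[true_label])
    return correct

# A stream of labelled inputs; features above 0.5 are 'dogs' here.
stream = [(f, "dog" if f > 0.5 else "cat")
          for f in (random.random() for _ in range(100))]
accuracy = sum(reinforce(f, label) for f, label in stream) / len(stream)
print(f"accuracy over the stream: {accuracy:.2f}")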
Even with minimal feedback, however, the machine can reach relatively advanced levels of identification through its neural network alone. The instance can develop some unique neurons which identify certain features of the animals, as subtle as they may be. This, combined with an internal model and ongoing feedback, results in quite a powerful method of implementing machine learning.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” - Eliezer Yudkowsky

Outcomes

The human brain is estimated to have on average 100 billion neurons, with each neuron connected to around another 1,000 neurons. In our examples of simple neural networks, only a handful of neurons directly deal with the input data itself. For a network as complex as the human brain to be simulated, it would require a computer at least 3.5 times more powerful than the most powerful computer on earth.

As interesting and impressive as machine learning can be, it is still considered ‘weak AI’ in comparison to other developing AI technologies. The concept of ML is regarded as the next level of automation rather than significant intelligence. However, the potential of ML is still staggering and can easily surpass human abilities within specific tasks. The technology must be monitored and respected, as abuse of such things can result in repercussions that arise at exponential rates.

Due to Moore’s Law and the advent of affordable and powerful processors, ML is increasingly used within a plethora of industries. Recent years have seen companies in many governing and commercial sectors adopt ML into their core technologies. This trend will more than likely continue in years to come, as the technology has great potential.

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” - Elon Musk



#technology

8th October, 2018

Neural Networks: From Human to Machine

Biological Neural Networks

Within central nervous systems, neurons handle and relay information to other interconnected functions and processing pathways. They communicate tasks and serve as key processors, allowing a biological entity to survive and excel within a host of dynamic environments. The cumulative complexity of these networks not only allows consciousness to exist, but also bodily functions to operate and memories to be stored. They are a unique and extraordinary feat of nature; entirely essential to life as we know it.

Addressing any single neuron, the overall functioning anatomy can be demonstrated via three major components: dendrites, the cell body, and axons. Dendrites, from the Greek for “tree”, receive information, or electrical impulses, from other neurons. As the name suggests, they resemble the roots of a tree and emerge from the cell body. The nucleus, housed within the cell body, handles, or processes, the data: it decides what action to take, and whether adjacent neurons are to be informed, or triggered, via electrical impulses. Axons are also root-like in their biological appearance, connecting the cell body to other neurons and allowing the exchange of information. These functions are thus essential to natural biological intelligence, as they amass to power a vast, complex and interconnected entity, entirely predefined within one’s DNA.

The modern human brain consists of an estimated 86 to 100 billion neurons. With each neuron connected to around another 1,000 neurons, the scope of complexity is staggering. This complexity does not, however, come from sheer quantity alone, but from the overall interconnected functionality of the network. Each neuron can independently navigate connections to other neurons, communicating and strengthening its functions via electrical impulses. Propagation of additional neurons within this network is thus a direct method of increasing intelligence. Newer pathways are a response to stimuli sent from both the conscious and subconscious mind; the more these pathways are triggered, or used, the more they grow and reinforce.

Observation and speculation of neural behaviour within the brain is both an intriguing and controversial subject. Under observation, neurons seem to each possess individual behaviour, as they are known to compete to store critical memories and experiences. They also independently navigate the network, looking for meaningful connections amongst adjacent neurons. The idea that each neuron extends a primitive consciousness is not deemed accurate within modern neuroscience; rather, neurons have evolved to respond to certain stimuli and to function as part of a greater whole.

Each human individual has the biological potential to evolve their intellect and understanding of the world around them. It is argued in many fields that significant advantages can be attained through early exposure to key stimuli, covering both nurture and nature in the early stages of development. However, age does not entirely restrict improvements and restructuring within an already matured network. There are even cases of individuals experiencing apparent super intelligence, or personality changes, seemingly overnight. Usually caused by severe head trauma, the damage is thought to create many breaks, or disruptions, within a mass of interconnected neurons. The damaged pathways attempt to re-establish connections, producing an alternative structure, or formation, to before.
This can then have a knock-on effect, resulting in an altered consciousness or personality. Though much is understood about essential biological functions, modern science and philosophy are still unable to fundamentally explain what consciousness really is. It is thought to exhibit substrate independence: our biological systems support the existence of the mind, but how it comes to be remains unestablished. The mind itself is not the biology where it resides, but exists only as a result of it.

“We are the cosmos made conscious and life is the means by which the universe understands itself.” ― Brian Cox

Within Philosophy

For what is already known, the human brain could potentially be the most advanced biological entity in the known universe. It not only allows us to excel within many environments, but drives us to prolong DNA and allows individualistic consciousness to exist and flourish, an integral archetype of the entire human condition. Topics of recreating such biological feats have resonated through the realm of philosophical question over the decades, giving the topic much substance and controversy. As an overall concept, the thought of accomplishing such tasks has instilled moral concern and fear in many great philosophical thinkers and modern philanthropists. However, in the age of digital accomplishments, practices and technologies are rapidly advanced upon with increasingly little caution and philosophical thought.

“If man realises technology is within reach, he achieves it. Like it's damn-near instinctive.” — Major Motoko Kusanagi [Ghost in the Shell - 1995]

Although replicating complete human-level consciousness is not yet possible, it is predicted to be successfully emulated within the digital domain in the next twenty years. This estimate is partly due to Moore’s Law and other similar predictions surrounding computational advances. However, it is also thought that computational power will eventually stagnate in terms of raw processing, and that other, more efficient programming practices will allow better use of existing technology. To successfully emulate this biological network with today’s technology would take a computer four times more powerful than the world’s fastest supercomputer.

Application Implementation

Artificial neural networks (ANN) are loosely based on the current understanding of neural biology. The concept aims to emulate biological processing, with distinct alterations. The architecture itself is a framework that enables AI and other machine software to produce a metaphorical map of how neurons handle data: an information processing paradigm.

Instead of replicating the human brain in its vast, interconnected, asynchronous state, an ANN focuses on a simplified and specific arrangement of neurons which specialise in deciphering key information. Neurons are arranged in subsequent clusters, which handle different, nuanced variables within the input data. Each cluster works to produce an output, which is passed into the next cluster’s inputs, finally producing a weighted outcome, or probability. The topology is significantly less complex than the human brain’s and can be illustrated as a tree-branch type structure. Furthermore, the information processing is synchronous, handling a single stream of data at a time.

Generalising network structures is accomplished by breaking neural clusters down into three major layers. The first cluster, or layer, as it is commonly referred to, is known as the input layer (x). Its neurons exist to gather information from the outside world and relay the data to suitable neurons within the next clustered layer. The second major layer, or layers, is every layer in between the input and output layers (h, h1, h2, h3…). These contain the more complex analysis and weighting of probabilities within the network. As for the output layer (y), it produces the final outcome, or the specific result of the network’s assigned task. The accuracy of this will, however, depend on the sophistication and development of the neural network.
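To ground that layer structure, here is a minimal sketch of a single forward pass through such a network. It assumes sigmoid neurons and arbitrarily chosen layer sizes (3 inputs, 4 hidden neurons, 2 outputs); none of these choices come from the article itself.

import math
import random

def layer(inputs, weights):
    """One cluster of neurons: each neuron takes a weighted sum of the
    previous layer's outputs and squashes it through a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
        for ws in weights
    ]

# x -> h -> y, with randomly initialised (untrained) weights.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_y = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.5, -0.2, 0.8]   # input layer: raw data from the outside world
h = layer(x, w_h)      # hidden layer(s): intermediate weightings
y = layer(h, w_y)      # output layer: the weighted outcome, or probability
print(y)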
Neuron Processing

To understand how neurons actually process data within ANNs, a closer look at activation functions reveals how each cluster of neurons amasses to produce its outputs. Within hidden layers, neurons typically receive a weighted input from the previous layer. This input is used to decide whether or not that specific neuron should trigger, a choice dictated by one of many potential activation functions. Depending on the intended goal, there are many mathematical functions used to process and handle input data within ANNs.

A step function produces one of two fixed outputs, for example -1 and +1, dictated by a measuring threshold. If the input is below that threshold, the lower value is sent to the output; for values above the threshold, the higher value is selected. The lower output can equally be configured to zero, keeping the same threshold, so that the result is always either a one or a zero. This is useful for filtering out larger chunks of raw data that are not applicable, and is also sometimes known as a gate.

To produce ‘smooth’ outputs with floating-point values between zero and one, a sigmoid function can be used, its values determined by the mapping of a curve (the sigmoid curve). The centre point of the curve runs through the zero point of the input (x), which equates to exactly half a value, or 0.5, on the output. The curve begins at zero and ends at one: as the input (x) spans towards positive or negative infinity, the outputs approach one and zero respectively. Depending on the gain of the sigmoid curve, the end points nearest to zero and one will correlate to different ranges on the (x) axis. When an input falls within the changing range of the curve, a floating-point value is returned. The range of the curve can also be adjusted via mathematical declarations; however, an input of zero will always return the half value. Overall, this function produces a more natural output range due to its non-binary nature, which is why it is so heavily used within ANNs.

As an extension of the previous function, there is an additional variation called the hyperbolic tangent function. This performs a similar objective, but instead produces output values between -1 and +1. The shift means that an input of zero also produces an output of zero; the output has been signed, or shifted, so that it is centred on zero. There are also linear functions, which produce exactly that: a linearly mapped output of the input data. These are more commonly utilised when developing ANNs augmented with reinforcement learning.

“Our external physical reality is a mathematical structure.” — Max Tegmark
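As a companion to those descriptions, the sketch below implements each activation function directly. The sigmoid is the standard 1 / (1 + e^-x) form; the threshold and sample inputs are arbitrary choices for illustration.

import math

def step(x, threshold=0.0):
    """Gate: one of two fixed outputs, either side of a threshold."""
    return 1 if x >= threshold else 0

def sigmoid(x):
    """Smooth curve through (0, 0.5), approaching 0 and 1 at the
    extremes without ever reaching them."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """The sigmoid's signed relative: centred so that an input of
    zero gives an output of zero, ranging from -1 to +1."""
    return math.tanh(x)

def linear(x):
    """Pass-through mapping, as used with reinforcement setups."""
    return x

for f in (step, sigmoid, tanh, linear):
    print(f.__name__, [round(f(v), 3) for v in (-2.0, 0.0, 2.0)])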
ANN Advancement

As a technology, ANNs cannot be credited as intelligent. They can, however, be deceptively, or perceptually, intelligent, depending on the level of refinement. On their own, ANNs are not able to evolve or refine their existing architecture, or even change any basic, low-level functions. In fact, if a network is left to run and process data, the outcomes of that processing will essentially always stay the same. In the evolution of smart networks that grow and become more ‘human-like’, it is the presence of artificial intelligence (AI) that dictates a network’s evolution, enabling a perceptually intelligent outcome. Biological neural networks also differ in this regard: under study, the neurons themselves are seemingly intelligent, navigating connections and forming new pathways with neuroplasticity. ANNs, on the other hand, are literally a series of mathematical equations and do not possess any underlying intelligence. They remain in the state in which they were left, and need a form of controlling body to influence any functional development.

The managing of ANNs can indeed be performed via human operation, and very basic networks are often manually manipulated, as this presents a good way to learn about artificial neurons. However, due to AI’s ability to identify complex correlations within substantial amounts of data, it becomes the ideal candidate to operate complex and evolving networks. Depending on the type of artificial intelligence used in ANN augmentation, the construction and/or evolution of the network can vary. There are alternative approaches in AI application, such as machine learning and deep learning, which both make use of an ANN as their core framework. The AI dictates what it regards as progress, evaluating the network’s evolution and further optimising clusters of neurons. This allows the AI itself to be controlled via a higher influence, such as algorithms or additional AIs.

The combination of these two technologies coalesces to produce what is considered an ‘intelligent’ technology. However, the level of actually perceived intelligence depends on a multitude of factors in the construction and training of such augmentations. Instances such as machine learning can vary, as the advancements of these networks are all tailored to different perceptually intelligent processes. Many instances of machine learning are also considered ‘dumb AI’, as they are too specific within their function and cannot process information outside of their intended purpose.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky


