{"id":19781,"date":"2022-06-29T02:13:37","date_gmt":"2022-06-28T22:13:37","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/neuromorphic-processor-motive\/19781\/"},"modified":"2022-06-29T02:13:37","modified_gmt":"2022-06-28T22:13:37","slug":"neuromorphic-processor-motive","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/neuromorphic-processor-motive\/19781\/","title":{"rendered":"A mind of their own: we need to talk about neuroprocessors"},"content":{"rendered":"<p>Kaspersky recently announced an investment in Motive NT, which is developing its own neuroprocessor, \u201cAltai\u201d. Let\u2019s take a look at what neuroprocessors are, how they differ from conventional processors, and why this field looks so promising for the development of computer technology.<\/p>\n<h2>Computer brain<\/h2>\n<p>Any modern computer, tablet, smartphone, network device or digital player has a central processing unit (CPU)\u00a0\u2014 a general-purpose electronic-circuitry device for executing computer code. The operating principles of the traditional processor were <a href=\"https:\/\/en.wikipedia.org\/wiki\/Von_Neumann_architecture\" target=\"_blank\" rel=\"nofollow noopener\">laid down<\/a> way back in the 1940s, but, perhaps surprisingly, haven\u2019t changed much since then: CPUs read commands and execute them sequentially. At the CPU level, any program is broken down into the simplest of tasks: commands like \u201cread from memory\u201d, \u201cwrite to memory\u201d, \u201cadd two numbers\u201d, \u201cmultiply\u201d, \u201cdivide\u201d, etc. There are many nuances to CPU operation, but what\u2019s important for today\u2019s discussion is that for a long time CPUs could perform only one operation per cycle. And these cycles could be very numerous indeed: at first hundreds of thousands, then millions, and today billions per second. 
Nevertheless, until recently (the mid-2000s), a typical home computer or laptop had only one processor.<\/p>\n<p>Multitasking, or the ability to execute several programs simultaneously on one CPU, was achieved through resource allocation: several clock cycles are given to one program, then the resources are assigned to another, then to a third, and so on. When affordable multicore processors came onto the market, resources could be allocated more efficiently. It then became possible not only to run different programs on different cores, but also to execute one program on several cores simultaneously. At first, this was no easy task: for some time, many programs and games were not optimized for multicore or multiprocessor systems.<\/p>\n<p>Today\u2019s consumer CPUs can have 16 or even 32 cores. This is an impressive figure, but far from the maximum possible \u2014 even for conventional consumer technology. For instance, the Nvidia GeForce RTX 3080 Ti video card has 10,240 cores! So why the huge difference? Because traditional CPU cores are far more complicated than the processing cores found on video cards. An ordinary CPU core performs a limited set of simple functions, but the cores of specialized graphics processing units (GPUs) in video cards are even more primitive: they\u2019re capable of only very basic operations, which they perform very quickly; this comes in handy when you need billions of such operations per second. In computer games, for example, calculating the lighting of a scene requires a lot of relatively simple computations for each point in the image.<\/p>\n<p>Despite these nuances, the processing cores that go into conventional CPUs and video cards don\u2019t differ fundamentally from each other. However, neuromorphic processors are radically different from both CPUs and GPUs. They do not attempt to implement a set of elements for performing arithmetic operations \u2014 sequentially or in parallel. 
Instead, they aim to reproduce the structure of the human brain!<\/p>\n<p>In computing, the smallest building block is considered to be the lowly transistor: there are several billion such microscopic elements in a typical CPU in any computer or smartphone. In the human brain, the equivalent basic element is the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Neuron\" target=\"_blank\" rel=\"nofollow noopener\">neuron<\/a>, or nerve cell. Neurons are connected to each other by <a href=\"https:\/\/en.wikipedia.org\/wiki\/Synapse\" target=\"_blank\" rel=\"nofollow noopener\">synapses<\/a>. Several tens of billions of neurons make up the human brain, which is a highly complex self-learning system. For decades, the discipline known as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Neuromorphic_engineering\" target=\"_blank\" rel=\"nofollow noopener\">neuromorphic engineering<\/a> has been focused on reproducing, at least partially, the structure of the human brain in the form of electronic circuits. The Altai processor, developed using this approach, is a hardware implementation of brain tissue \u2014 with all its neurons and synapses.<\/p>\n<h2>Neuroprocessors and neural networks<\/h2>\n<p>But let\u2019s not get ahead of ourselves. Although researchers have succeeded in reproducing certain elements of the brain structure using semiconductors, this doesn\u2019t mean we\u2019ll be seeing digital copies of humans any time soon. Such a task is way too complicated, though it does represent the holy grail of such research. In the meantime, neuroprocessors\u00a0\u2014 semiconductor copies of our brain structure\u00a0\u2014 have some rather practical applications. 
They are needed to implement machine-learning systems and the neural networks that underpin them.<\/p>\n<p>A neural network or, more precisely, an <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_neural_network\" target=\"_blank\" rel=\"nofollow noopener\">artificial neural network<\/a> (as opposed to the natural one inside our head) consists of a set of cells capable of processing and storing information. The classic model of a neural network, the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Perceptron\" target=\"_blank\" rel=\"nofollow noopener\">perceptron<\/a>, was developed back in the late 1950s. This set of cells can be compared to a camera matrix, but one that\u2019s also capable of learning, interpreting the resulting image, and finding patterns in it. Special connections between cells, and different types of cells, process information so as to distinguish, for example, between alphabet cards held in front of the lens. But that was over 60 years ago. Since then, over the past decade especially, machine learning and neural networks have become commonplace in many mundane tasks.<\/p>\n<p>The problem of recognizing letters of the alphabet has long been solved; as motorists know only too well, speed cameras can recognize the license plate of their vehicle\u00a0\u2014 from any angle, day or night, even if covered in mud. A typical task for a neural network is to take a photo (for example, of a stadium from above) and count the number of people in it. These tasks have something in common: the inputs are always slightly different. An ordinary, old-fashioned program would likely be able to recognize a license plate photographed from straight ahead, but not at an angle. To train a neural network, we feed in myriad photos of license plates (or something else), and it learns to distinguish the letters and numbers they consist of (or whatever other features the input has). 
And sometimes it becomes so expert that, say, in the medical field it can make a diagnosis better \u2014 or earlier \u2014 than a doctor.<\/p>\n<p>But let\u2019s get back to the implementation of neural networks. The computations required to run a neural network algorithm are rather simple, but there are a great many of them. This job best suits not a traditional CPU but a video card, with its thousands or tens of thousands of computation modules. It\u2019s also possible to make an even more specialized chip that performs only the set of computations needed for a particular learning algorithm; this would be a little cheaper and a touch more efficient. But all these devices still build the neural network (the set of cell nodes that perceive and process information, connected to each other by multiple links) at the software level, whereas a neuroprocessor implements the neural network scheme at the hardware level.<\/p>\n<p>This hardware implementation is significantly more efficient. Intel\u2019s <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cognitive_computer#Intel_Loihi_chip\" target=\"_blank\" rel=\"nofollow noopener\">Loihi<\/a> neuroprocessor consists of 131,072 artificial neurons, interconnected by a great many more synapses (over 130 million). An important advantage of this scheme is low power consumption when idle, whereas conventional GPUs are energy-hungry even when not in operation. This, plus the theoretically higher performance in neural network training tasks, results in much lower overall power consumption. The first generation of the Altai processor, for instance, consumes a thousand times less power than an analogous GPU implementation.
Importantly, neuroprocessors are already in demand, since in theory they allow us to solve existing problems more effectively. A pattern recognizer built into your smartphone that can distinguish, say, between the different kinds of berries you\u2019re out picking is just one example. Already, specialized processors for handling video and similar tasks are embedded in our smartphones and laptops en masse. Neuroprocessors take the idea of machine learning several steps further, providing a more effective solution.<\/p>\n<p>Why is this area of interest to Kaspersky? First, our products already make active <a href=\"https:\/\/neuro.kaspersky.com\/\" target=\"_blank\" rel=\"noopener\">use<\/a> of neural networks, and of machine-learning technologies in general. These include technologies for processing vast quantities of information about the operation of a corporate network: for example, monitoring the data that nodes share with each other or with the outside world. Machine-learning technologies allow us to identify anomalies in this traffic flow and find unusual activity, which may be the result of an intrusion or the malicious actions of an insider. Second, Kaspersky is developing its own operating system \u2014 <a href=\"https:\/\/os.kaspersky.com\/\" target=\"_blank\" rel=\"noopener nofollow\">KasperskyOS<\/a> \u2014 which guarantees safe execution of the tasks assigned to devices under its control. Integrating hardware neural networks into KasperskyOS-based devices looks very promising.<\/p>\n<p>At the very end of all this progress will be the emergence of a true AI\u00a0\u2014 a machine that not only solves tasks for us, but sets (and likewise solves) its own. This will be fraught with ethical issues, and it will surely be hard for folks to comprehend that a subservient machine has become smarter than its creator. Still, that\u2019s all a long way off. 
About five years ago, everyone was sure that self-driving cars were literally just around the corner and merely needed fine-tuning. Such systems, too, are closely linked to machine learning, and in 2022 the opportunities in this field are still counterbalanced by the problems. Even the narrow task of driving a car \u2014 which humans manage reasonably well \u2014 cannot yet be fully entrusted to a robot. That\u2019s why new developments in this area are of great importance \u2014 at the level of software and ideas as well as hardware. All this combined may not lead just yet to the emergence of smart robots like those in sci-fi books and movies, but it will definitely make our lives a little bit easier and safer.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"kesb-top3\">\n","protected":false},"excerpt":{"rendered":"<p>Why the future belongs to neuromorphic processors, and how they differ from conventional processors in modern devices.<\/p>\n","protected":false},"author":2581,"featured_media":19782,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1318,1916,1917],"tags":[1457,2568,2117,2569],"class_list":{"0":"post-19781","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"category-smb","10":"tag-business","11":"tag-investments","12":"tag-neural-networks","13":"tag-processors"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/neuromorphic-processor-motive\/19781\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/neuromorphic-processor-motive\/24314\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/neuromorphic-processor-motive\/26677\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/neuromorphic-processor-motive\/24615\/"},{"hreflang":"ru","url
":"https:\/\/www.kaspersky.ru\/blog\/neuromorphic-processor-motive\/33408\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/neuromorphic-processor-motive\/10819\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/neuromorphic-processor-motive\/44736\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/neuromorphic-processor-motive\/25165\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/neuromorphic-processor-motive\/30678\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/neuromorphic-processor-motive\/30427\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/processors\/","name":"processors"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/19781","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2581"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=19781"}],"version-history":[{"count":0,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/19781\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/19782"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=19781"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=19781"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=19781"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}