The learning process of an artificial neural network consists of the cyclical repetition of two phases: data preparation, that is, pre-processing and delivery of the input data to the network, and the learning step itself, during which the deep-learning framework performs matrix multiplications, applies activation functions, computes the network's outputs, and updates the parameters via backpropagation of the error. The first phase is performed by the central processor (CPU), while the second is now generally performed on graphics cards (GPUs), which offer a relatively inexpensive and efficient solution to the inherently parallel training workload. The data preparation phase takes time regardless of the complexity of the network architecture, although it can be optimized and partially overlapped with the training that consumes it. The duration of the learning phase varies and can be either shorter or longer than the preparation that feeds it. For very large networks, GPUs with a large amount of memory and, preferably, high memory bandwidth become essential. Thus, it is necessary to build a specially configured computer that can effectively handle the demands of deep learning.
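As a minimal sketch of the learning step described above, the following NumPy example trains a one-hidden-layer network on a toy regression task (the task, layer sizes, and learning rate are illustrative assumptions): the forward pass is a chain of matrix multiplications and activation functions, and the parameter update is backpropagation of the error by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed example task): learn y = x1 + x2 from random inputs.
X = rng.normal(size=(256, 2))
y = (X[:, 0] + X[:, 1]).reshape(-1, 1)

# One hidden layer with 8 units; weights initialized randomly.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

def forward(X):
    # Matrix multiplication followed by an activation function (ReLU here).
    h = np.maximum(X @ W1 + b1, 0.0)
    return h, h @ W2 + b2

lr = 0.05
losses = []
for step in range(200):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))  # mean squared error
    # Backpropagation: push the error gradient back through both layers.
    g_pred = 2.0 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0, keepdims=True)
    g_h = g_pred @ W2.T
    g_h[h <= 0] = 0.0                        # gradient of ReLU
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0, keepdims=True)
    # Parameter update (plain gradient descent).
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In a real framework these same operations run as batched GPU kernels, which is why the learning phase maps so well onto graphics hardware while the data preparation stays on the CPU.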