GPU SuperComputer

The fastest way to work with artificial intelligence
Find out more
Price list
Technical characteristics

NVIDIA hardware and software systems
  • NVIDIA BigData DGX supercomputers with NVIDIA Tesla V100 32GB graphics cards
  • NVIDIA KVM virtualization for creating cloud servers with the required number of GPUs
  • NVIDIA's Ubuntu-based DGX OS operating system on hosts and virtual machines
  • NVIDIA Docker containerization pre-installed on each virtual machine
  • Full support for the optimized, GPU-accelerated NVIDIA GPU Cloud application container catalog
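In practice this means a freshly provisioned virtual machine can run NGC containers out of the box. A hypothetical session might look like this (the image tag is an example, not a guaranteed version):

```shell
# Check that the GPUs are visible to the driver
nvidia-smi

# Pull an optimized framework image from the NVIDIA GPU Cloud (NGC) registry
docker pull nvcr.io/nvidia/tensorflow:19.10-py3

# Start it with GPU access via the pre-installed NVIDIA Docker runtime
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:19.10-py3
```

Note that the `--gpus all` flag requires Docker 19.03 or newer; on older setups the NVIDIA Docker runtime is invoked with `--runtime=nvidia` instead.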
Amazing performance
  • Performance of up to 1,000 teraflops for deep learning tasks
  • 40,960 CUDA cores for massively parallel GPU computation
  • 5,120 Tensor Cores to accelerate the matrix operations at the heart of artificial intelligence
  • Up to 10 times faster GPU-to-GPU communication than over the PCIe interface, thanks to the NVIDIA NVLink interconnect with 300 GB/s of bandwidth
  • Up to 4 times faster training of deep learning algorithms compared to other GPU-based systems
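These headline figures are consistent with a single 8-GPU DGX-1 node; a quick sanity check, using NVIDIA's published per-GPU Tesla V100 specifications:

```python
# Sanity check: the headline system figures correspond to one 8-GPU node
# built from Tesla V100 accelerators (per-GPU specs from NVIDIA).
CUDA_CORES_PER_V100 = 5120     # CUDA cores per Tesla V100
TENSOR_CORES_PER_V100 = 640    # Tensor Cores per Tesla V100
DL_TFLOPS_PER_V100 = 125       # peak deep learning (Tensor Core) TFLOPS

GPUS_PER_NODE = 8

print(GPUS_PER_NODE * CUDA_CORES_PER_V100)    # 40960 CUDA cores
print(GPUS_PER_NODE * TENSOR_CORES_PER_V100)  # 5120 Tensor Cores
print(GPUS_PER_NODE * DL_TFLOPS_PER_V100)     # 1000 TFLOPS for deep learning
```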
Creating a deep learning platform goes far beyond choosing a provider and graphics processors. Bringing artificial intelligence into an enterprise requires careful selection and integration of a complex set of software and hardware. GPU SuperComputer accelerates this process by providing a ready-made solution that lets you process data and get results in the shortest possible time.
Configurations and pricing

  Configuration         vCPU cores  RAM     SSD       Price per day   Price per month
  1 x Tesla V100 32GB   10          61 GB   250 GB    1 999 rubles    49 000 rubles
  2 x Tesla V100 32GB   20          122 GB  500 GB    3 999 rubles    99 000 rubles
  4 x Tesla V100 32GB   40          215 GB  1 000 GB  5 999 rubles    199 000 rubles
  8 x Tesla V100 32GB   80          490 GB  2 000 GB  9 999 rubles    259 000 rubles
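A quick way to compare the two billing modes is to compute the break-even point for each tier (prices in rubles, taken from the price list above):

```python
# Break-even between daily and monthly billing for each configuration:
# the number of billed days after which the monthly price becomes cheaper.
plans = {
    "1 x V100": (1999, 49000),
    "2 x V100": (3999, 99000),
    "4 x V100": (5999, 199000),
    "8 x V100": (9999, 259000),
}
for name, (per_day, per_month) in plans.items():
    breakeven_days = per_month / per_day
    print(f"{name}: monthly billing pays off after {breakeven_days:.1f} days")
```

Notably, for the 4 x V100 tier the daily rate stays cheaper even over a full 31-day month (5 999 × 31 = 185 969 rubles versus 199 000), while the other tiers break even at roughly 25 days.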

There are many GPU-accelerated solutions on the market today, but only GPU SuperComputer, built on NVIDIA BigData DGX, unlocks the full potential of the most advanced NVIDIA Tesla V100 accelerators by combining the NVIDIA NVLink interconnect with the Tensor Core architecture. The service trains deep learning algorithms up to 4 times faster than other GPU-based systems, thanks to the NVIDIA GPU Cloud catalog, which provides access to optimized versions of the most popular frameworks.

Main supported technologies

  • HOOMD-blue
  • RELION
  • PIConGPU
  • QMCPACK
  • Parabricks
  • NAMD
  • MILC
  • Microvolution
  • LAMMPS
  • GROMACS
  • GAMESS
  • CHROMA
  • BigDFT
  • PGI Compilers
  • Lattice Microbes
  • CANDLE
  • MATLAB
  • VMD
  • RAPIDS
  • TensorRT Inference Server
  • PyTorch
  • TensorRT
  • TensorFlow
  • DIGITS
  • MXNet
  • NVCaffe
  • Kaldi
  • Deep Cognition Studio
  • Dotscience Runner
  • MapR PACC
  • Cuda
  • Cuda GL
  • Torch
  • Theano
  • Microsoft Cognitive Toolkit
  • Caffe2
  • Kinetica
  • H2O Driverless AI
  • Chainer
  • PaddlePaddle
  • OmniSci [MapD]
  • ParaView IndeX
  • Clara Render Server
  • ParaView
  • ParaView OptiX
  • ParaView Holodeck
  • IndeX
  • Smart Parking Detection
  • DeepStream
  • CT Organ Segmentation AI
  • Transfer Learning Toolkit
  • DCGM Exporter
  • Device Plugin

Leverage NVIDIA's and MTS's deep learning experience for your project, and there will be no need to spend extra time and money to get the results you need. Spend less time on setup and optimization, and more on research.

Make innovations happen faster

High-performance training boosts your productivity, meaning you spend less time bringing your solution into production use.

NVIDIA DGX-1 Delivers 140X Faster Deep Learning Training


Customer feedback

FAQ
Which CUDA versions are supported?