GEX SERVERS:
Train and run AI models such as DeepSeek and Llama
DEDICATED GPU SERVERS

YOUR HETZNER SERVER WITH GPU

FOR EVERY AI PROJECT

Our GEX-line is powered by NVIDIA GPUs with CUDA technology and is perfect for AI workloads and machine learning.
GEX44
for AI inference
€184.00 / $205.00 max/mo.
€0.2948 / $0.3285 /hr
Setup fee (once): €79.00 / $88.00
GEX130
for AI training
starting from
€813.00 / $903.00 max/mo.
€1.3029 / $1.4471 /hr
Setup fee (once): €0.00 / $0.00
GEX44
CPU: Intel® Core™ i5-13500 (6 Performance Cores, 8 Efficient Cores)
RAM: 64 GB DDR4 ECC
GPU: NVIDIA RTX™ 4000 SFF Ada Generation
Tensor Performance: 306.8 TFLOPS
vRAM: 20 GB GDDR6
GEX130
CPU: Intel® Xeon® Gold 5412U (24 Cores)
RAM: 128 GB DDR4 ECC
GPU: NVIDIA RTX™ 6000 Ada Generation
Tensor Performance: 1457.0 TFLOPS
vRAM: 48 GB GDDR6
100% green electricity
GDPR compliant. Our GPU Servers are located in Germany.

Selected AI use-cases


Natural Language Processing

AI models from the field of NLP are designed to enable computers to capture human language in its full complexity. The goal is for artificial intelligence to understand, interpret and generate text and spoken language in a way that is natural and useful.



Specific applications include speech recognition, translation, chatbots and the creation, analysis and classification of texts.
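One of the applications named above, text classification, can be sketched in a few lines. This is only a toy illustration: real NLP models learn word associations from large data sets, while the keyword lists here are made up for demonstration.

```python
# Toy text classifier: count sentiment keywords to label a sentence.
# The word lists below are illustrative, not learned from data.

POSITIVE = {"great", "useful", "natural"}
NEGATIVE = {"wrong", "broken", "slow"}

def classify(text):
    """Label a sentence by comparing positive vs. negative keyword hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("This chatbot is great and useful"))  # → positive
```

A trained NLP model replaces the hand-written word lists with millions of parameters learned from text corpora, which is exactly the kind of workload a GPU accelerates.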

Find your perfect GPU server for your work with AI

GEX130 – Maximum power for AI training

During AI training, an artificial intelligence model such as DeepSeek is fed large data sets so that it can learn to perform certain tasks. This is an iterative process in which the model is optimized by adjusting its parameters to deliver the best results. With the NVIDIA RTX™ 6000 Ada Generation, the GEX130 offers the performance required for such complex processes: 48 GB of graphics memory, 142 RT cores, 568 tensor cores and 18176 CUDA cores provide the enormous speed needed for working with massive data sets and complex calculations. This is particularly helpful for image and language processing, generative models and time series analysis.
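The iterative parameter adjustment described above can be shown with a minimal, dependency-free sketch: a one-parameter model is trained by gradient descent to fit toy data. The data, learning rate and loop count are made up for illustration; real training does this across billions of parameters, which is where GPU tensor cores come in.

```python
# Minimal training loop: adjust parameter w step by step to reduce error.
# Toy data follows y = 2 * x, so w should converge toward 2.0.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # model parameter, starts untrained
lr = 0.05  # learning rate

for epoch in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # adjust the parameter

print(round(w, 3))  # → 2.0
```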


GEX44 – Highly efficient AI inference

AI inference means using a trained AI model such as Llama to analyze new data and make predictions or decisions based on it. This process is essential for the practical application of AI systems, as learned knowledge is applied to new, unknown data. The GEX44 is optimized for such scenarios and delivers fast, accurate answers with minimal latency. The 192 tensor cores of the NVIDIA RTX™ 4000 SFF Ada Generation GPU accelerate demanding calculations and significantly increase efficiency. This matters for applications such as real-time image recognition, speech processing and complex data analysis. The RTX™ 4000 SFF Ada Generation is also particularly energy-efficient.
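Inference, in contrast to training, applies fixed, already-learned parameters to new inputs. A minimal sketch, with hypothetical weights and class labels invented for illustration:

```python
# Minimal inference sketch: a "trained" model (fixed weights) scores
# new, unseen inputs and returns a prediction. No parameters change.

# Hypothetical learned parameters of a tiny linear classifier
WEIGHTS = {"cat": [1.2, -0.4], "dog": [-0.8, 1.1]}

def predict(features):
    """Score each class on the new input and return the best match."""
    scores = {label: sum(w * x for w, x in zip(ws, features))
              for label, ws in WEIGHTS.items()}
    return max(scores, key=scores.get)

print(predict([0.9, 0.1]))  # → cat
```

Production inference runs the same pattern at scale: a large model's fixed weights are multiplied against incoming data, which the GPU's tensor cores accelerate.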


Everything you need to kickstart your AI project

Get AI models such as DeepSeek or Llama running on our dedicated GPU servers, and tag us on Hugging Face for a shout-out of your favorite projects.