The use of GPUs to accelerate general-purpose scientific and engineering applications is mainstream nowadays, but their adoption in current high-performance computing clusters is primarily impaired by acquisition costs and power consumption. Therefore, sharing a reduced number of GPUs among all the nodes of a cluster can be highly beneficial for many applications. This approach, usually referred to as remote GPU virtualization, aims to reduce the number of GPUs present in a cluster while increasing their utilization rate. The performance of the interconnection network is key to achieving reasonable performance when using remote GPU virtualization. In this regard, several networking technologies with throughput comparable to that of PCI Express have appeared recently. In this paper we analyze the influence of InfiniBand FDR on the performance of remote GPU virtualization, comparing its effect on a variety of GPU-accelerated applications against that of other networking technologies, such as InfiniBand QDR and Gigabit Ethernet. Given the severe limitations of freely available remote GPU virtualization solutions, the rCUDA framework is used as a case study for this analysis.