rsync will do block level delta transfers in some cases: when a file already exists on both ends, only the changed blocks need to be transmitted, which is where the real efficiency comes from. scp works at the file level exclusively, so it always copies whole files and the overhead of the filesystem factors into the equation every time. a udp based protocol can move bits faster than tcp, especially on lossy or high latency links, but the overhead of the filesystem still comes into play
in general:
rsync block level transfers - most efficient
udp based file transfers - fastest transfer
tcp based file transfers - most resilient
the drbd project replicates block devices between hosts in near real-time, which is useful if you are running NAS or SAN services on them; it covers similar ground to rsync, but at the block layer instead of the file layer. you have 10Gb links between hosts. are you able to use infiniband, or an RDMA-over-ethernet option like RoCE or iSER (iSCSI over RDMA)? this usually requires Mellanox or Marvell cards, because their drivers support RDMA, which lets data move between memory and the network stack without going through the CPU.
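for a sense of what drbd replication looks like, a resource definition is roughly this (hostnames, addresses, and disk paths here are hypothetical, and this is a sketch, not a tested config):

```
resource r0 {
  protocol C;               # synchronous: a write completes on both nodes
  device    /dev/drbd0;     # the replicated device you put a filesystem on
  disk      /dev/sdb1;      # backing block device (hypothetical)
  meta-disk internal;
  on nas-a {                # hypothetical hostnames
    address 10.0.0.1:7789;
  }
  on nas-b {
    address 10.0.0.2:7789;
  }
}
```

protocol C is the fully synchronous mode; drbd also offers asynchronous modes (A and B) that trade consistency guarantees for lower write latency over slower links.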
when thinking about how to go about this, think of the subsystems involved: how many there are, how much overhead each adds, where the slowest links are, and what protocols are used. you can also look at tweaking kernel settings via sysctl to improve networking performance. most kernels ship with tuning defaults that offend the fewest customers, not defaults tuned for throughput. the tweaks i have on my nas allocate more memory to the networking stack and set larger buffers. with the bonded 4 port Gb NICs, i also increase the ring buffers, which helps.
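as an illustration, these are the kinds of knobs i mean (the values are examples sized for a fast link, not recommendations; check what your workload actually needs):

```
# /etc/sysctl.d/90-net-tuning.conf -- example values only
net.core.rmem_max = 67108864              # max socket receive buffer
net.core.wmem_max = 67108864              # max socket send buffer
net.ipv4.tcp_rmem = 4096 87380 67108864   # min/default/max tcp receive buffer
net.ipv4.tcp_wmem = 4096 65536 67108864   # min/default/max tcp send buffer
net.core.netdev_max_backlog = 30000       # packets queued per NIC before drops
```

apply with `sysctl --system`. ring buffers are per-NIC rather than sysctl, set with something like `ethtool -G eth0 rx 4096 tx 4096` (eth0 and 4096 are examples; `ethtool -g eth0` shows what your card actually supports).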
aside from using the appropriate transfer mechanism, tuning and tweaking the stack can improve overall performance as well.