Looking For Ideas About How To Create A Constant Data Stream


Hi,

I’m looking for a good way to create a constant data stream that will occupy a bandwidth of about 2–5 Mbit/sec between two remote hosts over the internet. I have full access to the hosts involved.

My first attempt was to use scp to copy data from /dev/null on host A to /dev/null on host B, but scp says ‘/dev/null: not a regular file’. If something like that worked, I would be able to limit the bandwidth of this transfer in the router(s) involved so that it won’t occupy all the bandwidth.

Of course, it would be better if I could limit the bandwidth on the sending side rather than dropping packets. I could probably write a program to do that, but since I have never programmed that kind of network application, it would be rather time consuming. Maybe there’s already a tool around that can do this.

I need this to work around whatever settings my ISP changed 3 days ago that block my VPN connection, so that I effectively can’t work reasonably anymore. I do know what the problem with the connection is, and that occupying some bandwidth would unblock the VPN; there just doesn’t seem to be anything else I could do about it.

5 thoughts on - Looking For Ideas About How To Create A Constant Data Stream

  • Hi hw,

    You can’t read from /dev/null. You get nothing from it. You’re better off using /dev/random, for example. That will give you a continuous stream of random bytes.

    However, that’s not the focus of this. You want to sustain a stream of packets between two hosts. You’re better off using UDP for this. And a good tool for generating such packets would be “iperf”. It can also measure the bandwidth between the two nodes fairly accurately.
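
    A minimal sketch of what that could look like, assuming iperf3 is installed on both ends (flag details differ between iperf2 and iperf3, and “hostB” is just a placeholder):

    # on host B: run the receiving server
    iperf3 -s

    # on host A: send a UDP stream at about 3 Mbit/s for one hour
    iperf3 -c hostB -u -b 3M -t 3600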

    Regards, Anand

  • Hmm, last time I had such issues (~10 years ago), I had an ssh server running on one side and used scp from the other side:
    scp -l [bandwidth in Kbit/sec] /dev/zero [user@remote host]:/dev/null

    For me at the time 150 kbit/sec was enough to keep my channel open.

    Others used netcat (nc) in a script to get similar results
    (feeding it “lines” at a certain rate to limit the traffic).
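
    One sketch of that idea, using pv for the rate limiting instead of feeding lines (assuming pv and a BSD-style nc are available; the listen syntax varies between netcat variants, and port 5001 is arbitrary):

    # on host B: accept a connection on port 5001 and discard the data
    nc -l 5001 > /dev/null

    # on host A: push zeroes at roughly 300 kB/s (about 2.5 Mbit/s)
    pv -L 300k /dev/zero | nc hostB 5001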

    Have a nice weekend.
    – Yamaban

  • Oh, ok, yes, of course, that makes sense :)

    Hm, iperf came to mind, and I looked at the manpage again. It doesn’t seem to have a way to transmit/receive indefinitely, though it seems to have basically everything I’m looking for except for unlimited transfers. I’ll try it out; I can always look at the source code and try to do something about the limit if I need to.
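
    If the duration limit does become an issue, a simpler workaround (a sketch, reusing the hypothetical iperf3 invocation from above) would be to restart the client in a shell loop:

    # on host A: restart the UDP stream every hour, indefinitely
    while true; do iperf3 -c hostB -u -b 3M -t 3600; done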

  • I would recommend reading from /dev/zero instead; it will give you a stream of zeroes. Using /dev/random is OK, but it has one disadvantage in the OP’s case: you exhaust the machine’s accumulated entropy, which may be needed more for other tasks (like SSH or SSL connections)…
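
    A quick way to see how much entropy the kernel has accumulated on a CentOS box:

    cat /proc/sys/kernel/random/entropy_avail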

    Just my 2 cents.

    Valeri


    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • /dev/random will block when you run out of entropy, so you won’t get a consistent flow of data after some time. /dev/zero should always return data, though. I agree it makes more sense to use iperf.
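
    A quick way to see the difference for yourself (just an illustration; the /dev/random read may stall for a long time once the entropy pool is empty):

    # finishes immediately
    dd if=/dev/zero of=/dev/null bs=1M count=100

    # may block while waiting for more entropy
    dd if=/dev/random of=/dev/null bs=1k count=100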


    Jonathan Billings