There are many reasons that networking code is very hard to write, but one of the less well known is what I call the "Law of Quadratic Reliability".
Let me illustrate using a computer game as an example. Say your computer game has bugs in it (as most software does) which cause it to crash occasionally. Say that on average the game runs for ten hours before crashing. This will certainly be annoying to your customers, but not devastating. Most of the games they play will complete without problems, so they probably won't be too angry about your occasional hard-to-find bug.
Now imagine that your game is running on ten networked computers, with ten players. Because there are ten copies of your program in use, the overall probability that one of the ten machines will suffer a crash during the game due to that bug is ten times higher. The ten-player game now runs on average for only one hour before one of the machines suffers a crash. If that crash stops the game, then you've ruined the game-play experience for not just one player, but for all ten. Your bug is not just ten times more serious in a ten-player game -- it's a hundred times more serious. It is ten times more likely to happen, and when it does happen it annoys ten times as many people.
That's why it's the Law of Quadratic Reliability -- the reliability requirement of a multi-player game goes up in proportion to the square of the number of players.
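The arithmetic above is easy to check with a quick simulation. This is just an illustrative sketch, with the additional assumption (not stated in the article) that each machine's time-to-crash is exponentially distributed with a mean of ten hours; the figures for the number of players and trials are arbitrary.

```python
import random

random.seed(0)       # fixed seed so the result is repeatable
MTBF_HOURS = 10.0    # one machine runs ten hours, on average, before crashing
PLAYERS = 10
TRIALS = 100_000

# A ten-player session ends at the EARLIEST crash among the ten machines.
total = 0.0
for _ in range(TRIALS):
    first_crash = min(
        random.expovariate(1 / MTBF_HOURS) for _ in range(PLAYERS)
    )
    total += first_crash

avg_hours = total / TRIALS
print(f"average time to first crash: {avg_hours:.2f} hours")
# Prints roughly 1.0 -- one tenth of the single-machine figure,
# and that one crash spoils the game for all ten players.
```

Under this model the minimum of ten independent exponential lifetimes is itself exponential with one tenth the mean, which is exactly the "ten hours becomes one hour" claim.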
In some ways the problem is even worse than quadratic: networking introduces whole new classes of bugs -- race conditions, lost or reordered packets, machines drifting out of sync -- that depend on timing and so are notoriously hard to reproduce and fix.
So this is the double jeopardy of networked games compared to single-player games -- it is both much harder to make a networked game that is bug free, and much more important to do so.
The problem of Quadratic Reliability can be somewhat mitigated if you can make your game networking code robust enough that the game can continue even if one or more of the players' machines crash. This is also very hard to do, and most multi-player games do not attempt it.