
Why can’t malware be intercepted within the Internet infrastructure?

February 15, 2009

This is the text of a question I recently put to Steve Gibson of SpinRite and Security Now! fame via his grc.com/feedback page. Steve gets lots of questions and I have no idea whether it will make it onto the show, but it is a notion I think merits discussion:

Steve, this is a question for Security Now!

The onus is on computer owners and administrators to keep their machines fully patched and to avoid malware infections.  The whole focus of malware detection and elimination is at the Internet traffic delivery endpoint.

But we know that isn’t good enough. Too many machines end up in botnets because they are running old, unsupported systems, are unattended or have had updates deliberately turned off to avoid risk of automatic reboots at awkward moments.

Why can’t the virus checking be done within the Internet infrastructure instead? Could we not have intelligent switches and routers that detect, intercept and block the most virulent malware?

Maybe the processing overhead would slow the Internet down too much if applied to all traffic, but there could still be sampling at strategic points.

In particular, such checks should identify the IP addresses of the computers which are sources of malware, so that the relevant ISPs can contact the owners of the machines concerned. It may be that the source is a machine which has been compromised and added to a botnet without the owner’s knowledge. If so, the ISP should insist the user run a malware removal program (or, where necessary, be guided through a reinstall) before allowing them back on the Internet.

I appreciate there may be legal issues which affect the feasibility of this, and that it would need international co-operation to be properly effective, but I would welcome your thoughts.

Now the second I fired this off I realised the big flaw in my suggestion. Internet traffic is packet-based. Each block of bytes transmitted over the Internet via TCP/IP is broken up into a sequence of numbered packets which, in general, make their way independently from point of origin to point of delivery. There is no guarantee that every packet will pass through the same sequence of intermediate routers; it all depends on the flow of traffic at any given time. That means no staging point along the way can rely on having sight of all the packets making up a complete HTML page or file, so applying virus checks there is a non-starter.
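To make the flaw concrete, here is a minimal Python sketch (the signature and packet contents are invented purely for illustration) of how a malware signature can straddle a packet boundary, so that a router inspecting each packet in isolation never sees a match:

    # Illustrative only: a hypothetical byte signature split across two
    # TCP packets. Per-packet inspection misses it entirely.

    SIGNATURE = b"EVIL_PAYLOAD"  # made-up signature, not a real one

    packet_1 = b"...page data...EVIL_PAY"   # first half of the signature
    packet_2 = b"LOAD...more page data..."  # second half

    # Checking each packet on its own fails on both halves:
    print(SIGNATURE in packet_1)  # False
    print(SIGNATURE in packet_2)  # False

    # Only a vantage point that sees the reassembled stream can match:
    print(SIGNATURE in packet_1 + packet_2)  # True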

The only exception is the last leg of the journey, between the recipient’s ISP and their computer. All the packets making up a discrete TCP/IP delivery must travel along that path. That means only the ISP is in a position to carry out virus checks.
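By way of illustration, here is a minimal sketch of what an ISP-side scanner would have to do on that last leg, assuming it really does capture every packet of the flow (the names and data are hypothetical, and real TCP reassembly, with retransmissions and overlapping segments, is considerably messier):

    # Minimal sketch: reassemble a TCP byte stream from out-of-order
    # packets, then scan the whole stream. Only works if every packet
    # of the flow passes the capture point, as on the ISP-to-user leg.

    SIGNATURE = b"EVIL_PAYLOAD"  # made-up signature, not a real one

    def reassemble(packets):
        """packets: list of (sequence_number, payload) in arrival order."""
        # Sorting by sequence number undoes out-of-order delivery.
        return b"".join(payload for _, payload in sorted(packets))

    # Two packets arriving out of order, with the signature split
    # between them:
    arrived = [
        (2000, b"LOAD and the rest of the page"),
        (1000, b"start of the page EVIL_PAY"),
    ]

    print(SIGNATURE in reassemble(arrived))  # True, after reassembly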

That puts us uncomfortably close to the “Phorm issue”, the notion of the ISP acting as a spy on consumers’ Internet traffic. Now there is a big difference between ISPs spying on their customers to make money out of targeted ads, and “spying” for the purpose of tracking down and eliminating sources of malware. All the same, the legal and privacy issues which arise in the case of Phorm, NebuAd and others also have a bearing on the idea of ISPs monitoring traffic for malware.

We might get some further insight if Steve answers my question on a future episode of Security Now!



2 comments

  1. I think we’d need specially programmed routers at the ISP that picked sequences of packets at random and forked duplicate sets off to a dedicated batch of computers for virus testing, rather than attempting to do the virus checking in the router itself.

    The idea would be to identify the sources of infected files rather than block infected traffic; I don’t think the latter is feasible. The sources would then be contacted by their own ISP for help with cleaning up their machines. (A rough sketch of this arrangement follows these comments.)


  2. How could you tell? The packets could still arrive out of order.

    And switches and routers are designed to work at layers 2 and 3; building a router that could work at layer 7 and above, yet still be fast and cheap enough, is the main problem.
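Here is the rough sketch promised in the first comment, again in Python with invented names, sample rate and addresses: the router forwards everything at full speed and merely duplicates a random fraction of traffic to an off-path scanning machine, which records the source IP of anything that matches so the relevant ISP can be notified.

    import random

    SAMPLE_RATE = 0.01           # mirror ~1% of traffic; figure is made up
    SIGNATURE = b"EVIL_PAYLOAD"  # made-up signature, not a real one
    flagged_sources = set()      # source IPs to report to their ISPs

    def scan_offline(src_ip, stream):
        """Runs on the dedicated scanning machines, off the fast path."""
        if SIGNATURE in stream:
            flagged_sources.add(src_ip)

    def forward(src_ip, stream):
        """Router fast path: forward everything, sample a duplicate."""
        if random.random() < SAMPLE_RATE:
            scan_offline(src_ip, stream)  # in reality queued to other boxes
        # ... normal forwarding continues regardless ...

    # A compromised host that keeps sending infected content will almost
    # certainly fall into the sample eventually and have its address noted.
    for _ in range(1000):
        forward("198.51.100.23", b"...EVIL_PAYLOAD...")
    print(flagged_sources)

Keeping the scanning off the forwarding path is what makes this even conceivably affordable: the router itself only ever pays for an occasional packet duplication, never for layer-7 inspection.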


