As the Internet grows from the bottom up, with new connections appearing at a rate of a million per month, we are witnessing a personal computing revolution. To the average user, the Internet offers unprecedented possibilities and functionality; to system administrators, it presents a nightmare of maintaining controlled and secure sites.
The people involved in hacking these days are no longer just bored students. With the wealth of information potentially available throughout the world, systematic and automated probing of new Internet connections is being carried out by information brokers, foreign governments, and mercenaries looking for means of extorting money.
Firewalls are seen as a major element in the protection of local networks against illegal outside access and hacking. Isolating a local network from the Internet by means of a single, well-configured gateway can offer near-guaranteed security. There is an important caveat, however: no matter what technology is employed within the firewall, it will not work if the company does not uphold an associated security policy. If an employee brings his own modem into work and connects it to his computer, or if someone introduces a wireless connection, the firewall is rendered useless, as data now bypasses it completely.
A security policy forms the basis of the firewall implementation; if the security administrator has not clearly analysed what the organisation intends to allow in and out, there are too many grey areas and gaping holes for a hacker to exploit.
A prudent security policy is probably applicable to most businesses that want to allow employees some of the privileges the Internet provides, while ensuring enough measures exist to maintain system integrity. This highlights one of the problems with any firewall implementation: there will always be a trade-off between security on one side and convenience and functionality for the user on the other. The Internet is supposed to link the world together; with the paranoid measures taken by some companies, we limit its full potential.
Below is a diagram of the internals of a firewall, showing the most typical components. With only one connection to the outside world, all data to and from the LAN must first pass through this gateway and meet certain requirements before being granted access.
A = Packet Filter   B = Application Gateway
In essence there are two levels of approach to the gateway solution:
At the packet level, packet filters exist to weed out data at the IP level. Depending on how the filters are set up, we could, if desired, admit traffic to the LAN only from specified IP addresses, or deny access to certain ports (Telnet, Finger, etc.). This approach is one of the most common security mechanisms in use today, but in itself it does not protect against all forms of attack, and it would prove very difficult to exclude everything you want to keep out with filters alone.
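The look-up behaviour described above can be sketched as a first-match rule table. This is a minimal illustration only; the rule format and function names are invented for the sketch, not taken from any real firewall product.

```python
# Minimal sketch of packet-filter rule matching: rules are checked in
# order and the first match wins. Networks and ports are illustrative.
from ipaddress import ip_address, ip_network

RULES = [
    ("deny",  ip_network("0.0.0.0/0"),    79),    # block Finger from anywhere
    ("deny",  ip_network("0.0.0.0/0"),    23),    # block Telnet from anywhere
    ("allow", ip_network("192.0.2.0/24"), None),  # trusted partner network
    ("deny",  ip_network("0.0.0.0/0"),    None),  # default: drop everything else
]

def filter_packet(src_ip, dst_port):
    """Return 'allow' or 'deny' for a packet, per the first matching rule."""
    src = ip_address(src_ip)
    for action, network, port in RULES:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"  # fail closed if no rule matches
```

Note that the table knows nothing about the contents of the packet, which is exactly the limitation the text describes: a connection from a permitted address on a permitted port sails through regardless of what it carries.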
At the application level, proxies exist to check the data according to the service requested, with a drop or accept decision on completion. This sentry system can examine traffic much more thoroughly, and has far more flexibility than the standard look-up facilities of packet filters.
However, to take this approach you must first know that your application software is secure; otherwise there could be gaping holes in the software for a hacker to take advantage of. So specially reduced proxies, stripped-down versions of the applications, are used to check and verify data. Because the application's functional capabilities are reduced, each proxy can be used only for the purpose it was designed to serve.
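The "stripped-down" idea can be sketched as a relay that understands only the handful of commands it was built to serve and refuses everything else. The command set and function name below are illustrative assumptions for an FTP-style proxy, not any real product's interface.

```python
# Sketch of an application proxy's command vetting: only a reduced
# repertoire of commands is recognised; everything else is rejected.
ALLOWED_COMMANDS = {"USER", "PASS", "RETR", "LIST", "QUIT"}

def vet_command(line):
    """Accept a client command only if the proxy was designed to handle it."""
    command = line.strip().split(" ", 1)[0].upper()
    if command in ALLOWED_COMMANDS:
        return True, line
    # Anything outside the reduced repertoire is dropped, closing off
    # attacks that rely on obscure or malformed commands.
    return False, "500 Command not supported by proxy\r\n"
```

The design choice is the point: by implementing less of the application, the proxy has less attack surface, at the cost of serving only its one intended purpose.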
A third aspect of the firewall is the inclusion of two separate LANs within the system. This is a necessary and wise precaution against the possibility of a hacker rerouting data around the application proxies.
There are many other flavours of the firewall methodology. Public and private name servers keep local domain names hidden from the outside world, a useful function in today's corporate workplace.
Auditing facilities monitor traffic and build profiles of possible illegal access.
Reactive systems are designed to respond to illegal access by fooling the hacker into believing he has control of a system, when in fact the system is leading him into a 'hall of mirrors': if he deletes all the files in a directory, a directory check will confirm his actions, whereas in reality nothing has been deleted. These tricks, including dummy passwords and dead-end traps, give the system administrator plenty of time to collect information on the source of the connection and its intentions.
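The 'hall of mirrors' trick can be sketched as a decoy view layered over untouched real data, with every action logged for the administrator. All class and file names here are invented for illustration.

```python
# Toy sketch of a reactive decoy: the intruder's destructive commands
# appear to succeed, but only a fake view changes; real files are
# untouched and every action is recorded.
class MirrorDirectory:
    def __init__(self, real_files):
        self.real_files = set(real_files)  # never modified
        self.visible = set(real_files)     # the view shown to the intruder
        self.audit_log = []                # evidence for the administrator

    def delete(self, name):
        self.audit_log.append(("delete", name))
        self.visible.discard(name)         # only the illusion changes
        return f"{name} deleted"

    def listing(self):
        self.audit_log.append(("list", None))
        return sorted(self.visible)        # confirms the fake deletion
```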
Type enforcement assigns data and processes certain read and write privileges, depending on the security policy. So even if a hacker were to break into the firewall system itself, he would be confined to the one type domain, without access to other applications or processes.
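Type enforcement amounts to a fixed table mapping (domain, type) pairs to permitted operations; anything not explicitly granted is denied. The domain and type names below are invented for illustration.

```python
# Sketch of a type-enforcement check: each process runs in a domain,
# each file carries a type, and a fixed policy table says which
# operations are permitted. Absence from the table means denial.
POLICY = {
    ("proxy_domain", "spool_type"): {"read", "write"},
    ("proxy_domain", "log_type"):   {"write"},
    ("admin_domain", "log_type"):   {"read"},
}

def permitted(domain, file_type, operation):
    """Allow an operation only if the policy table explicitly grants it."""
    return operation in POLICY.get((domain, file_type), set())
```

Under such a table, even a fully compromised proxy process can append to the logs but can never read them back or touch anything outside its domain.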
As can be seen, firewalls offer something close to a guarantee against illegal access. But while the technology can ably cope with the task in question, the problem lies with the administrator, who must know what he wants to protect the system from.
Packet filters are completely unintelligent, based on a look-up table scheme. As such, they should only be used as a first line of defence, whereas the application proxies provide a much more comprehensive set of mechanisms and options. As new weaknesses are found in data formats and packets, these proxies can be easily updated to close the security hole.
One of the few problems with firewall systems seems to be the price. At prices ranging from one thousand to tens of thousands, some companies may be holding back because the return is not so apparent. But with the proliferation of computer users around the globe, hacking is not something that is going to go away. And as information continually grows in worth, peace of mind and the knowledge of a secure system are too important to sideline.