Amateur Fortress Building in Linux
Part 2
Sander Plomp
Contents
bugs and patches
firewalls
trojans and traitors
suid, sgid
what did I learn
Bugs and Patches
Trying to get my Linux system secured the way I like it, I found out I'm
actually working from a simple rule: avoid a single point of failure.
A single point of failure means that one mistake, bug or error is enough for
an attacker to gain sufficient control over the host to do serious damage. A
firewall is of limited use if various system daemons, running as root, peek
through it, waiting for the next buffer overflow attack. Similarly, if your
firewall is all that stands between the script kiddies and highly vulnerable
network services, you're putting a lot of trust in your ability to build the
perfect firewall.
Of course, deep down there is always some potential for a catastrophic
security hole - in the TCP stack, the kernel, whatever. There is no alternative
to accepting that, at some time, the worst happens and the only way out is to
get things patched as quickly as possible. I can live with that. I just don't
want it to be a biweekly event.
Many security experts believe in patches. I don't. It is not acceptable for
system security to depend on the owner being able to get the patches for it
quicker than the kiddies can get the exploit. It just leads to stupidities
like "my vendor patches quicker than yours". That doesn't mean you shouldn't
patch known security holes. It means you shouldn't trust your ability to do it
quickly enough.
Claiming that people should be up to date on their patches any waking moment
or else it's their own fault is just lame. Really lame. Almost
as lame as, say, making
all your document formats scriptable, encouraging people in every way to send
them by email to each other, putting a single mouse click between the user
and executing anything with whatever program might be vaguely associated
with a file extension, giving it total control of the machine without any form
of sandboxing, adding an active policy of hiding file extensions whenever the
system feels like it, making sure there is no obvious way to examine a mail
attachment without activating it and saying things like: "users should be
educated not to open attachments if they are not sure it is safe".
So I have my standards. Anything that interacts with the outside world
should either be very trivial, very well audited, or sufficiently sandboxed
that possible damage is limited. Preferably it should be all three. Many
classic programs cannot be set up this way and should not
be exposed to the public Internet.
In Linux, sandboxing is typically done by running under a special account, one
that has just enough rights to do what it needs to do. If possible, chroot() is
used to limit the part of the file system that is visible. In the future, once
capabilities get straightened out, it
should probably also include shedding unnecessary capabilities when appropriate.
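As a minimal sketch of the idea in shell (the account name, jail path and
daemon are all made up, and I'm assuming a daemon that, like many, accepts a
flag telling it which user to drop to after startup):

    # create a dedicated account with no password and no login shell
    useradd -d /var/jail/svcd -s /bin/false svcd
    # the jail must contain the daemon binary plus whatever libraries
    # and config files it needs; then start it inside the jail
    chroot /var/jail/svcd /bin/svcd -u svcd

The daemon still has to shed its root privileges itself before talking to the
outside world; the point is that even a successful exploit is then stuck in an
empty corner of the file system with no privileges.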
The combination of not letting local services listen on public networks,
sandboxing services properly and keeping up to date on patches should keep the
risk of remote exploits under control. Of course, there are more dangers than
just remote exploits...
Firewalls
I don't believe in firewalls. That's more or less a lie, of course, but
I'm trying to get something across here. Most security experts REALLY believe
in firewalls. They expect that, when they die, they'll arrive at the great
firewall in the sky, where Saint Peter is running a default policy of REJECT.
I've got much bleaker expectations.
Current Internet firewalls work on IP packets. This is a fairly low level:
there are no 'connections' or 'services', just 'packets', 'protocols',
'ports' and 'addresses'. It's possible to recognize the higher protocol levels
in the packet flow, but it isn't the natural level for a firewall.
A firewall makes sense for a lot of things:
- Since the natural level is the IP packet, it makes sense for the firewall
to detect among them the strange, the weird, the malicious and the unusual,
and take action. This means filtering for spoofed packets, obviously
misrouted ones, anything with suspicious options on, and extremely long or
extremely short packets. It makes sense to have such filtering as a separate stage, away
from the actual processing. This makes for a clean separation of different
functionality, and makes it easy to make the filtering step highly
configurable. It even makes sense to keep 'state', which means that higher
level protocols are partly emulated so you can detect attacks that involve
protocol violations.
- Often the firewall is the only connection between a trusted and an untrusted
network (or two equally untrusted networks) and as such, sees all that goes in
and all that goes out. This puts it in an excellent position to do logging and
to detect suspicious activity. On the IP level, there are many filtering
operations that make sense. You can stop all communication with sites known to
be hostile. There is also traffic which we know should not occur between
the networks. Often packets that violate those rules are easily identified
based on protocol number, port number, direction of flow etc. When such a
packet tries to get into the trusted network the firewall should of course stop
it and possibly log it. When such a packet tries to get out the firewall should
stop it, log it and scream bloody murder. The screaming part is important; a
bad packet trying to get in just means someone attempts to attack the trusted
network. When you see a bad packet go out it means they were successful.
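With ipchains, for example, rules in that spirit might look like this (the
interface and addresses are invented; -l makes the kernel log the packet):

    # a packet arriving on the outside interface claiming an internal
    # source address is spoofed: stop it and log it
    ipchains -A input -i ppp0 -s 192.168.1.0/24 -j DENY -l
    # NetBIOS traffic should never leave the local net; if it tries,
    # stop it, log it, and go find out what inside is misbehaving
    ipchains -A output -i ppp0 -p udp -d 0/0 137:139 -j DENY -l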
My problem with firewalls is that many people only see their ability to stop
a lot of different kinds of bad traffic, and that's precisely what they use
them for. The firewall becomes a true cure-all for security woes. It can stop
detectable spoofed packets. It can stop things like the ping of death attack.
It can shield you from known attack sources and protect services you don't
want to be accessed by outsiders. It can hide a lot about the network behind it.
It also becomes the ultimate single point of failure.
If the firewall goes down, suddenly a lot of stuff is exposed. Vulnerable
services. Information about the network. Protocol weaknesses. Anything that
truly depends on the firewall to protect it is in trouble. If a lot of stuff
depends on it, then firewall trouble means bad trouble.
A frightening number of people don't even use it as a cure-all. They use it
to solve one specific problem, protecting vulnerable ports, and see this as
the main function of the firewall. They don't even bother to check outgoing
traffic, since that has little to do with stopping a remote attack.
In general, firewall implementations are quite solid. That is, they don't go
down very often. They are, however, quite easily misconfigured, so in reality
they are not always as solid as you would think. In fact, there is a whole
list of problems with setting up firewalls:
- It's fundamentally a subtractive method. No matter how much everyone
hammers on using a default DENY policy, if e.g. the firewall didn't
get started when it should have, things are wide open.
- Writing good firewall rules is non-trivial. It's easy to make a mistake
and accidentally leave a dangerous hole.
- Testing is problematic. Where, for example, netstat will tell you
immediately which ports are currently listening, you need to do a full port
scan from the outside to see what your firewall will let through (see below).
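For instance (the hostname is invented), comparing the inside and the outside
view takes two different tools:

    # on the host itself: see directly which ports are listening
    netstat -an | grep LISTEN
    # from a machine on the outside: scan to see what actually
    # gets through the firewall
    nmap -sT -p 1-65535 gateway.example.com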
In my world, any application or service should be able to fend for itself,
security-wise. If it's not available to the public Internet it shouldn't be
listening on it. If it should be accessible by a limited number of hosts it
had better be able to enforce that by itself. There should be no weaknesses
that must be masked by the firewall to make it safe. Basically, things should
be safe even if there were no firewall.
For me the firewall is just an extra layer of protection, and a very powerful
one as it protects against so many things. At the same time, when I check
that at least two problems must occur before some feature has a serious
security hole I'm really hesitant to count the firewall. Before
you know it a very large
number of things would depend on it, and "a firewall problem plus just about
any other problem" doesn't add up to two for me.
Proxying firewalls
By now, everyone should be booing me, because I've made the assumption
that all firewalls are packet filtering firewalls. There is another way to
set up firewalls: as proxying firewalls. Where a packet filtering firewall
routes packets between two networks and filters out the bad ones, a proxying
firewall keeps both networks completely separate. Only traffic for which
a proxy is running will get through such a firewall. Because proxies are
specific to an application protocol very precise filtering is possible.
For me the main advantages are:
- It's not a subtractive method. The only things that get through are the
things actively being proxied. If the proxies don't get started, or fail, the
door is closed.
- It's easy to tell what's getting through simply by looking at what proxies
are running.
From a security standpoint one disadvantage is that the proxies themselves are
potential targets for an attack, and must themselves be protected. On the good
side, proxies can also do various kinds of filtering and protection for the
internal network. Proxying firewalls are often considered safer than packet
filtering, but less practical to use. You need the right proxies, clients that
can use a proxy in the first place, and you need to configure each client to
use a proxy.
My setup
A good firewall should run on its own dedicated machine. Such a machine doesn't
run any services, doesn't have users, and is optimized for security. As long as
this machine is not compromised it will keep doing its firewalling properly.
The thing is, I don't want another ugly beige box grinding its fan bearings
out to clutter up my house. I don't care if they come for free, I
care that they come
small and silent. Until the day they sell cheap palm pilots with firewall
software on them I probably won't have a dedicated firewall machine.
So, in violation of yet another
security commandment I have a gateway box to the net that's also used to work
on.
I try to turn this into an advantage. As long as the gateway box has proxies
for the most important services (such as http) it's no problem for me if
certain services can be used from the gateway machine only. If there is no
proxy possibility for some esoteric service, bad luck; then there's just one
machine that can use it on the Internet. Luckily, proxying software is available
for most commonly used protocols. Some even provide useful extra functionality;
after using junkbuster for a week I couldn't live without it. Many other
programs can use a generic SOCKS proxy server. You do have to make sure
outsiders can't use your proxies to relay their traffic. Proxies should service
the internal network only.
The gateway box has its own packet filtering firewall using IPCHAINS. This
box does not masquerade. Instead it runs proxies and servers for the
local network. Forwarding is off, computers on the local network can only reach
the Internet by going through some proxy. The packet filtering checks both the
incoming and outgoing traffic on the Internet as well as the local network.
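Stripped to its essentials, that setup amounts to something like this (the
addresses, interfaces and proxy port are examples from an imagined network):

    # no forwarding: packets are never routed between the two networks
    echo 0 > /proc/sys/net/ipv4/ip_forward
    # the local net may talk to a proxy on the gateway itself
    ipchains -A input -i eth0 -p tcp -s 192.168.1.0/24 -d 192.168.1.1 8080 -j ACCEPT
    # nobody on the Internet side may reach that proxy port
    ipchains -A input -i ppp0 -p tcp -d 0/0 8080 -j DENY -l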
So why, in spite of my attitude that it isn't done right if it isn't done
differently, do I not use iptables? That's because iptables is just too
tantalizing. Once I start on that I will most likely go completely overboard
in the packet filtering department. I can't read the documentation without
thinking of all kinds of interesting extensions that could be
added to it. It would suck up all my spare time. If I had started
on that, this article would never have been finished. I'm not the only one;
e.g. some guys found a way to use iptables to change the fingerprint of
outgoing packets to fool OS fingerprinting software. (This must be the
ultimate in Linux network security: you can pretend to be OpenBSD.)
Trojans and Traitors
Once I'm done messing up your Linux box it's perfectly protected, of
course. If anyone breaks into it, it's not because I made a mistake, but
because you did. You were 20 minutes behind on your security patches. You
opened an email attachment that claimed your house was on fire. You accepted a
document or program over the Internet and used it without sterilizing
it first. You're probably familiar with this set of lame excuses
euphemistically known as 'user education', or, in the Redmond area, as 'best
practices'.
These are real problems and they need to be solved, not denied. There is
essentially no way you can avoid that a user, educated or not, gets tricked
into running a trojan. It's just a matter of social engineering. However, it
should take more than an email message with a catchy title to completely
obliterate your system. This is what eventually led me to completely
lose confidence in Windows 95 and its offspring. No matter how many
hours you spend on configuring and securing it, it takes a
trojan about 10 msec to wreck the system completely, to the point
that reinstallation is the only option. All it needs is a single chance.
It is possible, for a determined and talented attacker, to get you to log in
as root and start ripping out system daemons, replacing them, and doing other
major reconfiguration work. However, this requires a level of personal service
and attention to detail that is seldom found these days. What is more
realistically possible is to trick a user into accidentally running some
trojaned macro, script or piece of code. The task is to limit the damage.
I don't think this problem has really been solved yet. It would require
cooperation from applications that bring in untrusted data (email, web
browsers) to make sure users are unlikely to accidentally invoke dangerous
content. It would involve sandboxing code that works with untrusted data to
protect the user from malicious actions. It becomes far more difficult when
users actually transfer data from a remote source into their own data
collection. That would involve things like trust levels associated with data
and danger levels with applications. Applications would behave sensibly in
the presence of untrusted data; they wouldn't run scripts or macros found in
it, and wouldn't invoke dangerous programs on it. Virus scanners could be
used as tools to raise trust levels (they'd make sure their database is up to
date and trusted). Possibly the operating system itself would get involved to
keep track of trust and danger in the user's files. There are interesting
research topics waiting here, but this isn't the moment.
Some things can be done without getting a PhD in the process. Things like
mailcap can be configured not to start any dangerous program on mail, but to
use safe viewers only. Some mail transfer agents can be configured to scan
for dangerous content and do something about it. Sometimes proxies can be used
to provide some protection; e.g. there is a patch to let junkbuster
selectively disable javascript in the webpages viewed.
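As an illustration, a mailcap entry along these lines (the choice of viewer
is just an example) hands HTML mail to a dumb text dumper instead of a full,
scripting-capable browser:

    # render HTML to plain text; never execute anything in it
    text/html; lynx -dump -force_html %s; copiousoutput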
My main line of defense is to give each user two accounts: one to handle
untrusted data and a real account. The real account is for programming and
other serious work. The other one (the 'Internet' account) is used for Internet
access and other activities involving potentially dangerous data. Users can,
of course, transfer data between both accounts; the 'real' account has rights
to access some parts of the Internet account's home directory. I'm fully aware
that it's virtually impossible to build a truly strong barrier between the
two accounts if users can freely move data between them. Nevertheless, it
provides a bit of protection for things the user wants to protect. The trojan
will have to sneak through several stages to get to the real account, difficult
for a simple 'exploit web browser bug' type of trojan.
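A sketch of how the one-way sharing can be set up (the user names are
hypothetical, and this assumes each user has a private group, as most
distributions arrange):

    # 'john' is the real account, 'john-net' the Internet account
    chgrp john /home/john-net
    chmod 710 /home/john-net               # john may traverse, not list
    mkdir /home/john-net/shared
    chown john-net:john /home/john-net/shared
    chmod 770 /home/john-net/shared
    # the real account can now read what john-net drops in 'shared',
    # but nothing in john's own home is visible to john-net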
The untrusted accounts have their home directories on a partition that is
mounted with the options 'noexec,nosuid,nodev'. This means you cannot execute
files in their home directory. There also cannot be any suid programs or
device files, but that's a minor issue. The main purpose is to prevent an
attacker from smuggling executable code onto the system and tricking a user
into running it. At the same time, if these Internet accounts are the only
ones on the system
with access to e.g. /dev/modem, with some tweaking things can be set up so
that only those accounts can use dialup Internet connections.
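In /etc/fstab that looks something like this (the device and mount point are
of course specific to my setup):

    # home partition for the untrusted 'Internet' accounts
    /dev/hda7  /home/inet  ext2  defaults,noexec,nosuid,nodev  1 2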
The effect of our 'noexec' mount is completely destroyed by /tmp, which
normally has rights like 'drwxrwxrwt'. This means anyone can do anything
in this directory except delete other people's files. Hence /tmp must be
mounted with the same 'noexec,nosuid,nodev' options, or be a symlink to
such a partition. Doing so causes problems; for example, midnight commander
uses shell scripts generated in /tmp to implement some functionality. Faced
with this, I've decided that such programs need to be fixed; no executable
code allowed in my /tmp directory.
Of course, is /tmp the only directory with this problem? Nope, there's at
least half a dozen other directories deep in /var that have the same access
rights. This pisses me off to no end. It turns out my system has a whole bunch
of hidden nooks and crannies that I didn't know about, places where any user
can freely hide their favorite exploit code, even if they've just stolen an
extremely restricted daemon account that normally doesn't have any write rights
at all. How many administrators ever check e.g. the metafont directory to see
if there is anything suspicious in there? The /tmp directory is a historical
anomaly that's more or less implicitly a security violation (there's no easy
way to stop anyone from using your disks). I'd appreciate it if its offspring
didn't take over unrelated parts of the directory tree. Now I've got to
disable the ones I don't use and move the rest over to an easily observed
and possibly protected place. Thanks a lot.
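Hunting them down is at least easy; a single find command lists every
directory on the system that any account is allowed to write to:

    # -perm -0002: world-writable; the sticky bit (as on /tmp) only
    # prevents deleting other people's files, not hiding new ones
    find / -type d -perm -0002 -ls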
This is the part of my setup that I'm least happy about. It's relatively
complex and contrived, yet not very powerful. If an attacker can trick you
into running a trojan she can probably trick you into copying it to your real
account first. This setup wouldn't have stopped any of the email and macro
viruses that plague the windows world. Those problems are in application
programs (and for years I've been enjoying complete safety from them since I
can't afford the necessary bloatware.) Viruses are notorious for being able
to get past obstacles, simply by patiently waiting for the right opportunity.
It also breaks some programs, for example I cannot print from the 'Internet
account' because PDQ produces shell scripts in the user's home directory tree
to control printing.
In the end, it achieves only two things. The first is to provide each user with
a somewhat protected space that is less vulnerable to mail, news or
web based trojans. The second one is that viruses and trojans that rely on
executable code are likely to be thwarted because they cannot use files
to store and copy themselves. Given the amount of damage a single
malicious trojan could do I'm willing to go through quite a bit of
trouble to implement even this very limited protection.
Suid programs
Every article on unix security tells you to remove (or disable) unnecessary
or dangerous suid programs. This article is no exception. Suid programs are
dangerous. Suid root programs are really, really dangerous, even more so
than root owned daemons.
In fact, not all root owned daemons are that dangerous. The danger is in
daemons that interact with untrusted users. By carefully crafting the input,
the user can make the daemon misbehave. The best known way of doing that is
the buffer overflow, but there are many other ways. Clever input might trick
a daemon into reading or writing files it shouldn't access, starting other
programs or going into an infinite loop. Daemons that have very little or no
interaction with untrusted users provide few opportunities to do so. The
biggest danger comes from daemons that have extensive conversations with
clients over the Internet and perform complex tasks. Such daemons should
never run as root.
Things like inetd and tcpserver do often start as root. However, they only
accept the connection; they have little or no direct interaction with the
other side. Instead they split off a child process that switches to a
specified (and hopefully safer) userid as soon as possible. Once interaction
with the untrusted user really starts, the daemon should be stripped of all
dangerous powers.
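With tcpserver, for example, the unprivileged uid and gid are given right on
the command line, so the service itself never runs as root at all (the ids
and the server program here are made up):

    # accept connections on port 110 as root, but run the actual
    # service under uid/gid 999
    tcpserver -u 999 -g 999 0 110 /usr/local/bin/pop3d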
The reason suid programs are so dangerous is that interaction with the
untrusted user begins before the program is even started. The caller controls
the environment in which the suid program will run. You can do really weird
things, like closing stdout before starting a suid program. If the program
opens a file, and the operating system always reuses the first free file
descriptor, the first file the program opens ends up on stdout. Anything the
program writes to stdout will be written to that file, something the
programmer most likely did not expect...
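From the shell, the attack shape is as simple as this (the program name is
hypothetical):

    # close stdout (descriptor 1), then start the suid program; the
    # first file it open()s gets descriptor 1, so everything it
    # printf()s is written into that file
    ./some_suid_prog 1>&-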
There are many other ways to confuse the program, using things like
environment variables, signals, or anything you want. A suid program must very,
very carefully sanitize its surroundings to avoid attacks. There is no 'safe'
stage in which to prepare for user interaction. This is the curse placed on all
that is suid root, and there is no escape.
Sometimes systems ship with hundreds of suid programs,
the vast majority of which is never actually used - they're just dormant
security risks. Other suid programs might get used - but not
by everyone. Most daemons do not call suid root programs unless someone has
cracked them and wishes to upgrade to root. Many installations have a classic
staff/student type population: a small set of users that may do system
administration tasks while other users never need to (nor should) do that.
Suid programs can be made accessible only to those who need to use them.
The usual way is to create a special group for each class of suid programs
you can identify and make each user a member of the groups they need. Programs
like sudo provide other ways to control who may use what program.
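The mechanics are plain shell work (the group and program names are
examples):

    groupadd staff-admin
    chgrp staff-admin /usr/sbin/someadmintool
    # the suid bit stays, but only group members may even execute it
    chmod 4750 /usr/sbin/someadmintool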
From a philosophical point of view, suid root is often misused as a way to
run with security disabled. Complex programs should not run with 'security
disabled' just because the programmer couldn't be bothered to do the task in
a more careful manner. The power behind suid programs is to run a program
with two ids active: that of the invoker and one provided by the system. This
means that, for example, a program that puts messages in the mail queue
should use the userid of the mail system; this should be enough for that
task. Most
modern mail programs indeed avoid suid root programs for that purpose. In fact
such system accounts, like daemon accounts, are often much more restricted than
the typical user account.
Suid root should be reserved for when security truly
must be bypassed (programs like su for example).
At this point we can't put it off any longer: the thing to do is to get a
list of all suid programs on the system and start the boring task of going
through them, examining each one. Not just to see if we actually need this
program, but also to see what alternatives there are. In the end you should be
able to tell what each program is used for and why it can't be eliminated.
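The list itself is one find command away:

    # all suid (-4000) and sgid (-2000) files on the system
    find / -type f \( -perm -4000 -o -perm -2000 \) -ls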
The questions I ask myself are:
- Do I need this program? Could I just strip off the suid bit and sleep better
at night? Like all those suid root games I never play.
- Are there more appropriate alternatives? For example, mount is suid root on
many systems just so the user can mount floppies and CDs. Often sudo would be
a better solution for that kind of thing (see the sudoers sketch after this
list).
- How about removing the suid bit and making the user su to the right account
before using it? YMMV on this; some people prefer to make things like ping
and traceroute normal programs so only root can use them. This is a tradeoff:
if you overdo it, people will have to su to root far more often, in itself a
security risk. I prefer ping to be suid root, but in many server environments
only root would ever use such programs, so what's the point?
- If the program is suid root, is this a program that should really need to
disable security? Many people leave things like sendmail and lpr alone when
pruning suid programs since these are essential system services. It's worth
looking into alternatives that do not need suid root programs.
- Who needs to use this program? Frequently the answer is "only real users,
not daemons", "only users that do system administration", or "only users over
18".
In such cases access controls should be set up that restrict access to these
programs.
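For the floppy example above, a sudoers entry (edited with visudo) might look
like this; the group name is made up:

    # members of group 'floppy' may mount and unmount the floppy
    # drive as root, and nothing else
    %floppy ALL = /bin/mount /dev/fd0, /bin/umount /dev/fd0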
Unfortunately, there are many programs that need to be suid root, even though
you really wouldn't want them to be. Most common are programs that need to
access a protected port number or other privileged network feature. For example
bringing up
a dialup ppp link requires super user rights because you're messing with
IP and interface configuration. We can only hope that in the
future capabilities and
other alternative mechanisms will put a halt to full scale
suid root programs popping up all over the place.
On my system I counted 49 suid programs, 47 of which were suid root. There were
also 28 sgid programs, 5 of which were sgid root.
Of the 47 suid root programs, I found I could disable the suid bit on 35 of
them. 20 were programs that were never used - either because they provide a
service I don't use, or because they require hardware I don't have. For 8 it was
considered appropriate to restrict them to root use only (e.g. dump and
restore) while I could eliminate 7 more by switching to alternatives that
did not require suid root programs. This eliminates about three quarters of all
suid programs.
Of the remaining 12, 7 had a legitimate reason to be suid root (like su) while
the other 5 needed to be suid root for practical reasons but there really should
be a better way of doing this. Examples are ping and Xwrapper. All remaining
twelve programs had one thing in common: normally they would only be started by
humans. No system account needs to use them. I therefore put them all into a
directory that only those accounts that correspond to real users can access. If
an attacker breaks into a system account he cannot reach any suid root program.
For six of the twelve I added extra requirements - a number of special group
IDs control who may access them.
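The directory trick itself is simple (the path and group are invented):

    mkdir /usr/local/suid
    chgrp realusers /usr/local/suid
    chmod 750 /usr/local/suid      # only group 'realusers' gets in
    mv /bin/ping /usr/local/suid/  # then adjust PATH for real users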
There were 2 non-root suid programs. Both appeared to use suid to provide
controlled access to files for a service. Normally sgid is used for this, but
there might be occasions where there is a good reason to use suid instead. In
any case, since I use neither of them, I disabled both.
There were also 28 sgid files. Sgid is a somewhat more pleasant mechanism,
because the user's true identity is never camouflaged in any way. Only 5 are
sgid root, which doesn't mean much since group root is not in any way special.
In fact it's a major stupidity alert to me; most programs use a group id that
identifies the service they're working for ('news', 'man'), which helps a lot
if you're trying to audit your system. Worse, since all important root owned
files are also of group root, an sgid root file might accidentally get more
than the usual privileges on some of these unrelated files. This should not
happen on a correctly configured system, but it's no fun having to check for
it. I found most sgid root programs were sgid root because they were
installed incorrectly...
The most common sgid group is games, which appears 12 times. These are all
games using this mechanism to e.g. write to a shared high score
file. Personally, I
couldn't care less if someone gained group games rights and cheated on his
pacman scores. The other 11 are assorted system accounts, used for things like
mail, news, and other services. For each of these, it would be undesirable for
someone to gain access to them and mess with that service. However, an exploit
in these programs would not directly threaten the whole system; only one
particular service or subsystem gets compromised. Note that in my setup,
none of these sgid programs will gain you access rights to suid root
executables. They cannot be used, even if compromised, as a stepping stone
from an unprivileged system account to suid root programs.
All in all, I found an interesting dichotomy here. On one hand, there are the
suid root programs, each of them potentially dangerous. On the other hand,
there is a set of sgid non-root programs, relatively harmless if things are
setup correctly. For both, you'll be able to eliminate many and limit access to
others - which should enhance security considerably. From a practical point of
view, no suid root program should be world executable. There simply are none
that need it.
It's interesting to note that, for the standard lpr printing system, no less
than three suid root programs are needed for normal users (lpr, lpq and lprm)
while administering the printing system requires only a single sgid lp program
(lpc).
What did I learn?
I learned a lot. Most of all I learned that the average Linux distribution is
made with almost no serious thought about security. If you take a look at a
default setup you will find, within 20 minutes, at least half a dozen ways to
improve it. Unfortunately most of these will take many hours to implement.
A cause of many security problems is that most distributions don't
differentiate between installing something and activating it. Most people will
install everything they think they might want to use some time. Even then, you
quickly find out it's better to err on the side of installing too much. After
digging up the CD for the third time because yet another package was needed,
you just stop trying to figure out what everything is and simply install it
unless you're really, really sure you'll never need it. For me, that amounts
to installing about everything except Klingon fonts and emacs. No wait, I
might need Klingon fonts some day.
The typical distro will not just install the stuff but also activate it. In
the case of the Klingon fonts that is hardly a problem. When the package
contains suid root programs or public servers there is a lot to be said for
keeping them disabled until someone actually starts using them.
A piece of software on its way from a shiny cdrom to being actually useful
goes through four steps:
- Installation: the task of transferring it from cdrom to the right place on
the hard drive.
- Configuration: setting it up the way you want it. This might vary from
setting just a few options to providing complex content, but very few
programs require no configuration at all.
- Activation: services are started, suid programs get their suid bit set.
- Exposure: trying to set things up so that those you want to use it can
access it, and others are excluded as much as possible.
Distributions think they do people a favor by short-circuiting things and,
upon installation, loading some half-baked default configuration, activating
everything and exposing it to as much of the world as possible. The results
vary from useless (what's the point of activating a http server before anyone
has put up any content for it?) to completely insane (putting linuxconf on a
public port). Given that 99% of these computers will connect to the Internet
some way or another, this leaves the poor user with the task of shutting down
or securing ten services she doesn't need for every one that is desired. Gee,
thanks a lot.
There is no reason why all but the most essential suid programs couldn't be
installed with the suid bit off, as long as a simple way is provided to
activate them when needed. In the same vein, there is no need for servers to
be running by default if a simple method exists to activate those needed. Most
stuff has to be configured anyway and can be activated as part of the
configuration process. Activating a few things would certainly be a lot less
work than trying to find out what's active, determine what it is and whether
it's needed, and shutting it down if not.
But it isn't just the distributions that cut corners on security. It's been
clear for years that Internet security would greatly increase if all ISPs
would check the source addresses of the IP packets they forward for their
users. This would make spoofing much harder, which not only stops certain
attacks but also makes it more difficult for attackers to cover their tracks.
But for many ISPs the only check ever made is whether there is a valid credit
card number on file; configuring routers is too difficult, too much work, and
a load of other stupid excuses. Billions have been spent on writing Y2K
readiness statements, but configuring routers to stop spoofed packets is
suddenly beyond the reach of mortals.
I won't even mention a certain competing OS family that not only has scripting
everywhere and everything eager to follow any URL, but also invented ActiveX
as the ultimate in destructive content.
So every user will have to take care of securing their own setup. I found out
it is a lot of work. Most of it is spent reading documentation, trying to
understand what's going on and fixing the more obvious problems. This article
is much, much longer than I had originally intended, mostly because every time
I found out enough to fix one problem I learned enough to identify at least
two new ones. I don't mind doing some work on securing my machine. But I want
it to be my own personal touch - tailoring things to my own situation and
building in a few tricks of my own that'll snare the unsuspecting cracker. Not
painfully tracking down a long list of stupid things like world-accessible X
servers, and spending yet another few hours to find a decent fix.
There are many more things to do, things I originally wanted to do but didn't
get around to because there were so many trivial things to fix first. Kernel
patches, like LIDS and the openwall patches. Various integrity checking and
intrusion detection methods. Someday.
Not everything is complicated. Sometimes the simplest fixes are the best ones;
for example, the Bastille hardening script installs an immutable, root owned,
empty .rhosts file in every home directory. A very simple fix that eliminates
a classic weak spot. It's worth looking around a bit to see what others have
done.
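The trick itself takes four commands per home directory (the user name is
invented; chattr +i makes the file immutable even for its owner):

    touch /home/john/.rhosts
    chown root:root /home/john/.rhosts
    chmod 000 /home/john/.rhosts
    # now not even a trojan running as john can replace it
    chattr +i /home/john/.rhosts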
Articles like this one tend to either end or begin with the author telling you
there is no absolutely secure system unless it's off, encased in concrete and
preferably on another planet. While technically true, the main reason they're
telling you this is that mankind fundamentally just likes telling other people
bad news. You can't have an absolutely secure system - but with some effort
you can come up with a pretty good imitation. Linux security is
now at the stage
linux installation was some years ago. Back then, installing Linux and getting
X to run required quite a bit of knowledge and skill, as well as some serious
documentation diving. Nowadays most bumps are smoothed out and you have
comfortable graphical installers that help you with the few things that can't be
done automatically. Hardening scripts and security options are getting
more and more common in distributions. Once they've figured out that these are
no excuse for having a sloppy default setup, and that setting things up
carefully from the start is probably easier than securing them
afterwards, we're likely to get some progress.
Currently a good security setup requires skill, knowledge
and time. If history has the decency to repeat itself then in a few years
any Linux distribution will be able to build a secure default setup with
just a few hints from the user.