The basic syslog daemon included in most modern UNIX-like operating systems (Linux, *BSD, to some extent Sun Solaris) is quite simple and is designed to collect important information from running processes. Even the most basic implementations of syslogd now allow the system administrator to fine-tune which facilities and log levels end up in which log file.
The stock syslogd was always designed to be internetworkable: log messages can be forwarded to a remote host via UDP on port 514.
There are modernized alternative syslog daemons to choose from, including (but not limited to) the excellent syslog-ng, which has a number of quite essential improvements. These may not be critical for hosts merely sending their logs, but they are crucial for the log server itself. That said, if you can run syslog-ng on your hosts too, there are certain benefits to this configuration. The most important one is that it allows transmission of log data via TCP. This means you can wrap the connection in an SSL tunnel, not to mention the obvious benefits of reliable delivery – with plain UDP, especially where the log host and log server are geographically dispersed, the slightest connection delay or outage will silently discard potentially critical log information.
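As a rough sketch of what the TCP option looks like from a sending host, Python's standard library can emit syslog messages over either transport. The helper below is illustrative only – the logger name and the `use_tcp` flag are my own inventions, and the host and port you point it at are entirely up to you:

```python
import logging
import logging.handlers
import socket

def syslog_logger(host, port, use_tcp=True):
    """Return a logger that forwards records to a remote syslog daemon.

    With use_tcp=False this behaves like classic syslogd forwarding:
    fire-and-forget UDP datagrams that are silently lost on any hiccup.
    With use_tcp=True the stream is acknowledged end to end and can be
    wrapped in stunnel or TLS between sites.
    """
    socktype = socket.SOCK_STREAM if use_tcp else socket.SOCK_DGRAM
    handler = logging.handlers.SysLogHandler(address=(host, port),
                                             socktype=socktype)
    log = logging.getLogger("remote-syslog")
    log.propagate = False  # keep records off the local root logger
    log.addHandler(handler)
    return log
```

Usage would be something along the lines of `syslog_logger("loghost.example.com", 514).warning("disk nearly full")` – the hostname being, of course, a placeholder.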
Modernized syslogd alternatives like syslog-ng also allow the log to be kept in something other than a flat file, such as a SQL database. This makes a lot of sense on large networks, where log volume can grow so large that searching flat files becomes impractical.
Many well-meaning system administrators log to a centralized server and keep the data in a database as a way of coping with the deluge of log information (I should note that disabling debugging and otherwise useless log entries goes a hell of a long way to making sense of it all). However, this results in unencrypted UDP packets (or, with alternatives like syslog-ng, unencrypted TCP packets) flying around your network, and you may be surprised what they contain.
If you have a spare fifteen minutes, find a spare switch port, configure it as a “monitor” port and plug a network analyzer into it. Most managed switches will let you define a port as a monitor or mirror port and will copy all packets passing through the switch to it; obviously the aggregate traffic of all the other ports may exceed the throughput of this single port, so expect some drops. Back in the days of hubs this kind of eavesdropping was easy, as every port saw the traffic of every other port. In fact, that shared visibility was often used for measurement when trying to improve performance and reduce collisions.
If you don’t have a network analyzer, grab yourself a laptop and acquire the excellent Wireshark (formerly known as Ethereal), which does a darn good job of turning a cheap PC into a replacement for a piece of kit that costs thousands. Filter on UDP port 514 and just sit and wait. If you are on a typical large network you will see all sorts of goodies being logged, from imapd reporting user logins to a RADIUS daemon reporting successful authentication – complete with the username, challenge and response. You could potentially build a nice bit of intelligence from log data alone.
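To see just how little stands between a captured datagram and a readable log entry, here is a minimal sketch of decoding the PRI field of a raw RFC 3164-style syslog packet. The sample message in the usage note is invented:

```python
# Facility and severity names as assigned by RFC 3164 (first 12 facilities).
FACILITIES = ["kern", "user", "mail", "daemon", "auth", "syslog",
              "lpr", "news", "uucp", "cron", "authpriv", "ftp"]
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice",
              "info", "debug"]

def decode_pri(datagram: bytes):
    """Split a raw syslog datagram into (facility, severity, message)."""
    text = datagram.decode("ascii", errors="replace")
    if not text.startswith("<"):
        return None  # not a syslog-framed payload
    pri, _, rest = text[1:].partition(">")
    facility, severity = divmod(int(pri), 8)
    name = FACILITIES[facility] if facility < len(FACILITIES) else str(facility)
    return name, SEVERITIES[severity], rest
```

For example, `decode_pri(b"<38>Oct 11 22:14:15 mailhost imapd: Login user=alice")` comes back as an auth.info event – exactly the sort of thing that should not be traversing the network in the clear.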
Perhaps the crux of this article is in this single paragraph: if you have any sort of operation with a heap of systems, then logging to an external host is a great idea – but consider the security implications, and ensure that the logging traffic is tunnelled or otherwise corralled to prevent trivial recovery of log data.
If you use an IDS, then you have likely established that you are a “target”. Whatever the reason, these IDS systems can push their findings via syslog to a remote host. What is important is that the integrity of your logs is maintained. I recall a survey of self-confessed script kiddies in which almost all of them said that machines in an almost default state and only logging locally were an easy target – better still, they can rootkit the machine and sanitize the logs so it appears they were never there. When these folks discover external logging on a host they have obtained a root shell on, they redirect their focus to the log server – and since the log server generally runs the same OS, the exploit that gained them access to the first host is just as effective there.
The earliest machines had paper log printers. While incredibly wasteful, any tampering with the log would be obvious.
At the very minimum, acquire some 1GB WORM SD cards on eBay or Amazon. These were made by SanDisk at the request of a police department in Asia that was concerned about photographic evidence being altered after cameras were taken to a crime scene. The program appears to have been unsuccessful, and these SD cards are popping up everywhere for sale. How you use the card is entirely up to you. You could simply start writing to the block device without even using a filesystem, and just keep appending until you run out of space. By queuing two SD cards in the host, an email could be sent to the administrator when the first card is full while logging continues on the secondary card – which, with luck, is swapped for a blank device before it too fills up. The advantage of this method is speed: data is appended to the WORM device very quickly, and should an exploit be found that allowed access to the log server, there would be nothing an attacker could do remotely to sanitize the logs.
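The two-card scheme above could be sketched roughly as follows. The device paths and the notify hook are placeholders of my own – on a real host they would be the block devices the cards appear as, and whatever mail command you prefer:

```python
import errno

class WormLogger:
    """Append log records to a pair of write-once cards, rolling over
    to the standby card when the first fills up.

    `devices` is an ordered list of block device paths (hypothetical,
    e.g. ["/dev/mmcblk0", "/dev/mmcblk1"]); `notify` is called when a
    card fills so the administrator can insert a fresh one.
    """

    def __init__(self, devices, notify):
        self.devices = list(devices)
        self.notify = notify
        # Unbuffered append: each record hits the WORM media immediately.
        self.fh = open(self.devices.pop(0), "ab", buffering=0)

    def append(self, record: bytes):
        try:
            self.fh.write(record + b"\n")
        except OSError as e:
            if e.errno != errno.ENOSPC:
                raise
            # Card full: alert the admin and fail over to the standby.
            self.fh.close()
            self.notify("log card full, switching to standby")
            self.fh = open(self.devices.pop(0), "ab", buffering=0)
            self.fh.write(record + b"\n")
```

Because each write is an immediate, unbuffered append, an intruder who later compromises the log server has nothing to roll back.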
Instead of this approach, others may wish to put a basic filesystem on the media (such as FAT32), export log data at whatever interval they are comfortable with, compress it, and then add it to the WORM. While you could argue that this is much more economical than the former suggestion, remember that there is nothing stopping you from piping the raw stream through gzip or a similar compressor before committing it to the bare volume either.
I have spoken in detail on schneier.com about my solution for key storage, and this actual, real-world implemented solution would also be ideal for logging. My device uses an AVR with two FIFOs to implement a crude RS232 data diode; something similar could be built so that data may only pass one way through the serial port. We implemented a proxy on each side in the AVR so that data could be resent without either host seeing the other. Imagine a hardened-down BSD machine running as a log “aggregator” that collects logs and builds the database (which can be replicated elsewhere). Logs above a defined severity threshold are shot down the serial port to a second machine that is isolated and kept offline deliberately. That machine would only have utility after an attack where log modification is suspected.
Indeed, a much cheaper way of doing what is described above involves setting up a host whose sole purpose is archiving log data sent to it. By simply cutting the TX lines of the Ethernet cable you can make a primitive data diode. The host you connect it to – let’s pretend it is the second network interface on your main log server – will not negotiate ARP or anything fancy, due to the lack of a response from the host with the cut TX lines, so the easiest way to get your data into the quarantined box is to send it as a broadcast. On the quarantined box you could capture the data in a number of ways, but the easiest is to run tcpdump on the interface and pipe its output into a FIFO, where your script listens, converts the received broadcast frames back into log file entries, and archives them in an internal database. Some basic watchdog script would ensure everything continued to run.

Given that this machine has no access to the network by design, it would be unable to tell us about its own health. Perhaps my RS232 data diode could be put to use here, pushing out only statistics that could be graphed with cacti or nagios, plus information on its internal state (HDD SMART status, etc.), into a Raspberry Pi or a similar cheap embedded board that listens to the serial input, processes it, and shoots it out as SNMP.
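The receiving side of the quarantined box might look something like this sketch. For brevity I have swapped the tcpdump-and-FIFO plumbing for a plain bound UDP socket (on a cut-TX interface nothing can be sent back either way), and the port number, database path and `max_packets` escape hatch are all my own assumptions:

```python
import socket
import sqlite3
import time

def run_archiver(port=5514, db_path="quarantine.db", max_packets=None):
    """Listen for broadcast syslog datagrams and archive them to SQLite.

    A plain bound UDP socket stands in for the tcpdump-to-FIFO pipeline
    described above. max_packets exists only so tests and watchdogs can
    bound the loop; a production archiver would run forever.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS logs "
                 "(received REAL, sender TEXT, entry TEXT)")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))  # accepts broadcast as well as unicast
    seen = 0
    while max_packets is None or seen < max_packets:
        data, (host, _) = sock.recvfrom(8192)
        conn.execute("INSERT INTO logs VALUES (?, ?, ?)",
                     (time.time(), host, data.decode("ascii", "replace")))
        conn.commit()  # commit per packet: nothing to lose on power cut
        seen += 1
    sock.close()
    conn.close()
```

The watchdog then only needs to confirm the archiver process is alive and the database is growing.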
Anyway, I have not slept in over thirty hours as a consequence of a long flight and the insomnia that followed, so please excuse me if this article isn’t in my normal style and is perhaps too verbose.
Ultimately, remote logging is a very worthwhile feature. Ensure that machines spread about geographically are not just sending UDP log packets across the notoriously insecure Internet. Even if you have a VPN between locations, or only a single office, you should consider moving to a syslogd that can send over TCP and is configurable enough to let you use stunnel to encapsulate your connections. The overhead on modern hardware is negligible.
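On the sending side, the stunnel arrangement might look something like the fragment below – a sketch only, with placeholder hostnames and ports; consult the stunnel documentation for certificate verification options, which you absolutely should enable:

```ini
; /etc/stunnel/syslog.conf on the sending host (client side).
; syslog-ng is pointed at 127.0.0.1:5140; stunnel carries the stream
; encrypted to the log server, where a matching server-mode stunnel
; hands it to the local syslog-ng TCP listener.
client = yes

[syslog]
accept  = 127.0.0.1:5140
connect = loghost.example.com:5141
```

With this in place, the only plaintext log traffic is on the loopback interface.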
So, perhaps it’s time to start taking syslog seriously. There is even a piece of Windows software that watches the Windows Event Log and sends updates using syslog. Something like this would be ideal for a shop with many *NIX machines and only a few Windows boxes. I recall the software also had a functioning SNMP client that the authors claimed was superior to the vendor offering. “Why”, you may ask?
“Because it actually works and doesn’t crash on its first trap”