Cyber security

Cyber security is a form of human antagonism.  Engineering, medicine and other technical and scientific endeavours don't have this aspect of human antagonism: they try to solve a problem with technical means and start from the assumption that the difficulty is human-neutral, that is, the origin of the problem doesn't try to evade the solution.  If a doctor faces the problem of curing a broken leg, he doesn't assume that once he finds a way to heal the leg, the "broken leg problem" will try to find a way around it so that his cure no longer works.

With cyber security, this is different.  Cyber security is a kind of warfare.  In warfare, there are no rules.  It is warfare without physical violence to humans (in most cases), but it is warfare nevertheless.  This is why there is no definitive technical solution to cyber security.  There are no secure systems (in the sense of systems that are perfectly safe from a cyber-security point of view).  And if such a system existed today, tomorrow somebody would find a way to attack it.

Cyber security is based upon a distinction between humans: those who are supposed to work with the equipment, have access to its contents and are allowed to use it and alter it the way they like, and "all the others", who are supposed to be kept out of that equipment, while some of them will try very hard to do with the system what the first group doesn't want them to do.  We will call the two groups respectively the "users" and the "attackers" of the system.  The goal of cyber security is to have the users use the system as they like, while denying the attackers any access to the system which the users don't want them to have.

The users are supposed to be able to read files on the system, to create them, to modify them, to delete them, to use the web cam, to install software, to alter the system, to reconfigure the system, to use the network resources, and so on, while the attackers want to be able to do so too, and are supposed to be denied this access.

The fundamental vulnerability is that the system has to be built in such a way that the users can do all this ; as such, there is a way for the attacker to use the system in that way too, if he can trick the system into taking him for a legitimate user, or modify the system slightly at some point in time so that the system's user/attacker distinction is no longer what the user thinks it is.  This alteration is always possible at some level, because computing systems are complex systems made of many different pieces of hardware, firmware and software, provided by many different parties whose trustworthiness is not guaranteed, which due to this complexity almost certainly contain critical errors somewhere, and which, also due to this complexity, let the legitimate user grant attackers rights that were never intended without realizing it.

In other words, if a system were designed in such a way as to guarantee that it cannot be attacked, then it would also not function for the user.  The only way to have a usable system is to make a system that will contain vulnerabilities.  This is a general trade-off: security versus utility.  It goes all the way.  An absolutely secure system doesn't work.

Cyber security and cryptography

There is a double relationship between cyber security and cryptography: cyber security is necessary for cryptography to make any sense, and cryptography can contribute to cyber security.  After all, both cryptography and cyber security have a common element: they are both models in which there is human antagonism.  Cryptography divides the human species into two groups, friends and enemies, where the distinction relies on the knowledge of some piece of data.  Cyber security divides the human species into two distinct groups, legitimate users of a system and attackers of that system.

Whereas cryptography has to do with information, cyber security has to do with concrete, physical installations and the data they contain ; but as most if not all cryptography is done with computing systems, the relationship is obvious.

As cryptography makes the essential distinction between friends and enemies based upon access to some piece of data (the secret key), obviously, if the computing system on which this piece of data is kept is compromised so that an attacker gets access to it, then the cryptographic system is broken (because enemies now also possess the key, and have the same abilities as friends).  On the other hand, cryptographic techniques are often used in computing systems to allow only legitimate users to use the functions of the system and keep attackers out.  The simplest version of this is asking for a password before giving access to the system.
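
As an illustration of that simplest version, here is a minimal sketch (Python standard library only ; all names and parameters are illustrative, not a prescription) of how a password check can rely on cryptography without the system ever storing the password itself:

  import hashlib, hmac, os

  def make_record(password: str):
      """Store only a random salt and a slow hash, never the password itself."""
      salt = os.urandom(16)
      digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
      return salt, digest

  def check_password(password: str, salt: bytes, digest: bytes) -> bool:
      candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
      # constant-time comparison avoids leaking information through timing
      return hmac.compare_digest(candidate, digest)

  salt, digest = make_record("correct horse battery staple")
  print(check_password("correct horse battery staple", salt, digest))  # True
  print(check_password("guess", salt, digest))                         # False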

The above simple truth is often overlooked.  Cryptography on a system whose security is broken is useless.  And a system that depends, for its security, on a broken cryptographic system is broken security-wise too.  There's no point in having state-of-the-art cryptographic systems on a machine on which the most elementary security is missing.  To put it bluntly: there's no point in keeping your one-time-pad key in your Facebook account and thinking that what you encrypt with that one-time pad is absolutely secure, because there's a mathematical proof that tells you that nobody can break a one-time-pad cipher.

Threat model

As cyber security is warfare, you have to know your enemy, following Sun Tzu.  In fact, the strategic insight of Sun Tzu is extremely applicable to cyber security !  The basic strategy of cyber security is to find out what cost an enemy is willing to bear to mount a successful attack, how much damage such a successful attack would do to you, and what time, money and effort you are willing to spend to avoid it, knowing that every system can be broken.  The aim is hence to render a successful attack so costly for the attacker that there is a good chance he will give up because it isn't worth it.
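
To make the reasoning concrete, a toy sketch with entirely made-up numbers (they are assumptions, purely for illustration): the defence is adequate when a successful attack costs the attacker more than it gains him, and the counter-measure costs you less than the loss it prevents, weighted by its probability.

  # Hypothetical numbers, only to illustrate the cost/benefit reasoning.
  attacker_gain = 50_000     # what a successful attack is worth to the attacker
  attacker_cost = 80_000     # what the attack would cost him against your defences
  expected_loss = 200_000    # damage to you if the attack succeeds
  attack_probability = 0.1   # your estimate, given the attacker's cost/benefit
  defence_cost = 15_000      # what the counter-measure costs you

  attack_unattractive = attacker_cost > attacker_gain
  defence_worthwhile = defence_cost < expected_loss * attack_probability

  print("attack unattractive to the attacker:", attack_unattractive)
  print("counter-measure worth its cost:", defence_worthwhile)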

Does this mean that certain systems don't need any security, because after all there's nothing to gain from them ?  This isn't true.  There are two aspects of every system which can be of interest to an attacker.  The first aspect is "entertainment".  It is fun to break systems.  Sheer vandalism can be fun.  But this is in fact a minor threat: vandals are not very motivated and not willing or able to put a lot of resources into an attack just for the fun of it.  So, although you should keep vandals out, this is in fact the easiest part.  There is another aspect to any system which is worth much more to an attacker: once your system is compromised and in the attacker's hands, it can be used to attack other systems over the network.  As such, breaking your system can be worth whatever the attack on the other system is worth.  If your system is somewhat easier to break than the neighbour's, then it can be interesting for an attacker to break your system rather than the neighbour's, to conduct his attack on a high-stakes system through yours.

A system compromised in order to attack another system is called a zombie.  A large collection of zombies attacking a target is a botnet.  The exact way an attacker sets up a botnet can differ a lot, but a few famous examples were built on viral infections.  Probably the most spectacular is the Conficker family of worms.  The worm is of course self-propagating, but the interesting part of the worm (which relates it to a botnet) is that it regularly asked for updates of itself and instructions from its "masters" via cryptographically generated internet addresses (the principle is sketched below), so that it could be used for different purposes, adapt against defences and so on.  Certain versions of the worm even set up peer-to-peer networks amongst themselves to propagate the instructions of the "masters" over the network of infected machines.  At its height, it is estimated to have infected about 15 million computers, and it is still not eradicated.  A similar kind of worm stole bank account information: the Zeus rootkit, an earlier worm that could steal bank account information but wasn't really a botnet, was combined with the Conficker peer-to-peer botnet principles to form GameOver Zeus, a botnet with similar peer-to-peer network properties that did what the Zeus rootkit did.

The attackers are not only criminals in the legal sense.  In fact, the biggest threat can come from state-sponsored cyber attackers, of which probably the most famous example is Stuxnet.  Stuxnet was an outright cyber weapon which spread across many innocent computers to vandalise (that is, to destroy physically) specific foreign industrial equipment.  Fortunately the creators of Stuxnet (probably US intelligence agencies with some help from the Israeli state) took care to confine the vandalism to very specific targets, so that most infected systems that were not the intended target only served as zombies to transmit the attack, and didn't undergo much damage themselves.
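
The "cryptographically generated internet addresses" mentioned above are known as a domain generation algorithm.  The sketch below is not Conficker's actual algorithm ; it is a minimal illustration of the principle: bot and master derive the same daily list of rendez-vous domains from the date, so defenders have no fixed address to block.

  import datetime, hashlib

  def daily_domains(day: datetime.date, count: int = 5) -> list:
      """Derive a short list of rendez-vous domains from the date alone."""
      domains = []
      for i in range(count):
          seed = f"{day.isoformat()}-{i}".encode()
          label = hashlib.sha256(seed).hexdigest()[:12]
          domains.append(label + ".example")
      return domains

  # Bot and master run the same code on the same date and get the same list.
  print(daily_domains(datetime.date(2009, 4, 1)))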

You want to avoid your system being used to attack other systems, even if you think that you don't mind.  The reason is that you may get into quite some legal trouble if the attack on the other system is traced back to your system, and you'll have a hard time proving that you had nothing to do with it.  You don't want law enforcement to take away all of your systems for forensic analysis.  If you're a business, you don't want the bad publicity that will go with it.

As such, every system faces a threat, even systems for which, at first sight, there is nothing for an attacker to gain.  You should hence in any case be "keeping up with the Joneses" and have your security level at least as good as the neighbour's.

There are no systems that don't need security.  The public PC in the reception hall that doesn't contain any corporate or private information, and is placed there for guests to surf the internet, should be secured too, or it is the ideal zombie.  There is a general threat that one could qualify as a "dragnet threat": it isn't particularly after you, but it represents an opportunity with enough benefit for attackers to put a certain effort into it.

We saw two such dragnet threats: vandalism, and setting up zombies.  But there is a third thing an attacker can be after, even if you think you have nothing to hide: personal information.  Even though you may think that personal information is not exactly a secret, enough of it can be used in several ways.  The first way is identity theft.  Of course your name is not something you consider a secret.  Where you live isn't, either.  Your social security number is also administrative information that is not a big secret.  Your birthday, also not.  The name of your mother is not a state secret either.  And we can continue.  But if one collects all these pieces of information about you, it becomes more and more possible for someone to spoof your real-world identity and to have law enforcement erroneously come after you for it.  Even if the personal information is not used for identity theft, information is power.  Commercial negotiation is more advantageous if you know more about the other party than that party knows about you.  Even though we are all born naked, having to negotiate in your underwear with men in black wearing sunglasses puts you in a difficult situation.  There is no reason not to protect personal information, even if it doesn't seem "a secret".

Of course, most systems do contain confidential information.  The questions to be asked for every piece of confidential information are:

  • what is the cost for me if this piece of information were to fall into the hands of others ?  Cost in terms of image damage (from the information itself, as well as from the fact that it wasn't protected), embarrassment (pictures where you, the CEO, are drunk at a party ; internal reports full of grammar errors...), customer liability (leaked confidential customer information won't be appreciated), loss of competitive advantage (strategic plans, price offers, ...), R&D, banking information and the associated risk, ...  You might also include: law enforcement risk if the information can prove or hint at illegal activity which you prefer to hide.
  • who might be interested in this information ?  Competitors, disgruntled or fired co-workers, foreign nations (industrial and economic espionage), politicians who might not like you for one reason or another, workers' unions, activist groups, hackers selling the information to the highest bidder, enemies of your customers, thieves who want to empty your bank account ... and law enforcement looking for proof or indications of illegal activity.
  • how much are they interested in this information ?  That is, how much effort are they willing to put into obtaining it ?  How much can they afford ?
  • will the attacker mind being detected and possibly identified, or not ?  Most of the time, it is crucial for an attacker not to get caught or identified.  If an attacker realises that continuing his attack will expose him, he might quit.  Do I have legal protection against the attacker, or does the attacker have legal protection when attacking me (for instance, intelligence agencies) ?

Finally, there is another question that is of interest.  Although most cyber threats have to do with stealing information, the threat can also lie in accessing your systems not for the information they contain, but for what else one can do with them: damaging the system, or acting on whatever the system controls (for instance, a production line in a factory).  Stuxnet is an example.  If your system is controlling a chemical plant, there's a lot of damage or (state?) terrorism that can be done by taking over control of that system.  So there are a few more questions to consider:

  • what else can an attacker do when he has access to my systems ?
  • who might be interested in doing this ?
  • how much is he interested in doing that ?
  • will the attacker mind that he might be detected or not ?

One should give these specific questions sufficient attention.  They allow you to set up a reliable threat model.
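
One way to give them that attention is simply to write the answers down per asset.  A minimal sketch of what such a written-down threat model can look like (the entries are invented for illustration):

  # A threat model can be as simple as the answers recorded per asset.
  threat_model = [
      {
          "asset": "customer database",
          "cost_if_compromised": "customer liability, image damage",
          "interested_parties": ["competitors", "criminals reselling data"],
          "attacker_effort": "medium",
          "attacker_minds_detection": True,
      },
      {
          "asset": "reception-hall guest PC",
          "cost_if_compromised": "none directly ; zombie risk and legal trouble",
          "interested_parties": ["botnet operators", "vandals"],
          "attacker_effort": "low",
          "attacker_minds_detection": False,
      },
  ]

  for entry in threat_model:
      print(entry["asset"], "->", entry["attacker_effort"], "attacker effort expected")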

Attack surface types

Again following Sun Tzu, you have to know your weaknesses.

Essentially, the attack surface has the following parts:

  • hardware
  • firmware
  • networking
  • software
  • human

Each of these parts will show vulnerabilities, that is, potential ways on which to base a successful attack.  Going down this list, the difficulty and cost of an attack decrease, while its detectability, popularity and commonness increase.  That is a rule of thumb, to which there can be exceptions.

Building and selling you compromised hardware, or compromising your hardware (before delivery, or in situ at your place), is in general very difficult to do.  The amount of effort it costs generally places this in the realm of state-sponsored organisations such as intelligence agencies, or other large-scale criminal organisations.  However, if you have compromised hardware, it will be extremely difficult for you to find out, and there's not much you can do about it.  If Intel's processors have a back door in them, what can you do ?  In fact, the only way to hope to detect this is to catch suspicious network activity initiated by your hardware, or your hardware responding suspiciously to strange network requests.
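
The network side is thus about the only practical handle.  Serious monitoring would be done from a separate device, since compromised hardware can lie to its own operating system ; but as a very modest starting point, here is a sketch (assuming the third-party psutil package is installed, and possibly administrator privileges) that lists the machine's current outbound connections so they can be compared with what you expect it to be doing:

  import psutil

  # List established connections with a remote address, one line per connection.
  for conn in psutil.net_connections(kind="inet"):
      if conn.raddr and conn.status == "ESTABLISHED":
          print(f"pid={conn.pid}  {conn.laddr.ip}:{conn.laddr.port} -> "
                f"{conn.raddr.ip}:{conn.raddr.port}")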

On the next level, we find firmware corruption.  Firmware is software, but software that is kept in ROM and is not supposed to be changed (often).  The computer BIOS (or UEFI) is an example of firmware, but many devices have firmware running on a controller to make them perform the low-level tasks they are supposed to do.  Even a thumb drive has firmware in it.  In the old days, firmware was really "firm", that is, you had to take specific action on the hardware to modify it, like connecting a special device to an internal connector.  Unfortunately, most firmware today is stored in flash memory, which can be re-written, and most of that flash memory is made reprogrammable by the system itself.  Almost as difficult to detect as compromised hardware, and very powerful, firmware corruption is the current headache of cyber security.  Probably still limited, it is strongly gaining in popularity, because the logistics of firmware corruption are less demanding than those of hardware corruption.  Hence it is open to many more potential attackers.

Firmware corruption is currently the most dangerous attack surface, against which the defence is still very weak.  It is still a very difficult attack to put in place, but progress is being made quickly.  I predict a bright future for firmware attacks in the coming years.

Your network, or your network connections, are a more classical attack surface.  Corrupting your outside network (over which you have no control if you use an internet connection) can give an attacker access to anything you do on the network, for instance through a man-in-the-middle attack and/or DNS spoofing.  In so far as the outside network can get access to your system, or confidential information is sent to the outside, corruption of the outside network allows an attacker to steal that data or to gain access.
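
One counter-measure against a man in the middle on the outside network is to authenticate the endpoint.  A minimal sketch with the Python standard library (the host name is just an example): the TLS handshake below fails if a man in the middle or a spoofed DNS answer redirects the connection to a server that cannot present a valid certificate for that name.

  import socket, ssl

  context = ssl.create_default_context()   # verifies certificate chain and host name

  with socket.create_connection(("example.org", 443), timeout=5) as sock:
      with context.wrap_socket(sock, server_hostname="example.org") as tls:
          cert = tls.getpeercert()
          print("negotiated", tls.version(), "with", cert["subject"])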

Corruption of the inside network is more dangerous.  Indeed, whereas any reasonable cyber security policy strongly limits the damage that anything coming from the external network can do, once on the inside, the security measures are usually much more relaxed, and hence the vulnerability of your system to an internal network corruption is much larger.

The "standard" attack surface is of course software, whether it is system software, or application software.   The attack surface can be corrupt software, or simply added malware (malware is software who's only goal is to attack your system).   That software then does on your system what the attacker wants it to do: to send him information, or to perform the actions on your system which were the object of his attack in the first place (like blowing up your chemical plant).

Finally, the simplest attack is to mislead the legitimate user and trick him into doing what the attacker wants, as the legitimate user has all the means to do with the system as he pleases.  This kind of attack is called social engineering.  It can consist in tricking the user into sending the information directly to the attacker, or into corrupting the system so that the attacker has easier access afterwards.

Attack vectors

By what means can the attacker reach your system ?  In order to do something with your system the way a user would do it, the attacker has to be in contact with your system in one way or another.  Fortunately, the stronger the attack vector, the more difficult it usually is to use, the fewer people are actually capable of using it, and the higher the chance of the attacker being caught.

The different attack vectors are:

  • the internet.  This is usually seen as the only attack vector.  Although it is indeed the most popular one and the one against which one has to protect in the first place, it is not the most dangerous one.
  • Wireless access.  Since the advent of wireless LAN, access to the (vulnerable!) local network has become much easier.  If the network coverage reaches beyond the fences of the site where the system resides, you can attack the system wirelessly from a parked car, for instance.  But at least the attack is only possible nearby.  The number of potential attackers drops drastically.
  • Wired network access (external or internal).  This is already much harder to do.  You have to get physical access to a network outlet (a printer, a router in a cupboard, ...), a cable, or the like.  While this seriously limits the number of people who can do it, and strongly increases the probability of being caught in the act, it is usually much easier to get access to network infrastructure than to machines without raising suspicion.  The called-in maintenance electrician working on cables looks much less suspicious than the plumber opening a console session on a server in the server room.
  • Access to the console, and/or to machine ports such as USB ports.  Systems become very vulnerable in most cases when the attacker has access to the console, or when the attacker can connect a malicious device to the system.  In many cases these are vulnerabilities by design.
  • Casual access to the hardware.  If an attacker can open the machine case for a short time, he can corrupt firmware or hardware.
  • Access to hardware before delivery at your site.  Someone can tamper with your ordered material before it is delivered.
  • Stealing or confiscating hardware.  If the attacker can take away the hardware and analyse it for as long as he pleases in his own laboratory, most systems will give away all their secrets.

You should analyse how easy or how difficult you make it to execute each of these attack vectors.  As a function of the threat model, you should estimate whether the attack is probable or not.

For instance, if you have a networked PC in the front room of a shop that is not always attended, or a network printer next to the lavatory, an attack with physical access is rendered much easier than if those machines are in a locked room with personnel near the door.  The chance of someone sneaking a USB thumb drive into an office PC depends on how easy it is for a stranger to get near an office computer.

One shouldn't necessarily protect against all these vectors.  But one should give it a thought as a function of the threat model.

Vulnerabilities

The attack vectors serve to reach a vulnerability.  Vulnerabilities can be classified in three types:

  1. Vulnerabilities by design
  2. Errors (bugs)
  3. Back doors

Vulnerabilities by design are known and accepted design decisions in some aspect of a system, or a protocol, made in the earlier-mentioned trade-off between functionality and security, where security has knowingly been sacrificed for functionality.  For instance, physical access to a machine allows one in many cases to format all the disks on that machine and install a new operating system with new software and new credentials, even without any of the original user credentials.  The design of the machine is such that this is a well-documented procedure, because being able to do so (to refurbish a computer, for instance) is deemed more important than preventing it.  It can also be because preventing it is technically so complicated, expensive and performance-crippling that the vulnerability is accepted.

Another example of a vulnerability by design is being able to re-flash the BIOS if one has physical access, and have the original operating system reboot, even if you don't have any credentials, cannot access the installed operating system, and the disks are encrypted with keys you don't know.  Or being able to read unencrypted disks on a machine, even if you don't have any credentials : you boot the machine from an external drive containing a system you installed yourself, you mount the original system disks, and you copy what you like to your external drive ; next, you reboot the system on its own operating system (where you cannot get in).  There's essentially no trace of what you did (apart from the last-access dates in some file systems).

Another important vulnerability by design is the internet protocol: you have no control over where your network packets go, and anybody along the way can see them, see where they come from, see where they are going, and can even modify them.  The internet protocol was not designed with security or anonymity in mind.

One should be aware of vulnerabilities by design and protect against the relevant attack vectors in another way: in the case of the design vulnerabilities of a desktop computer, for instance, by physically protecting the machine ; in the case of the internet protocol, by using extra network layers on top of it to obtain any form of security and/or anonymity.

Most 'interesting' vulnerabilities, however, are bugs.  Complex software systems are full of bugs, and some of these bugs mean that the software is not going to act as intended in the face of (possibly very strange) attacker input.  In order to reach the vulnerability, the attacker needs some access to the software at hand.  That access can be desired (for instance, a public web service), unavoidable (for instance, an open port on an internet connection), or obtained by intrusion in another way.  If the software at hand doesn't handle the strange request correctly, it can act in such a way that the attacker gets unwanted access to system resources.  This is the classical way of attacking systems: through software bugs.  We will come back to that later.
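
A minimal sketch of the idea, using an injection bug as a stand-in (toy in-memory database, invented data): the vulnerable query pastes attacker input into the query text, so a "very strange" input changes the meaning of the query ; the parameterised version treats the same input as plain data and leaks nothing.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
  db.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")

  attacker_input = "nobody' OR '1'='1"

  # Vulnerable: the input becomes part of the query text itself.
  leaked = db.execute(
      f"SELECT secret FROM users WHERE name = '{attacker_input}'").fetchall()
  print("vulnerable query returns:", leaked)      # leaks alice's secret

  # Correct: the input is passed as data, never as query text.
  safe = db.execute(
      "SELECT secret FROM users WHERE name = ?", (attacker_input,)).fetchall()
  print("parameterised query returns:", safe)     # returns nothing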

Finally, there can be deliberately designed access, concealed and undocumented, built in by the designer of a sub-system: that is called a back door.  In principle only the designer, and the people he informs about it, know this back door and can get into your system that way.  Such back doors can be put there for copyright owners to find out whether you are violating their rights, by commercial companies who like to harvest personal information for data-mining purposes, by intelligence agencies to spy on your activities, or they can be required by law enforcement to be able to investigate people in general for unlawful or politically incorrect attitudes (global surveillance), ...

Usually, the best protection against back doors is open source software.  It is much more difficult to hide a back door if the source code is available.  There have been, and still are, back doors discovered in open source software.  Only in a few of these cases was the back door intentionally put into a release by a contributor ; in many cases, back doors are introduced by compromising the source code servers themselves.  An indication that a back door is best found in open source is the Borland database debacle.  Borland (yes, the company that also makes Pascal compilers !) had built a back door into its own database product in 1994, which went unnoticed and was sold to many customers.  When Borland made the project "open source" and gave out the source code, the back door was spotted within 6 months by people reading the code.  (The back door was an extra login: username "politically" and password "correct".)
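
For illustration, this is roughly what such a back door looks like in source code, and why an open code base helps: anyone reading the function can spot it.  (The user name and password echo the Borland case mentioned above ; the function itself is invented and deliberately simplistic.)

  def login(username: str, password: str, user_db: dict) -> bool:
      # Back door: a hard-coded account that bypasses the real user database.
      if username == "politically" and password == "correct":
          return True
      # Normal path: look the user up in the real database (here a plain dict).
      return user_db.get(username) == password

  print(login("politically", "correct", {}))   # True, without any real account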

In a few cases, compilers have been compromised so that they put a back door in all the code they compiled.  A more spectacular and recent example is the corruption of Xcode in the Apple sphere.  Indeed, even though the source code of open source software is open to inspection, you still need to compile it with a compiled compiler.  Even if the compiler is open source itself, if that source is compiled with a back-doored compiler, you get back-doored executables from perfectly clean source code.  A solution to this catch-22 is presented here.

Attack strategies

An attacker will use different attack strategies, combining the attack vectors at his disposal and the different types of attack surface, to reach his goal.  The strategies can be complex, because a single attack vector on a single attack surface may not lead directly to the attacker's goal.  A strategy consists in setting up a sequence of attacks, where systems are compromised and serve as zombies to attack other systems, which in their turn become zombies, until the final goal is reached.

A hypothetical attack strategy in a hypothetical example (somewhat inspired by Stuxnet):

Suppose that firm X has a remote chemical plant Y in a far-away country where chemical mixtures are produced for big customers.  The director of plant Y will produce chemicals of a given composition for a given customer only on the order of the CEO of firm X, who will sign the order cryptographically.  The plant director will verify the signature of each order he receives and execute it if the signature is correct.  The CEO is aware of the importance of the private cryptographic key which can sign these orders, and therefore he keeps this key on a special computer in a safe, which has no network connection.  In fact, the CEO types his order on his office laptop, saves the document on a USB thumb drive, takes this thumb drive with him to the safe, uses the offline computer to sign the document on the thumb drive, and then, back on his office laptop, sends the signed document off to the plant director.  This procedure seems watertight for protecting the secret key, as this key only resides on the non-networked computer in the safe and is used for nothing else but signing these documents.
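
A minimal sketch of the signing scheme described here, assuming the third-party 'cryptography' package (any digital-signature scheme would do): the private key never leaves the offline machine, and the plant director only needs the corresponding public key to verify each order.

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  signing_key = Ed25519PrivateKey.generate()     # lives only on the machine in the safe
  verify_key = signing_key.public_key()          # known to the plant director

  order = b"Produce 500 kg of mixture A-17 for customer Z"
  signature = signing_key.sign(order)            # done on the offline computer

  try:
      verify_key.verify(signature, order)        # done at the plant
      print("signature valid: execute the order")
  except InvalidSignature:
      print("signature invalid: refuse the order")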

Suppose that firm X potentially has a very big deal with firm Z for the production of certain chemicals, and that the contract will depend on the quality of a sample they will receive.  An entity W, which has heard of this deal, would like it to fail.  The plan is to send erroneous instructions to plant director Y, so that the chemicals sent to Z do not work out and the contract is lost for firm X (which is the goal of entity W, for whatever reason).

For this, the secret signing key of the CEO has to be stolen.  But how ?  In fact, the strategy would be to corrupt the CEO's laptop, into which he plugs the USB drive with the signed documents.  If the corrupted laptop can install a BadUSB kind of firmware on the USB drive, then the next time the CEO plugs a drive into his laptop, the firmware of that drive will be altered.  When that thumb drive is then plugged into the signing computer in the safe, it will secretly copy the secret key to a hidden file on the thumb drive.  Back at the CEO's laptop, the malware on that laptop will send this file off to a website somewhere on the other side of the world.  After taking off the file and closing the website, there will hardly be any trace left by the time the CEO realises that his signing key has been compromised.

With the stolen key, the attacker can now spoof an e-mail from the CEO to the plant director with instructions to mess up the production for Z.

In order to corrupt the CEO's laptop, the attacker can use social engineering on some member of the personnel, and have him install some malware that gives the attacker access to his computer over the internet.  Through this computer, connected to the intranet, the attacker might look for a vulnerability in the CEO's laptop and gain access to it, to install the malware on the CEO's laptop.

It is only when Z receives that mess and informs the CEO of the bad quality that, after investigation, firm X will realise that the key was compromised.  By the time forensic investigation comes along, the attacker has long erased any trace that might lead to him, using essentially the same road to erase the malware.

Such an attack is of course not easy to set up.  But something very similar has been done with the Stuxnet attack on the Iranian enrichment plants.

Bugs as vulnerabilities and disclosure

The conceptual error I made before I got more involved in cyber security was to think that, yes, there are a whole lot of bugs in the software on a system that allow cyber attacks to take place, but that only a handful of very capable hackers are able to use them.  Indeed, in most cases it is not easy to transform a bug into useful system access, because of course the bug was not intended for this.  If you place yourself in the position of the naked attacker, simply knowing about the bug, it takes a lot of specialized knowledge to transform this knowledge into an exploitable attack.  My erroneous idea was that the few hackers in the world who found out about a bug, and had the capacity to use it "correctly" to gain access to a system, are so few and far between that the probability that they had anything against me, insignificant being, was next to zero, so I shouldn't care.

Nothing could be further from the truth.  Although it is true that finding unknown vulnerabilities is very difficult, that finding a way to turn them into a useful attack can be even more difficult, and that only a small number of people in the world have these capacities, known vulnerabilities are made public, and the ways to exploit them, often with the software that goes with it, are publicly available.  So you don't need to be able to find them, or to write the software tricks, in order to use them !  Anybody can do so.  This significantly enlarges the number of potential attackers, of course !  There are free (and commercial) software packages available that help you set up an attack on a machine with a known vulnerability.  There are even tools that systematically scan potential targets for vulnerabilities.  That's a wholly different picture from the lone genius hacker who found a vulnerability and is going to use it against 3 targets !  Thousands of people have at their disposal software that incorporates knowledge of thousands of vulnerabilities and the way to turn them into a useful attack !

Some terminology:

  • a piece of software or a technique that turns a vulnerability into some kind of access to the vulnerable system is called an exploit
  • an exploit and/or the associated vulnerability can be publicly known, or known only to its discoverers and a few people they informed about it.  In the latter case, we call the exploit a zero-day exploit.
  • the normal reaction to a publicly known exploit is a security patch, that is, a modification (a new version) of the original vulnerable system that doesn't contain the vulnerability any more.

There are three different attitudes towards zero-days:

  1. full disclosure.  From the moment one discovers a zero-day exploit, one makes it public.  This means that all people over the world are informed of the fact that the vulnerability exists, and also that all attackers over the world learn how to attack the vulnerable systems.
  2. responsible or coordinated disclosure.  From the moment one discovers a zero-day exploit, one only informs the maker of the software containing the vulnerability, so that he can bring out a security patch.  After a certain time, one makes the exploit public (after the software maker brings out the patch, so that in principle systems don't contain the vulnerability any more if they keep up with security patches).
  3. non-disclosure.  You are a hacker, and zero-day exploits are the most valuable attack vectors.  You use the exploit to attack systems, or you sell it for big money to those who want to use it.  Or you are a naive and pretentious person, thinking that you are the only one capable of finding this error, so that if you don't tell anyone about it, nobody will know.

The arguments for full disclosure are that it is one zero-day exploit less that hackers, who surely already know about it, can keep hidden, and that it is the means of maximum pressure on the software maker to provide a security patch.  The arguments for coordinated disclosure are similar, except that it would be a good thing if a patch existed before the many hackers who didn't know the zero-day become able to use the exploit.  The argument against coordinated disclosure is that it puts much less pressure on the software maker to release a security patch, hence still leaving valuable time to hackers in the know.  The argument here is that your discovery probably implies that many hackers have already discovered it and are enjoying their zero-day exploit ; the faster this stops, the better.  Also, it is important to inform users of a potential attack as soon as possible, so that, even lacking a patch, they can protect their equipment in one way or another.  Giving the software maker time to develop a patch also means leaving the users some extra time in the dark about a potential attack that they might mitigate in another way (by shutting down their system if they really need the security, for instance).  The debate between full disclosure and coordinated disclosure is a heated one.

In any case, this implies something for the user: apply all security patches immediately.