There's a fine line between censorship and filtering; both occur for different but related reasons. Deciding where legitimate filtering ends and censorship begins is difficult, and I'll discuss some of the reasons here.
The Oxford English Dictionary defines censorship as "the suppression or prohibition of any parts of books, films, news, etc. that are considered obscene, politically unacceptable, or a threat to security." Clearly there are moral issues when it comes to deciding to censor - whose ethics or moral code determines what's obscene?
Another consideration is personal bias. For example, I don't drink alcohol (personal choice, I don't like the taste) so would never consider alcohol appropriate in the workplace. Indeed, I find it completely confusing that business meetings sometimes include wine and beer - surely you want everyone on top form? That's my thinking and rationale, but if I were to block alcohol-related websites it would be a problem for the hospitality teams running the customer bar.
What gets filtered / censored?
Typically access to certain websites is filtered by organisations, for multiple reasons. Additionally, email is filtered to remove junk messages (spam) and malicious emails (those with viruses and malware attached).
Who makes the decision?
Where I work at the moment it's often the case that I don't have to make the decision on whether to grant access to / block a resource - that's often decided by a manager. Using my alcohol example from earlier, one of the bar managers approved access to the wine seller's website so I just had to change the setting.
Sometimes decisions are made by the management team responsible for corporate governance. Such groups tend to only exist in larger organisations whereas in smaller companies these decisions are often made by the main management board.
Generally, in England at least, it's considered inappropriate for children to be exposed to sexual nudity in the form of pornography. Preventing access to such content is done to prevent harm that the material could cause to the child. Naturally that's decided based on someone's perception of what that content could do, although I'm sure there have been studies conducted (a Google search offered a number of results that I leave you to review should you be interested).
Censorship can also happen because the controlling entity (government, management board, etc.) wishes to stop access to controversial material or resources that go against the controller's regime or policies.
Filtering is performed for a number of reasons which vary depending on the organisation type. When I worked in a school we filtered out (blocked) sites that were considered non-educational or inappropriate. Sites like Mini Clip, which back in the day had some fantastic Flash games, were filtered because they caused a distraction (I had a few friends that played games on Mini Clip at the back of their lessons while we were in school, you know who you are!). Pornographic sites were filtered because they were considered inappropriate in an environment consisting primarily of minors (children).
In the workplace sites can also be blocked because they provide a distraction. Social networking is often blocked for this reason, and while that worked years ago, blocking such sites is less effective now - staff will just get out their mobile phones and browse that way.
Controversially, I'd consider a block put in place to prevent distraction to be using technology to solve a people problem. Managers should know if their staff are wasting time (be that online or during excessive tea breaks) and should discuss that with their employees. Tea breaks are harder to notice when the workforce is spread across many sites, but work output can still be observed, and prolonged reductions in output should prompt conversations. A number of caveats exist when investigating users, so it's best to solve a people problem with a people solution.
While thinking about drops in work performance, it's worth remembering those could be due to what's happening in a person's personal life. A good manager should consider that too. If excessive Internet use is observed it may be an employee's way of searching for support.
Filtering is also used to protect the environment from malicious actors. We use known lists of malicious sites to prevent access to such content, helping to keep our systems as safe as possible. I've never met anyone who had a problem with filtering for this reason, although it could be considered censorship.
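As a rough illustration of how this kind of protective filtering works, here's a minimal sketch of checking a URL's host against a blocklist of known-bad domains. The domains, set name and function are all invented for illustration; real systems use commercially maintained feeds and far more sophisticated matching.

```python
# Minimal sketch of blocklist-based web filtering.
# The blocklist entries and domains below are illustrative, not a real feed.
from urllib.parse import urlparse

BLOCKLIST = {"malware.example", "phishing.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is on the blocklist."""
    host = urlparse(url).hostname or ""
    # Check the host and each parent domain, so sub.malware.example is caught too.
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("https://sub.malware.example/login"))  # True
print(is_blocked("https://example.com/"))               # False
```

Checking parent domains as well as the exact host matters, because malicious sites routinely hide behind arbitrary subdomains.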
Government can intervene to require blocking of content. It's expected, I believe even required by law, that we prevent access to extremist and terrorist material where I work. Arguably this is censorship, rather than filtering, but the aim is to protect the whole of society and those that are particularly impressionable.
When a site is incorrectly categorised as "pornographic" when it's actually a site about "pornographic addiction", that's a false positive. A more appropriate classification may be "charity" or "self-help". Equally, our email filter was giving false positives for "sexual" emails when a building supplies catalogue was attached - unsurprisingly, screws are sold for use in building projects! The system objected to the word "screw" as it's a slang term for "to have sex with".
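A crude keyword filter of the kind that tripped over the catalogue might look like the sketch below. The blocked-word list and sample message are invented for illustration; the point is that matching words with no sense of context is exactly what produces this class of false positive.

```python
# Naive keyword matching of the kind that produced the "screw" false positive.
# The blocked-word list and sample text are invented for illustration.
BLOCKED_WORDS = {"screw"}

def flag_message(text: str) -> bool:
    """Flag the message if any blocked word appears as a substring, ignoring context."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

print(flag_message("Catalogue: wood screws, 4mm x 40mm, box of 200"))  # True (false positive)
print(flag_message("Minutes from Monday's meeting attached"))          # False
```

Substring matching can't distinguish a hardware catalogue from genuinely offensive content, which is why filters built this way need a review process for flagged messages.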
False positives can cause issues during investigations as there is an implication that a person visited a banned site. This is worth considering before making a decision on a case.
Can filtering cause problems?
Absolutely! Filtering systems often have to apply blanket rules across users, which can disrupt workflow. Going back to my time working in schools, it wasn't unusual for the psychology department to request temporary unblocks to allow students to research particular topics. In my present role it's unsurprising that I need to browse sites categorised as "hacking"; in fact I actively download "hacking tools" such as Kali Linux, Wireshark and Nmap. Separate policies are required to allow members of different teams to do their jobs effectively.
Defining an action as censorship or legitimate filtering is subjective and heavily context based. If my employer asked me to block access to a website that openly spoke against my organisation I would consider that censorship. (It's not prevention of free speech because the site hasn't been taken down, you just couldn't view it here.)
Similarly, I'd consider blocking terrorist sites to be filtering as, applying the "man on the street test", I'd expect the majority would consider such sites offensive.
For me, filtering is legitimate if the purpose is to protect individuals but only when that protection passes the aforementioned test. Censorship can fall into that definition too but I'd consider something aggressively censored when the motives don't pass that test.
Banner image: a word cloud based on some words that sprang to mind.
 "man on the street test" - seeking the viewpoints of random, normal, people in order to determine what the average person considers reasonable. This differs from surveying a particular group as the group would largely share the same, biased opinion. For example, if I asked a gun club their views on gun ownership I'd potentially get a different, consistent, response compared to speaking to people on the city high street where it's more likely to get a range of views.