(ISC)² Security Congress 2021 - day one

My reflections on the first day of the (ISC)2 Security Congress conference.

The last conference I attended in person was the Towards a safe and secure smart world one (blog post here) held at Canterbury Christ Church University in January 2020, where I also had the honour of speaking.  I've missed hearing good talks during lockdown, despite there being some events online, so when (ISC)² announced Security Congress 2021 I bought an early-bird ticket.  This is the first conference I've paid to attend (aside from travel costs) and it spans three days.  As you can imagine, I've been looking forward to this conference for some time and I'm going to be blogging about each day, time permitting.

Last year's Security Congress was, understandably, made online only due to the pandemic.  Initially this year's event was planned as a hybrid: in person in Florida, with online participation available.  Sadly for many, the event was moved to online only.  I was never going to be flying to Florida, but I can completely understand that missing the networking opportunities is a bind.  I'll devote another blog post, post conference, to the pre-conference and online experience, so I won't go into that here.

My schedule

Screenshot of diary for 18th - 20th October 2021, showing many sessions from 13:00 - late evening.

I took annual leave (holiday) off work to attend the conference, plus was given one day of work time to attend (if you're reading, thanks boss), so between taking time off and paying to attend I was keen to get maximum benefit from it.  Above you can see my schedule for the three days and you'll notice they're pretty packed.  Also, the times above are British Summer Time (BST) - the conference itself being Florida hosted means Eastern Time was the "home" time zone.

Clar Rosso: Kick off & welcome

(ISC)² CEO Clar Rosso welcomed everyone to the conference and talked about the staffing gap / workforce shortfall.  Clar highlighted that we're all so used to seeing big numbers that they're almost no longer useful metrics - instead we should focus on whether or not our organisations have the staff they need.  If they don't, what would we do differently with an extra person?  What would we defend better?

Diversity in the industry is clearly one of Clar's focuses, and from hearing the Town Hall segment I can see the board is focusing on inclusion too.  For the record, I fully support diversity.  A statement of Clar's that stuck with me was that diversity is absolutely not about removing opportunities for white males.  Food for thought, because sometimes it can feel that way.  We've all got an obligation to ensure the right people are hired for the job, and feel they have the opportunity to apply.

On the subject of hiring, it's not uncommon to see entry-level positions advertised, with entry-level salaries, asking for excessive experience or qualifications - for example, asking that the applicant holds a CISSP or similar qualification.  Make sure the job description and requirements match the title and salary.

Clar also suggested hiring people with the soft skills and experiences that cannot be taught, not focusing on the technical side of things, and then training employees in appropriate areas.  I've heard this suggested elsewhere too and I think the idea has merit, and creates opportunities.

Opening keynote, Chris Krebs: Defend today, secure tomorrow

I'll be honest, I found this talk quite dry and not particularly engaging.  I think that's because the talk felt incredibly set in the American context.  That's understandable given the speaker has worked with (what I believe are) American government bodies.

Chris spoke a lot about the threat landscape, offering some examples from 2016 (Russian interference in US elections and Ukrainian power) to 2021.  He made it clear that over the next five years there were going to be significantly more connected devices, not fewer.  Consequently the attack surface is going to change and enlarge, and we need to plan for that.

There's a need to do the basics better, with examples given as multi-factor authentication and backups.  It's important that we transition our systems to being resilient, so a compromise of one device doesn't cause a failure of the whole system.  Zero Trust Architecture, and the "assume breach" mantra, will play a big part in this transition.

Ira Winkler & Tracy Celaya-Brown: Human security engineering - a strategy to address "the user problem"

This dovetailed nicely with the previous talk, with Ira and Tracy stressing that a single user shouldn't be able to bring your entire system down.  What additional protections are in place to prevent a single "user initiated loss" from causing a catastrophic failure of confidentiality, integrity or availability?

Ira brought the viewpoint that users can only do what system administrators have given them the ability to do, and stressed that the user is not the problem.  It's our job as system administrators, and particularly security professionals, to ensure we stop malicious items getting to our users.  It's not their fault if they clicked a link in a phishing email: the attacker is an expert at crafting phishing emails, but your users likely aren't employed full time to spot such tactics.  Importantly, "social engineering is not a fair fight".

After an incident, look at what caused the situation.  How did the user come into contact with the malicious artefact?  The problem is not the fact the user clicked the link - that's a symptom of the problem.  The problem is the fact the user received the link at all.  Consider why the user was able to trigger the payload.  Were their permissions suitably restrictive to reduce damage?

On user training and policies, we are reminded to be very clear and explicit - what is acceptable?  What is not?  Do users know how to report incidents?  An awareness campaign is great, but it's important to back it up with "nudges" - posters in the offices (even in the toilets!) or user interface items.

Keith Barry's virtual brain hacking experience

A fun, interactive, session with a mentalist / magician performing a set of tricks over an hour.  Was a bit different and broke up the talks quite pleasantly.

Town hall

During this hour, people from (ISC)²'s board answered questions from members and attendees in a panel event format.  CEO Clar Rosso announced a new entry-level security accreditation was going to be piloted soon that would act as a stepping stone to the CISSP.  A further change is that two additional (cheaper) membership grades are being planned, student and candidate, to allow people to participate in (ISC)² activities right from the beginning of their information security journey.

Diversity came up again, and questions were raised around the cost of training and exams, which limits access for some people.  This is being looked at, but the cost of the exams themselves can't be changed.  Instead, (ISC)² are looking at cheaper qualifications earlier in the information security journey.  Another effort to increase inclusion is the localisation of exams.

Interestingly, but not surprisingly, the CCSP (Certified Cloud Security Professional) accreditation is growing the fastest.  CISSP remains the gold standard, and is also growing, but not as quickly.

Tony Gee: What are you leaking?  Practical steps in knowing your OPSEC

OPSEC, or Operational Security, is actually a military term, but it's now used by people to refer to being aware of the information they leak about how they / their organisation operates.  Leaks in your OPSEC are found via Open Source Intelligence (OSINT), which is something I've mentioned in lectures I've given in the past, so it's always reassuring to get some validation from another speaker.  I also find it really fascinating to see what tools other professionals use.

Tony mentioned some basic examples, such as sharing photographs of organisation ID badges online, as well as more technical examples where organisations have published certificates that show their internal asset names.  All of this information is useful for the reconnaissance stage of a penetration test, so organisations need to be aware of what's out there.

When conducting OSINT Tony highlighted how it was important to do so from a system that was separate to your main environment.  For example, use a virtual machine that you can return to a clean state once the investigation has concluded.  It's important to verify your findings, and you don't want to taint your research with either your own files or your past investigations!  A Linux virtual machine would be ideal, as many tools work readily in Linux.

Browser based tools exist including SecurityTrails.com, allowing you to look at DNS records and historical information.  There's also Google, which, being a search engine, can help you find useful, and potentially sensitive, information by use of "Google dorks".  These are search parameters allowing you to search for specific file types or terms, allowing you to extract further information.
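To give a flavour of what a "Google dork" looks like, here are a few illustrative queries (the domain example.com is just a placeholder, and the operators shown are standard Google search operators):

```
# Find PDF documents published on a specific domain
site:example.com filetype:pdf

# Find spreadsheets on the domain that mention "password"
site:example.com filetype:xlsx password

# Find pages exposing directory listings
site:example.com intitle:"index of"
```

Simple queries like these can surface documents an organisation never intended to be indexed.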

On the command line, and of particular interest to me, are tools such as exiftool (for finding (or removing) metadata from files), a Python3 script called sublist3r (to run DNS queries to feed into further research), and gowitness (which takes a list of DNS names and screenshots them if they're running a website).
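As a rough sketch of how those tools fit together (assuming they're installed, using example.com as a stand-in target, and noting that exact flags can vary between tool versions):

```shell
# Show the metadata embedded in a file - author names, GPS
# coordinates, software versions and so on can all leak here
exiftool photo.jpg

# Strip all metadata before publishing a file
exiftool -all= photo.jpg

# Enumerate subdomains of the target and save them to a file
sublist3r -d example.com -o subdomains.txt

# Take screenshots of any web servers running on those names
gowitness file -f subdomains.txt
```

The output of one tool feeding the next (sublist3r into gowitness) is what turns a pile of DNS names into something you can quickly eyeball for interesting targets.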

For the organisation, the next challenge is working out what information leaks they need to address.  Clearly you can't delete all your DNS records - stuff would break! - but you can remove outdated ones or those that perhaps shouldn't be present.  Tony also suggested publishing some false DNS records, perhaps a TXT record that looks like a Microsoft validation record, or a Google one.  This can help misdirect an attacker.  Also consider a social media policy, but be explicit about what isn't permitted.
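A decoy record along those lines might look like this in a BIND-style zone file (the verification values here are entirely made up - real ones are issued by Microsoft and Google when you verify a domain):

```
; Illustrative decoy TXT records mimicking domain verification entries
example.com.  IN  TXT  "MS=ms12345678"
example.com.  IN  TXT  "google-site-verification=abcDEFghiJKLmnoPQRstuVWxyz123456789_abcdefg"
```

To an attacker enumerating TXT records, these suggest Microsoft and Google services are in use, whether or not they actually are.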

Randall S Brooks & Jon-Michael Brook: Cloud top threats case studies & (im)proving your security?

My last talk of the day and, I'll be honest, I found this talk a disappointment as there was very little on the case study front.  Instead the speakers were talking about a threat modelling deck of cards[1] to be used while thinking about your risks and vulnerabilities.  Don't get me wrong, these are important things to consider, but I really felt the talk had lost its way and didn't meet what I understood of the brief.  Looking at the chat, others found the session useful though so perhaps it was just me...

There were some useful reminders though:

  • Data exfiltration and extortion are on the increase as part of a ransomware attack
  • Cloud misconfigurations can be the cause of a breach
  • Remember the cloud has a shared security model - you're still responsible for some areas
  • Use automated controls provided by the cloud vendor wherever possible, for example preventing someone creating a globally accessible data store
  • Just because a vulnerability isn't exploitable today, doesn't mean it won't be tomorrow!

Day one conclusions

Some excellent talks to kick off the conference and it was nice to see some fun elements in there - I certainly hadn't expected a mentalist magic show.  I'm looking forward to tomorrow, and to looking at some of the side activities like the careers centre.  I've got just over four pages of notes from day one and don't doubt I'll make more notes tomorrow.

Right now, however, I need to sleep as it's gone 23:00 here.  Fortunately I can use my morning to look at the side activities.


Banner image: Screenshot of the virtual conference venue landing page.

[1] If you've come across Backdoors & Breaches by Black Hills Information Security then you're thinking of the right idea, but Backdoors & Breaches looks a lot more polished.