## Key Takeaways
* Monitoring the technology infrastructure is a key element for
situational awareness in both security and IT operations.
* A 2020 infrastructure should use a modern application-layer
reverse proxy, such as Pomerium, in front of all services, and
leave all clients outside of it.
* The threat landscape should be the focus when shaping a
defendable infrastructure.
<small><i>Disclaimer: If you have outsourced all your equipment
and information to "the cloud", treat this post as a sanity check
of the relationship with your vendor. The primary audience of
this post is everyone willing to invest in people and knowledge
to provide the best possible defense for their people and
processes, and the technology supporting them.</i></small>
## Introduction
I cannot start to imagine how many times Sun Tzu must have been
quoted in board rooms around the world:
> If you know the enemy and know yourself, you need not fear the
> result of a hundred battles. If you know yourself but not the
> enemy, for every victory gained you will also suffer a
> defeat. If you know neither the enemy nor yourself, you will
> succumb in every battle.
However much it is repeated, the message has not come across. Why
is that? Because this is a hard problem to solve: it sits at the
intersection of people, culture, and technology.
If everyone used reverse proxies in a sensible way, I would
probably have a lot less to do at work. Time and time again it
turns out that organisations do not have configuration control
over their applications and infrastructure, and the reverse proxy
is a central building block in gaining it. To a large extent,
everything comes down to logs and traceability when an incident
occurs.
## BeyondCorp and the Defendable Infrastructure
The lucky part of this hard-to-solve problem is that Google has
already prescribed one good solution in its BeyondCorp
whitepapers [1].
Something similar was, however, described in the Norwegian Armed
Forces before that, in its five architecture principles for a
defendable infrastructure. These were published by the former
Head of Section of its Critical Infrastructure Protection Centre
[2]:
1. Monitor the network for situational awareness
2. A defender must be able to shape the battleground to have
freedom of movement and to limit the opponent's freedom of
movement
3. Update services to limit vulnerability exposure
4. Minimize the infrastructure to limit the attack
surface
5. Traceability is important to analyze what happened
I know that Richard Bejtlich was an inspiration for the defendable
infrastructure principles, so the books written by him are
relevant [4,5].
Defendable infrastructure is a good term, and it is also used in
a 2019 Lockheed Martin article which defines it well [3]:
> Classical security engineering and architecture has been trying
> to solve the wrong problem. It is not sufficient to try to build
> hardened systems; instead we must build systems that are
> defendable. A system's requirements, design, or test results can't
> be declared as "secure." Rather, it is a combination of how the
> system is designed, built, operated, and defended that ultimately
> protects the system and its assets over time. Because adversaries
> adapt their own techniques based on changing objectives and
> opportunities, systems and enterprises must be actively defended.
The development of these architecture principles happened before
2010, so the question remains how they apply in 2020. We may get
back to the other principles in later posts, but the rest of this
article will focus on monitoring from a 2020 perspective.
## Monitoring - a Central Vantage Point
One thing that has developed since 2010 is our understanding of
where to position monitoring capabilities, along with the now
mainstream possibility of detection on endpoints. The historical
focus of mature teams was primarily on the network layer. While
the network layer is still important as an objective point of
observation, the application layer has received more attention.
The reasons are the acceptance that this is often where
exploitation happens, and that commercial products with these
capabilities have emerged.
With that in mind, a shift has also occurred in what is understood
as best practice for positioning reverse proxies. Where the
previous recommendation was to defend inside-out, the focus is now
to defend outside-in.
Defending outside-in means taking control of what can be
controlled: the application infrastructure. In practical terms
this means positioning the reverse proxy in front of your server
segment instead of in front of the whole network, clients
included.
```
[ Client on-prem ] \                            [ Application A ]
                    \                                  |
                     ---> [ Reverse proxy ] ---> [ App gateway ]
                    /            ^                     |
[ Client abroad ]  /      risk assessment       [ Application B ]
```
Previously, for some reason, we put the "client on-prem" on the
other side of the reverse proxy, because we believed we could
control what the user was doing. Today, we know better. This is
not a trust issue; it is a matter of prioritizing based on the
asset value and the defending capacity.
A reverse proxy is also a central vantage point of your
infrastructure. In a nutshell: if you are good at detecting
security incidents at this point, you are in a good position to
have freedom of movement - such as channeling your opponent.
The modern reverse proxy has two integration capabilities that
legacy proxies do not (a configuration sketch follows the list):
* Single sign-on (SSO), which provides strong authentication and
good identity management
* Access control logic (Google calls this the access control
engine)
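To make this concrete, here is a minimal sketch of a
Pomerium-style configuration wiring the two together. All host
names and credential values are placeholders, and the exact field
names should be verified against the current Pomerium
documentation:
```yaml
# Sketch: SSO via an identity provider, plus per-route access rules.
# All values are placeholders; verify field names against Pomerium's docs.
authenticate_service_url: https://authenticate.example.com
idp_provider: google                # the SSO identity provider
idp_client_id: REPLACE_ME
idp_client_secret: REPLACE_ME
policy:
  - from: https://app.example.com   # what the client sees
    to: http://app001.internal:8080 # the protected upstream service
    allowed_domains:
      - example.com                 # the access control logic: who gets in
```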
In fact, Google stated in 2013 that it uses more than 120
variables for risk assessment in its access control logic for
Gmail [6]. In comparison, most organisations today use three:
username, password, and in half the instances a token.
> Every time you sign in to Google, whether via your web browser
> once a month or an email program that checks for new mail every
> five minutes, our system performs a complex risk analysis to
> determine how likely it is that the sign-in really comes from
> you. In fact, there are more than 120 variables that can factor
> into how a decision is made.
I imagine that Google uses factors such as the following in
addition to the sole username/password approach (they state some
of these in their article); a scoring sketch follows the list:
- Geo-location, with an algorithmic score of the distance from
the location of the last login to the current location; a k-means
distance could be a good fit
- Source ASN risk score
- Asset subject to access
- User role scored against asset subject to access
- Device state (updated, antivirus installed and so on)
- Previous usage patterns, like time of day
- Other information about the behavioural patterns of relevant threats
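To make the idea concrete, here is a minimal sketch of how such
factors could be combined into a single score. The weights,
thresholds, and field names are all invented for illustration;
this is not Google's model:
```python
from dataclasses import dataclass

@dataclass
class SignInContext:
    geo_distance_km: float    # distance from the last login location
    asn_risk: float           # 0.0 (clean) .. 1.0 (known-bad source ASN)
    asset_value: float        # 0.0 .. 1.0 sensitivity of the asset accessed
    role_matches_asset: bool  # does the user's role normally touch this asset?
    device_patched: bool      # device state: updated, antivirus present, ...
    usual_hours: bool         # within the user's historical usage pattern

def risk_score(ctx: SignInContext) -> float:
    """Combine the factors into one score in [0, 1]. Weights are illustrative."""
    score = 0.0
    score += 0.30 * min(ctx.geo_distance_km / 5000.0, 1.0)
    score += 0.20 * ctx.asn_risk
    score += 0.20 * ctx.asset_value
    score += 0.15 * (not ctx.role_matches_asset)
    score += 0.10 * (not ctx.device_patched)
    score += 0.05 * (not ctx.usual_hours)
    return score
```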
Another nice feature of a reverse proxy set up this way is that
it minimizes exposure and gives defenders the possibility to
route traffic the way they see fit. For instance, it would be
hard for an attacker to differentiate between a honeypot and a
production system in the first place. One could also challenge
the user when in doubt, instead of plainly denying access as is
sometimes done.
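Continuing the sketch above, the interesting part is that deny is
not the only outcome: the proxy can challenge, or transparently
reroute. The thresholds are, again, invented:
```python
def decide(score: float) -> str:
    """Map a risk score to an action. Deny is not the only tool available."""
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "challenge"  # step-up authentication instead of a plain deny
    return "honeypot"       # transparent reroute; hard to tell from production
```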
One challenge is which protocols need support. The clear ones
are:
* HTTP
* SSH
* Application gateways between micro-segments
I have scoped the details of micro-segmentation out of this post.
Micro-segmentation is the basic idea of creating a fine mesh of
network segments in the infrastructure so that, by default, no
asset can communicate with another. The rest is then routed
through e.g. a gateway such as Pomerium, or in high-performance
cases an application gateway, which may be a gateway for a
specific binary protocol. The reason is control of all activity
between services: being able to shape and deny access in the
terrain. A minimal sketch of such a default-deny flow table
follows.
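Here is the idea reduced to a toy flow table; the service names
and ports are made up:
```python
# Default deny: only explicitly listed service-to-service flows pass.
# Service names and ports are hypothetical.
ALLOWED_FLOWS = {
    ("web", "billing"): {"tcp/8443"},
    ("billing", "postgres"): {"tcp/5432"},
}

def is_allowed(src: str, dst: str, port: str) -> bool:
    return port in ALLOWED_FLOWS.get((src, dst), set())

assert is_allowed("web", "billing", "tcp/8443")
assert not is_allowed("web", "postgres", "tcp/5432")  # no direct path
```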
Even though this post is not about implementation, I will leave
you with some examples of good open source starting points:
Pomerium is a reverse proxy with the SSO capability, and the
default capabilities of SSH take you far (an SSH CA and a
JumpHost).
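For the SSH part, the pieces are standard OpenSSH: a user
certificate authority trusted by the servers, and a jump host in
the client configuration. Host and user names are placeholders:
```
# sshd_config on app001.example.com: trust user certificates from our CA
TrustedUserCAKeys /etc/ssh/user_ca.pub

# On the CA host: sign a user's public key (valid 8 hours, principal 'alice')
#   ssh-keygen -s user_ca -I alice@example.com -n alice -V +8h id_ed25519.pub

# ~/.ssh/config on the client: reach internal hosts via the jump host
Host app001.example.com
    User alice
    ProxyJump example.com
    CertificateFile ~/.ssh/id_ed25519-cert.pub
```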
```
             ------------> [ syslog server ] <-------------
             |                    |                        |
   o         |                    |                        |
  /|\  [ Client ] ------> [ example.com ] <------> [ app001.example.com ]
  / \        |        https - pomerium                     |
             |              - SSH JumpHost                 |
             |                                             |
         [ HIDS ] -----------------------> [ NIDS ]
```
Figure 1: Conceptual Defendable Infrastructure Overview
Now that a checkpoint is established in front of the
infrastructure, the rest is a matter of traceability: taking the
time to understand the data to gain insight, and finally
developing and implementing tactics against your opponents.
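As a starting point for the traceability part, shipping
everything to the central syslog server from Figure 1 can be as
simple as one rsyslog line per node (the host name is a
placeholder):
```
# /etc/rsyslog.d/forward.conf
# @@ = TCP; a single @ would be UDP
*.* @@syslog.example.com:514
```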
Until next time.
[1] https://cloud.google.com/beyondcorp
[2] https://norcydef.blogspot.com/2013/03/tg13-forsvarbar-informasjonsinfrastrukt.html
[3] https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/LM-White-Paper-Defendable-Architectures.pdf
[4] Richard Bejtlich, The Tao of Network Security Monitoring:
Beyond Intrusion Detection
[5] Richard Bejtlich, Extrusion Detection: Security Monitoring
for Internal Intrusions
[6] https://blog.google/topics/safety-security/an-update-on-our-war-against-account/