Your Virtual Machines

First steps


There is no better way to learn new stuff than hands-on, right? For a good start, let's go through your credentials and test the connections. In the e-mail with your VPN creds you should also find a purplelabs_vadminX_creds.txt file that contains the credentials for your OWN dedicated VMs. To keep the console listings simple, the $X = 11 credential set shown below is used by default throughout the labs, so whenever you see vadmin11-related entries in logs, it is most likely me :) For all the lab instructions, though, use your own $X credential set.


Let's see what the credential set looks like for sample identifier X = 11:

  • VADMIN_X:

    • vadmin11

  • PASSWORD_X:

    • vadmin11

  • BECOME ROOT:

    • sudo su -

  • KALI_X:

    • KALI_X_IP = 192.168.38.X

    • KALI_X_HOSTNAME = kali11

  • PRD_X [CentOS 7]:

    • PRD_X_IP = 192.168.38.X

    • PRD_X_HOSTNAME = prd11

  • DEV_X [CentOS 8]:

    • DEV_X_IP = 10.7.0.10 [192.168.39.X]

    • DEV_X_SSH_PORT = XXX

    • DEV_X_HOSTNAME = dev11

  • FUBU_X [Ubuntu 21.04]:

    • FUBU_X_IP = 10.7.0.10 [192.168.39.X]

    • FUBU_X_SSH_PORT = XXX

    • FUBU_X_HOSTNAME = fubu11

  • VPS_X [Ubuntu 20.04 + Katoolin]:

    • VPS_IP = XXX

    • VPS_SSH_PORT = 22

    • VPS_X_USERNAME = root

    • VPS_X_PASSWORD = passw0rd123

    • VPS_X_HOSTNAME = vps

RECOMMENDED SSH/CLI TERMINALS

  • Linux:

    • Terminator

  • macOS:

    • Tabby

  • Windows:

    • MobaXterm


My general advice is to keep a dedicated group of open tabs per box (PRD_X, DEV_X, and so on), as we will need a large number of active SSH connections to every single one of them. For a good start, get initial access to all your boxes:
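
Here is a minimal connection cheat sheet, assuming the sample X = 11 user; the uppercase placeholders stand for the corresponding values from your own purplelabs_vadminX_creds.txt, so substitute them accordingly:

ssh vadmin11@KALI_X_IP                      # KALI_X on #H1 (192.168.38.0/24)
ssh vadmin11@PRD_X_IP                       # PRD_X on #H1 (192.168.38.0/24)
ssh -p DEV_X_SSH_PORT vadmin11@10.7.0.10    # DEV_X behind NAT on #H2
ssh -p FUBU_X_SSH_PORT vadmin11@10.7.0.10   # FUBU_X behind NAT on #H2
ssh root@VPS_IP                             # VPS_X (public IP, port 22)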

KALI_X INITIAL ACCESS


A group of KALI_X VMs is running on physical server #H1, where the default internal network subnet is 192.168.38.0/24:
 
1. Log in to your KALI_X VM via ssh and look around:

# uname -a
Linux kali12 5.18.0-kali5-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.5-1kali6 (2022-07-07) x86_64 GNU/Linux

# lsb_release -a             
No LSB modules are available.
Distributor ID:	Kali
Description:	Kali GNU/Linux Rolling
Release:	2021.3
Codename:	kali-rolling

# cat /etc/resolv.conf 
nameserver 8.8.8.8

# ifconfig -a
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.38.132  netmask 255.255.255.0  broadcast 192.168.38.255
        inet6 fe80::a00:27ff:fee3:fd35  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:e3:fd:35  txqueuelen 1000  (Ethernet)
        RX packets 219093  bytes 237814489 (226.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 122664  bytes 66388962 (63.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# ip r s
default via 192.168.38.1 dev eth1 onlink 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.38.0/24 dev eth1 proto kernel scope link src 192.168.38.132 

# ps uax
# netstat -antp


PRD_X INITIAL ACCESS


A group of PRD_X VMs is running on physical server #H1, where the default internal network subnet is 192.168.38.0/24:

1. Log in to your PRD_X VM via ssh and look around:

# uname -a
Linux prd11 3.10.0-1160.25.1.el7.x86_64 #1 SMP Wed Apr 28 21:49:45 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

# lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core

# cat /etc/resolv.conf 
nameserver 8.8.8.8

# ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.38.232  netmask 255.255.255.0  broadcast 192.168.38.255
        inet6 fe80::a00:27ff:fea2:9a1d  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:a2:9a:1d  txqueuelen 1000  (Ethernet)
        RX packets 5040990  bytes 1387587154 (1.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8892009  bytes 4640629425 (4.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# ip r s
default via 192.168.38.1 dev enp0s8 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.18.0.0/16 dev br-139635b83a45 proto kernel scope link src 172.18.0.1 
172.19.0.0/16 dev br-d3fd91bb0118 proto kernel scope link src 172.19.0.1 
172.20.0.0/16 dev br-d30704f23e34 proto kernel scope link src 172.20.0.1 
172.21.0.0/16 dev br-9a473b9f791c proto kernel scope link src 172.21.0.1 
192.168.38.0/24 dev enp0s8 proto kernel scope link src 192.168.38.232 metric 100 

# ps uax
# netstat -antp
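
Note that the several br-* bridge routes above come from Docker networks living on PRD_X. For a quick, read-only look at them while poking around (run as root; the containers and networks you will see differ per lab):

# docker ps
# docker network ls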

DEV_X INITIAL ACCESS


A group of DEV_X VMs is running on physical server #H2, where the default internal network subnet is 192.168.39.0/24. #H2 is connected to #H1 over VPN; however, there is no direct routing from 192.168.38.0/24 to 192.168.39.0/24 (it does work in the opposite direction). That is why DEV_X and FUBU_X use the same IP address, 10.7.0.10, for everybody, and only the SSH port differs (NAT port forwarding). Iptables-based port forwarding for the SSH service has been configured to make access to the private subnet of #H2 easier.
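
The forwarding itself is already configured on #H2, so there is nothing for you to set up. Purely as an illustration of the mechanism, a DNAT rule of this kind would do the trick; the external port and guest address below are made-up examples, not the lab's actual ruleset:

iptables -t nat -A PREROUTING -p tcp --dport 2211 -j DNAT --to-destination 192.168.39.32:22
iptables -A FORWARD -p tcp -d 192.168.39.32 --dport 22 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.39.0/24 -j MASQUERADE

The PREROUTING rule rewrites the destination of traffic arriving on the forwarded port, the FORWARD rule lets it through, and MASQUERADE makes sure replies leave through #H2 again.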

1. Log in to your DEV_X VM via ssh and look around:

# uname -a
Linux dev12 4.18.0-305.3.1.el8.x86_64 #1 SMP Tue Jun 1 16:14:33 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
CentOS Linux release 8.4.2105

# cat /etc/resolv.conf 
nameserver 8.8.8.8

# ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.39.32  netmask 255.255.255.0  broadcast 192.168.39.255
        inet6 fe80::a00:27ff:fe0b:a516  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:0b:a5:16  txqueuelen 1000  (Ethernet)
        RX packets 6193  bytes 512401 (500.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24949  bytes 9485538 (9.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# ip r s
default via 192.168.39.1 dev enp0s8 proto static metric 101 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.36 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.38.0/24 via 10.0.2.2 dev enp0s3 proto static metric 100 
192.168.39.0/24 dev enp0s8 proto kernel scope link src 192.168.39.32 metric 101 

# ps uax
# netstat -antp
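
You can verify the one-way routing described above directly from DEV_X: the 192.168.38.0/24 route via 10.0.2.2 means DEV_X can reach the #H1 subnet, while hosts on 192.168.38.0/24 have no route back. A quick check against the sample PRD address from the earlier listing (ICMP may of course be filtered somewhere along the way):

# ip route get 192.168.38.232
# ping -c 2 192.168.38.232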


FUBU_X INITIAL ACCESS


A group of FUBU_X VMs is running on physical server #H2, where the default internal network subnet is 192.168.39.0/24. #H2 is connected to #H1 over VPN; however, there is no direct routing from 192.168.38.0/24 to 192.168.39.0/24 (it does work in the opposite direction). That is why all SSH connections point to the same IP, 10.7.0.10, but to different ports. Iptables-based port forwarding for the SSH service has been configured for easier access to the private subnet of #H2.

1. Log in to your FUBU_X VM via ssh and look around:

# uname -a
Linux fubu12 5.11.0-49-generic #55-Ubuntu SMP Wed Jan 12 17:36:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 21.04
Release:	21.04
Codename:	hirsute

# cat /etc/resolv.conf 
nameserver 8.8.8.8

# ifconfig -a
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.39.52  netmask 255.255.255.0  broadcast 192.168.39.255
        inet6 fe80::a00:27ff:fe6d:f4b6  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:6d:f4:b6  txqueuelen 1000  (Ethernet)
        RX packets 1858722  bytes 488301094 (488.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2850459  bytes 1664015163 (1.6 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# ip r s
default via 192.168.39.1 dev eth1 onlink 
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.38.0/24 via 10.0.2.2 dev eth0 
192.168.39.0/24 dev eth1 proto kernel scope link src 192.168.39.52 

# ps uax
# netstat -antp

VPS_X INITIAL ACCESS


1. Log in to your VPS_X via ssh and look around. You can use this machine as a C2 server whenever you need an egress connection to an external public IP.

# uname -a
Linux vps 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

# ip a s 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether c2:55:80:97:21:5e brd ff:ff:ff:ff:ff:ff
    inet XX.XX.XX.XX/24 brd 185.203.117.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a07:5741:0:336::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c055:80ff:fe97:215e/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:39:da:c2:ec brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
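
Before relying on VPS_X as a C2 endpoint, it is worth confirming that a given lab box can actually reach its public IP. A simple listener test with ncat does the job (ncat ships with nmap, so install it if missing; port 443 is only an example, pick any port reachable from the labs and not already in use):

# ncat -lvnp 443       # on VPS_X: listen and wait for the connection
# ncat VPS_IP 443      # on a lab box such as PRD_X: connect out; text typed on one side should show up on the other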


2. Find out which Python versions are available via pyenv:

# pyenv versions
* system (set by /root/.pyenv/version)
  2.7.18
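
Only the system interpreter and Python 2.7.18 are available here. If a later lab needs a newer interpreter, pyenv can build and activate one; a minimal sketch (the version number is just an example, and compiling requires the usual build dependencies to be installed):

# pyenv install 3.10.4
# pyenv global 3.10.4
# python --version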

3. List current connections and listening ports:

# ss -antp
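
If the full listing is too noisy, narrowing it down to listening sockets with their owning processes, or to established sessions only, is usually enough:

# ss -lntp
# ss -ntp state established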