Kevin's Blog

Accessing data from WD My Book Live HDD


Recently, I was asked to recover data from an HDD that had previously been inside a WD My Book Live enclosure. I ran into a problem when attempting to mount the HDD on my Linux machine. The drive appeared as a block device with 4 partitions: the first 2 showed as "linux_raid_member", the third as swap, and the fourth, the largest and presumably the one holding the data, as ext4.

When I attempted to mount the 1.8T ext4 partition, I got a "wrong fs type" error. My first thought was that the filesystem was corrupt, so I ran e2fsck. It found some errors, but I still could not mount the partition.

Since mounting kept failing, I tried to get at the data using debugfs, and I was finally able to see the folders inside the filesystem. At this point I understood two things: one, the data still existed and was potentially uncorrupted; and two, something else funny was going on with the filesystem.
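
For anyone in the same situation, the debugfs session looks roughly like this (/dev/sdb4 is a placeholder for the data partition on your system, and the directory name is hypothetical):

    debugfs /dev/sdb4                          # opens the filesystem read-only by default
    debugfs:  ls -l /                          # list the root directory
    debugfs:  rdump /shares /tmp/recovered     # copy a directory out to the local disk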

My next step was to run dmesg to see if there was anything useful in there. What I found was a bunch of errors referencing block size, which was something new to me. Some googling revealed that mount has problems with filesystems whose block size is larger than 4096. I began to investigate the filesystem further using an assortment of tools. Both dumpe2fs and tune2fs told me that I was dealing with a block size of 65536. This was shocking to discover, as I had never really seen different block sizes in use, and certainly not one so large. I also attempted to retrieve the block size using blockdev, but for some reason it reported 4096, which I knew was incorrect; blockdev queries the kernel's block layer rather than the ext4 superblock, which may explain the discrepancy.
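
To check the block size yourself, the queries look like this (again with /dev/sdb4 standing in for the data partition):

    # Read the block size from the ext4 superblock
    dumpe2fs -h /dev/sdb4 | grep 'Block size'
    tune2fs -l /dev/sdb4 | grep 'Block size'

    # Ask the kernel's block layer instead (this is the call that reported 4096)
    blockdev --getbsz /dev/sdb4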


Upon further research I learned that the kernel has had an ongoing problem dealing with large block sizes. It comes down to the page size, which is 4096 bytes by default on most machines, not being large enough, along with other kernel build parameters. To retrieve the data, you can either run a kernel built to account for this larger block size, which is not necessarily realistic all the time, or use another solution I found: fuseext2. As a userspace driver, it knows how to handle the weird block size and will allow you to retrieve your data.
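
The invocation is along these lines (a sketch; check the man page of your fuseext2 build for the exact option syntax, and substitute your own device and mount point):

    # Mount the 64K-block ext4 filesystem read-only through FUSE
    fuseext2 -o ro /dev/sdb4 /mnt/mybook

    # Unmount once you have copied your data off
    fusermount -u /mnt/mybook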

The issue of block size is an interesting one. WD most likely chose the bigger block size because this drive is meant for backups, which means potentially large files being transferred, and a larger block size can provide better performance in that situation.

Sat, 14 Aug 2021

Configuring OpenSSH to use Kerberos Authentication


This article is a continuation of the last article about setting up an MIT krb5 server. We will configure OpenSSH to work using tickets from this server.

Modern OpenSSH uses GSSAPI to communicate with Kerberos. This means that even though there are configuration options that start with the word Kerberos, we should not use them; those are legacy options that only work over SSHv1, which is now deprecated.

  1. Set a proper hostname
       hostnamectl set-hostname server.kevco.virt

  2. Ensure time is synced using an NTP server
     By default, CentOS should have chronyd started and enabled, however, you may want to set up an ntpd server. It is very important that the kerberos server and clients have their time synced up. Otherwise, you will have problems authenticating.

  3. Install the Kerberos packages
       yum install krb5-workstation krb5-libs

  4. Edit /etc/krb5.conf
     Configure this file in a similar manner to the server. Replace the example domain and realm with your domain and realm. Also make sure that you point to the correct kdc and admin server.

  5. Add an entry into /etc/hosts (optional if DNS configured)
     If you do not have DNS configured with the proper SRV and A records, you should add an entry pointing to the hostname of the kerberos server. Make sure that this hostname is the same as the Service Principal Name (SPN) you gave the server. You cannot have an entry in your /etc/hosts that says kerberos instead of kerberos.kevco.virt if you do not have an SPN matching host/kerberos@KEVCO.VIRT in your KDC.
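     For example, with a KDC at 192.168.1.50 (a placeholder address), the entry would be:

      192.168.1.50   kerberos.kevco.virt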

  6. Create a service principal for this machine and add it to this machine's keytab
     Each machine must have its own service principal and have its key stored in its own keytab.

      kadmin -p admin/admin -q "addprinc -randkey host/server.kevco.virt"
      kadmin -p admin/admin -q "ktadd host/server.kevco.virt"
      
  7. Edit /etc/ssh/sshd_config and insert the following
      GSSAPIAuthentication yes
      GSSAPICleanupCredentials yes
      GSSAPIStrictAcceptorCheck yes
      

    As stated before, GSSAPI is the interface used by SSHv2 to authenticate with Kerberos, so it must be enabled. The second option is very important: GSSAPICleanupCredentials ensures that your credentials are destroyed on logout instead of staying in the cache. This matters because if attackers get into your machine, they can steal the ticket and Pass The Ticket to another server to which those credentials may provide access. Finally, we enable StrictAcceptorCheck, which verifies that the SPN matches the host's hostname. You can disable this if you have multiple aliases. You should probably disable password authentication at this point as well to reduce the attack surface.

  8. Add approved users to the ~/.k5login file or create a user
     There are two options you can use to allow users to log in to an account on your server using kerberos. The first option is to create a .k5login file in the home folder of the user you want the kerberos user to be allowed to log in as. In this case we will put it in the root user's home folder as this is an example (please do not allow root login to your SSH servers). You will place one User Principal Name (UPN) per line:

      kdiaz@KEVCO.VIRT
      

    The second option is to simply create a new user that matches the username of the User Principal Name (UPN) that will be logging in. For example, kdiaz@KEVCO.VIRT will be able to log in to the kdiaz user on the server.

  9. Configure the client using steps 1-5, remembering to also add the hostname matching the SSH server's SPN to your /etc/hosts file
  10. Edit the /etc/ssh/ssh_config on the client device
      GSSAPIAuthentication yes
      GSSAPIDelegateCredentials no
      

    Once again we enable GSSAPI authentication so that we can use Kerberos. We also, depending on the environment, disable GSSAPIDelegateCredentials. For this example, we do not need it. However, if you need the server to obtain tickets on your behalf, you can enable it, which may be important or useful in certain scenarios. If you do not need it, keep it off, as an infected machine with the ability to request tickets on your behalf can cause you trouble.

  11. Get a ticket and test
      kinit kdiaz
      ssh root@server.kevco.virt
      

    If all is well, you should now be able to use your ticket to log in to the configured user on your server. It is important that you use the proper hostname matching the server's SPN to avoid trouble. It is also important that the key version numbers (kvno) of your SPNs and UPNs match across the two machines you're trying to get to communicate; mismatches can be a source of headaches. Errors such as this one can be found by running the SSH server in debug mode and attempting to authenticate. If you get an error due to the kvno of your UPN not matching, you can clear your credentials from the cache using kdestroy and reinitialize them with kinit. Additional debugging can be done by running the ssh client in verbose mode using the -v flag.
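
    When debugging, it helps to run the server and client like this (port 2222 is an arbitrary choice so you do not disturb the running daemon; treat it as a placeholder):

      # On the server: run a one-off sshd in debug mode on an alternate port
      /usr/sbin/sshd -d -p 2222

      # On the client: connect verbosely to that port
      ssh -v -p 2222 root@server.kevco.virt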

Thu, 04 Feb 2021

Creating An MIT Kerberos 5 Server In CentOS


Kerberos is an authentication protocol that is very widely used, most visibly through its implementation in Microsoft Active Directory. However, MIT has an implementation of the Kerberos protocol, krb5, which we can use on Linux. It uses symmetric encryption combined with a ticket-based system to securely authenticate users. I will not spend much time describing the protocol, as there are existing resources which explain it and the terminology used in this article very well.

MIT krb5 can be used as a standalone product or can be integrated with an LDAP server, such as OpenLDAP, as a backend. In this article, I will only discuss krb5 as a standalone authentication product. In this configuration, there is no identity tied to the Kerberos ticket provided other than the User Principal Name (UPN). If you want a full identity and authentication solution, you should integrate krb5 with LDAP.

The main components of the krb5 server are the Key Distribution Center (KDC), the kadmin server, the database, and the keytab file. The KDC is the main server, while kadmin lets you manage principals in the database as well as the keytab. There is also an additional service running as part of kadmin, kpasswd, which allows users to reset their password using the kpasswd utility.
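
For example, once the server is running, a user would change their password with the kpasswd utility (kdiaz is the example principal created later in this post):

    kpasswd kdiaz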

Installation and Configuration

  1. Set a proper hostname
       hostnamectl set-hostname kerberos.kevco.virt

  2. Ensure time is synced using an NTP server
     By default, CentOS should have chronyd started and enabled, however, you may want to set up an ntpd server. It is very important that the kerberos server and clients have their time synced up. Otherwise, you will have problems authenticating.

  3. Install the Kerberos packages
       yum install krb5-server krb5-libs krb5-workstation

  4. Edit /etc/krb5.conf

    Uncomment and replace all lines with references to the example domain and realm. The standard realm name convention is to use your domain name capitalized. Below you will find an example config declaring the realm KEVCO.VIRT on a machine with the hostname kerberos.kevco.virt.

      [logging]
       default = FILE:/var/log/krb5libs.log
       kdc = FILE:/var/log/krb5kdc.log
       admin_server = FILE:/var/log/kadmind.log
    
      [libdefaults]
       default_realm = KEVCO.VIRT
       dns_lookup_realm = false
       dns_lookup_kdc = false
       ticket_lifetime = 24h
       renew_lifetime = 7d
       forwardable = true
       rdns = false
    
      [realms]
       KEVCO.VIRT = {
         kdc = kerberos.kevco.virt
         admin_server = kerberos.kevco.virt
       }
    
      [domain_realm]
       .kevco.virt = KEVCO.VIRT
       kevco.virt = KEVCO.VIRT
     
    Here I set the log file locations in the logging section. In the libdefaults section, the default realm is set to KEVCO.VIRT as you can define multiple realms for a KDC. I disabled DNS lookup as there is no DNS server in this scenario. I also disabled rdns since reverse DNS is not set up in this scenario (because there is no DNS server). Finally, I declared the realm KEVCO.VIRT and provided the hostnames for the kdc and kadmin server which happens to be this same machine. The final section simply defines translations from domain name to realm name. For any additional information check man krb5.conf or MIT documentation.

  5. Edit /var/kerberos/krb5kdc/kdc.conf
     This is the file that holds the main configuration for your KDC. Replace the example realm with your own and set any other options you would like. Below is an example of a config you can use. For available options, reference the documentation. In this example, I leave the default encryption types enabled, however, you may want to disable the likes of des, des3, and RC4 in favor of AES if possible.

     [kdcdefaults]
     kdc_ports = 88
     kdc_tcp_ports = 88
    
     [realms]
      KEVCO.VIRT = {
       master_key_type = aes256-cts
       acl_file = /var/kerberos/krb5kdc/kadm5.acl
       dict_file = /usr/share/dict/words
       admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
       supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
      }
     
  6. Edit /var/kerberos/krb5kdc/kadm5.acl

    This is the ACL file that determines who will be able to do which actions on the kadmin server. You should add permissions for the admin/admin service principal as can be seen below. Without this, you will not be able to do anything on the server remotely, including pulling down the keys into the keytab of a client. In order to restrict permissions down to certain actions see the documentation.

     admin/admin@KEVCO.VIRT   *
     
  7. Create the kerberos database
      kdb5_util create -s
     
  8. Create the admin service principal
      kadmin.local -q "addprinc admin/admin"
     
  9. Start and enable the kdc and kadmin
     systemctl start krb5kdc kadmin
     systemctl enable krb5kdc kadmin
     
  10. Create a service principal for this computer with a random key and add the keys to the local keytab
      All systems on which you want to use Kerberos authentication should have a service principal (SPN). The standard is host/hostname_in_dns. You can add multiple principals as aliases if you have more than one name for your machine. Each machine must have its own keys stored in its local keytab, and clients likewise need the keys generated for their own SPNs added to their keytabs for things to work properly.

     kadmin -p admin/admin -q "addprinc -randkey host/kerberos.kevco.virt"
     kadmin -p admin/admin -q "ktadd host/kerberos.kevco.virt"
     
  11. Create your own principal and give it whatever access you need in the kadm5.acl file
      kadmin -p admin/admin -q "addprinc kdiaz"
     
  12. Create a test ticket using kinit
      You need to get a ticket using kinit for an existing principal (kdiaz in this case) and then you can view it and other stored tickets using klist. Finally, you can destroy this ticket and remove it from the cache using kdestroy.

     kinit kdiaz
     klist
     kdestroy -A
     
  13. Open the proper ports in the firewall

    Kerberos primarily uses 88/udp, but you also need to open 88/tcp, as Kerberos falls back to TCP when tickets get too big. Other ports include 749/tcp for the kadmin server and 464/udp for the kpasswd service.

       for port in {88/tcp,88/udp,749/tcp,464/udp};do
         firewall-cmd --permanent --add-port $port;done
       firewall-cmd --reload
     
  14. (Optional) Add DNS SRV records
      If you have DNS configured in your environment, you should add records for your kerberos server. The record names are self-explanatory; if you are doing this, you likely know what you're doing.

     $ORIGIN _tcp.kevco.virt.
     _kerberos-adm    SRV 0 0 749 kerberos.kevco.virt.
     _kerberos        SRV 0 0 88  kerberos.kevco.virt.
    
     $ORIGIN _udp.kevco.virt.
     _kerberos        SRV 0 0 88  kerberos.kevco.virt.
     _kerberos-master SRV 0 0 88  kerberos.kevco.virt.
     _kpasswd         SRV 0 0 464 kerberos.kevco.virt.
     
Tue, 02 Feb 2021

Linux Authentication Using G-Suite Secure LDAP


Google's G Suite has been dominating cloud suite services for a long time in both the enterprise and education worlds, as a strong competitor to options such as Microsoft Office 365. Not only does it offer mail, storage, and the other apps users expect, it also offers many features to help administrators, such as a very useful interface for centrally managing all of your Chromebook devices, which have become a large part of the technology used in the education space. It is already essentially an identification service, and Google allows us to use this identification service for devices other than Chromebooks and apps through the Lightweight Directory Access Protocol (LDAP). In this blog post, I will discuss how I managed to set up SSSD to provide authentication via G Suite secure LDAP. This allows you to use G Suite instead of duplicating all your users into a Microsoft Active Directory server simply for authentication, or paying for a service. For the sake of brevity, I will only show how I did this in CentOS 7. However, it is really easy to adapt these instructions to the distro of your choice; the only real differences will most likely be related to installing the software and configuring SE Linux (since it is not enabled on all distros).

Installing required packages

yum install sssd sssd-tools sssd-utils unzip

Generating Cert and Key in G Suite

  1. Open your G Suite Console
  2. Navigate to Apps>LDAP
  3. Click on "Add Client"
  4. Give the client a name
  5. Either allow access to everyone in the organization or restrict it to certain org units
  6. Allow read permissions for both users and groups
  7. Click "Add LDAP Client"
  8. Download the zip file containing the cert and key
  9. Enable the creds by switching the LDAP client to "on" under "Service status"
  10. Upload the zip file to the client and unzip it
  11. Move the files somewhere such as /var/lib
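
On the client, steps 10 and 11 might look like this (the names of the files inside Google's zip vary per client, so treat them as placeholders; the destination names match the sssd.conf used below):

    unzip Google_*.zip
    sudo mv Google_*.crt /var/lib/ldapcreds.crt
    sudo mv Google_*.key /var/lib/ldapcreds.key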

Configuring sssd.conf

In order to set up /etc/sssd/sssd.conf, it's easiest to copy the default config that Google recommends and work from that. You can find it in Google's Secure LDAP support documentation, under the SSSD tab.

Make sure to replace the domain and the location of the cert and key. After doing this, we have to add a few other things so that we can better integrate SSSD as an authentication service across the system. Under the "sssd" section, add sudo at the end of the services option so that sudo works with our domain creds.

The next thing you can do is modify some settings for offline login. You can create a "pam" section and set numbers for "offline_credentials_expiration", "offline_failed_login_attempts", and "offline_failed_login_delay". These are the options that I have set in my VM, but there are a lot more you can use; refer to the man page for sssd.conf or the Red Hat documentation linked in the testing section to see what else you can do.

Finally, we have to make sure the system will be usable and that the user will not encounter any errors on login. We do this by setting two options to True in the "domain/YOUR_DOMAIN.com" section. The first option is "create_homedir", which ensures that users get a home directory created for them when they log in. The other option is "auto_private_groups", which helps with UID and GID errors that may occur since the UID and GID are set from G Suite instead of being locally stored in /etc/passwd. Below you will find the file I used to test in my VM, with my actual domain replaced by "yourdomain.com".

/etc/sssd/sssd.conf


[sssd]
services = nss,pam,sudo
domains = yourdomain.com

[domain/yourdomain.com]
ldap_tls_cert = /var/lib/ldapcreds.crt
ldap_tls_key = /var/lib/ldapcreds.key
ldap_uri = ldaps://ldap.google.com
ldap_search_base = dc=yourdomain,dc=com
id_provider = ldap
auth_provider = ldap
ldap_schema = rfc2307bis
ldap_user_uuid = entryUUID
ldap_groups_use_matching_rule_in_chain = true
ldap_initgroups_use_matching_rule_in_chain = true
create_homedir = True
auto_private_groups = true

[pam]
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5

Configuring nsswitch.conf

authconfig --enablesssd --enablesssdauth --enablemkhomedir --updateall
Open /etc/nsswitch.conf and add the line
sudoers:    files sss
Everything else should have been configured by the authconfig command.

Permissions and SE Linux

setenforce 0
chcon -t sssd_t ldapcreds.crt
chcon -t sssd_t ldapcreds.key
setenforce 1
chmod 0600 /etc/sssd/sssd.conf
If you are having problems getting things to work after attempting it this way, just disable SE Linux

Enable and start everything

sudo systemctl start sssd
sudo systemctl enable sssd

Testing

The easiest way to test if everything is working is to su into your user account on your domain and see if you can log in using your password. If this works, you should have a home folder created in /home and be able to try a sudo command. By default, it will say you are not allowed to run sudo since your account is not in the sudoers file. The easiest way to grant sudo access is to give a group permission to run things; your Google groups will work just as if you were giving a local group sudo access. You can still give individual users access the same way.
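
For example, a drop-in file under /etc/sudoers.d along these lines would give a Google group sudo access (the group name "sysadmins" is hypothetical):

    # /etc/sudoers.d/gsuite-admins
    %sysadmins ALL=(ALL) ALL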

Alternatively, you can use sssctl to do a lookup on a user account in your domain. It is done as follows:

sssctl user-checks USERNAME
This and many other tools and functionalities can be found in Red Hat's System-Level Authentication Guide. If you are having problems, make sure your /etc/sssd/sssd.conf config file is accurate and has the proper permissions of 0600. Additionally, make sure that SE Linux is not causing you problems. Any other debugging can be done through reading man pages (sssd, sssd.conf, etc.), googling, and looking at Google's Support Center page for Secure LDAP.

Mon, 10 Feb 2020

Powershell Remote Management From Linux


When you are an avid Linux fan/user in a Windows environment, you try to find ways to avoid having to use a Windows computer. As I was exploring different methods of remote administration for Windows, I decided to learn about PowerShell Remoting. I wanted to try the PowerShell that is now available for Linux, PowerShell Core. With earlier versions, I was unable to do much; however, newer versions bring much more useful functionality. In this post, I will talk about how to get set up to remotely administer Windows systems from Linux using PowerShell Core.

The first step is to install the proper version of PowerShell. In order for this to work, you need to have a newer version of PowerShell installed; as of writing this post, the current version on which this works is 6.2.3. The reason is that remoting from Linux is a relatively new feature of PowerShell Core.

Installation

CentOS/RHEL
Add the Microsoft repo
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo

Install the package
    sudo yum install powershell

By default we do not have NTLM authentication so install this package
    sudo yum install -y gssntlmssp
Arch Linux using yay AUR Helper
Install the package
    yay -S powershell-bin

By default we do not have NTLM authentication so install this package
    yay -S gss-ntlmssp
Ubuntu
Download the deb from Microsoft according to your linux version
  wget https://packages.microsoft.com/config/ubuntu/VERSION/packages-microsoft-prod.deb

Install the package to register the Microsoft repo GPG keys
  sudo dpkg -i packages-microsoft-prod.deb

Update your repo database
  sudo apt update

Install the package
  sudo apt install powershell

By default we do not have NTLM authentication so install this package
  sudo apt install gss-ntlmssp
Debian 9
Install some pre-reqs if you do not have them already
    sudo apt-get install -y curl gnupg apt-transport-https

Import the Microsoft repo GPG keys
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

Add the Microsoft repo
    sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" > /etc/apt/sources.list.d/microsoft.list'

Update your repo database
  sudo apt update

Install the package
  sudo apt install powershell

By default we do not have NTLM authentication so install this package
  sudo apt install gss-ntlmssp
For any other distro please refer to Microsoft's Documentation

Setting up the client with physical access to the system

  1. Check if PS Remoting is enabled
         Get-PSSessionConfiguration
         
  2. Enable PS Remoting
        Enable-PSRemoting -Force
        
  3. Check trusted hosts
     In order for you to be able to remotely manage a computer using this method, you must be part of the system's trusted hosts. This serves as a form of access control so that even if a malicious actor gains credentials, they cannot simply remote into the system and start running commands. The next few steps will show you how to manage these trusted hosts.

        Get-Item WSMan:\localhost\Client\TrustedHosts
        
  4. Remove all trusted hosts, if any exist, to allow for a clean slate
        Clear-Item WSMan:\localhost\Client\TrustedHosts
        
  5. Add yourself as a trusted host
         Set-Item WSMan:\localhost\Client\TrustedHosts -Force -Value IP_OR_HOSTNAME_HERE
        winrm s winrm/config/client '@{TrustedHosts="IP_OR_HOSTNAME_HERE"}'
       

    Alternatively you can allow all hosts to PSRemote into this system by setting the "Value" flag to the * wildcard instead of defining a specific IP. This is NOT recommended for security reasons.

  6. Restart the remote management service and make it start at boot
         Restart-Service -Force WinRM
        Set-Service WinRM -StartMode Automatic
        

Setting up the client using PSExec (Windows)

Using psexec, it is possible to remotely execute commands on a system that has the ADMIN$ SMB share exposed and open. This is more common than you might think and can be very dangerous: using psexec, you can run commands as NT AUTHORITY\SYSTEM, the most powerful account on a Windows computer, with more power than the administrator account. If you are able to use this method without the need for credentials, be aware that a malicious actor will be able to do the same. Passing captured/stolen hashes using psexec is a common tactic used by attackers to pivot to other systems on your network after initial compromise. Unfortunately, I will only cover this from the Windows perspective, as I have yet to find a modern, working Linux equivalent to these tools. There is the winexe project, but it is outdated and did not work for me on Windows 10 clients. That being said, there are definitely ways to do it from Linux.

In order to get psexec, you need to download PsTools from Microsoft. Unzip it and you will find psexec.exe in the extracted folder. After opening a cmd or powershell window and navigating to this folder, you can run the commands from the previous section just as if you had real physical access to the system, using the format shown below.

Without credentials
    psexec.exe \\RemoteComputerGoesHere -s powershell Enable-PSRemoting -Force

With credentials
    psexec.exe \\RemoteComputerGoesHere -u UserName -s powershell Enable-PSRemoting -Force

Opening a remote powershell session

When you are running commands from linux, it is important that you set authentication to negotiate in the flags (as can be seen below). Without this flag, authentication between your Linux machine and the windows machine cannot occur properly.

Save the credentials in a secure environment variable
    $creds = Get-Credential -UserName ADMIN_USERNAME_HERE

Start remote shell with environment variable creds
    Enter-PSSession -ComputerName IP_HERE -Authentication Negotiate -Credential $creds

Start remote shell with username and creds at runtime
    Enter-PSSession -ComputerName IP_HERE -Authentication Negotiate -Credential USERNAME

Invoking commands on a client

Invoke-Command -ComputerName IP_HERE -Authentication Negotiate -Credential $creds `
-ScriptBlock {COMMAND_HERE}

Invoking a PS1 script on a client

Invoke-Command -ComputerName IP_HERE -Authentication Negotiate -Credential $creds `
-FilePath C:\Path\To\Scripts\script.ps1

Managing several clients

You can run "Invoke-Command" with the -AsJob flag and it will run in the background (Enter-PSSession is interactive and does not take -AsJob). You will be returned a job id which you can later use to retrieve the job's output using
Receive-Job -id JOB_ID_HERE
If you forgot the job id, you can check it using
Get-Job
If you started a background PSSession with New-PSSession you can work with it as follows
Accessing session
    Enter-PSSession -id SESSION_ID
Execute command with session
    Invoke-Command -Session (Get-PSSession -id SESSION_ID) -ScriptBlock {COMMAND_HERE}

You can also use other methods such as storing a list of clients in a CSV file or pulling them straight from your Active Directory server.

Running remote commands on several machines from csv of "ComputerName, IP"
    $devices = Import-Csv .\devices.csv
    foreach($row in $devices.IP) {
        Invoke-Command -ComputerName $row -Authentication Negotiate `
        -Credential $creds -ScriptBlock {COMMAND_HERE}
    }
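
The devices.csv read by the loop above would look something like this (hypothetical names and addresses):

    ComputerName,IP
    PC01,192.168.1.20
    PC02,192.168.1.21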

Running remote commands on several machines at a time using AD and pipe
    Get-ADComputer -Filter * -Properties name | select @{Name="computername";`
    Expression={$_."name"}} | Invoke-Command -ScriptBlock {COMMAND_HERE}

Killing background sessions

If you wanted to kill a background session, you would normally run
Get-PSSession -id SESSION_ID | Disconnect-PSSession
However, Linux PowerShell Core, at least as of 6.2.3, does not have Disconnect-PSSession available as a command. This means that the only way to end a background session is to enter the session and manually type exit. Alternatively, you can find and kill the PID of the process.

Where to learn more

There is a lot of information here, some of which may not make sense to you if you have little experience with remote administration over the command line. I highly recommend you start up a Windows virtual machine or two and practice the techniques discussed in this post. Additionally, below are links to the resources I used to learn the things discussed in this post.

Microsoft Powershell Remoting Blog Series
Powershell from Linux
Tue, 14 Jan 2020

QEMU Port Forwarding Using Iptables


Normally, there is no website when I go to my Debian server's IP address in my browser. However, I have a web server running in a QEMU VM on that server and would like to access it from my laptop. After following the steps in this guide, I am able to access that web server by going to the IP of my Debian server as if it were installed on the server itself. Unless you give the VM its own real IP address from the router, you cannot access it from another computer, and we may not want to give the VM its own TAP and IP address. The alternative is to forward all requests arriving at a specific port on the host to the corresponding port on the VM for the service you want to access. I used iptables to do this port forwarding, just like the port forwarding on our home routers: they use NAT to let us access services in our homes from across the internet, and we can replicate this for our VMs with the iptables rules below.

NOTE: In the case described below, 192.168.1.250 is the IP address of the Debian server and 192.168.122.215 is the IP address of the VM. Both devices are on a /24 subnet. The interface on which the Debian server connects to my home network is enp2s0.

First we enable this NAT functionality by setting the MASQUERADE option.
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
Then, we set a PREROUTING rule which lets the host detect any incoming connection on port 80 of our network interface and redirect it to port 80 on the VM's IP address instead of attempting to connect to the host's own port 80.
sudo iptables -t nat -A PREROUTING -d 192.168.1.250 -i enp2s0 -p tcp --dport 80 -j DNAT \
--to-destination 192.168.122.215:80
Finally, we set a FORWARD rule which ensures the packet actually gets sent to port 80 on the VM and that the VM is open to accepting that packet.
sudo iptables -I FORWARD -p tcp -d 192.168.122.215 --dport 80 -m state --state \
NEW,RELATED,ESTABLISHED -j ACCEPT
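Port forwarding also requires IP forwarding to be enabled on the host. Libvirt setups normally turn this on already, but if packets are not getting through, it can be set manually:
sudo sysctl -w net.ipv4.ip_forward=1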
Wed, 08 May 2019

QEMU Host Only Networking


Oftentimes it is useful to use a host-only network in a lab environment, especially when dealing with certain security labs. A host-only network allows your virtual machines to communicate with each other on their own independent network and with the host computer/hypervisor, while the VMs cannot reach out to other devices on your network and devices on your network cannot reach them. In order to set up this isolated environment, you need to create a bridge and a tap just like any other VM networking setup. There are two methods to do this: one manual, the other automatic using the libvirt XML format.

Libvirt XML Method

In order to do this the easy way, you can create an XML file whose contents are below. This is simply the default network setup but without the forward tags, which makes the network limited to the virtual environment. I renamed the bridge to "virbr1" instead of the default "virbr0" so there is no conflict. I also changed the last byte of the MAC address and set an appropriate DHCP range and IP address so as not to interfere with the other network; here I simply changed the IP from the 192.168.122.0/24 subnet to 192.168.123.0/24. In the DHCP range, do not forget to leave out the .255 address, since that IP is used for broadcast. Finally, I changed the name to secnet to help me identify it. I called it that because this is the network I use for security labs, often with vulnerable systems, which I want nowhere near my real network.

    <network>
      <name>secnet</name>
      <uuid>8f49de66-0947-4271-85a4-2bbe88913555</uuid>
      <bridge name='virbr1' stp='on' delay='0'/>
      <mac address='52:54:00:95:26:26'/>
      <ip address='192.168.123.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.123.30' end='192.168.123.254'/>
        </dhcp>
      </ip>
    </network>
After creating this file, simply run virsh net-define file_name.xml and virsh net-start secnet (the network name, not the file name). If all is well, you have officially set up the network and can configure the client. You can do this either through virt-manager, by changing the NIC settings from the default network's bridge to your bridge (in this case virbr1), or with virsh edit domain: look for a line with <interface type='bridge'> and modify the value of source. If you have no NIC set up at all, add the following lines to your domain XML, modifying values such as the MAC address and PCI slot under the address tag as necessary. When you boot up the client, it should automatically get a DHCP address.
    <interface type='bridge'>
      <mac address='52:54:00:8c:d0:7e'/>
      <source bridge='virbr1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </interface>

Manual Method

Although the above method works well, there are times when you want to learn what actually happens behind the scenes and how it all works. In order to do this, you have to understand that the virtual network is made up of a bridge and a tap, where the tap acts as a virtual NIC for the VM and the bridge hosts the network and acts as a router. We need to create the bridge, assign it an IP, then create the tap and make the tap a slave of the bridge. At this point, a functioning network is established with static IP addresses. If you want DHCP, you can add it using dnsmasq. Finally, add the appropriate settings to the VM as described above in the XML method. From there on out, everything else is simply a matter of configuring the client itself. The only negative to this manual method is that you have to start and stop the network manually, but this is easily scriptable.

    # Create the virtual bridge and name it secnet and bring the interface up
    sudo ip link add secnet type bridge; sudo ip link set secnet up

    # Create the tap and name it secnet-nic (you can call it whatever you want)
    sudo ip tuntap add dev secnet-nic mode tap

    # Bring up the interface in promiscuous mode
    sudo ip link set secnet-nic up promisc on

    # Make secnet-nic a slave of secnet
    sudo ip link set secnet-nic master secnet

    # Give bridge secnet an IP address of 192.168.123.1
    sudo ip addr add 192.168.123.1/24 broadcast 192.168.123.255 dev secnet
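
As mentioned, starting and stopping this network is easily scriptable; tearing it down is simply the reverse of the setup:

    # Remove the tap and the bridge when you are done
    sudo ip link del secnet-nic
    sudo ip link del secnet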
DHCP With dnsmasq

Setting up DHCP with dnsmasq is simple. You can either write out the config within the command, as shown below, or create a config file to read from, also shown below. The important steps in running dnsmasq are setting the correct interface and DHCP range, as well as setting the -p option to 0. When -p is set to 0, the DNS function of dnsmasq does not start, which removes conflicts with any active DNS server on your host computer. I have this problem since I use dnscrypt on my laptop; you may not encounter such a conflict. There is no real need to host our own DNS server, so nothing is lost by doing this.
Note: The secnet config provided below was generated by libvirt when using the XML method described above.

    sudo dnsmasq --interface secnet -p 0 --bind-interfaces --dhcp-range=192.168.123.10,192.168.123.254

    or

    sudo dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/secnet.conf --leasefile-ro

    # secnet.conf
    strict-order
    pid-file=test.pid
    except-interface=lo
    bind-dynamic
    interface=secnet
    dhcp-option=3
    no-resolv
    ra-param=*,0,0
    dhcp-range=192.168.123.30,192.168.123.254,255.255.255.0
    dhcp-no-override
    dhcp-authoritative
    dhcp-lease-max=225
    dhcp-hostsfile=test.hostsfile
    addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts

If you did not set up DHCP using dnsmasq, you will have to manually set up the IP and the default gateway route on the client. It is simply two lines on the command line, or a few changes within the settings panel of whatever desktop environment you're using in the VM.

    # Assign an ip address to the interface
    ip addr add 192.168.123.2/24 dev eth0

    # Add the default gateway route which is the secnet address
    route add default gw 192.168.123.1 eth0

    # Test network connectivity
    ping 1.1.1.1

    # There is no DNS assigned so you have to manually add a nameserver in resolv.conf
    echo "nameserver 1.1.1.1" > /etc/resolv.conf

    # Final network connectivity test with DNS resolution
    ping google.com

If you followed all the steps in this guide, you will have successfully created an isolated host-only network. To confirm this, try pinging a device on your network such as your phone. This should not work, but you should be able to ping the host computer at 192.168.123.1 and other VMs on the same network.

Fri, 03 May 2019

Running VMware Images in QEMU


In order to keep true to the FLOSS philosophy, I prefer to virtualize things using QEMU/KVM. This also allows me to practice with the tools used for virtualization in a lot of Linux environments. Although it is a great tool for deploying your own virtual machines, it gets in the way when I want to open up a VM image for a security lab or some pre-made tool such as SIFT Workstation, since those usually come as either VirtualBox or VMware images. While I could just download those programs, I prefer not to add more software to my computer unless it is absolutely necessary. Of course, I also just like to find new ways of doing things. I hate seeing people online respond with "just download XYZ program and be done with it". Yes, I could download VMware Workstation if I needed it at work for something really quick and it would most likely work, but when it comes to doing things at home, that mindset is soooo boring.

Normally, VMware images come in a *.ova file. The first thing to realize is that if you run file on the ova, you will notice that it is simply a tar archive. The ova holds multiple files inside, including the actual image, normally in a *.vmdk file, and a *.ovf file, which is an XML file with information pertaining to the VM, comparable to the QEMU XML used to configure your VM settings. You may also find other files in there, such as an ISO or a file with hashes. The only file we care about, though, is the *.vmdk file, as that is the one with the actual image. If there is more than one, the file whose name is most similar to the original *.ova filename should be the correct one. If it turns out that one does not work after the following process, you can always try the other.

We will be converting the vmdk to qcow2. I chose this format simply because it's the one I use with my other images and it works well with this conversion process. To convert it, you use qemu-img and its convert function. After this point, we will be able to load the qcow2 image as a regular disk image in QEMU. You can do this through virt-manager, virt-install, or by copying another VM's XML and changing the source for the disk as well as other options like the name, the UUID, and the MAC address. Something else you can try for a quick test is qemu-system-x86_64, but this can sometimes be very slow unless you set a ton of argument options.
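
For such a quick test, an invocation along these lines boots the converted image (the memory size and KVM flag are just sensible choices, not requirements):

    qemu-system-x86_64 -enable-kvm -m 2048 -drive file=original.qcow2,format=qcow2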

Here are the actual steps:

  1. tar -xvf original.ova
  2. qemu-img convert -O qcow2 original.vmdk original.qcow2
  3. Run the qcow2 image in QEMU
  4. If it does not boot, try the other vmdk file if there is one

As you can see, it is pretty simple to do this; I have so far used it on 3 different VMware images flawlessly. However, you have to realize it may take some experimentation. Do not give up on it right away, and you will be able to avoid downloading extra software and avoid hunting for the correct free trial version or an expensive license.

Wed, 30 Jan 2019

Detecting The Effect Of A Phishing Attack On Your G Suite Domain


One of the things we have to be wary of as administrators is security. Phishing attacks are constantly becoming harder to detect and defend against, though sometimes an attack is quite easy to spot. In this post I will tell you what to do when you detect a phishing attack on your domain and how to mitigate it.

Recently our domain received a phishing attack which told users that they had a new voicemail from someone and to click on a link to view it. When clicked, you were redirected to an Outlook login page with your email address already entered in the username field. None of the IT department received the email, but a lot of employees did. We received a question about it from one employee and did not think much of it; I simply recommended that they not open it, as I thought it was an isolated incident. I now realize that I should have done more in response. No less than an hour later, I received two more questions about the same email. Luckily, those two employees realized it looked sketchy and did not click on the link. I instantly knew that this was a phishing attack on the domain. All of the emails had the same sender but slightly different subject lines, so I knew that the sender was the constant I needed to use to run my audit.

To run my audit, these are the steps I took:

  1. Log in to G Suite dashboard
  2. Go to "Reports" tab
  3. Scroll down to the audit section on the sidebar and select "Email Log Search"
  4. Enter your desired search parameters, in this case the senders email address
  5. Select an appropriate time frame to check. I checked the last 7 days since it was recent
  6. Save the results as a Google Sheets file in the upper right corner and share it with your team

Now that we have the logs, we can start mitigating the problem. The first thing I did was cut the head off the snake by setting a global block on the sender's email address in the G Suite admin console. Afterwards, I prepared an email advising all employees of the situation: what to look for, what happens when you click on the link, and what to do if they received the email. Using the logs, I was able to individually verify with those affected whether they had clicked on the link, and then reset their passwords. Lastly, I was able to determine the time and date of the incident, which was actually the night before. I was not alerted to it until the next evening, so it is possible that multiple people clicked on the link.

Takeaways

There is not a lot we can do about these attacks except deal with them after they occur. A password compromise of a super admin account via a phishing attack could be devastating for your domain, as the attacker would have complete control over everything. This is one of the reasons you should educate your users and administrators on the dangers of such attacks and how to detect them. Next time, I will be sure to run an audit as soon as I see the first message, since it is so easy and quick to do; it will help me reduce the number of people clicking on such emails. I will also look into some defensive cyber education for users.

Thu, 22 Nov 2018

Using Projectors with i3wm

[linkstandalone]

I use Arch Linux with i3-gaps on an old W520 I saved from being recycled. This computer and environment have truly helped me increase my efficiency. The downside is that everything has to be manually configured. I do not mind this at all, however, because it has helped me understand how computers do certain things. If you just plug in the VGA cord on a computer running i3wm, nothing happens. You then run xrandr to see if the VGA connection even shows up, and realize it has not. This is because there is no program running in the background to detect these display changes. You have to manually start the VGA output with xrandr. So, you run xrandr --output VGA-1 --mode 1024x768 and find that now it works, but it looks really weird. You try scaling the display and shifting it but find no success. Why is this?

If we take a look at a Windows computer, or even one running a full DE like GNOME, we can see that the screen resolutions of both the output screen and the laptop display change; in fact, they set themselves to the same resolution as one another. This seems like a simple idea, but unless you are actually paying attention when you plug in a VGA cable, you do not really notice it. What I ended up doing to solve this problem was writing 2 small scripts: one I run when I plug in a VGA output and another when I remove it. To activate it, I use xrandr --output LVDS-1-1 --mode 1024x768 --output VGA-0 --mode 1024x768. To disable it, I use xrandr --output LVDS-1-1 --mode 1920x1080 --output VGA-0 --off. One thing I did notice is that the script for turning on the VGA output will not work until you run xrandr on its own; it may have something to do with letting xrandr detect the new connection. There is probably a method to automate this process, but until I figure it out, this is what I will do. I only use VGA occasionally to test projector and smartboard functionality after a repair.
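
For reference, the two scripts are tiny; the output names below are the ones from my machine, so substitute whatever xrandr reports on yours:

    #!/bin/sh
    # vga-on: poke xrandr so it detects the new connection, then mirror both displays
    xrandr
    xrandr --output LVDS-1-1 --mode 1024x768 --output VGA-0 --mode 1024x768

    #!/bin/sh
    # vga-off: restore the panel's native resolution and turn the VGA output off
    xrandr --output LVDS-1-1 --mode 1920x1080 --output VGA-0 --off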

Mon, 22 Oct 2018

First Entry


This is my first blog entry. I plan to occasionally update this blog with tutorials on things I use in my day-to-day life or have learned and practiced in my home lab. I may also share my opinion on certain topics in technology.

Sun, 21 Oct 2018