This past weekend BSides PR was held at the Puerto Rico Convention Center; it is the first security hacking con on the island. Most conferences before this one have been vendor focused, and the ones by ISSA have been purely about compliance. The hacker ethos is to question the why of things, to find out how they work, and to share that information. I'm proud that I was able to help with some of the conference and to speak at it, and the conference was a total success. Before I continue I want to first give credit to the heroes of the conference who made it all possible and ensured it was executed with no problems:
Jose Quiñones - He took the reins of everything in the conference and was the fearless leader of the team. He kept it moving forward and took the greatest risks for it to happen.
Jose Arroyo - He always provided the "Let's go do it" attitude, his passion never wavered, and he kept motivating the team to go forward, working hard on permits, paperwork and other matters so it would happen.
Johanna Martinez - What can I say, this lady did everything to make this happen: contacted media, contacted sponsors, organized, wrote articles in newspapers, and always had a hard-charging attitude. Even when others were down she kept charging and rallied the troops. If Jose Quiñones is the brains and Jose Arroyo the passion of the con, this lady is the heart.
Angel Colon and Daniel Mattei - These were our AV guys. I have to say that other than my projector at my workshop there were zero glitches for the speakers during their presentations. These guys went to bed late and woke up very early to make sure everything worked as it should for all the presentations; without their hard work it would not have gone so well.
The next set of thanks goes to the sponsors. I have to say it came as a shock to us how many local technology companies, both big and small, said no to us, even when we offered trade. Even so, the purpose of the conference is to build community, and for this I have to thank:
Fortinet - Silver
Tenable Security - Bronze
SDIF del Caribe - Bronze
Mocapp - Bronze
Black Hills Information Security - Bronze
BG Consulting - Bronze
CSPPR - Bronze
Twins Computers Design - Bronze
Intech - Donor
We also had companies that supported us in other ways, from helping with some costs to providing services and other support for attendees. We are super grateful to them:
Accuvant Labs
ISSA PR Chapter
El Nuevo Dia newspaper
Executrain
eLearnSecurity
Qualys
I do have to say I'm super impressed with the speaker lineup we were able to have for our first conference. When I contacted many of them I was thinking most would say no, but they were more than happy to come; also, some of the CFP submissions we got were so good we had a hard time saying no to the ones we could not fit. The list of presentations was:
Royce Davis: Owning Computer Systems (English)
Moises Delgado: APN functions for e-commerce security (Spanish)
Jaime A. Restrepo-Gomez: Forensic Analysis on iOS Devices (Spanish)
Vaagn Toukharian: WebSockets unPlugged (English)
Emilio Escobar: Resurrecting Ettercap: Past, Present and Future of Malicious Routing (Spanish)
Michael Landeck: Overcoming Objections to Security in the SDLC with a Special Section on Weaponizing QA Test Scripts (English)
Jose Hernandez: Don’t be Fooled, Scanning Web Applications is not Pen-Testing (Spanish)
Jayson Street: Love letters to Frank Abagnale (English)
Jaime A. Restrepo-Gomez: Pentesting in the POST-PC era (Spanish)
Chris Campbell: Addition by Subtraction: How Networked Appliances Affect your Security Posture (English)
Matt Graeber: Practical Persistence With PowerShell (English)
Albert Campa: The Anomaly of when a vulnerability assessment is better than a pentest. (English)
Eric Milam & Martin Bos: Advanced Phishing Tactics – Beyond user awareness (English)
I even had the chance to meet Royce Davis; I have been a super fan of his work and the blog he contributes to, pentestgeek.com. I got a chance to spend time with Raphal, Erick, Martin, Chris, Emilio, Valerie and the other speakers. The best thing of all was the sharing of ideas and seeing people new to the industry asking them questions and them taking time to help; this has to be the most humble lineup of speakers I have seen at any con and I'm happy it was this one. You rock!
The videos of the presentations should be out soon, and for those who took my PowerShell workshop, thanks for the great time :)
In my previous blog post, where I covered Execution Policy and Code Signing, I mentioned that these controls are only useful for content that is downloaded from the internet and to prevent the accidental execution of scripts. When Microsoft designed PowerShell they placed control over it under the user's account permissions; in other words, PowerShell will execute with, and have access to, whatever the account has access to, and the controls stop there. I will be honest, I find the lack of control in terms of setting permissions on cmdlets and on what can be executed a flawed way of implementing security. The thought process Microsoft used is that once malware or an attacker is already present on the system there is not much one can do. I do not agree with this train of thought and I would like to explain why:
PowerShell is a very powerful environment that allows an attacker on a system to do many things that would be very difficult with simple command line tools unless the attacker uploads his own tools, so the lack of control just provides greater flexibility to the attacker.
Many post-exploitation tasks taken by an attacker and/or malware are automated. When these fail, an attacker must take more time or find another method to achieve his task, and this exposes him to detection as his actions fail against an existing control and he has to try one or more methods of escalation or bypass. The event log system in Windows is a rich one that allows the proper events to be logged and reported; this, in conjunction with a good log analysis engine and rules, can mean the difference between a successful compromise of other systems and critical data and detecting an attack in progress before the attacker is able to reach the information he desires.
Those are my main points, and I know many will disagree, but I wanted you the reader to know where I stand on the lack of controls and on providing such a simple way to bypass the controls that are in place, which could easily have been extended to cover more than accidental execution and/or downloaded code.
Bypassing Execution Policy from the Command Line
Powershell.exe provides several ways to bypass the Execution Policy on the system quite easily; in fact, one only needs to look at the executable's help message to see them. Execute the following either in a PowerShell session or in a Command Prompt window:
powershell.exe -h
You will see several options that can be set during execution and different ways to execute commands or scripts using the executable. In PowerShell the Execution Policy for the session is reflected in the variable $env:PSExecutionPolicyPreference and it can be manipulated by a user in the session:
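As a quick sketch (these commands are assumed, not taken from the original post), you can inspect the value in a session and override the policy for the current process only, without touching the registry:

$env:PSExecutionPolicyPreference
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force
Get-ExecutionPolicy -List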
When executing PowerShell we can use the –ExecutionPolicy parameter, here is the help description of the parameter:
-ExecutionPolicy
Sets the default execution policy for the current session and saves it
in the $env:PSExecutionPolicyPreference environment variable.
This parameter does not change the Windows PowerShell execution policy
that is set in the registry.
This option will allow an attacker or a user to just execute a PowerShell script disregarding the policy settings; remember, the execution policy is there to minimize accidental execution of code, not for control. Let's look at a simple Hello World script:
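The contents of HelloWorld.ps1 are not shown here; it is presumably nothing more than something like:

Write-Output 'Hello World'

With scripts disabled by the execution policy, running it fails: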
PS C:\Users\Carlos\Desktop> powershell.exe .\HelloWorld.ps1
. : File C:\Users\Carlos\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 cannot be loaded because running
scripts is disabled on this system. For more information, see about_Execution_Policies at
http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:3
+ . 'C:\Users\Carlos\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
.\HelloWorld.ps1 : File C:\Users\Carlos\Desktop\HelloWorld.ps1 cannot be loaded because running scripts is disabled on
this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ .\HelloWorld.ps1
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
PS C:\Users\Carlos\Desktop> powershell.exe -executionpolicy remotesigned .\HelloWorld.ps1
Hello World
As we can see the policy was bypassed as designed. An attacker with knowledge would not want to write his script to the file system since an AV might catch it, so one option he has is to encode the script into a Base64 string, allowing him to use all kinds of characters that would not be possible when passing the command to powershell.exe with the –command &{<command>} option. The PowerShell team is nice enough to provide an example of how to encode it:
# To use the -EncodedCommand parameter:
$command = 'dir "c:\program files" '
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command)
$encodedCommand = [Convert]::ToBase64String($bytes)
powershell.exe -encodedCommand $encodedCommand
The use of an encoded command or script is useful since it does not write a file to disk in order to execute it, but it suffers from the Windows limitation on the length of a command; as stated in http://support.microsoft.com/kb/830473, the limitation is 8190 characters on modern versions of Windows that are able to run PowerShell. I have a PowerShell module called Posh-SecMod where I keep most of my security related functions and cmdlets; it is located at https://github.com/darkoperator/Posh-SecMod. In this module I have some simple functions for encoding commands and scripts. The function we will look at is under the PostExploitation submodule of Posh-SecMod and it is called ConvertTo-PostBase64Command:
PS C:\Windows\system32> help ConvertTo-PostBase64Command
NAME
ConvertTo-PostBase64Command
SYNOPSIS
Converts a given PowerShell command string in to an Encoded Base64 command.
SYNTAX
ConvertTo-PostBase64Command [-Command] <String> [<CommonParameters>]
ConvertTo-PostBase64Command [-File] <String> [<CommonParameters>]
DESCRIPTION
Converts a given PowerShell command string in to an Encoded Base64 command.
RELATED LINKS
REMARKS
To see the examples, type: "get-help ConvertTo-PostBase64Command -examples".
For more information, type: "get-help ConvertTo-PostBase64Command -detailed".
For technical information, type: "get-help ConvertTo-PostBase64Command -full".
Here is the code for the function:
function ConvertTo-PostBase64Command
{
    [CmdletBinding()]
    Param
    (
        # Command to Encode
        [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true,
            ParameterSetName="command", Position=0)]
        [String]$Command,
        # PowerShell Script to Encode
        [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true,
            ParameterSetName="file", Position=0)]
        [ValidateScript({Test-Path $_})]
        [String]$File
    )
    Process
    {
        # Use the content of the script file or the command string given
        if ($PSCmdlet.ParameterSetName -eq "file")
        {
            $contents = [System.IO.File]::ReadAllText((Resolve-Path $File).Path)
        }
        else
        {
            $contents = $Command
        }
        # Encode as UTF-16LE Base64 for use with powershell.exe -EncodedCommand
        $bytes = [Text.Encoding]::Unicode.GetBytes($contents)
        $encodedCommand = [Convert]::ToBase64String($bytes)
        # If too long, tell the user
        if ($encodedCommand.Length -gt 8100)
        {
            Write-Warning "Encoded command may be too long to run via -EncodedCommand of Powershell.exe"
        }
    }
    End
    {
        $encodedCommand
    }
}
The function is quite simple: it can take either a command or a script file and encode it as a Base64 string.
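A hypothetical usage example (it assumes the Posh-SecMod module is already imported and simply illustrates the two parameter sets):

$encoded = ConvertTo-PostBase64Command -Command 'Get-Process | Out-File $env:TEMP\procs.txt'
powershell.exe -NoProfile -EncodedCommand $encoded
ConvertTo-PostBase64Command -File .\HelloWorld.ps1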
If the script is too big we can compress it and use PowerShell's .NET capabilities to decompress it in memory and execute it, using Compress-PostScript:
PS C:\Windows\system32> help Compress-PostScript
NAME
Compress-PostScript
SYNOPSIS
Will compress a script for use in Post-Exploitation with Powershell.exe
SYNTAX
Compress-PostScript [-File] <Object> [<CommonParameters>]
The code for the function is:
function Compress-PostScript
{
    [CmdletBinding()]
    Param
    (
        # Script file to compress and encode
        [Parameter(Mandatory=$true, ValueFromPipeline=$true, Position=0)]
        [ValidateScript({Test-Path $_})]
        $File
    )
    Process
    {
        # Get the content of the script
        $contents = [System.IO.File]::ReadAllText($File)
        # Compress the script
        $ms = New-Object IO.MemoryStream
        $action = [IO.Compression.CompressionMode]::Compress
        $cs = New-Object IO.Compression.DeflateStream ($ms,$action)
        $sw = New-Object IO.StreamWriter ($cs, [Text.Encoding]::ASCII)
        $contents | ForEach-Object {$sw.WriteLine($_)}
        $sw.Close()
        # Base64 encode the compressed stream and build a one-liner that
        # decompresses and runs the script in memory
        $code = [Convert]::ToBase64String($ms.ToArray())
        $command = "Invoke-Expression `$(New-Object IO.StreamReader (" +
            "`$(New-Object IO.Compression.DeflateStream (" +
            "`$(New-Object IO.MemoryStream (,`$([Convert]::FromBase64String('$code')))), " +
            "[IO.Compression.CompressionMode]::Decompress)), [Text.Encoding]::ASCII)).ReadToEnd();"
        # If too long, tell the user
        if ($command.Length -gt 8100)
        {
            Write-Warning "Compressed script may be too long to run via -EncodedCommand of Powershell.exe"
        }
        $command
    }
}
The compression idea came from the evil mind of Dave Kennedy, founder and CEO of TrustedSec; he is also the author of SET (the Social Engineering Toolkit). This toolkit has several payloads for use in phishing campaigns where PowerShell is used to create backdoors and inject payloads into memory.
Detecting Abuse
I know there are many other ways to Base64 encode and run commands via the command line, but the ones I just covered are the main ones. The first thing that needs to be done is to enable process tracking so we are able to know what ran on a box at a given time. To enable process tracking, go into the Group Policy settings under Local Computer Policy –> Computer Configuration –> Windows Settings –> Security Settings –> Local Policies –> Audit Policies
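On a single test host the equivalent audit setting can also be flipped from an elevated prompt with auditpol (a sketch; for domains the GPO approach above is the way to go):

auditpol /set /subcategory:"Process Creation" /success:enable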
Now when any new process is started it will be logged in the event log. You should also configure the event log properties for how you want to retain logs, but that will vary by environment and in some cases even by regulation. On modern versions of Windows (not XP or 2003) it will generate events in the Security log with EventID 4688. We can use PowerShell itself to find these events:
C:\Windows\system32> Get-EventLog -LogName security -InstanceId 4688 | where {$_.message -like "*powershell*"}
Index Time EntryType Source InstanceID Message
----- ---- --------- ------ ---------- -------
14674 Mar 19 10:23 SuccessA... Microsoft-Windows... 4688 A new process has been created....
14661 Mar 19 10:22 SuccessA... Microsoft-Windows... 4688 A new process has been created....
14640 Mar 19 10:20 SuccessA... Microsoft-Windows... 4688 A new process has been created....
The only problem with process tracking is that it does not capture the command line arguments used to start the process. This information is critical when trying to figure out what an attacker did when PowerShell was started and has already exited; when the process is left running, say in the case of a reverse shell, we can use the information to figure out what it is doing. Let's take for example a PowerShell reverse shell from SET: when it runs on a target host a PowerShell.exe process is left behind, and we can retrieve the code that was run by using WMI to get the full command line used for the process:
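A minimal sketch of pulling the command line of any running powershell.exe processes with WMI (the exact query is an assumption, not taken from the original screenshot):

Get-WmiObject Win32_Process -Filter "Name='powershell.exe'" | Select-Object ProcessId, CommandLine | Format-List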
We can take that Base64 encoded script and decode it to see what it is doing. Sadly, most attacks leveraging PowerShell do not do it this way; they run and exit, so a monitoring solution that can keep a log of the process command line would be ideal.
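Decoding such a captured value is a one-liner; here the round trip is shown with a stand-in string since the captured Base64 is not reproduced:

$b64 = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes("Write-Host 'Hi'"))  # stand-in for a captured value
[Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($b64))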
If you have upgraded the version of Windows Management Framework on all your servers and desktops (with the exception of SharePoint 2010 and Exchange 2010 servers, since PSv3 is incompatible with them at the moment), Microsoft added the ability to log the objects used in the pipeline for selected PowerShell modules, allowing us to track some of the actions taken in PowerShell.
In the Group Policy Management Console you can create a policy to manage PowerShell settings: go to Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Windows PowerShell and double click on Turn on Module Logging
Click on Enabled and then on Show to set the filter for the modules:
Create filters for the most common modules that may be abused like:
Powershell Core commands
Bits Transfer
Job Scheduling
CIM Cmdlets
WinRM Remoting
Server Manager
Active Directory
Group Policy
Your environment will dictate what you monitor for.
Click on Ok and then OK on the next window. You will see we can also control the default execution policy settings, you may have to set several different policies for execution policy depending on your environment.
Now when cmdlets are executed we can see events with ID 800 that log the objects that went over the pipeline. For example, if a user executes Get-Process, the objects go internally through the pipeline to be shown in the console and we can see in the event log the objects that went through it:
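A sketch of pulling those module logging events with the same Get-EventLog approach used earlier (the Windows PowerShell classic log is assumed as the location):

Get-EventLog -LogName "Windows PowerShell" -Newest 50 | Where-Object {$_.EventID -eq 800} | Select-Object -ExpandProperty Message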
Controlling Execution
As you may have seen, controlling execution is not easy. The old tried-and-true method of setting ACLs (Access Control Lists) on PowerShell.exe is always an alternative, but not the best one, since an attacker can upload his own copy of PowerShell. Microsoft has two solutions available for administrators to manage the execution of software in Windows, and these are:
Software Restriction Policies (SRP)
Application Control Policies (Applocker)
Each has its pros and cons and requires a very well planned implementation. The first thing to consider is what each technology supports when managed using GPO; for AppLocker:
In my personal experience I prefer AppLocker, just because I can assign rules by user group and can also cover future versions of an application, but it is limited by the versions of Windows it supports. If you have a standard build and a very well controlled environment, this would be the best solution; sadly, some environments do not have this level of organization, or have suffered from the new BYOD (Bring Your Own Device) policies that have become popular and made it very difficult to control configuration and enforce policies. SRP can be used, but its organization will depend on the Organizational Unit design, since the policies would be linked to each OU and controlled by WMI filters so as to target the proper systems. In very few words, one requires more planning and the other is limited in terms of the versions of Windows it supports.
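As a sketch of the AppLocker side (the group name is hypothetical, and this only generates policy XML that would still need to be reviewed and then deployed with Set-AppLockerPolicy or GPO):

Get-AppLockerFileInformation -Path "$PSHOME\powershell.exe", "$PSHOME\powershell_ise.exe" |
    New-AppLockerPolicy -RuleType Publisher, Hash -User 'CONTOSO\PowerShell-Admins' -RuleNamePrefix PoShRule -Xml |
    Out-File .\PowerShellRules.xml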
Whichever solution you decide to use, one thing to remember is that if your users run with administrative privileges on their boxes they can bypass and disable the policies. Not only that, but if they are tricked into running code it will run as administrator, so do plan accordingly. Monitoring and flagging attempted executions of the PowerShell executable will allow you to react in time, so you can look at this as an early warning system; when you put it together with other types of logging information it will provide you with a good set of data from which to act and make decisions.
As you can see, locking down who can actually execute PowerShell will take a bit of planning and testing. I recommend you do this in a lab and use the solution that best fits your needs.
PowerShell v3 PSLockdownPolicy
In PowerShell v3 a feature was added called the __PSLockDownPolicy environment variable, which allows control over the object types that can be used and created in PowerShell. This is used by Windows 8 RT to lock down the PowerShell environment. Sadly, Microsoft has not officially documented or made publicly available any information on the levels that can be set and what each level does. This does not stop us from using it. When used, the constrained mode will block most post-exploitation scripts and commands I have seen in publicly available toolkits, so I would recommend it as an extra lockdown step you can take for your hosts that use PowerShell v3. Do take into consideration that this will also block some modules and legitimate scripts from running, so do test and be careful. To use this we can create a new Group Policy and in it choose Computer Configuration –> Preferences –> Windows Settings –> Environment
Create a new entry for an Environment Variable and set the action to Update, so that if the variable is not present it will be created, and if it has another value it will be changed back. Set the name to __PSLockdownPolicy and set the value to 4.
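For a quick test on a single host (assuming PowerShell v3 is installed) the variable can also be set machine-wide directly and a new session started to observe the constrained behavior:

[Environment]::SetEnvironmentVariable('__PSLockdownPolicy', '4', 'Machine')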
You can click on OK, apply, and then link the policy to any OU you want, using WMI filters to control which hosts it is applied to. You should use WMI filters so as to target only those systems where you have deployed PowerShell v3. Again, as with AppLocker and SRP, if the user is able to run PowerShell with administrative privileges he may be able to change the variable value for the current session, bypassing the control, but it will more than likely block most automated tools out there. Thanks to Matt Graeber for recommending the use of the variable and helping me test the scenarios it applies to.
As Always I hope you found the blog post useful and informative.
Windows Management Instrumentation Query Language, also known as WQL, is the language we use to express queries against WMI to extract information from it. The language is like a stripped down version of SQL, which should make it very familiar for programmers and for most IT/security professionals. In fact, WQL is Microsoft's implementation of the CIM Query Language (CQL), a query language for the Common Information Model (CIM) standard from the Distributed Management Task Force (DMTF). The query language is read only; there are no INSERT or UPDATE keywords in it. In PowerShell we are not obligated to use WQL to pull information from WMI, as shown in the other blog posts in the series where we queried information using cmdlet parameters. Still, many of the examples we find out there in Python, Perl, VBScript, C# and other languages actually use WQL to perform their queries, since it provides some additional control over what data to pull and expresses the query in a way that is familiar to others reading the code.

One of the advantages of using WQL is that the processing of the query and the filtering is done on the target host's side and only the information that was requested is returned. Having the filtering happen on the target side means that when querying either large numbers of hosts or objects we only pull the information we want, reducing both bandwidth and the time spent manipulating the data returned. One thing we have to keep in mind is that WQL differs from the normal PowerShell lexicon in terms of formatting and expression; operators, wildcards and string handling are not the same. It is, as the name implies, a language all of its own.
Basics of a Select WQL Data Query
Let's start with the simplest query, where we query information from a class:
SELECT [Property Names] FROM [WMI Class]
When we want to apply filtering we use the WHERE keyword:
SELECT [Property Names] FROM [WMI Class] WHERE [EXPRESSION]
SELECT [Property Names] FROM [WMI Class] WHERE [VALUE] LIKE [Wildcard Expression]
SELECT [Property Names] FROM [WMI Class] WHERE [VALUE] [IS|IS NOT] NULL
In PowerShell both the WMI and CIM cmdlets allow the use of queries. For the WMI cmdlets we use the Get-WmiObject cmdlet:
Get-WmiObject -Query "SELECT * from Win32_Process"
With the CIM cmdlets we use the Get-CimInstance cmdlet; this cmdlet also allows us to specify the query dialect we want to use, whether it is Microsoft's WQL or the DMTF CQL:
Get-CimInstance -Query "SELECT * from Win32_Process"
Get-CimInstance -Query "SELECT * from CIM_Process" -QueryDialect WQL
When we build our query with the SELECT statement we can use either the name, or a comma separated list of names, of the properties of the object, or we can use * as a wildcard specifying that all properties should be selected and brought back.
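For example, pulling only a couple of properties keeps the objects returned over the wire small:

Get-WmiObject -Query "SELECT Name, ProcessId FROM Win32_Process"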
When we start looking at filtering using the WHERE statement we need to know what operators we can use to perform the comparisons and filtering. Let's look at the comparison operators:
The Boolean values in WQL are TRUE and FALSE. We can chain comparisons using the Boolean Operators of AND and OR:
Type and Wildcard Query Keywords:
One thing to keep in mind is that the IS and IS NOT comparison keywords are only used to check for NULL; they cannot be used to compare content or class type. The ISA keyword can be used in queries to check the class type of what is being returned. When we use the LIKE keyword for wildcard matching, the wildcards we can use are not the same ones we use in PowerShell, as stated before, but the same ones used in the ANSI SQL standard (% for any string of characters, _ for a single character, and [] for a set or range of characters), which are covered in the examples below.
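ISA is most often seen in WMI event queries, checking the class of the instance that fired the event; a small sketch (the event subscription itself is outside the scope of this post):

Register-WmiEvent -Query "SELECT * FROM __InstanceCreationEvent WITHIN 5 WHERE TargetInstance ISA 'Win32_Process'" -SourceIdentifier NewProcWatch
# ...later, remove the subscription
Unregister-Event -SourceIdentifier NewProcWatch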
When working with string values in WQL one may need to escape characters; for this we use the backslash \, which differs from PowerShell's use of the ` character for escaping.
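For example, backslashes in a path must be doubled inside the WQL string:

Get-WmiObject -Query "SELECT * FROM Win32_Process WHERE ExecutablePath LIKE 'C:\\Windows\\System32\\%'"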
Let's look at several examples of using comparison operators in WQL and then at how we would do the same using PowerShell itself. Let's start with a simple example of looking for services that have a state of Running:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE state='running'"
We can negate the query and get only the services that are not running:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE NOT state='running'"
Now let's look at using the OR operator so we can select only the services that are running or paused:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE state='running' OR state='paused'"
We can have more control and mix comparisons using parentheses; these will be evaluated first and their result used in the next comparison from left to right. Let's find all services that are set to Manual and are in a state of Running or Paused:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE startmode='manual' and (state='running' OR state='paused')"
If we want to execute the same query with the PowerShell cmdlet we would use the –Filter parameter and just pass it everything after the WHERE keyword:
Get-WmiObject win32_service -Filter "state='running' OR state='paused'"
Using Wildcards with the LIKE Operator
The LIKE operator allows us to use wildcards to match strings for the value of a property we are filtering on. Let's start with the % wildcard character; this is one of the most used ones since it allows us to come up with quick, simple expressions. For example, if I want all the services that start with Microsoft in their name, I can simply express this as microsoft% and it will match anything starting with that word followed by any characters:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE name LIKE 'microsoft%'"
Let's now mix character groups with wildcards to find all services whose names start with either the letter a or m:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE name LIKE '[am]%'"
We can also use ranges of characters, expressed as <first character>=<last character> using the order they appear in the English alphabet. A quick note: even though the MS documentation on MSDN says that the = character is the only character for specifying a range, I have noticed that – also works. Let's look for any service whose first letter is in the range from a to f:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE name LIKE '[a-f]%'"
We can also negate a group or range by prepending the ^ character when defining the range or group of characters:
Get-WmiObject -Query "SELECT * FROM win32_service WHERE name LIKE '[^a-f]%'"
For matching a single character we would use the underscore.
The same wildcards and operators are used with the –Filter parameter of the WMI and CIM cmdlets; this means we can take a WQL query that we find in a reference and turn it into a filter:
Get-WmiObject win32_process -Filter "name LIKE '_md.exe'"
Get-WmiObject win32_process -Filter "name LIKE 'power%'"
I hope you have found this blog post informative and useful as always.
One will see in many places in Microsoft documentation, and in several books out there, that PowerShell has a security system called the Execution Policy. I personally do not agree that this is a security measure; it is just a simple control to protect from the accidental execution of code not specifically allowed through normal means. First, let's cover the security minded default configurations that PowerShell has:
It does not execute scripts by double clicking on them by default.
All scripts must be digitally signed with a digital certificate trusted by the host system in order to execute.
All scripts executed in a PowerShell session must be invoked by providing the path of the script, either relative or full; they cannot be executed just by name.
Code is executed under the context of the user.
Code that is downloaded via a web browser, or through email clients that mark the file in its metadata as downloaded from the Internet, will be blocked from execution unless specifically allowed.
These default settings provide the following protections:
Control of Execution - Control the level of trust for executing scripts.
Command Hijacking - Prevent the injection of commands into my path.
Identity - Is the script created and signed by a developer I trust and/or signed with a certificate from a Certificate Authority I trust.
Integrity - Scripts cannot be modified by malware or malicious user.
Microsoft took great care and attention to minimize the attack surface of PowerShell when an attacker tries to trick a user into executing a possibly malicious script. Once on the system things change, since these controls cannot protect from:
Copy pasting the content of the script in to PowerShell.
Encoding the script in Base64 and running it from the command line as an argument to powershell.exe.
Enter each command by hand and execute it.
Sadly, PowerShell does not provide a way to block specific cmdlets or .NET APIs from users for more fine grained control of the system. This allows, say, malware already present on the system, or an attacker who has been able to get a foothold, to leverage PowerShell. An example of this is the first known use of PowerShell code as malware in the wild http://nakedsecurity.sophos.com/2013/03/05/russian-ransomware-windows-powershell/. In addition, PowerShell has also been added to what I call dual purpose tools like Metasploit and the Social Engineering Toolkit, which are written primarily for penetration testers and researchers but sadly can also be used by a malicious attacker; that is why I refer to them as dual purpose tools.
Changing Execution Policy
To control the validation of scripts that can be executed, the Set-ExecutionPolicy cmdlet is used. There are several policies that can be set:
Restricted - No script, whether local, remote or downloaded, can be executed on the system.
AllSigned - All scripts that are run are required to be digitally signed.
RemoteSigned - All remote scripts (UNC) or downloaded scripts need to be signed.
Unrestricted - No signature is required for any type of script.
Each of these policies can be applied to different scopes to control who is affected by them, the scopes are:
MachinePolicy: The execution policy set by a Group Policy for all users.
UserPolicy: The execution policy set by a Group Policy for the current user.
Process: The execution policy that is set for the current Windows PowerShell process.
CurrentUser: The execution policy that is set for the current user.
LocalMachine: The execution policy that is set for all users.
The default scope is LocalMachine and it will apply to everyone on the machine when set via PowerShell itself. To get the current execution policy we use the Get-ExecutionPolicy cmdlet; running it in a session as administrator, we give it the –List parameter to list all scopes:
C:\Windows\system32> Get-ExecutionPolicy -List | ft -AutoSize
Typically for admin workstations I recommend RemoteSigned, since code downloaded from the internet will not execute by accident and cause harm to my machine. Let's change it from RemoteSigned to Restricted. For this we use Set-ExecutionPolicy and give it the policy name; we can use the –Force parameter so it will not ask for confirmation, and we can confirm the change by trying to execute a script:
C:\Windows\system32> Set-ExecutionPolicy Restricted -Force
C:\Windows\system32> C:\Users\Carlos\Desktop\hello.ps1
C:\Users\Carlos\Desktop\hello.ps1 : File C:\Users\Carlos\Desktop\hello.ps1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at
http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ C:\Users\Carlos\Desktop\hello.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
Code Signing
Code signing allows us to use cryptographic signatures to gain the following capabilities:
Provides an identity of the source of code.
Ensure detection of script modification.
To be able to add a digital signature to a script we must use an Authenticode digital certificate; the certificate can come from:
Certificate from a Certificate Authority - This type of certificate allows the signing and sharing of scripts. If a Commercial CA is used the script could be shared outside of an organization.
Self-Signed Certificate - This certificate is generated by a CA hosted on the computer itself where it is used.
Self-Signed Certificates
Let's look at generating a self-signed certificate for use in code signing. We start by using the makecert.exe tool from the Windows SDK, which can be downloaded from Microsoft for free.
Run a Windows Command Prompt as Administrator
Run the following command in the Command Prompt. It creates a local certificate authority for your computer:
makecert -n "CN=PowerShell Local Certificate Root" -a sha1 -eku 1.3.6.1.5.5.7.3.3 -r -sv root.pvk root.cer -ss Root -sr localMachine
When prompted for a Private Key password provide one of your choosing that you are able to remember and confirm the password.
When prompted again for the password enter the password you entered in the previous step.
Next we need to use MMC for opening the local certificate store and saving the certificate:
Open Windows MMC on Windows
In the console window click on File and select Add/Remove Snap-in
Select “Certificates” and click on “Add”
When prompted accept the default of “My user account” and click on “Finish”
Click on OK. This will give you a Management Console for your current user Certificate Store so we can look at the results from the commands and manage the certificates from the Windows GUI with ease.
Go in to the MMC Console and Select “Trusted Root Certification Authorities” -> “Certificates” and on the right pane ensure there is a Root Certificate for “PowerShell Local Certificate Root”
Run the following from a Command Prompt, giving in the Common Name field a friendly name that includes your username/handle. It generates a personal certificate from the above certificate authority:
makecert -pe -n "CN=Carlos PowerShell CSC" -ss MY -a sha1 -eku 1.3.6.1.5.5.7.3.3 -iv root.pvk -ic root.cer
You will be prompted for the certificate Private Key Password, enter the password that you provided when creating the CA Private Key.
To verify that the certificate was created and stored in the proper location, go into the MMC console, select “Personal” -> “Certificates” and in the right pane ensure there is a certificate with the name you specified in the command to create the signing certificate.
You are now ready to use the self signed certificate.
Exporting Self-Signed Certificate
If you want, you can export the self-signed certificate for use on other systems; to do so follow these steps:
Expand “Personal”, right click on the appropriate code signing certificate and select “All Tasks” -> “Export…”.
Choose the option “Yes, export the private key” when prompted.
Accept the default options on the “Export File Format” screen.
Enter a password for the private key, which will need to be entered when importing the certificate
Save the certificate to an appropriate location.
Right click “Trusted Publishers” and select “All Tasks” -> “Import…”
Follow the wizard to import the exported certificate, and enter in the accompanying password that was used when the certificate was exported.
Accept all the default values for the remaining steps in the wizard.
If the certificate is no longer required to be imported by other machines, it is highly recommended that the exported file is deleted.
Verify that the certificate was properly installed under the correct location
Signing Certificates via Active Directory Certificate Services
From Administration Tools select the Certificate Authority Console on your Enterprise Root Certificate Authority
Right click on Certificate Templates and select Manage
Double click on the Code Signing template to open its Properties
Add the group that you want to be able to request code signing certificates
Allow Read and Enroll
Right Click on Certificate Templates -> New -> Certificate Template to Issue
Click on the Code Signing template
Click on OK and close the Certificate Authority Console
On the developers machine:
Open a new MMC console
From the File Menu select Add/Remove Snap-in
Select Certificates, click on Add and click on Ok
Make sure that My user account is selected
Click on Finish
Click on Ok
Right click on Personal Select All Tasks -> Request New Certificate
Click Next on the screen that appears
Select Active Directory Enrollment Policy and click on Next
Select the Code Signing certificate template
Expand Details and click on Properties
Select the Private Key
Select Make private key exportable
Click on Ok
Click on Enroll
Click on Finish
Using the Code Signing Certificate
Since the certificate store is mapped as a PSDrive automatically, we can check if a code signing certificate is available directly from PowerShell. Having access to the certificate store from PowerShell allows for very easy manipulation and signing of scripts and other files. The cmdlets for working with Authenticode are:
Get-AuthenticodeSignature - Checks the Authenticode signature of files that support the Subject Interface Package (EXE, PS1, PS1XML, DLL, VBS, etc.).
Set-AuthenticodeSignature - Adds an Authenticode signature to files that support the Subject Interface Package.
In fact, as we can see, the cmdlets not only allow us to sign PowerShell scripts and modules, but we can also sign several other types of Windows files.
To list the certificates we can just use the certificate store like any other drive and ask to only show code signing certificates using the –CodeSigningCert parameter when the certificate store PSDrive is used:
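A sketch of listing them from the current user's personal store:

Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert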
For signing, the certificate must be passed as an object to the Set-AuthenticodeSignature cmdlet, so we may need to save it in a variable. Signing a script would look like this:
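A minimal sketch, assuming a single code signing certificate in the current user store and an unsigned script on the desktop (the path is illustrative):

$cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]
Set-AuthenticodeSignature -FilePath C:\Users\Carlos\Desktop\HelloWorld.ps1 -Certificate $cert
Get-AuthenticodeSignature C:\Users\Carlos\Desktop\HelloWorld.ps1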
So I hope you liked the blog post and found it informative. In the second part I will cover how to bypass the execution policy and how an attacker or malware may abuse it.
In the last blog post we covered the basics of importing modules and PSSnapins to extend the shell. This gives us great flexibility in terms of expandability, but at the same time, depending on how we have configured our system, it can pose functional and security risks. The main risk is a module overwriting another module's function or cmdlet. Let's demonstrate why it is important to be aware of this, starting with two simple modules containing the same function, one that returns a date for us to use when naming log files:
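The two modules are not reproduced here; a hypothetical pair (LogTools1.psm1 and LogTools2.psm1, names made up for illustration) could be as simple as:

# LogTools1\LogTools1.psm1
function Get-LogDate { (Get-Date).ToString('yyyyMMdd') }

# LogTools2\LogTools2.psm1
function Get-LogDate { (Get-Date).ToString('dd-MM-yyyy_HH-mm') }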
Let's look at what happens when we import both modules in a default configuration of PowerShell with the Execution Policy not set to Restricted:
As we can see, both modules loaded with no errors shown, but when we look at the function we see that by typing only its name we only have the function from the last module that was imported available to us.
PowerShell allows us to access each element in a module, be it a function, cmdlet, workflow, etc., by using the module-qualified name, so we could call the functions directly from each module. But let's be honest, not many people use it this way:
If we use the –Verbose parameter when loading the modules we will see that nothing is shown to tell us that something is wrong:
This is because for PowerShell this is normal behavior. If we do not want one command to replace another, we use the –NoClobber parameter when we load a module; it will not show any message telling you that a conflict was avoided unless you use the –Verbose parameter (I wish it would show a warning message):
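Using the hypothetical modules above, that would look like:

Import-Module LogTools1
Import-Module LogTools2 -NoClobber -Verbose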
In PowerShell v3 this is more of a danger since modules are loaded automatically. We can control the autoloading behavior in PowerShell v3 through the variable $PSModuleAutoLoadingPreference, which accepts the following values (an example follows the list):
All - Modules are imported automatically on first-use.
ModuleQualified - Modules are imported automatically only when a user uses the module-qualified name of a command in the module <Module Name>\<Cmdlet Name>
None - Automatic importing of modules is disabled in the session. To import a module, use the Import-Module cmdlet.
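For example, to turn autoloading off for the current session:

$PSModuleAutoLoadingPreference = 'None'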
One way to counter this is to set default parameters for the Import-Module command. This is done by setting values for the parameters of specific commands in $PSDefaultParameterValues; the variable is a dictionary where the key is <Command Name>:<Parameter Name> and the value is the default value, as in the sketch below:
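A sketch of such defaults, making every Import-Module call behave as if -NoClobber and -Verbose were passed:

$PSDefaultParameterValues = @{
    'Import-Module:NoClobber' = $true
    'Import-Module:Verbose'   = $true
}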
Now when we try to load a module with Import-Module it will apply these values to it:
Now I know many of you are thinking “Why should I worry about this?” Here is an evil example. Let's say an attacker or malware has been able to get on your system, and you manage, say, Exchange, AD and/or SharePoint servers from your machine via PowerShell. Many times you will use the Get-Credential cmdlet to enter alternate credentials, because you are security conscious, have separation of privileges, and use a separate account for administration so token abuse is minimized. What would happen if the attacker created a module, in a hidden folder with hidden files in your system module path, that looked like this:
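The original screenshot of the module is not reproduced here; a hypothetical reconstruction of such a proxy function (names and details assumed) could look like this:

function Get-Credential {
    [CmdletBinding()]
    param()
    # Call the real cmdlet by its module-qualified name so the prompt looks normal
    $cred = Microsoft.PowerShell.Security\Get-Credential
    # Leak what was typed; a real attacker would write this to a file, POST it
    # somewhere, or tunnel it out over DNS instead of just using Verbose output
    Write-Verbose ("Captured {0} : {1}" -f $cred.UserName, $cred.GetNetworkCredential().Password)
    $cred
}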
As you can see, the code replaces the Get-Credential cmdlet, and if verbose output is requested it will reveal the clear text username and password entered. An attacker would just save these to a file, email them, or do a POST request somewhere; heck, he could even do a DNS query with each as the host field to a DNS server he controls and exfiltrate the information out of your network. This is what it would look like when run:
And when we enter the credentials:
I hope you have found the blog post useful and informative, and thank you for reading it.