Channel: PowerShell – Splunk Blogs

Scripting Your Way with Splunk for VMware


As you might have heard, we recently released a new product: Splunk for VMware.

As you are going through the install guide, you’ll come to a step where you are required to assign a list of twelve privileges to a user account by using the vSphere Client. In order to save time and to reduce the risk of errors due to manual entry, I wrote a script that does the work for you. This script is especially useful if you have multiple vCenter servers and will need to apply the permission across several datacenters.

The script was tested against vSphere 5 with vCenter Server. This script will not work against an ESXi environment without vCenter, due to the read-only scripting API limit that VMware enforces in that case. In order to use the script (which runs on Windows only), you’ll need a copy of PowerShell (version 2 or later), and PowerCLI (any version, tested with v5.0.1).

The built-in help for the script is comprehensive, but just to be clear:

  • Create an account in Active Directory prior to running the script
  • Specify the AD account in the format “domain\userID”. UPN format (user@domain) will not work.
  • A VI role will be created. If you want to specify a role name, you can do so, otherwise, the default of “Splunker” will be used.
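Under the hood, the heavy lifting can be done with a handful of PowerCLI cmdlets. Here is a rough sketch of the approach (the server name, account name, and privilege IDs below are placeholders – the actual twelve privileges are listed in the install guide, and the downloadable script handles parameters and error checking):

```powershell
# Sketch only -- assumes PowerCLI is loaded and you are a vCenter admin
Connect-VIServer -Server vcenter.example.com

# Placeholder privilege IDs; substitute the twelve from the install guide
$privilegeIds = @('System.View', 'System.Read')
$privileges   = Get-VIPrivilege -Id $privilegeIds

# Create the role, then grant it to the AD account on every datacenter
$role = New-VIRole -Name 'Splunker' -Privilege $privileges
Get-Datacenter | ForEach-Object {
    New-VIPermission -Entity $_ -Principal 'DOMAIN\splunksvc' -Role $role
}
```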

You can download the script from here: https://gist.github.com/3033791

And the finished results look like this:

If you have any feedback on the script, please let us know in the comments.

Happy Splunking!


Working with Splunk Indexes using Windows PowerShell


In my last post, I talked about a way to use PowerShell to ease the installation of our Splunk App for VMware. This time, we’ll be using PowerShell in a much different way. As you might already know, the Splunk dev team has made a very robust set of REST API hooks for the product. What you may not know is that this enabled some other talented guys to build a PowerShell module which you can use not only to get data into and out of Splunk, but also to manage your Splunk infrastructure.

Now in my case, I have a goal in mind. I want to answer this question:

How much disk space is being consumed by indexes?

In my lab environment, I have one search head, and two indexers. It’s quite easy to find out the index size if you are running a single-server setup. Details about the indexes are easy to find in the Splunk manager web UI. But in a distributed environment, the indexers don’t have splunkweb turned on. You do have other options, particularly if you have server access. You can either go to the filesystem and look at space consumption that way, or you could execute a splunk CLI command to get the index settings.
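As an example of the CLI route, btool will dump the merged index configuration if you have shell access to the indexer (path shown is the typical default; add --debug to see which file each setting comes from):

```shell
# Show the merged settings for one index, including homePath and maxTotalDataSizeMB
$SPLUNK_HOME/bin/splunk btool indexes list main
```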

But frankly, most of these methods are very Unixy (not that there’s anything wrong with that), even when running Splunk on Windows! I know my way around Unix fairly well, but at heart, I’m a Windows guy. I want to be able to solve my problems using Windows PowerShell, because that’s the tool that I’m most comfortable with.

In case you haven’t already done so, go grab the latest version of the Splunk PowerShell Resource Kit. Once you’ve got that installed, you’ll be able to follow along with my examples below.

Step one: Retrieve index objects

PS> $cred = Get-Credential
PS> $idx = 'bd-idx-01.bd.splunk.com', 'bd-idx-02.bd.splunk.com'
PS> $idx | Foreach-Object { Get-SplunkIndex -ComputerName $_ -Cred $cred }

And the (trimmed) output from the last line looks like this:

ComputerName            Name
------------            ----
bd-idx-01.bd.splunk.com _audit
bd-idx-01.bd.splunk.com _blocksignature
bd-idx-02.bd.splunk.com _audit
bd-idx-02.bd.splunk.com _blocksignature

Step two: Examine the output

Now that’s great, but where is the size? Remember—everything in PowerShell is an object. Let’s use the Get-Member cmdlet to examine the output from the Get-SplunkIndex cmdlet:

PS> $indexes = $idx | % { Get-SplunkIndex -ComputerName $_ -Cred $cred }
PS> $indexes | Get-Member -Name *size*

   TypeName: Splunk.SDK.Index

Name               MemberType   Definition
----               ----------   ----------
blockSignSize      NoteProperty System.Int32 blockSignSize=0
currentDBSizeMB    NoteProperty System.Int32 currentDBSizeMB=79
maxDataSize        NoteProperty System.String maxDataSize=auto
maxTotalDataSizeMB NoteProperty System.Int32 maxTotalDataSizeMB=500000
rawChunkSizeBytes  NoteProperty System.String rawChunkSizeBytes=131072

First, I’m grabbing all the indexes and assigning them to the variable “$indexes”, because I’m going to be manipulating them a bit more as we continue. Next, that variable gets piped to Get-Member, which spits out tons of stuff. Because my goal is to look at index size, I decided to filter what Get-Member would return to show only those members which have the word “size” in the name.

Step three: Output

Looks like “currentDBSizeMB” is what I need – let’s put that into a nice table!

PS> $indexes | Select-Object -First 2 | Format-Table Name, currentDBSizeMB -AutoSize

Name            currentDBSizeMB
----            ---------------
_audit                       79
_blocksignature               1

Step four: Working with an index object

Before I leave you, let’s do something a bit more useful. Here are my top 10 indexes by size, grouped by indexer.

PS> $indexes | Sort-Object -Property currentDBSizeMB -Descending | Select-Object -First 10 | Sort-Object -Property ComputerName | Format-Table -GroupBy ComputerName Name, currentDBSizeMB -AutoSize

   ComputerName: bd-idx-01.bd.splunk.com

Name            currentDBSizeMB
----            ---------------
servervirt_perf            5913
xenapp_perfmon             3865
_internal                  2573
cisco_ucs_perf             5921
main                      19895
perfmon                   10120
hyperv_perfmon             7255

   ComputerName: bd-idx-02.bd.splunk.com

Name            currentDBSizeMB
----            ---------------
xenapp_perfmon             2531
cisco_ucs_perf             5775
servervirt_perf            4413

One of my favorite things about PowerShell is the pipeline. Funny how a line of code in PowerShell looks pretty similar to a Splunk search command!

If you’d like to learn more about the PowerShell Resource Kit, be sure to read the README, which has links to tons of resources. Also, a while back I interviewed Brandon Shell about the project on PowerScripting Podcast episode 165.

Splunking Powershell and .NET Data Structures


We are currently rocking it at the Microsoft Exchange Conference (MEC) in Orlando, and I’m being asked where we get our data from to handle the reporting and monitoring requirements for the Splunk App for Microsoft Exchange. Some of the sources are relatively straightforward – things like the Windows Event Log, IIS logs, and Message Tracking logs, for example. But where do we get the rich user information? The answer lies in a series of PowerShell scripts that run on a regular basis on each Exchange server. You see, PowerShell has access to the whole of the .NET Framework, and that is where a lot of the information lies.

Let’s take a quick example – splunking the Inbox Rules of all the users in Exchange. Our first step is to write a PowerShell script to gather the required information. Since we are splunking the data, the only requirements are that we have a time stamp and that the data is in textual format. However, our best practice is to use KV pairs and to put each event on one line if we can.

The Exchange Management Shell (which is PowerShell with additional cmdlets) provides a cmdlet called Get-InboxRule that allows us to pull the information we need. This is really a wrapper around the .NET Framework ExchangeService.GetInboxRules method. You can find information on all the .NET Framework methods on MSDN.

As is common with PowerShell, this cmdlet returns objects. We can iterate over their members to get the key-value pairs, then output them as a string to the console (which is where Splunk will read the data we produce). You can run this script within the Exchange Management Shell to see what sort of data we are looking at. I call this script “get-inboxrules.ps1”:

$Mailboxes = Get-Mailbox -Server $Env:ComputerName
foreach ($Mailbox in $Mailboxes) {
	$Id = 0
	$UPN = $Mailbox.UserPrincipalName
	$Quota = $Mailbox.RulesQuota.ToBytes()
	$Rules = Get-InboxRule -Mailbox $Mailbox
	if ($Rules -ne $null) {
		$Rules | Foreach-Object {
			$O = New-Object System.Collections.ArrayList
			$D = Get-Date -format 'yyyy-MM-ddTHH:mm:sszzz'
			[void]$O.Add($D)
			[void]$O.Add("Mailbox=`"$UPN`"")
			[void]$O.Add("Quota=`"$Quota`"")
			[void]$O.Add("InternalRuleID=$Id")
			foreach ($p in $_.PSObject.Properties) {
				$Val = ""
				if ($_.PSObject.Properties[$p.Name].Value -ne $null) {
					$Val = $_.PSObject.Properties[$p.Name].Value
					$Val = $Val.Replace("`"", "'")
				}
				[void]$O.Add("$($p.Name)=`"$Val`"")
			}
			Write-Host ($O -join " ")
			$Id++
		}
	}
}

Our first step is to get a list of mailboxes (or users) on the mailbox server we are running on. One of the things we do for performance is ensure that we don’t traverse the network to get information. Now that we have a list of target users, we get the Inbox Rules for each mailbox using the Get-InboxRule cmdlet. For each rule, we output a line that gives us all the properties of that rule. The real work of making the output Splunk-ready is in the Get-Date cmdlet and the join: Get-Date gives the event a time stamp, and the join turns the array of key-value pairs into a single string that is sent to Splunk.

Splunk does not run PowerShell natively, so we have to help it out. In addition, the Exchange Management Shell loads the Exchange cmdlets before you run scripts, so we employ a wrapper CMD script that does the same. The script just needs to work out where Exchange is installed and then call PowerShell with the right arguments. I call this script “exchangepowershell.cmd”:

@ECHO OFF
SET SplunkApp=TA-Exchange-2010-MailboxStore
:: delims is a TAB followed by a space
FOR /F "tokens=2* delims=	 " %%A IN ('REG QUERY "HKLM\Software\Microsoft\ExchangeServer\v14\Setup" /v MsiInstallPath') DO SET Exchangepath=%%B
Powershell -PSConsoleFile "%Exchangepath%\bin\exshell.psc1" -command ". '%SPLUNK_HOME%\etc\apps\%SplunkApp%\bin\powershell\%1'"

This script first looks up where Exchange Server 2010 is installed and then starts PowerShell with the appropriate Exchange cmdlets preloaded. Now that we have the right scripts, we can run the following:

splunk cmd exchangepowershell.cmd get-inboxrules.ps1

It should produce the same results as running the PowerShell script in the Exchange Management Shell. Our final piece of the puzzle is to actually grab the data. For this, we use a scripted input, defined in inputs.conf, where we tell Splunk to run our script on a daily basis.

[script://.\bin\exchangepowershell.cmd get-inboxrules.ps1]
index=msexchange
source=Powershell
sourcetype=MSExchange:2010:InboxRule
interval=86400

The magic in grabbing the .NET data lies in utilizing PowerShell for the heavy lifting. The same magic is used in the Splunk App for Exchange and the Splunk App for Active Directory – both are free downloads from splunkbase.com.

With these simple techniques, you can pull data from the internal .NET data structures of any of your Windows applications – SQL Server, SharePoint, System Center, and Lync are all within your reach. It really gives you transparency into your Windows environment.

If you happen to be at the Microsoft Exchange Conference, drop by Booth #18 and ask me how you can get better data from your Windows systems. There is more useful data than just the logs.

Splunk with PowerShell? Yes, Please


Do you manage Windows servers? If the answer is yes, then the likelihood is that you utilize PowerShell in your daily operations. As many know, PowerShell is an extraordinarily powerful shell command language that Microsoft invented to manage their most complex server applications. Exchange, SharePoint, Lync, SQL Server and Active Directory can all be managed through PowerShell; and that’s just the start. The Splunk App for Exchange and the Splunk App for Active Directory both use this facility to get inventory and usage information from the depths of the systems.

But it isn’t easy. Scripted inputs are, well, expensive. Firstly, you have to wrap the PowerShell executable inside a CMD batch file. When it executes, you are running a CMD prompt plus a full PowerShell environment. You are incurring this start up cost whenever the scripted input fires, and the start up cost can be significant. In addition, you have to cook the output into events yourself and deal with a lot of the scaffolding. Just take a look at the Splunk App for Exchange for an example of this. A lot of the code inside the PowerShell scripts is dealing with output formatting.

When Splunk 5.0 was released, it included a feature that will be a major advantage for this sort of thing – Modular Inputs. A modular input is a long-running data input that starts when the Splunk server starts and remains continually running. We can use this feature to remove the double process and the start-up cost. Instead of executing a batch file that runs PowerShell, we can instantiate a PowerShell run-time at server initiation and then run the scripts within that run-time. In addition, the modular input can handle all the scaffolding for us. PowerShell is an object-passing command language. Instead of doing all the output processing within the script, we can do it within the modular input and just pass objects back to the modular input.

Thus was born the Splunk Add-on for PowerShell. You can download it today for free from Splunkbase. It only works on Windows servers and we recommend you deploy it to universal forwarders. You need to install the .NET Framework 4.5 and PowerShell 3.0 first.

What can you do with it? Well, you can do a lot with it. It can replace WMI, for example. One of the things that we often use WMI for is to retrieve services. Try this input stanza to get the same effect:

[powershell://Running_Services]
script = Get-Service | Select Name,DisplayName,Status
schedule = 0 */5 * ? * *

You will note that we have our PowerShell script embedded in the inputs.conf. You can run this one-liner from the PowerShell prompt on your Windows host. We also have a schedule. The schedule takes a cron schedule (just like scripted inputs) and not an interval (unlike scripted inputs). This allows you to schedule your inventory scripts for the early morning.
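For reference, the schedule is a Quartz-style six-field cron expression with a seconds field first; the annotations below are mine, added to show how the fields read:

```
# fields:   seconds  minutes  hours  day-of-month  month  day-of-week
schedule = 0 */5 * ? * *
# i.e. at second 0 of every fifth minute; "0 30 3 ? * *" would mean daily at 03:30:00
```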

Something more complex? We’ve been playing around with SQL Server and one of the things we want to do is to get an inventory of the SQL Databases. This is more complex. We place the code in a script:

[Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

function Get-SQLInstances {
	(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server').InstalledInstances |
	Select @{n='Cluster';e={$env:ComputerName}}, `
		@{n='Name';e={$_}}, `
		@{n='State';e={'Active'}}, `
		@{n='VirtualServerName';e={$env:ComputerName}}, `
		@{n='ServerInstance';e={"{0}\{1}" -f $env:ComputerName,$_}}, `
		@{n='Node';e={$env:ComputerName}}
}

function Get-SQLDatabases {
	[CmdletBinding()]
	Param([Parameter(Mandatory=$True,ValueFromPipeline=$True,ValueFromPipelineByPropertyName=$True)]
		$Instance
	)
	PROCESS {
		Foreach ($i in $Instance) {
			$s = New-Object 'Microsoft.SqlServer.Management.Smo.Server' $i.VirtualServerName
			$s.Databases | ForEach-Object { Write-Output $_ }
		}
	}
}

Get-SQLInstances | Get-SQLDatabases | Select ID,Name,Owner,UserName,Version

Obviously, this is not a straightforward script, nor is it complete – it does not deal with clusters, and it only outputs a handful of the many parameters that are available. You can place it in the bin directory of a Splunk app and then add the following inputs.conf stanza:

[powershell://My_DBInventory]
script = . "$SplunkHome\etc\apps\MyApp\bin\dbinventory.ps1"
schedule = 0 30 3 ? * *

Now the script will execute, outputting the list of databases per host to the index at 3:30am every morning.

The Splunk Add-on for PowerShell isn’t good for everything. One thing it isn’t good for is long-running scripts, primarily because no output is sent to the Splunk server until the script finishes and the modular input receives the generated objects. Instead of a long-running script (which is still best suited to a scripted input), we use a short-lived script and keep state. For example, say we want to output the Windows server boot time every 24 hours or whenever it reboots, whichever comes first. We can use this script:

$State = Import-LocalStorage "Boot.xml" -Module "MyApp" -DefaultValue (New-Object PSObject -Property @{PE=[DateTime]::MinValue})

$wmi = gwmi -Class Win32_OperatingSystem -Property LastBootUpTime
$lastboot = $wmi.ConvertToDateTime($wmi.LastBootUpTime)
if ($State.PE.AddHours(24) -ge [DateTime]::Now -and $State.PE -ge $lastboot)
{
	return
}

$State.PE = [DateTime]::Now
$State | Export-LocalStorage "Boot.xml" -Module "MyApp"

$wmi

In this case, we only output an object at the prescribed time. We use a set of new cmdlets to import and export the state. The state is simply an object and we can put whatever information we want in that object, as long as it can be serialized. Now our inputs.conf stanza looks like this:

[powershell://LastBoot]
script = . "$SplunkHome\etc\apps\MyApp\bin\lastboot.ps1"
schedule = 0 * * ? * *

This executes the script once a minute, allowing us a 1-minute resolution on the boot time.

With the Splunk Add-on for PowerShell, you get the full power of the Windows .NET framework and PowerShell v3.0 for generating events. These can be as complex or as simple as you want them to be. Look for future Microsoft apps to take advantage of this new powerful data input.

Learn More about PowerShell and Modular Inputs


For over five years, I have been working with co-host Jonathan Walz on the PowerScripting Podcast, a weekly Internet radio show. The primary topic of the show is the Windows PowerShell scripting language. We like to talk about news, tips, and resources related to the PowerShell community, but the biggest part of most shows is the interview. We’ve had a wide variety of guests, ranging from prolific scripters who enjoy sharing their work to PMs, architects, and engineers from the largest software and hardware vendors in the world, including Microsoft, IBM, Intel, NetApp, and more.

Recently, we caught up with Joel Bennett, a Windows PowerShell MVP awardee, who also happens to be my teammate on Splunk’s BD Labs team. Joel is the lead developer for the Splunk Add-on for Microsoft PowerShell, a modular input for Splunk 5 which enables you to easily and efficiently add data to Splunk. Please visit powerscripting.net to listen to the full episode or subscribe to the podcast feed.

Playing with the Splunk C# SDK–from PowerShell


As those who know me know, I Am Not A Developer. I could convincingly play one on TV, but that’s not the point. The point is this: I don’t have a copy of Visual Studio, and I don’t want to! When in Windows, PowerShell is my language of choice (and for good reason). This blog post will show you, in pretty short order, how to take the newly released Splunk SDK for C#, and use it to connect to a Splunk search head or indexer, but doing so from PowerShell instead of C#.

First, let me acknowledge that we do have a very cool Splunk PowerShell Resource Kit that you can download today. It includes over 40 PowerShell-Splunk cmdlets that support numerous search, deployment, and configuration scenarios. However, it connects to the REST API directly using HTTP, which means there’s a fair bit of redundant code that would’ve been saved, had the C# SDK existed when the resource kit was written. PowerShell, like C#, is built on top of .NET, and it can execute C# code “natively” without much (if any) performance penalty, so there’s no reason not to use the technique that I’m about to explain.
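The core of the technique is simply loading the SDK assembly into the PowerShell session. Here is a minimal sketch; the assembly path is a placeholder, and the Splunk.Service type and its Login method follow the SDK's C# examples, so check the Examples folder to confirm the exact signatures:

```powershell
# Load the Splunk C# SDK assembly into this PowerShell session (path is a placeholder)
Add-Type -Path 'C:\splunk-sdk-csharp\lib\SplunkSDK.dll'

# Create a Service object pointed at the management port and log in,
# mirroring what the SDK's C# examples do
$service = New-Object -TypeName Splunk.Service -ArgumentList 'splunk.example.com', 8089
$service.Login('admin', 'changeme')
```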

I have published a sample PowerShell module on GitHub called Splunk2, so as not to conflict with the resource kit. Today, there are only two functions: Connect-Splunk and Disconnect-Splunk, but as you’ll see, this is enough to at least get you started down the path.

To make this code work, all you have to do is create a Splunk2 folder in your PSModulePath (defined on MSDN), and place inside:

You don’t need any of the other files from the SDK, but you may find the Examples folder interesting. It contains C# code of course, but the code is similar enough to PowerShell that given a bit of study, you might be able to convert the examples to PowerShell. And that’s why I can play a developer on TV.

I went so far as to create proper help and examples in the module, because PowerShell makes that stuff easy. Open a PowerShell prompt, type

Import-Module splunk2

…and connect to Splunk! Note that the module requires PowerShell version 3 because I didn’t want to use workarounds for things which have been fixed since version 2. (For those curious, I’m referring specifically to $PSScriptRoot, and proper handling of a PSCredential object in the param() block of a function.)

Below is a transcript of my PowerShell session where you can see the code in action. The actual “hey, what can I do with this” part is bold and red. Can’t miss it. Also be sure to try piping the $SPLUNK_SERVICE object to Get-Member, and you’ll see there are several methods to play with.

PS C:\Users\hrottenberg> Import-Module Splunk2
PS C:\Users\hrottenberg> get-command -Module Splunk2
 
CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Function        Connect-Splunk                                     Splunk2
Function        Disconnect-Splunk                                  Splunk2
 
PS C:\Users\hrottenberg> help Connect-Splunk
 
NAME
Connect-Splunk
 
SYNOPSIS
Connects to a Splunk server
 
SYNTAX
Connect-Splunk [-ComputerName] <String> [-Port <Int32>] -Credential <PSCredential> [<CommonParameters>]
 
 
DESCRIPTION
This function connects to a Splunk server via the REST API and creates a service object called $SPLUNK_SERVICE.
This object can be used to interact with Splunk directly, or is used by other functions in this module to
share a persistent session.
 
 
RELATED LINKS
 
REMARKS
To see the examples, type: "get-help Connect-Splunk -examples".
For more information, type: "get-help Connect-Splunk -detailed".
For technical information, type: "get-help Connect-Splunk -full".
 
 
PS C:\Users\hrottenberg> Connect-Splunk -ComputerName 192.168.1.140 -Credential (Get-Credential)
 
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
 
Token   : Splunk 4e691cd33d3981054803ca9c5b62ba82
Version : 5.0.1
Host    : 192.168.1.140
Port    : 8089
Prefix  : https://192.168.1.140:8089
Scheme  : https
 
PS C:\Users\hrottenberg> help Connect-Splunk -Examples
 
NAME
Connect-Splunk
 
SYNOPSIS
Connects to a Splunk server
 
-------------------------- EXAMPLE 1 --------------------------
 
C:\PS>Connect to a Splunk server and list all indexes greater than 100 MB in size
 
Connect-Splunk -ComputerName splunk.company.com
$idx = $SPLUNK_SERVICE.GetIndexes()
$idx | Where-Object { $_.CurrentDBSizeMB -gt 100 } | Format-Table name, HomePathExpanded, CurrentDBSizeMB -AutoSize
 
PS C:\Users\hrottenberg> $idx = $SPLUNK_SERVICE.GetIndexes()
PS C:\Users\hrottenberg> $idx | Where-Object { $_.CurrentDBSizeMB -gt 100 } | Format-Table name, HomePathExpanded,
CurrentDBSizeMB -AutoSize
 
Name      HomePathExpanded                                   CurrentDBSizeMB
----      ----------------                                   ---------------
_internal /Applications/splunk/var/lib/splunk/_internaldb/db            4215
isilon    /Applications/splunk/var/lib/splunk/isilon/db                  624
main      /Applications/splunk/var/lib/splunk/defaultdb/db               156
 
 
PS C:\Users\hrottenberg> Disconnect-Splunk
 
Token   :
Version : 5.0.1
Host    : 192.168.1.140
Port    : 8089
Prefix  : https://192.168.1.140:8089
Scheme  : https
 
PS C:\Users\hrottenberg> $SPLUNK_SERVICE.GetIndexes()
The following exception occurred while trying to enumerate the collection: "The remote server returned an error: (401)
Unauthorized.".
At line:1 char:1
+ $SPLUNK_SERVICE.GetIndexes()
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [], ExtendedTypeSystemException
+ FullyQualifiedErrorId : ExceptionInGetEnumerator

Monitoring Processes on Windows


We get a lot of questions here at the Splunk Microsoft Practice – not just on our apps (which are awesome starting points for common Microsoft workloads), but also how to do specific things in Windows. One of the things I recently got asked was “how do I get a top-10 type report of processes on a system and who is running them?” This should be fairly straight-forward. After all, Microsoft provides a perfmon object called “Process” – maybe I can just monitor that. Unfortunately, the owner is not available. Ditto with WMI. Once I’ve exhausted the built-in methods of getting information, I turn to my favorite tool – PowerShell.

There are two methods of getting the list of processes on a system. Get-Process is the de-facto standard for getting a process list from PowerShell, but I prefer the WMI approach – Get-WmiObject –class win32_process. The reason for the choice is that the objects that you get back have a bunch of useful methods on them, one of which is GetOwner() that retrieves the owner of the process – just what we are looking for. You can always get the list of things you can do by piping the command to Get-Member. For example:

Get-WmiObject -class win32_process | Get-Member

In order to get the owner information into the objects, we have to do a little work. Joel Bennett assisted with this small scriptlet:

Get-WmiObject –class win32_process |
    Add-Member -MemberType ScriptProperty -PassThru -Name Username -Value {
        $ud = $this.GetOwner();
        $user=$ud.Domain+"\"+$ud.User;
        if ($user -eq "\") { "SYSTEM" } else { $user }
    }

Although I have split this over multiple lines for readability, you should type it all on one line. What this does is add a “Username” property to each object in the pipeline; the value is obtained by calling GetOwner() on the object. There is a special case when the process does not have an owner, in which case we set the owner to “SYSTEM”.

You will notice an awful lot of properties being returned when you run this command. We will fix that when we start importing it into Splunk. Speaking of which, how do we do that? We turn to one of my favorite addons – SA-ModularInput-PowerShell. You can download it from Splunkbase. This addon persists a PowerShell scripting host for running scripts and gathering the results. Any objects that are output by our script are converted into key-value pairs and sent on to Splunk. You need to install the .NET 4.5 framework and WinRM 3.0 as well as the Splunk Universal Forwarder for Windows.

Since the SA-ModularInput-PowerShell addon does not define any scripts, you need to add your script to the inputs.conf of an app. Our script would appear like this:

[powershell://Processes]
script = Get-WmiObject -class win32_process | Add-Member -MemberType ScriptProperty -PassThru -Name Username -Value { $ud = $this.GetOwner();  $user=$ud.Domain+"\"+$ud.User;  if ($user -eq "\") { "SYSTEM" } else { $user } }|select ProcessId, Name, Username, Priority, ReadOperationCount, WriteOperationCount, CreationDate, Handle, VirtualSize, WorkingSetSize, UserModeTime, ThreadCount
schedule = 0,15,30,45 * * ? * *
source = PowerShell
sourcetype = PowerShell:Process

Our script is fairly self-evident, but we have added a Select to limit the properties that are sent on to Splunk. I’ve picked some interesting ones around memory usage, thread counts, and IOPS. The schedule will be recognizable as a cron-style schedule. The SA-ModularInput-PowerShell scheduler is based on Quartz.NET – a well-known open-source scheduling system for the .NET Framework.

Once the data is flowing into Splunk (check the splunkd.log file if it isn’t), we need a search that will get us the processes at any given time. Here is my search:

sourcetype=PowerShell:Process |
    stats count as Polls,
        latest(Name) as Name,
        latest(Username) as Username,
        latest(Priority) as Priority,
        max(ReadOperationCount) as ReadOperationCount,
        max(WriteOperationCount) as WriteOperationCount,
        latest(Handle) as Handle,
        max(VirtualSize) as VirtualSize,
        latest(WorkingSetSize) as WorkingSetSize,
        latest(UserModeTime) as UserModeTime,
        max(ThreadCount) as ThreadCount by host,ProcessId,CreationDate

Again, run this all together on the same line – it’s just split up for readability. We need the CreationDate field because a ProcessId can be recycled on a given host. By utilizing the host, ProcessId and CreationDate, we get a unique key to identify each process. I normally place useful searches like this in a macro – either by editing my macros.conf file or in the Manager. I’ve named my macro “all-windows-processes”.
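If you take the file route, the macros.conf entry is a one-liner; here is a sketch wrapping the stats search above (conf definitions must be on a single line):

```
# macros.conf (sketch) -- lets dashboards and ad-hoc searches reuse the process search
[all-windows-processes]
definition = sourcetype=PowerShell:Process | stats count as Polls, latest(Name) as Name, latest(Username) as Username, latest(Priority) as Priority, max(ReadOperationCount) as ReadOperationCount, max(WriteOperationCount) as WriteOperationCount, latest(Handle) as Handle, max(VirtualSize) as VirtualSize, latest(WorkingSetSize) as WorkingSetSize, latest(UserModeTime) as UserModeTime, max(ThreadCount) as ThreadCount by host,ProcessId,CreationDate
iseval = 0
```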

So, what about that top-ten report? Well, it depends on how you measure the top ten. Here are some interesting searches using that macro:

Top 10 Processes run by users that have the largest virtual memory footprint

`all-windows-processes` | search Username!="SYSTEM" | top VirtualSize

Top 10 Processes that have the largest amount of disk activity

`all-windows-processes` | eval DiskActivity = ReadOperationCount + WriteOperationCount | top DiskActivity

Top 10 Users that are running the most processes

`all-windows-processes` | stats count by Username,host | top count

Top 10 longest running user processes

`all-windows-processes` | search Username!="SYSTEM" | top Polls

Hopefully, this gives you some ideas on what you can do to monitor processes on your Windows systems, and if you are wondering how to monitor something on your Windows systems, let us know at microsoft@splunk.com or use Ask an Expert – just look for my picture.

Catching Errors in PowerShell


I’ve been recently writing a lot of PowerShell for the SA-ModularInput-PowerShell addon. It’s amazingly flexible at capturing data that is embedded in the .NET framework and many Microsoft products provide convenient access to their monitoring counters via PowerShell. This modular input can replace perfmon, regmon, WMI and all the other things we used to use for monitoring Windows boxes. However, sometimes bad things happen. Scripts don’t work as expected. In the Splunk world, permissions, connectivity and other problems make the diagnosis of scripted inputs a problem. I can run the script myself and get the right stuff, but when I put it in an inputs.conf file, it breaks.

One way to get some diagnostics in there is to ensure the script throws exceptions when necessary and then use a wrapper script to capture those exceptions and produce log events from them. We use this a lot within new apps, and if you have signed up for the Splunk App for SQL Server Beta Program, you will know that all our PowerShell scripts are wrapped in this manner. You can download and view the script on Github, so I am not going to reproduce it here.

This script traps errors. Along the way, it writes out two events for you. The first (with sourcetype=PowerShell:ScriptExecutionSummary) contains Identity (more on that later), InvocationLine, and TerminatingError fields. The more important one from a diagnostics point of view is the second event (with sourcetype=PowerShell:ScriptExecutionErrorRecord), which has a ParentIdentity field (matching the Identity field of the first event so you can correlate the two) and all of the error information as fields. Just in case that wasn’t enough, it adds timing information to the ScriptExecutionSummary so you can see how long your script is running.
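Correlating the two event types is then a matter of joining on those identity fields. A sketch of such a search, using the field names described above:

```
sourcetype=PowerShell:ScriptExecutionErrorRecord
| join ParentIdentity
    [ search sourcetype=PowerShell:ScriptExecutionSummary
      | rename Identity as ParentIdentity ]
| table _time, InvocationLine, TerminatingError
```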

Using this script is easy. In your addon, create a bin directory for your PowerShell scripts and place the above script in the bin directory as “Invoke-MonitoredScript.ps1” as well. Let’s take a look at the normal running of a script and the wrapped version. Here is our normal inputs.conf stanza for a typical script, taken from the addon for Microsoft SQL Server:

[powershell://DBInstances]
script = & "$SplunkHome\etc\apps\TA-SQLServer\bin\dbinstances.ps1"
schedule = 0 */5 * ? * *
index = mssql
sourcetype = MSSQL:Instance:Information
source = Powershell

Now let’s take a look at the modified version for producing the error information:

[powershell://DBInstances]
script = & "$SplunkHome\etc\apps\TA-SQLServer\bin\Invoke-MonitoredScript.ps1" -Command ".\dbinstances.ps1"
schedule = 0 */5 * ? * *
index = mssql
sourcetype = MSSQL:Instance:Information
source = Powershell

The script you want to run is not affected – only the execution of the script is adjusted. Now you will be able to see any errors that are produced within the monitored script. I have added an Errors dashboard that shows the errors I get combined with the parent invocation information to show timing as well.


PowerShell Profiles and Add-Path

I often blog about Splunk, but that’s not the only thing that is on my mind. One of the more common things on my mind is PowerShell and how it has affected how I do my work. It’s been hugely impactful. However, it does require a little bit of forethought in terms of setting up your environment. When you first get started with PowerShell, you double-click on the little PS icon and get a perfectly suitable environment for doing basic tasks. However, it can be improved. I used to be a Linux administrator and used the Korn shell for my work. In order to set up my environment, I used a .kshrc file. Similarly, PowerShell has a profile that you can use to customize your environment.

First things first – you need to create a place for your environment. This does it:

PS> mkdir ~\Documents\WindowsPowerShell

Now that you are there, you can edit your profile. You can see what file is going to be edited using:

PS> $profile
C:\Users\Adrian\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

There is a different file for the Integrated Scripting Environment (ISE), so you can have one profile for your PS> prompt and another for the ISE. What you put in there is up to you. One of the things that I do is to set up my development environment. I edit the XML and configuration files with Notepad++ (which you can download for free from their web site). However, that is not added to the PATH by default. I’ve added a short cmdlet for processing the PATH (called Add-Path) and then I use that to alter the path. It’s probably not the best way of doing this (and PowerShell purists can correct me if they like, or point me to http://poshcode.org). But – like many scripting languages – there are many ways of completing the same task, and this is mine. You can also find this function on my Github at https://gist.github.com/adrianhall/956311662fc2a218d9fa

function Add-Path {
  <#
    .SYNOPSIS
      Adds a Directory to the Current Path
    .DESCRIPTION
      Add a directory to the current path.  This is useful for
      temporary changes to the path or, when run from your
      profile, for adjusting the path within your powershell
      prompt.
    .EXAMPLE
      Add-Path -Directory "C:\Program Files\Notepad++"
    .PARAMETER Directory
      The name of the directory to add to the current path.
  #>

  [CmdletBinding()]
  param (
    [Parameter(
      Mandatory=$True,
      ValueFromPipeline=$True,
      ValueFromPipelineByPropertyName=$True,
      HelpMessage='What directory would you like to add?')]
    [Alias('dir')]
    [string[]]$Directory
  )

  PROCESS {
    $Path = $env:PATH.Split(';')

    foreach ($dir in $Directory) {
      if ($Path -contains $dir) {
        Write-Verbose "$dir is already present in PATH"
      } else {
        if (-not (Test-Path $dir)) {
          Write-Verbose "$dir does not exist in the filesystem"
        } else {
          $Path += $dir
        }
      }
    }

    $env:PATH = [String]::Join(';', $Path)
  }
}

Add-Path -Directory "C:\Program Files (x86)\Notepad++"
Set-Alias edit notepad++.exe

Add-Path -Directory "C:\Program Files\Splunk\bin"
Add-Path -Directory "C:\Program Files (x86)\PuTTY"

My profile gives me access to the handy Add-Path cmdlet, adds a few directories to my path to set up ssh (I use PuTTY), Splunk and my editor, and then sets up an alias so I can edit files with an “edit” command. Of course, I do other things in my PowerShell prompt, such as setting up GitHub and remote access privileges for my remote instances, giving me the ability to run “connect <host>” where the host is picked up from a CSV file and transitioned to an IP address – all designed to make my working time as productive as possible. Ultimately, what you put in your profile will depend on how you work and what you need.
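The "connect &lt;host&gt;" helper mentioned above isn't reproduced in the post, but a minimal sketch might look like the following. The CSV path, the "Name,IPAddress" column layout, and the function name are all assumptions for illustration; it relies on PuTTY being on the PATH, which the profile above arranges.

```powershell
# Hypothetical sketch: resolve a host name from a CSV inventory and open an SSH session.
# Assumes a file like ~\Documents\WindowsPowerShell\hosts.csv with Name,IPAddress columns.
function Connect-Host {
  param(
    [Parameter(Mandatory=$True)]
    [string]$Name
  )
  $inventory = Import-Csv ~\Documents\WindowsPowerShell\hosts.csv
  $entry = $inventory | Where-Object { $_.Name -eq $Name }
  if ($entry) {
    # putty.exe is on the PATH thanks to the Add-Path call in the profile
    putty.exe -ssh $entry.IPAddress
  } else {
    Write-Error "No host named $Name in the inventory"
  }
}
Set-Alias connect Connect-Host
```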

Do you have an idea of something you put in your profile, or a favorite tool you can’t do without? Let me know via Twitter at @splunk_ahall.

Detecting Your Hypervisor from within a Windows Guest OS

Let’s face it – most of our applications run on hypervisors – Microsoft Hyper-V, VMware or Citrix XenServer seem to be the top contenders. This makes our technology stacks that much more complex, since we have added a layer of abstraction between the application and the bare metal. Instead of a stack that includes Compute, Storage, OS, and Application, we’ve added Hypervisor to the mix. How do we correlate what is happening on the compute platform to what is happening at the application level? How do we understand which other applications are running on the same hypervisor? A common example is memory management: an application runs out of memory, but the hypervisor has memory that has not been allocated to the guest, because the memory metric from the hypervisor’s perspective doesn’t reflect the fact that the application is under memory stress (generally because the hypervisor has no visibility into how guests use the allocated memory, so it can’t see the difference between cache memory, paged memory, and pooled memory).

The key to all of this is understanding our correlation points. In the case of hypervisors, the most obvious correlation points are the MAC address of the guest OS and the type of Hypervisor that the guest is running on. For this work, we will turn to my favorite data input workhorse, the SA-ModularInput-PowerShell addon. With this addon, we can write small PowerShell scripts that run on a regular basis to capture the information. Since the SA-ModularInput-PowerShell is based on PowerShell 3.0, we have a couple of thousand PowerShell cmdlets to choose from. Normally, we will be monitoring the guest OS, so let’s get this correlation information from there.

Let’s start with getting the MAC address of the guest OS. One of the many cmdlets in the PowerShell 3.0 set is the Get-NetAdapter cmdlet. This returns an object per “real” interface. The command I use is:

Get-NetAdapter | Select-Object -Property Name,MacAddress,LinkSpeed

Here is an example output from my VMware server:

Name                            MacAddress                      LinkSpeed
----                            ----------                      ---------
Ethernet 8                      8A-AF-38-2E-D8-D1               1 Gbps
Ethernet 5                      0A-75-BB-0D-CF-D7               1 Gbps
Ethernet 7                      9A-0F-47-69-CD-D8               1 Gbps
Ethernet 6                      EE-2D-D1-D5-58-75               1 Gbps

This is all good information that we will need to accomplish the first task in our list. If you want a correlation between the IP address and the network adapter, then you can add ifIndex to the list of properties and use the following command to get the list of IP addresses:

Get-NetIPAddress | Where-Object PrefixOrigin -ne "WellKnown"

Our network adapter information does not include the hypervisor information. For this, we need WMI information – in this case, the Win32_ComputerSystem class. This has a property called Manufacturer that follows a standard format:

Get-WmiObject -Query 'select * from Win32_ComputerSystem'
Domain              : bd.splunk.com
Manufacturer        : Xen
Model               : HVM domU
Name                : BD-XD7-01
PrimaryOwnerName    : Windows User
TotalPhysicalMemory : 1069137920

This gives you a bunch of useful information, so much so that I do this query on all my Windows systems. For our purposes, I will note that Manufacturer line. This is a standard value:

 Manufacturer Value     Hypervisor
 ------------------     ----------
 Xen                    Citrix XenServer
 VMware, Inc.           VMware ESXi
 Microsoft Hyper-V      Microsoft Hyper-V

If the host is not housed on a Hypervisor, then the manufacturer will be a PC manufacturer like “Lenovo”, “Dell, Inc.” or “Hewlett-Packard”. Now that we have that, we can add the hypervisor information to our network adapter information to get a combined lookup:

Get-NetAdapter | `
    Select-Object Name,MacAddress,LinkSpeed | `
    Add-Member -PassThru -MemberType NoteProperty -Name HWManufacturer -Value (gwmi -query 'Select * From Win32_ComputerSystem').Manufacturer

Even better, we can correlate the hypervisor, IP Address and Mac Address together for a great correlation lookup:

Get-NetIPAddress | Where PrefixOrigin -ne "WellKnown" | `
    Select IPAddress,AddressFamily, `
        @{n='MacAddress';e={(Get-NetAdapter -InterfaceIndex $_.ifIndex).MacAddress}}, `
        @{n='Manufacturer';e={(Get-WmiObject -query 'SELECT * FROM Win32_ComputerSystem').Manufacturer}}

This syntax may be a little unusual to the PowerShell novice. It is known as a computed property and allows you to use the results of other cmdlets (or indeed any PowerShell script) as a value in the object that is created.
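As a standalone illustration of the computed-property syntax (this example is not from the original post), the name/expression hashtable works on any pipeline input, with the expression scriptblock evaluated once per object and $_ bound to the current object:

```powershell
# Minimal computed-property example: 'n' is the property name, 'e' is the
# expression evaluated for each pipeline object to produce the value.
1..3 | Select-Object @{n='Value';e={$_}}, @{n='Square';e={$_ * $_}}
# Produces objects with Value/Square pairs: (1,1), (2,4), (3,9)
```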

Now that we have our little script ready, we can run this on a regular basis – say, at 2am each day – by adding it to an inputs.conf file:

script = Get-NetIPAddress | Where PrefixOrigin -ne "WellKnown" | Select IPAddress,AddressFamily, @{n='MacAddress';e={(Get-NetAdapter -InterfaceIndex $_.ifIndex).MacAddress}}, @{n='Manufacturer';e={(Get-WmiObject -query 'SELECT * FROM Win32_ComputerSystem').Manufacturer}}
schedule = 0 0 2 * ? *
sourcetype = PowerShell:NetAdapter

Yes – that script line needs to be typed all on the same line. You will get four fields in each event – an IP address, address family (IPv4 or IPv6), a MAC address and a manufacturer. Now you can create a lookup within Splunk for easy correlations:

sourcetype=PowerShell:NetAdapter | stats values(MacAddress) as MacAddress, values(Manufacturer) as Manufacturer by host,IPAddress | outputlookup HostIPInformation

Turn this search into a saved search and run it every 24 hours to get the right information. Finally, we need to use this information. Let’s say you have a search that outputs an IP address and you want to know if it’s on a hypervisor, how about something like this:

`mysearch` | lookup HostIPInformation IPAddress as src_ip OUTPUT Manufacturer,MacAddress | eval IsHypervisor=case(like(Manufacturer,"%VMware%"),"true",like(Manufacturer,"%Xen%"),"true",like(Manufacturer,"%Hyper-V%"),"true",true(),"false")

You can use this information to correlate the applications running on the guest OS to the hypervisor it is in by using the Splunk App for VMware or the Splunk App for Server Virtualization.

Splunk Universal Forwarders and the Domain User

One of the things that you have to decide right up front on Windows is how to run the Universal Forwarder. For most situations, running as the Local System account is adequate, providing access to all necessary resources. Other times, you need to run as a domain user; either because of local security policies or because what you are monitoring requires a domain account. For example, SharePoint, SQL Server and remote WMI access all require a domain account. I’ve blogged about how to do the necessary security changes using GPO before, but GPO has some drawbacks. The most notable one is that you cannot have different group policies managing the user rights because the last group policy will overwrite the earlier ones.

As a result, many organizations decide to leave the user rights assignment to the local security policy, which means you now have to go through all of your Windows hosts that require a domain account to run Splunk and update the local security policy. What we all need is a scripted method of doing all the changes necessary to install the Splunk Universal Forwarder so we can install to hundreds of hosts using a remoting method like PowerShell.

Fortunately, Microsoft likes large enterprises and has provided tools to allow us to do this. We first need to create a single system with the right local security policy. Just log on to your favorite test machine and do the changes to the local security policy. Then open up a PowerShell prompt as the Administrator and run the following command:

secedit /export /cfg splunk-lsp.inf /areas USER_RIGHTS

Secedit is a useful command that exports and imports the security configuration. This command will create a small text file for us to edit. Before we edit the exported file, we need to know the Security Identifier (or SID) of the user that will run Splunk, normally specified as DOMAIN\user – in my case, it’s BD\sp-domain. I can find the SID by using this PowerShell snippet:

$user = New-Object System.Security.Principal.NTAccount("BD\sp-domain")
$user.Translate([System.Security.Principal.SecurityIdentifier]).Value

This will produce a string starting with S- and with a whole lot of numbers after it. We will need this number to recognize our user in the inf file we created in the first step. Our next step is to edit the splunk-lsp.inf file so that it only includes the local security rights we are interested in. Here is my resulting file:

[Unicode]
Unicode=yes

[Privilege Rights]
SeTcbPrivilege = *S-1-5-21-2882450500-3417635276-1240590811-1206
SeChangeNotifyPrivilege = *S-1-1-0,*S-1-5-19,*S-1-5-20,*S-1-5-21-2882450500-3417635276-1240590811-1206,*S-1-5-32-544,*S-1-5-32-545,*S-1-5-32-551,*S-1-5-90-0
SeBatchLogonRight = *S-1-5-21-2882450500-3417635276-1240590811-1206,*S-1-5-32-544,*S-1-5-32-551,*S-1-5-32-559
SeServiceLogonRight = *S-1-5-21-2882450500-3417635276-1240590811-1206,*S-1-5-80-0
SeSystemProfilePrivilege = *S-1-5-32-544,*S-1-5-80-3139157870-2983391045-3678747466-658725712-1809340420
SeAssignPrimaryTokenPrivilege = *S-1-5-19,*S-1-5-20,*S-1-5-21-2882450500-3417635276-1240590811-1206

You will note that this file has six privileges, not five as per the Splunk installation manual. That’s because there is not a one-to-one relationship from the displayed privileges in the Local System Policy to the security policy underlying those privileges. You can read all about the other policy decisions in the file C:\Windows\inf\defltsv.inf.

Now that you have the security policy file, you have one more task before bulk installation. You have to add the designated user to your local administrators group. This can be done through a GPO but you can do this with the following PowerShell ADSI command:

([ADSI]"WinNT://${env:COMPUTERNAME}/Administrators,group").Add("WinNT://BD/sp-domain")
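One caveat: the Add() call throws if the account is already a member of the group. A hedged variant (same ADSI interface as above, with a membership check added; the group-member name extraction idiom is the standard COM reflection trick) avoids re-adding:

```powershell
# Sketch: only add the account if it is not already in local Administrators
$group = [ADSI]"WinNT://${env:COMPUTERNAME}/Administrators,group"
$members = @($group.Invoke('Members')) | ForEach-Object {
    # Each member is a COM object; pull its Name property via reflection
    $_.GetType().InvokeMember('Name', 'GetProperty', $null, $_, $null)
}
if ($members -notcontains 'sp-domain') {
    $group.Add("WinNT://BD/sp-domain")
}
```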

Now you can create an installer script for your Splunk Universal Forwarder. Most organizations have a software repository that is mounted automatically. I mount mine at S:\ and the Splunk stuff is in the S:\Splunk area. My installer script is called "installad.ps1", and here it is:

secedit /import /cfg S:\Splunk\splunk-lsp.inf /db C:\splunk-lsp.sdb
secedit /configure /db C:\splunk-lsp.sdb
Remove-Item C:\splunk-lsp.sdb
([ADSI]"WinNT://${env:COMPUTERNAME}/Administrators,group").Add("WinNT://BD/sp-domain")
msiexec.exe /i splunkforwarder.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="sp-deploy:8089" LOGON_USERNAME="BD\sp-domain" LOGON_PASSWORD="changeme" INSTALL_SHORTCUT=0 /quiet

With a little planning and preparation, you can deploy the Splunk Universal Forwarder across your domain in a very automated fashion.
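As a sketch of that automated rollout (the hosts.txt file is an assumption, and this is a starting point rather than a finished deployment tool), PowerShell remoting can push the installer script to many hosts at once:

```powershell
# Hypothetical bulk rollout via PowerShell remoting; assumes WinRM is enabled on
# the targets and hosts.txt contains one computer name per line. Note that mapped
# drives like S:\ are usually not visible inside a remote session, so the installer
# script may need to reference the repository by its UNC path instead.
$targets = Get-Content .\hosts.txt
Invoke-Command -ComputerName $targets -FilePath S:\Splunk\installad.ps1 -Credential (Get-Credential)
```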

PowerShell version 2

By now, you are probably aware that I love PowerShell as a method of getting data on Windows. It’s your one-stop method for getting all sorts of nice things. However, our SA-ModularInput-PowerShell module had certain limitations. Most notably, it could only work with .NET 4.5 and CLR4 (aka PowerShell v3). This was fine for your one-off scripts where you weren’t adding in any plug-ins. However, Microsoft applications such as SharePoint 2010 and Exchange 2007 require PowerShell v2 support because their plug-ins are distributed for .NET Framework 3.5.

I’m happy to announce that one of our PowerShell MVPs – Joel Bennett – has updated the Splunk Addon for Microsoft PowerShell to support .NET 3.5 and CLR 2.

There are a couple of common gotchas. The first is in handling PowerShell snap-ins using the Add-PsSnapIn cmdlet. If the cmdlet is run twice in a row, then an error occurs. The problem is that our resident PowerShell host continually runs. The major performance increase obtained between the SA-ModularInput-PowerShell and a standard scripted input is that you aren’t spinning up a PowerShell executable every time – it’s always running. That also means that any snapin that you load is perpetually in memory.

You can ignore errors by utilizing the -ErrorAction parameter, like this:

Add-PsSnapIn Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
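An alternative (a sketch, not from the original post) is to test whether the snap-in is already loaded before adding it; this keeps genuine load failures visible instead of silencing all errors:

```powershell
# Only load the snap-in if it is not already in the session.
# Get-PSSnapin with -ErrorAction SilentlyContinue returns $null when absent,
# while any real failure in Add-PSSnapin still surfaces as an error.
if (-not (Get-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue)) {
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}
```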

The second problem that is common is that you actually need to use Select-Object at the end to ensure that the modular input knows what to log and what not to log. There are a lot of properties and methods on a typical PowerShell object and most of them are ignored. For example, check out this simple usage:

[powershell2://test-service-health]
script = Get-Service | Select-Object Name, DisplayName, Status
schedule = 0 0/5 * ? * *

In this input, we are grabbing the services that are running on the local machine. However, if we don’t select the properties we want, the call will fail because one of the properties is a ServiceHandle, which is not available and we get an error instead. Rule of Thumb is to always end your pipeline with a Select-Object to get the things you are interested in.

My final advice is on errors. We now have two PowerShell hosts, each with different requirements. When installed on a standard Windows Server 2008R2 host with no updates, only PowerShell2.exe will be running because the .NET Framework 4.5 is not available. You will see errors in the splunkd.log pertaining to the inability to start the PowerShell.exe. In a similar manner, when installed on a standard Windows Server 2012 host, only PowerShell.exe will be running because the .NET Framework 3.5 is not available. All our logs are available in the _internal index, so you can do a search for “index=_internal powershell” to find all the problems with PowerShell scripts.

Finally, check out my other posts on using PowerShell!

Monitoring Scheduled Tasks with PowerShell

I did the unthinkable yesterday. I combed through my posts for non-spam comments. I apologize to everyone whom I didn’t answer – we get a lot of comment spam that I have to wade through when I do this. However, there were a couple of requests in there for future topics and I’ll try and cover those requests in the next few weeks.

The first request was for monitoring scheduled tasks. I’m going to read this as “given a Windows host, how do you determine what scheduled tasks are enabled and whether they are failing or succeeding?”. That’s a tall order, so I looked to my favorite tool – PowerShell – for the answer.

PowerShell v3 has a bunch of cmdlets that manage scheduled tasks. The first – Get-ScheduledTask – gets a list of scheduled tasks along with some information about them. Looking at the Get-Member results, we see the following:

PS> Get-ScheduledTask | Get-Member

   TypeName: Microsoft.Management.Infrastructure.CimInstance#Root/Microsoft/Windows/TaskScheduler/MSFT_ScheduledTask

Name                      MemberType     Definition
----                      ----------     ----------
Clone                     Method         System.Object ICloneable.Clone()
Dispose                   Method         void Dispose(), void IDisposable.Dispose()
Equals                    Method         bool Equals(System.Object obj)
GetCimSessionComputerName Method         string GetCimSessionComputerName()
GetCimSessionInstanceId   Method         guid GetCimSessionInstanceId()
GetHashCode               Method         int GetHashCode()
GetObjectData             Method         void GetObjectData(System.Runtime.Serialization.SerializationInfo info, Sys...
GetType                   Method         type GetType()
ToString                  Method         string ToString()
Actions                   Property       CimInstance#InstanceArray Actions {get;set;}
Author                    Property       string Author {get;set;}
Date                      Property       string Date {get;set;}
Description               Property       string Description {get;set;}
Documentation             Property       string Documentation {get;set;}
Principal                 Property       CimInstance#Instance Principal {get;set;}
PSComputerName            Property       string PSComputerName {get;}
SecurityDescriptor        Property       string SecurityDescriptor {get;set;}
Settings                  Property       CimInstance#Instance Settings {get;set;}
Source                    Property       string Source {get;set;}
TaskName                  Property       string TaskName {get;}
TaskPath                  Property       string TaskPath {get;}
Triggers                  Property       CimInstance#InstanceArray Triggers {get;set;}
URI                       Property       string URI {get;}
Version                   Property       string Version {get;set;}
State                     ScriptProperty System.Object State {get=[Microsoft.PowerShell.Cmdletization.GeneratedTypes...

You can see from this that it's just getting the information from WMI (CIM is the new WMI in PowerShell v3 and above). Thus, we can easily get a list of the scheduled tasks using the following script:

Get-ScheduledTask | Where State -ne "Disabled" | Select TaskName,TaskPath,Source,Description,Author,State,URI,Version

That gets us the first part of the problem. Now we need the second part - how do we know when they ran and the status of the last run. There is another cmdlet for this: Get-ScheduledTaskInfo. We can run this by using the following script:

Get-ScheduledTask | Where State -ne "Disabled" | Get-ScheduledTaskInfo | Select TaskName,TaskPath,LastRunTime,LastTaskResult,NextRunTime,NumberOfMissedRuns

To actually implement a monitor for scheduled tasks, I would schedule these differently. My inputs.conf (using the handy SA-ModularInput-PowerShell) would look like this:

[powershell://scheduled-tasks]
script = Get-ScheduledTask | Where State -ne "Disabled" | Select TaskName,TaskPath,Source,Description,Author,State,URI,Version
schedule = 0 30 2 ? * *
source = PowerShell
sourcetype = Windows:ScheduledTask

[powershell://scheduled-taskinfo]
script = Get-ScheduledTask | Where State -ne "Disabled" | Get-ScheduledTaskInfo | Select TaskName,TaskPath,LastRunTime,LastTaskResult,NextRunTime,NumberOfMissedRuns
schedule = 0 45 * ? * *
source = PowerShell
sourcetype = Windows:ScheduledTaskInfo

The first input stanza runs at 2:30am local time and the second input stanza runs every 60 minutes. Our list of scheduled tasks won’t change very much, so let’s create a lookup to enhance our work. This will turn a host, TaskName and TaskPath into the associated information. The search to run is this:

sourcetype=Windows:ScheduledTask |
    stats latest(Source) as Source,
        latest(Description) as Description,
        latest(Author) as Author,
        latest(State) as State,
        latest(URI) as URI,
        latest(Version) as Version
        by TaskName,TaskPath,host |
    outputlookup WindowsScheduledTask.csv

As normal, enter this all on one line. Turn this into a lookup (either through the manager or via the configuration files) and you are ready to go.
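If you go the configuration-file route, the lookup definition might look like this in transforms.conf (the stanza name matches the lookup name used in the searches below; placing WindowsScheduledTask.csv in your app's lookups directory is assumed):

```
[WindowsScheduledTask]
filename = WindowsScheduledTask.csv
```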

There are three things we can do with the scheduled task information. Each will require its own search.

  1. Show Failed Tasks
  2. Show Missed Tasks
  3. Show Last Run Time of all Tasks

The two interesting ones are the failed tasks and missed tasks. Failed tasks can be found by looking at the LastTaskResult. The LastTaskResult is 0 on success and an error code otherwise. Run this search over the last 60 minutes:

sourcetype=Windows:ScheduledTaskInfo LastTaskResult!=0 |
    lookup WindowsScheduledTask host,TaskName,TaskPath OUTPUT Source,Description,Author,URI,Version |
    table host,TaskName,TaskPath,Description,Author,URI,LastRunTime,NextRunTime

The missed tasks search uses the NumberOfMissedRuns instead:

sourcetype=Windows:ScheduledTaskInfo NumberOfMissedRuns!=0 |
    lookup WindowsScheduledTask host,TaskName,TaskPath OUTPUT Source,Description,Author,URI,Version |
    table host,TaskName,TaskPath,Description,Author,URI,NumberOfMissedRuns,LastRunTime,NextRunTime

I mentioned earlier that the Get-ScheduledTask series of cmdlets use the CIM/WMI underneath. However, they apparently only work on NT 6.2 and above; also known as Windows Server 2012 or Windows 8. Unfortunately, this is one area of Microsoft land that changes frequently. For earlier versions, there is a WMI interface (Win32_ScheduledJob) that can be used, but it provides different information. Also, there is a log file that is maintained by the scheduler (C:\Windows\Tasks\SchedLgU.txt). However, the log file has an issue – it is exactly 32Kb in size and the system locks it and overwrites the contents constantly. Once it gets to the end, it starts at the beginning of the file again. This is good for diagnosis, but not good for monitoring purposes. Hopefully Microsoft will maintain the PowerShell cmdlets “as is” for future versions of Windows!
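For those earlier versions, a hedged equivalent would be the query below. Note the important caveat that Win32_ScheduledJob only covers jobs created through At.exe or the Win32 scheduling API, not tasks created in the Task Scheduler UI, so it is a partial substitute at best:

```powershell
# Fallback for pre-Windows 8 / pre-Server 2012 hosts; only sees AT-style jobs
Get-WmiObject -Class Win32_ScheduledJob |
    Select-Object JobId, Command, Owner, DaysOfWeek, DaysOfMonth, StartTime, Status
```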

Export Search Results with PowerShell

A while back, I wrote an introduction to how you could play with our C# SDK from PowerShell. And just the other day, Adrian wrote a post talking about how you could export really large result sets to CSV, using the REST API. It was a good read, but there was one problem: this was a somewhat Windows-centric post (talking about SharePoint data in his case), but he used curl to get the data out! We can most certainly do better than that for our Windows community, so that’s what I’m here to help solve.

What I ended up doing was to take an example from our dev docs about the search/jobs/export REST endpoint that looks like this:

$ curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export
       --data-urlencode search="search index=_internal | stats count by sourcetype"
       -d output_mode=json -d earliest="rt-5m" -d latest="rt"

…and I turned it into PowerShell. The code is posted on gist.github.com with some comments and syntax highlighting and all that good stuff.  I also took the liberty of fleshing it out a little bit into something you might put into a script, rather than a one-liner, but PowerShell can certainly do concise as well. In fact, there is a built-in alias called ‘curl’ that maps to the PowerShell cmdlet Invoke-WebRequest. However, in this case, there’s another cmdlet that’s even more well-suited called Invoke-RestMethod. This cmdlet (built-in alias ‘irm’) does what ‘iwr’ does, but on top, it adds an output parsing layer that will turn JSON or XML text into PowerShell objects. So, it’s got that going for it.

Also note that you do not need to worry about url-encoding your search queries, the cmdlet does that for you. And one more note: PowerShell and .NET are very picky when it comes to self-signed SSL certificates. There’s not even a flag to override this behavior, which has long been a pet peeve of mine. I’ve included a quick workaround for that in my code sample, in case you are in the group like me that thinks that self-signed certs are perfectly reasonable in many internal use cases.

And…one more important note that I just discovered: don’t do a realtime search this way. Because of how both Invoke-RestMethod and Invoke-WebRequest work, they are not going to output anything until the request completes, and of course a realtime search won’t quit until you complete/cancel the search. There’s definitely a way to tackle streaming like this using PowerShell (probably with the system.net.webclient class), but that will have to be another blog post!
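One possible starting point for that (an untested sketch using the .NET System.Net.WebRequest class; server name and credentials are placeholders, and the self-signed certificate workaround mentioned above still applies) is to read the response stream line by line as results arrive:

```powershell
# Sketch: stream an export search line-by-line instead of buffering the whole response
$query = 'search index=_internal | stats count by sourcetype'
$body  = 'search=' + [Uri]::EscapeDataString($query) + '&output_mode=json&earliest=rt-5m&latest=rt'

$req = [System.Net.WebRequest]::Create('https://server:8089/services/search/jobs/export')
$req.Method = 'POST'
$req.ContentType = 'application/x-www-form-urlencoded'
$req.Credentials = New-Object System.Net.NetworkCredential('admin', 'changeme')

$bytes = [System.Text.Encoding]::UTF8.GetBytes($body)
$req.ContentLength = $bytes.Length
$reqStream = $req.GetRequestStream()
$reqStream.Write($bytes, 0, $bytes.Length)
$reqStream.Close()

# Each line of the export response is one JSON result object; emit as they arrive
$reader = New-Object System.IO.StreamReader($req.GetResponse().GetResponseStream())
while (($line = $reader.ReadLine()) -ne $null) {
    $line | ConvertFrom-Json
}
```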

Anyway, check out the gist link for the longer form. And here below is a one-liner version, along with some sample output:

PS C:\> irm -Method Post -Uri https://server:8089/services/search/jobs/export -Body @{
  search="search index=_internal | stats count by sourcetype"
  output_mode="json"
  earliest="-5m" } -Credential (Get-Credential)
 
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
{"preview":true,"offset":0,"result":{"sourcetype":"splunk_web_access","count":"9624"}}
{"preview":true,"offset":1,"result":{"sourcetype":"splunk_web_service","count":"152"}}
{"preview":true,"offset":2,"result":{"sourcetype":"splunkd","count":"88494"}}
{"preview":true,"offset":3,"result":{"sourcetype":"splunkd_access","count":"15277"}}

Monitoring Windows Shares with Splunk and PowerShell

I sometimes get emails after blog posts. One of the (fair) criticisms is that I sometimes do something in PowerShell that can be quite legitimately done via another data input like WMI. While this is true for simple cases, it’s not always true. Take the request, for example, of monitoring network shares. There are three parts to this. Firstly, producing a monitor of the share itself; secondly, producing a monitor of the permissions on the share; and finally, monitoring the file accesses utilizing that share. I’ve already blogged about the last one. Let’s take a look at the first two.

You can actually monitor the share itself using WMI. Network Shares are exposed via a WMI class Win32_Share. However, I wanted to go a little further – I wanted to show that we can expose the shares in a way that allows us to monitor changes to the shares. As is most often the case, I’m going to use the SA-ModularInput-PowerShell data input for this purpose. This modular input has a key feature we are going to use – the ability to save state between script executions.

Let’s take a look at the code for getting the data first. It’s relatively simple:

Gwmi Win32_Share | Where Type -eq 0 | Select Name,Path,Status,MaximumAllowed,AllowMaximum

The Share Type 0 is a “standard Windows share” and not something else (like an admin connection or a printer share). If we were doing a WMI connection, then we could construct this to allow us to output this with a WMI query and it would be logged every X minutes. However, we want to go further – we want to show off changes as well. To do this, we utilize the LocalStorage module that is distributed with the SA-ModularInput-PowerShell addon. The basics are simple. First, we set up a LocalStorage hash to use:

$State = Import-LocalStorage 'Win32_Share.xml' -DefaultValue (New-Object PSObject -Property @{ S = @{} })

Note the default value – we are setting up a hash that will be persistently stored on each server and will be used to store the current settings. At the end of our script, we want to ensure that we store any updates we made:

$State | Export-LocalStorage 'Win32_Share.xml'

In between these statements we can handle all the stuff we need. Here is the complete script:

$State = Import-LocalStorage "Win32_Share.xml" -DefaultValue (New-Object PSObject -Property @{ S = @{} })

$shares = (Get-WmiObject -Class Win32_Share | Where-Object Type -eq 0 | Select-Object Name,Path,Status,MaximumAllowed,AllowMaximum)
foreach ($share in $shares) {
    $Emit = $false

    if (-not $State.S.ContainsKey($share.Name)) {
        $Emit = $true
    } else {
        $cache = $State.S.Get_Item($share.Name)
        if (($cache.Path -ne $share.Path) -or 
            ($cache.Status -ne $share.Status) -or
            ($cache.MaximumAllowed -ne $share.MaximumAllowed) -or
            ($cache.AllowMaximum -ne $share.AllowMaximum)) {
            $Emit = $true
        }
    }

    if ($Emit -eq $true) {
        Write-Output $share
        $State.S.Set_Item($share.Name, $share)
    }
}

$State | Export-LocalStorage "Win32_Share.xml"

What we are basically doing here is saying “if the share does not exist in our cache, or anything has changed about the share compared to the cache, then output the share to the pipeline and store the new share in the cache.” To run this, you will need to add a stanza to inputs.conf. I’ve added this script to my script repository in TA-windows-local/bin, so here is my stanza from that same app:

[powershell://Win32_Share]
script = . "$SplunkHome\etc\apps\TA-windows-local\bin\win32_share.ps1"
schedule = 0 0/5 * ? * *
index = win
sourcetype = Windows:Win32_Share

The output looks like this:

Name=Drivers
Path=C:\Drivers
Status=OK
AllowMaximum=True

There are a couple of improvements we could do to this script. Firstly, adding a “last emitted time” to the event and storing that in the cache would allow us to add a condition that states “if the share has not been emitted in the last 24 hours, then emit the event”. This allows us to restrict the search parameters we use when utilizing this data source to the last 24 hours. Secondly, we can do a second pass – over the cache instead of the shares – and see if any cache entries are not in the share list. This allows us to detect share deletions as well.
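As a sketch of both improvements (the 24-hour window, the LastEmitted field, and the Deleted marker are my own illustrative choices, not part of the shipped script):

```powershell
# Illustrative sketch only: periodic re-emission plus deletion detection.
# Assumes the same $State and $shares variables as the script above.
$now = Get-Date

foreach ($share in $shares) {
    $cache = $State.S.Get_Item($share.Name)
    # Emit if the share is new, changed, or last emitted over 24 hours ago
    if (($cache -eq $null) -or ($cache.Path -ne $share.Path) -or
        (($now - $cache.LastEmitted).TotalHours -ge 24)) {
        $share | Add-Member -NotePropertyName LastEmitted -NotePropertyValue $now -Force
        Write-Output $share
        $State.S.Set_Item($share.Name, $share)
    }
}

# Second pass: anything in the cache that is no longer a live share was deleted
foreach ($name in @($State.S.Keys)) {
    if (-not ($shares | Where-Object { $_.Name -eq $name })) {
        New-Object PSObject -Property @{ Name = $name; Type = "Deleted" }
        $State.S.Remove($name)
    }
}
```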

Next week, I will cover the second part of this problem – getting the permissions for each share. Until then, keep those ideas for Windows data inputs coming!


Monitoring Windows File Share Permissions with Splunk and PowerShell


I stopped my last blog post on Windows File Shares noting that there was still more to do. Monitoring Windows File Shares is a three part puzzle:

  1. Accesses
  2. Share Changes
  3. Permission Changes

We have already handled the first two, so this blog post is all about the final one – monitoring permission changes.

Let’s first consider how one would do this generically. As with the file shares, there is a WMI class for monitoring permissions, but it’s harder to use. You need to do it on a per-share basis, like this:

gwmi Win32_LogicalShareSecuritySetting -Filter "Name='$shareName'"

The Win32_LogicalShareSecuritySetting is a complex beast. Fortunately, we only need to know a couple of things. The most important one is the security descriptor. You can get the security descriptor like this:

$ss = gwmi Win32_LogicalShareSecuritySetting -Filter "Name='$shareName'"
$sd = $ss.InvokeMethod('GetSecurityDescriptor',$null,$null)

Once you have the security descriptor, the ACLs are in a property called DACL (which is actually an array – one for each entry in the ACL), and the user or group is embedded in another property inside the DACL called Trustee. If you need more information on this object, I suggest reading the excellent blog post by Andrew Buford.
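Putting those pieces together, a minimal enumeration of one share’s ACL entries might look like this (a sketch – property names follow the WMI class documentation, and $shareName is assumed to be set already):

```powershell
# Hedged sketch: list the trustees in a share's DACL.
$ss = Get-WmiObject Win32_LogicalShareSecuritySetting -Filter "Name='$shareName'"
$sd = $ss.InvokeMethod('GetSecurityDescriptor', $null, $null)

foreach ($ace in $sd.DACL) {
    New-Object PSObject -Property @{
        Share      = $shareName
        Trustee    = $ace.Trustee.Name
        Domain     = $ace.Trustee.Domain
        SID        = $ace.Trustee.SIDString
        AccessMask = $ace.AccessMask
        AceType    = $ace.AceType   # 0 = Access Allowed, 1 = Access Denied
    }
}
```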

To aid me in this, I created a short script. You can download it from github. It contains two cmdlets that are fairly central to this process – Get-NetShare encapsulates the WMI call for obtaining the list of network shares. I use this to feed into the Get-NetShareSecurity cmdlet, which produces the permission objects. Now I can do the following:

Get-NetShare | Get-NetShareSecurity

There is more going on within the script though, as it is meant to be run as part of the SA-ModularInput-PowerShell addon. Specifically, it encapsulates the logic from last week for emitting the shares only when they change. I’ve made a few changes – I’ve added a checksum field so that I only have to store and check the checksum. I’ve also added a type field – is it a new share, an updated share, or just a periodic emission? Finally, I’ve handled deletions as well by checking the cache against the current list of shares.

I do pretty much the same thing for the permissions. In the file share example, the share name is the primary key. In the permissions example, we have to construct a primary key – I’ve used the share name and the Security ID (SID) of the user or group as the primary key. Other than that, it’s exactly the same code.

One final note – since this script is outputting two different types of data, I leverage a feature of the SA-ModularInput-PowerShell that allows me to set the source type within the object. The property for this is called SplunkSourceType. You can use Add-Member to add this to the objects you are emitting.
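For example, tagging an emitted object with its own sourcetype is a one-liner per object (the sourcetype values here are just illustrative):

```powershell
# Hedged example: SplunkSourceType is read by SA-ModularInput-PowerShell
# and sets the sourcetype for that individual object.
$share | Add-Member -MemberType NoteProperty -Name SplunkSourceType `
    -Value "Windows:Win32_Share" -PassThru

$permission | Add-Member -MemberType NoteProperty -Name SplunkSourceType `
    -Value "Windows:Win32_SharePermission" -PassThru
```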

If you are going to .conf 2013 next week, feel free to stop by the Microsoft booth on the third floor in the Apps Showcase and chat with me about Microsoft, PowerShell and getting data into Splunk.

Splunking Windows PowerShell Commands


This year’s user conference was another great one, and we got a ton of questions from you during the event. Some of them I couldn’t answer at the time – I’m making up for that in between blog posts about new features. The first one was “Is there any way I can splunk what PowerShell commands are being executed on a server?”

There are two pieces to this puzzle: first, turning on an audit log that records all the PowerShell commands executed on the system; second, getting that log into Splunk. The first piece is normally done through group policy. Open up the group policy management console and navigate to:

Computer Configuration\Administrative Templates\Windows Components\Windows PowerShell

In this group policy container there is a setting called “Turn On Module Logging”. It’s either enabled or disabled – enable it to turn on logging. You also need to set the list of modules that are logged. Wildcards are allowed, so feel free to set this to *. Apply your group policy change to the list of servers that you want to log and wait for the change to propagate (or run GPUPDATE /FORCE on the target systems).
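If a target machine is not domain-joined, the same policy can be set directly in the registry. Here is a sketch based on the policy’s standard registry locations – run it elevated, and verify the paths against your Windows version:

```powershell
# Equivalent of "Turn On Module Logging" for a standalone host.
$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging'
New-Item -Path $base -Force | Out-Null
Set-ItemProperty -Path $base -Name EnableModuleLogging -Value 1 -Type DWord

# The module list lives in a subkey; a value named "*" with data "*" logs everything.
New-Item -Path "$base\ModuleNames" -Force | Out-Null
Set-ItemProperty -Path "$base\ModuleNames" -Name '*' -Value '*'
```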

Now that you have module logging turned on, the PowerShell commands appear in a Windows Event Log called “Microsoft-Windows-PowerShell/Operational” – you will most certainly want to install a Splunk 6 Universal Forwarder on each server that you are targeting to read this event log. You can do this by utilizing the following inputs.conf stanza:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = false

Push that out to your target servers and you will start getting events like the following back:

10/23/2013 10:20:43 AM
LogName=Microsoft-Windows-PowerShell/Operational
SourceName=Microsoft-Windows-PowerShell
EventCode=4103
EventType=4
Type=Information
ComputerName=EX-BES10.bd.splunk.com
User=a-ahall
Sid=S-1-5-21-2882450500-3417635276-1240590811-1179
SidType=1
TaskCategory=Executing Pipeline
OpCode=To be used when operation is just executing a method
RecordNumber=133
Keywords=None
Message=ParameterBinding(Get-Service): name="Name"; value="SplunkForwarder"


Context:
        Severity = Informational
        Host Name = ConsoleHost
        Host Version = 3.0
        Host ID = e6323c96-aa4d-48c3-87a1-b97e01c63afa
        Engine Version = 3.0
        Runspace ID = b2be7033-a9e5-43c1-b356-fedb9ccd34cf
        Pipeline ID = 20
        Command Name = Get-Service
        Command Type = Cmdlet
        Script Name = 
        Command Path = 
        Sequence Number = 42
        User = BD\a-ahall
        Shell ID = Microsoft.PowerShell

From this, you can see all the information that you need to determine what was run, who ran it, what machine it was run from and when it was run. You will need to do the normal extractions to get this information – remember that this is a multi-line event, so make sure the extractions in your props.conf use multi-line mode (the (?ms) regex flags) so they can match across lines.
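As a sketch, an extraction for the command name and user from this event could look like the following in props.conf (the stanza name and field names here are illustrative – adjust them to match the sourcetype your forwarder actually assigns):

```ini
[WinEventLog:Microsoft-Windows-PowerShell/Operational]
# (?ms) lets the pattern match across the lines of the multi-line event
EXTRACT-ps_command = (?ms)Command\sName\s=\s(?<command_name>\S+).*?User\s=\s(?<ps_user>\S+)
```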

As for the cmd prompt – sorry, there is no equivalent log for that.

Logging DMVs from Microsoft SQL Server with PowerShell


Some systems are easy to monitor and diagnose – just Splunk the log file or performance counter and you are pretty much done. Others take a little more work. Take, for example, Microsoft SQL Server. Many of the best bits of management information are stored in Dynamic Management Views, or DMVs. Getting to them is not so straightforward.

In order to get those nuggets, we need to do some pre-work. Firstly, install a Splunk Universal Forwarder on the SQL Server. Then fire up the SQL Server Management Studio and add the LOCAL SYSTEM account to the sysadmin role. This will allow the local machine access to all the information you need to monitor any database within the instances that are installed. If you have multiple instances on the server then make sure you add the LOCAL SYSTEM to the sysadmin role on each instance. Finally, push or install the SA-ModularInput-PowerShell to the Splunk Universal Forwarder. This will allow you to grab the information you need.

Now that we have the pre-work out of the way, we can start concentrating on the basics. In the Splunk App for SQL Server, I have a PowerShell module that simplifies the SQL Server access. For instance, it has a command to list the instances on the server:

PS#> Import-Module .\Common.psm1
PS#> Get-SQLInstanceInformation

This will list out the instances, most notably a field called ServerInstance. You can feed this to another cmdlet to get database information:

PS#> Get-SQLInstanceInformation | Get-SQLDatabases

We want to get access to the Dynamic Management Views. These are accessed via SQL statements. To assist with this, I have another module called SQL.psm1. For instance, in the Splunk App for SQL Server, I include an indexhealth.ps1 script. This runs a DMV query to find out if any indices are suggested for any databases within the instance. Here is the basic process:

PS#> $conns = (Get-SQLInstanceInformation | `
    Where-Object { $_.ServiceState -eq "Running" } | `
    Open-SQLConnection)
PS#> $conns | Invoke-SQLQuery -SourceType "MSSQL:DMV:Indexes" -Query $query
PS#> $conns | Close-SQLConnection

As you can see, it’s a three-part process. Firstly, we open up connections to each of the running instances. Secondly, we execute our SQL query to retrieve the information. The Invoke-SQLQuery is a wrapper around Invoke-SQLCmd that also formats the objects to be Splunk-friendly. Finally, we close the connections. You can place this in a *.ps1 script and use the PowerShell modular input to execute it.

You can find both modules mentioned in the TA-SQLServer bin directory when you install Splunk App for SQL Server.

The real power here is the DMV-related SQL query. For the index health, here is the query:

SELECT
    DB_NAME(s.database_id) AS [DatabaseName],
    OBJECT_NAME(s.[object_id]) AS [ObjectName],
    i.name AS [IndexName],i.index_id,
    user_seeks + user_scans + user_lookups AS [Reads],
    user_updates AS [Writes],
    i.type_desc AS [IndexType],
    i.fill_factor AS [FillFactor]
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i ON s.[object_id] = i.[object_id]
WHERE i.index_id = s.index_id AND i.index_id != 0

You can find a wide array of DMV-related SQL queries on sites such as MSSQLTips.com to get you started.
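To illustrate, here is a second query run through the same three-step pipeline. sys.dm_os_wait_stats is a standard DMV, but the sourcetype name below is my own invention – pick whatever fits your index conventions:

```powershell
# Hedged sketch: top waits by total wait time, emitted like indexhealth.ps1 does.
$query = @"
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
"@

$conns = (Get-SQLInstanceInformation | `
    Where-Object { $_.ServiceState -eq "Running" } | `
    Open-SQLConnection)
$conns | Invoke-SQLQuery -SourceType "MSSQL:DMV:WaitStats" -Query $query
$conns | Close-SQLConnection
```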

Now, go forth and monitor that SQL Server!

Active Directory Replication and Windows Server 2012 R2


If you have upgraded your Active Directory domain to Windows Server 2012 R2 and use the Splunk App for Active Directory, you may have noticed that the replication statistics script doesn’t work the same way as on older versions of Windows. Specifically, the ad-repl-stats.ps1 script takes forever to run and consumes just about as much memory as you can give it. This is because of a change in the implementation of the System.DirectoryServices.ActiveDirectory API that Microsoft provides. In prior releases of Windows Server, the API was lazy – data was only filled in within the objects when it was requested. In Windows Server 2012 R2, those same objects fill in the data at instantiation, so when we read the replication status object, all the replicated objects are loaded immediately, causing a major performance impact.

Fortunately, we’ve got all the facilities to correct this. As part of the PowerShell v3 release, we also got our hands on some new PowerShell cmdlets for managing Active Directory. These are contained in the RSAT-ADDS Windows Feature (which you will need to install on each domain controller). I created a new script called replication-stats.ps1 with the following contents:

Import-Module ActiveDirectory -ErrorAction SilentlyContinue

Get-ADReplicationPartnerMetaData -Target $env:ComputerName -PartnerType Inbound -Partition * | %{
    $src_host = Get-ADObject -Filter * `
        -SearchBase $_.Partner.Replace("CN=NTDS Settings,","") `
        -SearchScope Base -Properties dNSHostName

    New-Object PSObject -Property @{
        LastAttemptedSync = $_.LastReplicationAttempt
        LastSuccessfulSync = $_.LastReplicationSuccess
        Result = $_.LastReplicationResult
        transport = $_.IntersiteTransportType
        naming_context = $_.Partition
        type = "ReplicationEvent"
        usn = $_.LastChangeUsn
        src_host = $src_host.dNSHostName
    }
}

The primary source of information is the Get-ADReplicationPartnerMetaData cmdlet, which provides details of the replication partnerships on the current host. We convert the partner into the source host using Get-ADObject. Now the output has exactly the same fields as the old ad-repl-stats.ps1 script. To run it, we need to schedule it using our SA-ModularInput-PowerShell add-on (which you will also need to install on each domain controller). Swap the scripted input for ad-repl-stats.ps1 for the following within inputs.conf:

[powershell://Replication-Stats]
script = & "$SplunkHome\etc\apps\ad-repl-stats\bin\replication-stats.ps1"
schedule = 30 */5 * ? * *
index = msad
source = Powershell
sourcetype = MSAD:NT6:Replication
disabled = false

Once you push out the change (including the required SA-ModularInput-PowerShell add-on) and restart the forwarder, you will get the replication data flowing within five minutes. This will enable the replication status report to work for your Windows Server 2012 R2 servers again.

This change will be built into a future version of the Splunk App for Active Directory; for those who need it now, my advice is to create a new app with just this data input in it. Disable the ad-repl-stats.ps1 scripted input in the regular TA as well. This will enable a smooth upgrade when this data input is integrated into the Splunk App for Active Directory.

Install Splunk with PowerShell (2014 Edition)


One of our avid twitter followers asked last week how to reliably install the Splunk Universal Forwarder on a Windows host with PowerShell. I’ve posted about all the intricacies involved before, but improvements in open-source tools for PowerShell have made it a whole lot easier. You can take a look at the original article, but follow along here instead. We’re going to walk through what’s involved.

Installing as a Local SYSTEM user is easy. Here is the recipe:

Invoke-Command -ComputerName S1,S2,S3 -ScriptBlock {
    New-PSDrive -Name S -Root \\SPLUNK\Files -PSProvider FileSystem
    Start-Process S:\splunkforwarder-6.1.1-207789-x64-release.msi `
        -Wait -Verbose -ArgumentList (
            "AGREETOLICENSE=`"Yes`"",
            "DEPLOYMENT_SERVER=`"SPLUNKDEPLOY:8089`"",
            "/Liwem!", "C:\splunkinstall.log" )
}

Let’s recap what you need to do to install a Splunk Universal Forwarder on a Windows host as a domain user:

  1. Add a new service account for the domain user
  2. Prepare the host to run Splunk
  3. Install the Splunk Universal Forwarder with the MSI

Step 1 can be a one-time activity if you run all your Splunk Universal Forwarders as the same user, so let’s do that first. PowerShell 3 gives us Active Directory cmdlets via the ActiveDirectory module. I like to place my service accounts in an organizational unit called “OU=Service Accounts” off the top-level domain structure. You can use the following command:

New-ADUser -Name svc_splunk -SamAccountName svc_splunk `
    -Description "Service:Splunk UF" `
    -DisplayName "Service:Splunk UF" `
    -Path "OU=Service Accounts,DC=splk,DC=com" `
    -AccountPassword (Read-Host -AsSecureString "Account Password") `
    -CannotChangePassword:$true `
    -ChangePasswordAtLogon:$false `
    -PasswordNeverExpires:$true `
    -PasswordNotRequired:$false `
    -SmartcardLogonRequired:$false `
    -Enabled:$true

This is fairly basic stuff for the modern domain admin – the important thing to note is that you are prompted for a password, and you will need it later. Aside from that, you may need to open up some firewall holes if you have the Windows Firewall enabled – ports 8089 and 9997 (or whatever your receiving port is) on the outbound side. If I need to do it, I do it everywhere via a group policy.
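If you would rather script those firewall holes per-host than push a group policy, the NetSecurity module on Windows Server 2012 and later provides New-NetFirewallRule. A sketch (the rule display names are arbitrary):

```powershell
# Outbound rules for the Splunk management (8089) and forwarding (9997) ports.
New-NetFirewallRule -DisplayName "Splunk Management" -Direction Outbound `
    -Protocol TCP -RemotePort 8089 -Action Allow
New-NetFirewallRule -DisplayName "Splunk Forwarding" -Direction Outbound `
    -Protocol TCP -RemotePort 9997 -Action Allow
```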

Now comes the complicated part. Preparing the host to run Splunk means giving the svc_splunk user a lot of privileges. Specifically, you need to add the user to the Administrators group and give the user specific OS-level rights. This used to be really complicated, but we are going to simplify it by utilizing a set of open-source utilities called Carbon (you can download Carbon at http://get-carbon.org). Download the Carbon package and install them as directed on the website. I placed mine in my WindowsPowerShell\Modules directory so that they are always available. They are that useful.

Let’s take a look at adjusting those permissions the new way. First up is adding my service account to the Administrators group. This was already fairly easy, but with Carbon it’s even easier:

Add-GroupMember -Name "Administrators" -Member DOMAIN\svc_splunk

Adjusting the local security policy used to keep me up at night. It was fraught with peril. With Carbon, this is now easy:

Grant-Privilege -Identity DOMAIN\svc_splunk -Privilege `
    SeTcbPrivilege, SeChangeNotifyPrivilege, SeBatchLogonRight, `
    SeServiceLogonRight, SeAssignPrimaryTokenPrivilege

The only gotcha here is that the privileges are case-sensitive, so be careful. Once we have done this, we have completed the host preparation. Now all we need to do is to install the binaries and install the service. I do this via the MSI installer with:

New-PSDrive -Name S -Root \\SPLUNK\Files -PSProvider FileSystem
Start-Process S:\splunkforwarder-6.1.1-207789-x64-release.msi `
    -Wait -Verbose -ArgumentList (
        "AGREETOLICENSE=`"Yes`"",
        "LOGON_USERNAME=`"DOMAIN\svc_splunk`"",
        "LOGON_PASSWORD=`"MyPasswordHere`"",
        "DEPLOYMENT_SERVER=`"SPLUNKDEPLOY:8089`"",
        "/Liwem!", "C:\splunkinstall.log" )

Note that I put the complete fully-qualified path to the MSI here – it’s important, as msiexec seems to break without it. I also set up a CNAME in DNS for the deployment server – it allows me to point it anywhere I want. This is the same command I use for the Local SYSTEM install, but with the additional parameters to specify the domain user.
Script it? Why, certainly. I just put the commands (including an “Import-Module Carbon”) into a ps1 script and put it on \\SPLUNK\Files. Now I can do this:

Invoke-Command -ComputerName S1,S2,S3 -ScriptBlock {
    New-PSDrive -Name S -Root \\SPLUNK\Files -PSProvider FileSystem
    S:\InstallUniversalForwarder.ps1 }

There is a future here. Desired State Configuration is a newer feature available in PowerShell v4. With it, we can check permissions, file locations, and service settings, and handle both install and upgrade within a single configuration. That, however, is a topic for another blog post.
