#15 What is wrong with that config file?

Introduction

Okay, something messed up the configuration file, we don’t know what it is, and the services are not starting now.

We have two options: we can review each and every line of the configuration file for possible misconfigurations (which could give you a big headache), or we can simply compare the config file to one from a working environment to find discrepancies, using the tools shown below.

Linux

If we have vim installed on our Linux machine, we can use the “vimdiff” utility to compare the two files, which clearly shows added, deleted, and modified lines.

We can use the utility simply by running the below command:

vimdiff not_working.conf working.conf

The output will look something like the screenshot below. In this example, option2 was modified and new_option4 was added.

image006

Windows

On Windows, we can use the Notepad++ plugin “Compare”, which can be used for the same thing. The plugin can be installed from Notepad++ -> Plugins -> Plugin Manager.

image007

image009

To use it, open the two files you want to compare, then go to Plugins -> Compare -> Compare.

image010

The output will look something like the screenshot below. In this example, option2 was modified and new_option4 was added.

image011

Note:

Vimdiff can be used for more than just comparing files. You can use it to compare the output of two commands. For example, you can use the below command to compare the file listings of two directories.

vimdiff <( ls /dir1 ) <( ls /dir2 )
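If you only need a quick, non-interactive report, plain diff accepts the same process-substitution trick. A minimal sketch (the file contents below are made up purely for illustration; in practice you would point diff at your real config files):

```shell
# Create two hypothetical config files to compare (illustration only)
printf 'option1=a\noption2=b\n' > working.conf
printf 'option2=b\noption1=a\nnew_option4=d\n' > not_working.conf

# Sort both sides first so differences in line order are ignored;
# diff then reports only the real additions/removals
diff <(sort not_working.conf) <(sort working.conf)
```

diff also exits with a non-zero status when the files differ, which makes it handy in scripts.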

 

#14 Analyze JSON with jq

Introduction

JSON has become a very widely used file format. It is now being used for API data exchange, log files, configuration files, and many other applications. This tip gives a quick overview of JSON and how to analyze JSON data with a tool called “jq”.

What is JSON?

JSON is a lightweight text-based open standard designed for human-readable data interchange.

JSON vs XML

JSON

{"employees":[
    { "firstName":"John", "lastName":"Doe" },
    { "firstName":"Anna", "lastName":"Smith" },
    { "firstName":"Peter", "lastName":"Jones" }
]}

XML

<employees>
    <employee>
        <firstName>John</firstName> <lastName>Doe</lastName>
    </employee>
    <employee>
        <firstName>Anna</firstName> <lastName>Smith</lastName>
    </employee>
    <employee>
        <firstName>Peter</firstName> <lastName>Jones</lastName>
    </employee>
</employees>

 

  • JSON doesn’t use end tags
  • JSON is shorter
  • JSON is quicker to read and write
  • JSON can use arrays

JSON syntax

JSON syntax is derived from JavaScript object notation syntax.

  • Objects are in {}
  • Data in objects is represented in key/value pairs (dictionary).
  • Arrays are in []
  • Data in objects and arrays is separated by ,
  • Objects can be nested (e.g. Array of objects, Array of Arrays, … etc).

Supported Data types:

  • String
  • Number
  • Object
  • Array
  • Boolean (true, false)
  • null

Example

{
  "firstName": "John",
  "lastName": "Smith",
  "isAlive": true,
  "age": 27,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "accounts": ["facebook","twitter","instagram"],
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    },
    {
      "type": "mobile",
      "number": "123 456-7890"
    }
  ],
  "children": [],
  "spouse": null
}

In the above example you can find:

  • String key/value pairs. e.g.:
  "firstName": "John",
  • Number. e.g.:
"age": 27,
  • Boolean. e.g.:
"isAlive": true,
  • Null. e.g.:
"spouse": null
  • Object. e.g.:
"address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
}
  • Array. e.g.:
"accounts": ["facebook","twitter","instagram"]
  • Array of Objects. e.g.:
"phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    },
    {
      "type": "mobile",
      "number": "123 456-7890"
    }
  ],

jq

jq is a command line JSON parser. It can be used to format and filter JSON data.

Install

MacOS

brew install jq

Debian and Ubuntu

sudo apt-get install jq

Fedora

sudo dnf install jq

Windows

choco install jq

For more details you can refer to https://stedolan.github.io/jq/download/

Format JSON .

The simplest jq program is the expression ., which takes the input and produces it unchanged as output. It can be used to nicely format JSON. For example, let’s take the below file that contains an IAM policy:

$ cat sample1.txt
{ "Version": "2012-10-17", "Statement": [ {  "Sid": "Stmt1507018975000", "Effect": "Allow", "Action": [ "ssm:PutParameter", "ssm:GetParameter" ], "Resource": [ "*" ] } ] }

This is very hard to read. However, if we pipe the output to “jq”, it will be formatted and colorized.

$ cat sample1.txt | jq '.'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1507018975000",
      "Effect": "Allow",
      "Action": [
        "ssm:PutParameter",
        "ssm:GetParameter"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

NOTE: If the output is too long and you want to use less to scroll through it while keeping the coloring, use the -C option with jq and the -R option with less.

$ cat sample1.txt | jq '.' -C | less -R

Object Identifier .foo, .foo.bar

As stated, JSON objects consist of key/value pairs (dictionaries). To get the value of a specific key in a JSON object, you can use .foo if the key is foo.

Example: To get the “Version” in the file “sample1.txt” shown above you can use the below command:

$ cat sample1.txt | jq '.Version'
"2012-10-17"

If an object is nested inside another object, you can use .foo.bar, where “foo” is the key in the outer object and “bar” is the key in the inner object.

Example: In the below AWS Cloudtrail event (sample2.txt):

{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn:aws:iam::123456789012:user/Alice",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "accountId": "123456789012",
        "userName": "Alice"
    },
    "eventTime": "2014-03-06T21:22:54Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "StartInstances",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "205.251.233.176",
    "userAgent": "ec2-api-tools 1.6.12.2",
    "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-ebeaf9e2"}]}},
    "responseElements": {"instancesSet": {"items": [{
        "instanceId": "i-ebeaf9e2",
        "currentState": {
            "code": 0,
            "name": "pending"
        },
        "previousState": {
            "code": 80,
            "name": "stopped"
        }
    }]}}
}

If we want to find the ARN of the user that invoked the event we can use “.userIdentity.arn” as shown below.

$ cat sample2.txt | jq '.userIdentity.arn'
"arn:aws:iam::123456789012:user/Alice"

Array Index .[0]

For JSON arrays, you can select a certain item in the array using [n] where n is the order of the item in the array (0 is the first item).

Example: In the below AWS IAM policy (sample3.txt)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FirstStatement",
      "Effect": "Allow",
      "Action": ["iam:ChangePassword"],
      "Resource": "*"
    },
    {
      "Sid": "SecondStatement",
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "*"
    },
    {
      "Sid": "ThirdStatement",
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::confidential-data",
        "arn:aws:s3:::confidential-data/*"
      ],
      "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
    }
  ]
}

To get all statements:

$ cat sample3.txt | jq '.Statement'

To get the first statement:

$ cat sample3.txt | jq '.Statement[0]'
{
  "Sid": "FirstStatement",
  "Effect": "Allow",
  "Action": [
    "iam:ChangePassword"
  ],
  "Resource": "*"
}

To get the second statement:

$ cat sample3.txt | jq '.Statement[1]'
{
  "Sid": "SecondStatement",
  "Effect": "Allow",
  "Action": [
    "s3:ListAllMyBuckets"
  ],
  "Resource": "*"
}

Array Slice .[0:2]

For JSON arrays, using [n:m] you can get a slice of the array, where the index of the first returned item is n and the index of the last returned item is m-1.

Example: To get the first 2 items of the above policy (sample3.txt):

$ cat sample3.txt | jq '.Statement[0:2]'
[
  {
    "Sid": "FirstStatement",
    "Effect": "Allow",
    "Action": [
      "iam:ChangePassword"
    ],
    "Resource": "*"
  },
  {
    "Sid": "SecondStatement",
    "Effect": "Allow",
    "Action": [
      "s3:ListAllMyBuckets"
    ],
    "Resource": "*"
  }
]

Array iterator .[]

For JSON arrays, using [] iterates through all the items of the array. This means you can apply further filters to each item in the array.

Example: to get the Sids of all statements in the policy in sample3.txt.

$ cat sample3.txt | jq '.Statement[].Sid'
"FirstStatement"
"SecondStatement"
"ThirdStatement"

Example: to get all Actions mentioned in all statements.

$ cat sample3.txt | jq '.Statement[].Action[]'
"iam:ChangePassword"
"s3:ListAllMyBuckets"
"s3:List*"
"s3:Get*"

Comma ,

If two filters are separated by a comma, then the same input will be fed into both and the two filters’ output value streams will be concatenated in order: first, all of the outputs produced by the left expression, and then all of the outputs produced by the right.

Example: In sample3.txt, the below command will output the Sids of all statements followed by the Version.

$ cat sample3.txt | jq '.Statement[].Sid , .Version'
"FirstStatement"
"SecondStatement"
"ThirdStatement"
"2012-10-17"

Pipe |

The | operator combines two filters by feeding the output(s) of the one on the left into the input of the one on the right.

Example: In sample3.txt, the below command will show the resources of all statements. This doesn’t seem useful yet, as it would do the same without the pipe, but the upcoming examples will make it clearer why this is useful.

$ cat sample3.txt | jq '.Statement[] | .Resource'
"*"
"*"
[
  "arn:aws:s3:::confidential-data",
  "arn:aws:s3:::confidential-data/*"
]

Array construction [ ]

To make the output an array you can place the filters between [ ].

Example:

$ cat sample3.txt | jq '[.Statement[].Sid] '
[
  "FirstStatement",
  "SecondStatement",
  "ThirdStatement"
]
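Once the output is collected into an array, other jq built-ins such as unique (a standard jq filter that sorts and deduplicates an array) can be applied to it. A small sketch using a made-up, abbreviated policy file:

```shell
# A minimal made-up policy with a duplicated action, for illustration only
cat > mini_policy.json <<'EOF'
{"Statement":[{"Action":["s3:Get*","iam:ChangePassword"]},{"Action":["iam:ChangePassword"]}]}
EOF

# Collect all actions into one array, then deduplicate (unique also sorts)
jq '[.Statement[].Action[]] | unique' mini_policy.json
```

This prints the two distinct actions, sorted: "iam:ChangePassword" and "s3:Get*".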

Object construction { }

To make the output an object, you can place the filters between { } with the appropriate keys.

Example:

$ cat sample3.txt | jq '.Statement[] | {"s":.Sid,"a":.Action} '
{
  "s": "FirstStatement",
  "a": [
    "iam:ChangePassword"
  ]
}
{
  "s": "SecondStatement",
  "a": [
    "s3:ListAllMyBuckets"
  ]
}
{
  "s": "ThirdStatement",
  "a": [
    "s3:List*",
    "s3:Get*"
  ]
}

Filter objects select(boolean_expression)

To filter multiple objects down to only the ones that meet a certain condition, you can use select(boolean_expression). To build the boolean expression, you can use the below operators/functions:

  • ==
  • !=
  • > >=
  • < <=
  • and
  • or
  • not
  • has
  • test
  • contains
  • startswith
  • endswith

Examples:

Below are some examples for querying AWS cloudtrail logs. Each log file has the below structure, and each event looks like the example in sample2.txt mentioned above. For the below examples I assume we have multiple log files unzipped and placed in the current directory.

{
  "Records": [
    { cloudtrail event },
    { cloudtrail event },
    ...
    { cloudtrail event },
    ]
}
  1. List all “DescribeInstances” events. ==
$ cat * | jq  '.Records[] | select(.eventName=="DescribeInstances")'
  2. List all RDS events. ==
$ cat * | jq  '.Records[] | select(.eventSource=="rds.amazonaws.com")'
  3. List all Describe events. startswith
$ cat * | jq  '.Records[] | select(.eventName | startswith("Describe")) | .eventName' -r
DescribeStackResource
DescribeStackResource
DescribeStackResource
DescribeStackResource
DescribeStackResource
DescribeLoadBalancerAttributes
...
  4. List all RDS Describe events. startswith and ==
$ cat * | jq  '.Records[] | select(.eventName | startswith("Describe")) |  select(.eventSource=="rds.amazonaws.com") | .eventName' -r
DescribeDBSecurityGroups
DescribeDBSnapshots
DescribeDBInstances
DescribeDBInstances
DescribeDBClusters
  5. List all events that have the word “certificate” or “Certificate”. test
$ cat * | jq  '.Records[] | select(.eventName | test("[Cc]ertificate")) | .eventName' -r
ListCertificates
ListTagsForCertificate
DescribeCertificate
DescribeCertificate
ListTagsForCertificate
ListTagsForCertificate
ListTagsForCertificate
DescribeCertificate
...
  6. List all events that have errors. has
$ cat * | jq  '.Records[] | select(has("errorCode")) | {"error":.errorCode,"event":.eventName,"time":.eventTime}'
{
  "error": "NoSuchCORSConfiguration",
  "event": "GetBucketCors",
  "time": "2018-02-28T05:23:18Z"
}
{
  "error": "NoSuchCORSConfiguration",
  "event": "GetBucketCors",
  "time": "2018-02-28T05:23:18Z"
}
...
  7. List all events that contain “Describe” or “List”. contains and or
$ cat * | jq  '.Records[] | select((.eventName | contains("Describe")) or (.eventName | contains("List")) ) | {"event":.eventName,"time":.eventTime}'
{
  "event": "DescribeStackResource",
  "time": "2018-02-28T00:00:10Z"
}
{
  "event": "DescribeStackResource",
  "time": "2018-02-28T00:02:07Z"
}
  8. List all events that do NOT contain “Describe” or “List”. contains and not
$ cat * | jq  '.Records[] | select(.eventName | contains("Describe") | not) | select(.eventName | contains("List") | not) | {"event":.eventName,"time":.eventTime}' 
{
  "event": "AssumeRole",
  "time": "2018-02-28T00:00:36Z"
}
{
  "event": "AssumeRole",
  "time": "2018-02-28T00:10:42Z"
}
...

Conditional value if A then B else C end

This can be used to show a value based on a boolean expression.

Example:

$ cat * | jq  '.Records[] | {"describe?": (if(.eventName=="DescribeInstances") then "yes" else "no" end) , "event": .eventName }'
{
  "describe?": "no",
  "event": "DescribeStackResource"
}
{
  "describe?": "no",
  "event": "DescribeStackResource"
}

Count items length

You can use length to get the length of an array.

Example: This can be used to count matched cloudtrail events as shown below. To do this, we use the -s (slurp) option to join the events from all log files into one array.

$ cat * | jq  '.Records[]' | jq -s '[ .[] | select(.eventName=="DescribeInstances") ] | length'
108
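Going one step further, length combines with jq’s group_by and map built-ins to count every event type in one pass. A sketch assuming the same Records[] layout; the sample log file below is made up for illustration:

```shell
# Made-up log file with the Records[] layout used throughout this tip
cat > sample_log.json <<'EOF'
{"Records":[{"eventName":"DescribeInstances"},{"eventName":"StartInstances"},{"eventName":"DescribeInstances"}]}
EOF

# Flatten all records into one array, group them by event name,
# and emit each group's name and size
cat sample_log.json | jq -s '[ .[] | .Records[] ] | group_by(.eventName) | map({event: .[0].eventName, count: length})'
```

With several real log files in the current directory, `cat sample_log.json` would become `cat *` exactly as in the examples above.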

Comma-separated output @csv

This can be used to represent the output as comma-separated instead of JSON. The input of @csv needs to be an array.

Example:

$ cat * | jq  '.Records[] | select(.eventName=="DescribeInstances") | [.eventTime , .eventName ] | @csv' -r
"2018-02-28T00:18:04Z","DescribeInstances"
"2018-02-28T00:17:37Z","DescribeInstances"
"2018-02-28T00:17:41Z","DescribeInstances"
...

Tab-separated output @tsv

This can be used to represent the output as tab-separated instead of JSON. The input of @tsv needs to be an array.

Example:

$ cat * | jq  '.Records[] | select(.eventName=="DescribeInstances") | [.eventTime , .eventName ] | @tsv' -r
2018-02-28T00:18:04Z    DescribeInstances
2018-02-28T00:17:37Z    DescribeInstances
2018-02-28T00:17:41Z    DescribeInstances
...

jq Manual

You can find the full jq manual on https://stedolan.github.io/jq/manual/.

#13 Monitor the logs

Introduction:

When troubleshooting an issue, it is crucial to be able to identify the relevant messages in the log files, which is not always easy due to the large number of messages, especially if the time of occurrence of the issue is not accurately known.

One of the best ways to identify these relevant messages is to monitor the log files as you reproduce the issue. This way we can focus on the log messages that appear at the exact time the issue occurs and ignore other messages. This tip shows how to do so for Linux and Windows.

tail -f (Linux):

On Linux, we can run the “tail” command with the “-f” option to show messages interactively as they are added to the log file, and the “-n0” option to ignore old messages, as shown below.

# tail -f -n0 trace.log

Afterwards, you can reproduce the issue and wait for the new messages to show. You can terminate the command at any time by typing Ctrl-C.
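If the log is noisy, the same tail can be piped through grep so that only relevant lines show while you reproduce the issue. A sketch (trace.log, the “ERROR” pattern, and the timings are made-up examples; the timeout is only there so the example ends on its own, interactively you would just press Ctrl-C):

```shell
# Seed a log file and append a matching line a moment later, for illustration
echo "old line" > trace.log
( sleep 1; echo "ERROR: something broke" >> trace.log ) &

# Follow the log, ignore old lines, and show only lines matching the pattern
timeout 5 tail -f -n0 trace.log | grep -i error || true
```

This prints only the new "ERROR: something broke" line; the pre-existing "old line" is skipped by -n0 and filtered by grep.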

NOTE: You can use the same command to monitor multiple log files. For example you can use the below command to monitor all logs modified today in the current directory:

# ls -ltrh | grep "`date +"%b %e"`" | awk '{print $NF}' | xargs tail -n0 -f 
==> admin_server.log <==

==> trace.log <==

==> access.log <==

Get-Content -Wait (Windows Powershell):

On Windows PowerShell, we can use the “Get-Content” cmdlet with the “-Wait” option to show messages interactively as they are added to the log file, and the “-Tail 0” option to ignore old messages, as shown below.

# Get-Content trace.log -Wait -Tail 0

Afterwards, you can reproduce the issue and wait for the new messages to show. You can terminate the command at any time by typing Ctrl-C.

I hope you liked this tip. Please let me know if there are any comments or questions.

Have a nice day 🙂

#12 Long-running commands over SSH

Introduction

You open an SSH session to a Linux server and run a command. It keeps running for a long time, so you leave to get some coffee until it completes, and when you get back the SSH session has timed out and you need to start a new one to re-run the command from the beginning. This has happened at least once to anyone who runs long commands over SSH.

The problem is that any command you run in your SSH session runs as a child process of the SSH session itself, and when the session is terminated, due to a timeout or any other reason, all child processes are killed with it.

Hence, when running a command that needs a long time, you should attach it to a different parent process that won’t be killed when the SSH session is terminated, so that it continues to run. Here are some ways to do so:

1. Screen

Screen is a command-line window manager: you can use it to create multiple windows and attach to or detach from them as you want. It has a lot of features you may want to explore, but the one we are interested in here is that it runs independently of the SSH session it was started from and is not killed when the session is terminated.

  • To open a screen window run the below command. This will open a new shell.
# screen
  • Use the new shell to run the command.
  • If the SSH session terminates, your command will still be running. To attach back to your screen, open another SSH session and run the below command. This will attach you back, and you can see your command still running.
# screen -x
  • You can also detach from the screen any time by typing “Ctrl-A” followed by “d”. This will keep the screen running in the background, and you can always re-attach to it by running the previous command.
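The steps above can be sketched as a single workflow using a named session (the session name “longjob” is an arbitrary example; -S, -ls, and -r are standard screen options):

```shell
screen -S longjob   # start a new session named "longjob" (opens a new shell)
# ...run the long command inside that shell...
# detach with Ctrl-A then d; the command keeps running in the background
screen -ls          # list running screen sessions
screen -r longjob   # reattach to the named session later
```

Naming the session makes it easier to find the right one when several screens are running.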

For more info about screen, you can refer to https://www.gnu.org/software/screen/manual/screen.html

2. tmux

TMUX is another window manager that works similar to “screen”.

  • To run a tmux window run the below command.  This will also open a new shell.
# tmux
  • Use the new shell to run the command.
  • If the SSH session terminates, your command will still be running. To attach back to your tmux session, open another SSH session and run the below command. This will attach you back, and you can see your command still running.
# tmux attach
  • You can also detach from your window at any time by typing “Ctrl-B” followed by “d”.
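The same flow works with a named tmux session (“longjob” again is just an example name; new -s, ls, and attach -t are standard tmux commands):

```shell
tmux new -s longjob     # start a new session named "longjob" (opens a new shell)
# ...run the long command inside that shell...
# detach with Ctrl-B then d; the command keeps running in the background
tmux ls                 # list running tmux sessions
tmux attach -t longjob  # reattach to the named session later
```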

For more info about tmux you can refer to http://man.openbsd.org/OpenBSD-current/man1/tmux.1

 

These are two ways to run a command that needs a long time in an SSH session without worrying about timeouts. However, what if you have already started the command before realizing it will need a long time, and you don’t want to stop it just to restart it in screen or tmux? Is there a way to change the command’s parent without cancelling it? The answer is yes 🙂 Here’s how:

Detach command from current shell

On bash, you have the option to move the running command to background by stopping it (using Ctrl-Z), then running the command bg.

# sleep 300
[Ctrl-Z]
[1]+  Stopped                 sleep 300
 
# bg
[1]+ sleep 300 &

Once a command is sent to the background it is seen as a job by bash, and you can get the list of jobs and their PIDs using the command jobs -l.

# jobs -l
[1]+ 26863 Running                 sleep 300 &

At this point the process is running in the background but is still attached to the current session. However, once we run the disown command, the process is detached, and if the session is closed it is reparented to the init process and keeps running.

# disown

Command output

The above procedure ensures that the command keeps running if the session is disconnected. However, if the command output is not being logged to a file, we have no way to attach back and see the output like we did with screen and tmux.

A possible workaround is to monitor the process’s “write” system calls using strace, which will show the output within the system calls.

# strace -ewrite -p PID

Example:

# strace -ewrite -p 30161
 
Process 30161 attached
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=30320, si_status=0, si_utime=0, si_stime=0} ---
write(1, "output line 1\n", 14)                   = -1 EIO (Input/output error)
write(2, "-bash: echo: write error: Input/"..., 45) = -1 EIO (Input/output error)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=30324, si_status=0, si_utime=0, si_stime=0} ---
write(1, "output line 2\n", 14)                   = -1 EIO (Input/output error)
write(2, "-bash: echo: write error: Input/"..., 45) = -1 EIO (Input/output error)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=30325, si_status=0, si_utime=0, si_stime=0} ---
write(1, "output line 3\n", 14)                   = -1 EIO (Input/output error)
write(2, "-bash: echo: write error: Input/"..., 45) = -1 EIO (Input/output error)

The PID is the same one we got from the jobs -l command shown above, and the command output can be seen inside the write calls.

I hope you liked this tip. Please let me know if there are any comments or questions.

Have a nice day 🙂

#11 All port questions answered (2/2)

Introduction

The previous tip showed how to list listening ports and connections locally and how to check TCP and UDP port connectivity remotely. In this tip, we continue with some other tools to get port-related information.

5) Get processes listening/connecting to a specific port:

We can get the PID of a process listening on or connecting to a port using netstat, or lsof (Linux only).

netstat (Linux):

Listening
# sudo netstat -tulpn | grep 7006
...
tcp        0    0 10.148.130.107:7006 :::* LISTEN      13159/java
 
Connections
# sudo netstat -tupn | grep 7006
...
tcp 0 0 192.168.2.30:48752  192.168.2.30:7006  ESTABLISHED 13534/java

netstat (Windows):

> netstat -abn
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  RpcSs
[svchost.exe]
  TCP    0.0.0.0:443            0.0.0.0:0              LISTENING
[vmware-hostd.exe]
  TCP    10.76.201.69:49764     198.252.206.25:443     ESTABLISHED
[chrome.exe]
  TCP    10.76.201.69:49922     8.34.214.54:443        ESTABLISHED
[chrome.exe]

lsof (Linux):

Listening (TCP)
# sudo lsof -iTCP:5500 -sTCP:LISTEN -P -n
rsaadmin's password:
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32713 rsaadmin  696u  IPv6 413684451      0t0  TCP 127.0.0.1:5500 (LISTEN)
java    32713 rsaadmin  699u  IPv6 413684453      0t0  TCP [fe80::250:56ff:fe01:b56]:5500 (LISTEN)
java    32713 rsaadmin  705u  IPv6 413684459      0t0  TCP 127.0.0.2:5500 (LISTEN)
java    32713 rsaadmin  706u  IPv6 413684460      0t0  TCP 192.168.2.30:5500 (LISTEN)
java    32713 rsaadmin  709u  IPv6 413684463      0t0  TCP [::1]:5500 (LISTEN)                
 
Listening (UDP)
# sudo lsof -i udp:5500 -P -n
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32713 rsaadmin  733u  IPv6 413687127      0t0  UDP 192.168.2.30:5500
java    32713 rsaadmin  734u  IPv6 413687128      0t0  UDP 127.0.0.2:5500
java    32713 rsaadmin  735u  IPv6 413687129      0t0  UDP 127.0.0.1:5500
 
 
Connections (TCP)
# sudo lsof -iTCP:7004 -sTCP:ESTABLISHED -P -n
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java     9336 rsaadmin  381u  IPv6 400873980      0t0  TCP 192.168.2.30:7004->192.168.2.30:37671 (ESTABLISHED)
java    22768 rsaadmin  688u  IPv6 413093308      0t0  TCP 192.168.2.30:37671->192.168.2.30:7004 (ESTABLISHED)

6) Get port connections for a specific process:

We can get the listening ports and connections of a specific process, given its PID, using netstat, or lsof (Linux only).

netstat (Linux):

Listening
# sudo netstat -tulpn | grep 13159
...
tcp        0    0 10.148.130.107:7006 :::* LISTEN      13159/java
 
Connections
# sudo netstat -tupn | grep 13534
...
tcp 0 0 127.0.0.1:31006     127.0.0.1:32001    ESTABLISHED 13534/java
tcp 0 0 192.168.2.30:48752  192.168.2.30:7006  ESTABLISHED 13534/java

netstat (Windows):

> netstat -abno | findstr "2112"
  TCP 127.0.0.1:5354      0.0.0.0:0        LISTENING       2112
  TCP 127.0.0.1:5354      127.0.0.1:49158  ESTABLISHED     2112
  TCP 127.0.0.1:5354      127.0.0.1:49170  ESTABLISHED     2112
  UDP 0.0.0.0:50848       *:*                              2112
  UDP 10.76.201.73:5353   *:*                              2112
  UDP 192.168.29.1:5353   *:*                              2112

lsof (Linux):

Listening (TCP)
# sudo lsof -i -a -p 32713 -P -n -iTCP -s TCP:LISTEN
rsaadmin's password:
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32713 rsaadmin    4u  IPv4 413658498      0t0  TCP 127.0.0.1:32002 (LISTEN)
java    32713 rsaadmin  370u  IPv6 413682296      0t0  TCP 192.168.2.30:7012 (LISTEN)
 
 
Listening (UDP)
# sudo lsof -i -a -p 32713 -P -n -iUDP
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32713 rsaadmin  357u  IPv6 413686266      0t0  UDP *:34906
java    32713 rsaadmin  358u  IPv6 413687055      0t0  UDP 127.0.0.1:8002
 
Connections (TCP)
# sudo lsof -i -a -p 32713 -P -n -iTCP -s TCP:ESTABLISHED
COMMAND   PID     USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32713 rsaadmin  332u  IPv6 413658555      0t0  TCP 127.0.0.1:31001->127.0.0.1:32002 (ESTABLISHED)
java    32713 rsaadmin  363u  IPv6 430667841      0t0  TCP 192.168.2.30:7022->192.168.2.30:54561 (ESTABLISHED)

NOTE: You can get the process pid using “ps aux” on Linux and “tasklist /v” on Windows cmd.
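On Linux, pgrep can shortcut the “ps aux | grep” step, since pgrep -f matches against the full command line and prints the matching PIDs. A sketch (the throwaway sleep process below stands in for a real service):

```shell
# Start a throwaway background process to look up (stand-in for a real service)
sleep 300 &
pid=$!

# pgrep -f matches the full command line and prints the PID(s) found
pgrep -f "sleep 300"

# Clean up the throwaway process
kill "$pid"
```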

7) Monitor port Bandwidth:

Here are two ways to check the network bandwidth used per port.

iftop (Linux)

You can install the iftop utility from http://www.ex-parrot.com/pdw/iftop/ , then you can run the below command as root to monitor the network bandwidth per port in a way similar to the top command.

# iftop -P -n -N

iptables (Linux)

Another way is to add an iptables rule to monitor traffic on a specific port. Here’s an example of what the rule may look like.

# iptables -I INPUT 1 -p tcp --dport 5500 -j ACCEPT

This adds a rule on line 1 of the INPUT chain. Then we can monitor the packets and bytes for each match of this rule using the below command.

# iptables -nvL | grep -a2 "Chain INPUT"
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target     prot opt in     out     source               destination
 20  2928   ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:5500
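When you are done measuring, you can reset the counters or remove the accounting rule again (both need root, like the rule insertion above; -Z and -D are standard iptables flags):

```shell
# Reset the packet/byte counters on the INPUT chain
iptables -Z INPUT

# Delete the rule we inserted earlier at position 1 of the INPUT chain
iptables -D INPUT 1
```

Removing the rule afterwards keeps the INPUT chain tidy and avoids surprising whoever audits the firewall later.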

I hope you liked this tip. Please let me know if there are any port questions that I’ve missed.

Have a nice day 🙂

#10 All port questions answered (1/2)

Introduction:

Today’s tip is the first of 2 parts that aim to show different tools that can be used to gather various information about TCP/UDP port connections.

1) Check TCP/UDP ports listening locally:

To check the TCP/UDP ports listening locally, you can use the netstat command with the below options. This will show the protocol (TCP/UDP), listening address, listening port, and the PID of the listening process.

Linux

# netstat -tulpn
...
tcp        0    0 10.148.130.107:7006 :::* LISTEN      13159/java
udp        0    0 10.148.130.107:5500 :::*             13844/java 

Windows

# This won’t show the Process
> netstat -an
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  TCP    0.0.0.0:443            0.0.0.0:0              LISTENING
  UDP    0.0.0.0:123            *:*
  UDP    0.0.0.0:500            *:*
 
# Shows the Process, but needs cmd to be opened as Administrator
> netstat -abn
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  RpcSs
[svchost.exe]
  TCP    0.0.0.0:443            0.0.0.0:0              LISTENING
[vmware-hostd.exe]

2) Check open connections locally:

To check the open connections locally, you can use the netstat command with the below options. This will show the protocol (TCP/UDP), local address and port, foreign address and port, state (ESTABLISHED, TIME_WAIT, ...), and the PID of the process owning the connection.

Linux

# netstat -tupn
...
tcp 0 0 127.0.0.1:31006     127.0.0.1:32001    ESTABLISHED 13534/java
tcp 0 0 192.168.2.30:48752  192.168.2.30:7006  ESTABLISHED 13534/java
tcp 0 0 192.168.2.30:51642  192.168.2.30:7050  TIME_WAIT   -

Windows

# This won’t show the Process
> netstat -an
  TCP    10.76.201.69:49764     198.252.206.25:443     ESTABLISHED
  TCP    10.76.201.69:49922     8.34.214.54:443        ESTABLISHED
 
# Shows the Process, but needs cmd to be opened as Administrator
> netstat -abn
  TCP    10.76.201.69:49764     198.252.206.25:443     ESTABLISHED
[chrome.exe]
  TCP    10.76.201.69:49922     8.34.214.54:443        ESTABLISHED
[chrome.exe]

3) Check TCP port connectivity remotely:

There are many ways to check TCP port connectivity. Here we will show 4 of them.

Telnet

If you have telnet available you can use it to check port connectivity as shown below. This is applicable for both Windows and Linux.

# telnet host port     
 
Connection Successful example
# telnet 192.168.2.30 7004
Trying 192.168.2.30...
Connected to 192.168.2.30.
Escape character is '^]'.
 
Connection Failed example
# telnet 192.168.2.30 7005
Trying 192.168.2.30...
telnet: Unable to connect to remote host: Connection refused

NOTE: In Windows you can enable the telnet feature from Control Panel -> Programs and Features -> Turn Windows features on or off -> Windows Features -> Telnet Client.

Ncat

Another option if telnet is not available is to use Ncat, which can be downloaded from: https://nmap.org/download.html . For Linux you need the ncat rpm for your architecture, and for Windows you need the nmap zip file (e.g. nmap-7.40-win32.zip), which includes the ncat.exe executable.

# ncat -nv host port

Connection Successful example
# ncat -nv 192.168.2.30 7004
Ncat: Version 7.40 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.2.30:7004.

Connection Failed example
# ncat -nv 192.168.2.30 7005
Ncat: Version 7.40 ( https://nmap.org/ncat )
Ncat: No connection could be made because the target machine actively refused it.

OpenSSL s_client

OpenSSL has a utility that is mainly used to check SSL connections. However, we can use it to check TCP port connectivity even if SSL is not being used.

# openssl s_client -connect host:port

Connection Successful example
# openssl s_client -connect 192.168.2.30:1812
CONNECTED(00000003)

Connection Failed example
# openssl s_client -connect 192.168.2.30:1816
connect: Connection refused
connect:errno=111

/dev/tcp

On Linux, if you are unable to use telnet, openssl, or Ncat, you can use bash’s built-in /dev/tcp pseudo-device, which is available in most Linux distributions.

# cat < /dev/tcp/host/port && echo successful || echo failed

Connection Successful example
# cat < /dev/tcp/192.168.2.30/7004 && echo successful || echo failed
 
successful
 
Connection Failed example
# cat < /dev/tcp/192.168.2.30/7005 && echo successful || echo failed
 
-bash: connect: Connection refused
-bash: /dev/tcp/192.168.2.30/7005: Connection refused
failed
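
One caveat with the /dev/tcp approach is that it can hang for a long time on ports silently dropped by a firewall. Below is a small sketch that wraps the same check in the coreutils "timeout" command to avoid that (the check_tcp function name is ours; it assumes bash and timeout are available):

```shell
#!/bin/bash
# check_tcp HOST PORT -- try to open a TCP connection, giving up after
# 3 seconds instead of hanging on silently filtered ports.
check_tcp() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} successful"
    else
        echo "${host}:${port} failed"
    fi
}

check_tcp 127.0.0.1 1    # port 1 is almost always closed, so this reports failed
```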

4) Check UDP port connectivity remotely:

Ncat

Checking UDP port connectivity is trickier because UDP is connectionless. However, we can try sending data to the port and check whether it responds, using the Ncat tool shown in the section above with the extra option -u.

NOTE: This is not 100% accurate, as some protocols don’t reply to arbitrary data, so their ports may be seen as closed while they are open.

# echo "" | ncat -nv -u host port
 
Connection Successful example
# echo "" | ncat -nv -u 192.168.2.30 5500
Ncat: Version 7.40 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.2.30:5500.
Ncat: 5 bytes sent, 0 bytes received in 0.61 seconds.
 
Connection Failed example
# echo "" | ncat -nv -u 192.168.2.30 1815
Ncat: Version 7.40 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.2.30:1815.
Ncat: An existing connection was forcibly closed by the remote host.

Nmap

A more reliable way is to use the nmap tool, which checks connectivity by looking for the ICMP “port unreachable” packet that the OS replies with when it receives a UDP packet on a closed port. Nmap can be downloaded from: https://nmap.org/download.html . For Linux you need the nmap rpm for your architecture, and for Windows you need the nmap zip file (e.g. nmap-7.40-win32.zip); the nmap.exe executable is included in it.

NOTE: This may not work if a firewall in between is blocking ICMP. In this case all UDP ports will wrongly be seen as open. To be sure this is not the case you can try first a UDP port you know is not working to make sure it is seen as closed.

# nmap -sU -p port host
 
Connection Successful example
# nmap -sU -p 5500 192.168.2.30
 
Starting Nmap 5.51 ( http://nmap.org ) at 2017-02-01 07:18 EST
Nmap scan report for 192.168.2.30
Host is up (0.00036s latency).
PORT     STATE         SERVICE
5500/udp open|filtered securid
MAC Address: 00:50:56:01:0B:56 (VMware)
 
Nmap done: 1 IP address (1 host up) scanned in 0.30 seconds
 
Connection Failed example
# nmap -sU -p 5501 192.168.2.30
 
Starting Nmap 5.51 ( http://nmap.org ) at 2017-02-01 07:18 EST
Nmap scan report for 192.168.2.30
Host is up (0.00033s latency).
PORT     STATE  SERVICE
5501/udp closed unknown
MAC Address: 00:50:56:01:0B:56 (VMware)
 
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds

Packet Capture

The most reliable way, if ICMP is blocked, is to run a packet capture with tcpdump or Wireshark on the target host to check whether the UDP packets actually reach the server.
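
For example, a capture like the one composed below will show whether the UDP packets actually arrive (a sketch; port 5500 is a placeholder, tcpdump normally needs root, and the printed command must be run on the target host):

```shell
# Compose the capture command for the UDP port under test (5500 is a
# placeholder). Run the printed command as root on the target host:
# -i any captures on all interfaces, -n skips DNS lookups, and -c 20
# stops after 20 packets so the capture does not run forever.
PORT=5500
CMD="tcpdump -i any -n -c 20 udp port ${PORT}"
echo "$CMD"
```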

 

In the next tip we will continue the same subject, showing how to get some more info like “How to get processes listening/connecting to a specific port”, “How to get port connections for a specific process”, “How to know which service is being used on a specific port”, and “How to monitor per-port network utilization”. Stay tuned 🙂

#9 Performance history (Windows)

Hi All, Long time no see 🙂

I haven’t added any blog posts lately as I was busy with a new open source project I am working on. I am gonna give more details about this project in an upcoming post, but for now here’s today’s tip.

Introduction:

In the last tip we discussed how to monitor the performance metrics’ history (CPU utilization, memory utilization, network traffic, .. ) on a Linux server. This tip shows how to do the same on a Windows server.

Performance monitor:

Windows has a utility called “Performance monitor” which can be used to generate performance logs for a lot of performance metrics, e.g.:

  • CPU Utilization.
  • Memory utilization.
  • Swap utilization.
  • Paging statistics.
  • Network Statistics.
  • I/O performance.

Enable Performance data collection:

To enable data collection, we need to follow the below steps:

1-   Open the “Performance Monitor” as administrator from Start -> Administrative Tools -> Performance Monitor.

0_0_image005

2-    Click on Data Collector Sets -> User Defined, then click Action -> New -> Data Collector Set.

0_0_image006

3-   Set a name for the data collector, and click next.

0_0_image007

4-   Choose the “System performance” template and click next.

0_0_image008

5-    Next is the path of the performance logs. You can leave the default, and click Finish.

0_0_image009

6-    Select the Data Collector Set you created under Data Collector Sets -> User Defined.

0_0_image010

7-   Double-click “Performance Counter”. By default this includes the most common metrics (CPU, Memory, … ). You can use Add/Remove to add or remove metrics, and you can also change the sample interval and the log format from this window.
NOTE: It is recommended to make the sample interval 1 minute, so that performance is not impacted and the logs don’t get too big.

0_0_image011

8-    Right-click the Data Collector Set you created, click Properties, and go to the “Stop condition” tab. This tab controls how long the data collector set will run. If you want to keep it running until manually stopped, uncheck all checkboxes.

0_0_image012

9-   Right-click the Data Collector Set you created, and click “Start”.

0_0_image013

View Logs:

You can check the performance logs in the path you configured, which by default is C:\PerfLogs\Admin\<Name of Data Collector Set>.
Under this path you will find a .blg file. Double-click it and the below window will open.

0_0_image014

You can select the metrics and the time frame you want, or you can export to CSV by right clicking on the chart and choosing save data as, then choose the file type CSV.
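
If you prefer the command line, the relog.exe utility that ships with Windows can also convert the binary .blg log to CSV. A sketch (the “MyPerf” collector-set name and the .blg file name are placeholders; adjust them to match your setup):

```shell
# Compose the relog conversion command to run in a Windows command prompt.
# The path follows the default from the steps above; the collector-set and
# file names are placeholders for whatever your data collector produced.
BLG='C:\PerfLogs\Admin\MyPerf\DataCollector01.blg'
echo "relog \"${BLG}\" -f CSV -o perf.csv"
```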

#8 Performance history (Linux)

Introduction:

Sometimes, we need to check different performance metrics (CPU utilization, memory utilization, network traffic, .. ) to troubleshoot an issue. For that we have various known commands we can use to get the current value of these metrics.

However, when the issue is intermittent, or occurring outside of business hours, we may need to check the history of these metrics to check the performance before and during the issue instead of just the current value. This tip shows how we can use the sysstat utility to monitor the performance metrics on Linux.

Sysstat:

Sysstat is a package that includes a set of utilities that can be used to monitor the history of a lot of performance metrics, e.g.:

  • CPU Utilization.
  • Memory utilization.
  • Swap utilization.
  • Paging statistics.
  • Network Statistics.
  • I/O performance.

Sysstat is installed by default on many Linux distributions. If it is not, you can usually install it using apt-get or yum from the default repositories. In this tip we will show how to enable and configure sysstat on SUSE Linux 11; however, with some minor modifications this can be applied to any other Linux distribution.

Enable Sysstat:

To enable performance metrics gathering, we need to start and enable the sysstat service.

# /etc/init.d/boot.sysstat start
# chkconfig boot.sysstat on

Gathering frequency:

By default the metrics are collected every 10 minutes. To get a more detailed view it is recommended to decrease the interval to 1 minute. This can be done by modifying the “*/10” entry in the below file (which has the same syntax as a crontab).

/etc/sysstat/sysstat.cron

# Activity reports every 10 minutes everyday
*/10 * * * *    root [ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1 -S ALL 1 1
 
# Update reports every 6 hours
55 5,11,17,23 * * *     root [ -x /usr/lib64/sa/sa2 ] && exec /usr/lib64/sa/sa2 -A
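
For example, after changing the interval to 1 minute, the sa1 entry would read:

```
# Activity reports every 1 minute everyday
*/1 * * * *    root [ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1 -S ALL 1 1
```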

Rotation Period:

By default the last 60 days of performance metrics are kept. This can be modified by changing the HISTORY option in the below file.

/etc/sysstat/sysstat

# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=60
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=10

Logs location:

The gathered metrics are kept under the /var/log/sa directory. For each day you should find 2 files, one named saXX, and one named sarXX where XX is the day of month (e.g. for 22-Feb the files will be called sa22 and sar22).

Files older than 10 days are kept in a compressed form under directories named YYYYMM (e.g. 201612).

$ ls -l /var/log/sa
total 12
drwxr-xr-x 2 root root 4096 Feb 22 00:05 201612
drwxr-xr-x 2 root root 4096 Feb 12 06:05 201701
drwxr-xr-x 2 root root 4096 Feb 22 05:55 201702
lrwxrwxrwx 1 root root   11 Feb 11 23:59 sa11 -> 201702/sa11
lrwxrwxrwx 1 root root   11 Feb 12 23:59 sa12 -> 201702/sa12
lrwxrwxrwx 1 root root   11 Feb 13 23:59 sa13 -> 201702/sa13
lrwxrwxrwx 1 root root   11 Feb 14 23:59 sa14 -> 201702/sa14
lrwxrwxrwx 1 root root   11 Feb 15 23:59 sa15 -> 201702/sa15
lrwxrwxrwx 1 root root   11 Feb 16 23:59 sa16 -> 201702/sa16
lrwxrwxrwx 1 root root   11 Feb 17 23:59 sa17 -> 201702/sa17
lrwxrwxrwx 1 root root   11 Feb 18 23:59 sa18 -> 201702/sa18
lrwxrwxrwx 1 root root   11 Feb 19 23:59 sa19 -> 201702/sa19
lrwxrwxrwx 1 root root   11 Feb 20 23:59 sa20 -> 201702/sa20
lrwxrwxrwx 1 root root   11 Feb 21 23:59 sa21 -> 201702/sa21
lrwxrwxrwx 1 root root   11 Feb 22 14:30 sa22 -> 201702/sa22
lrwxrwxrwx 1 root root   12 Feb 11 23:55 sar11 -> 201702/sar11
lrwxrwxrwx 1 root root   12 Feb 12 23:55 sar12 -> 201702/sar12
lrwxrwxrwx 1 root root   12 Feb 13 23:55 sar13 -> 201702/sar13
lrwxrwxrwx 1 root root   12 Feb 14 23:55 sar14 -> 201702/sar14
lrwxrwxrwx 1 root root   12 Feb 15 23:55 sar15 -> 201702/sar15
lrwxrwxrwx 1 root root   12 Feb 16 23:55 sar16 -> 201702/sar16
lrwxrwxrwx 1 root root   12 Feb 17 23:55 sar17 -> 201702/sar17
lrwxrwxrwx 1 root root   12 Feb 18 23:55 sar18 -> 201702/sar18
lrwxrwxrwx 1 root root   12 Feb 19 23:55 sar19 -> 201702/sar19
lrwxrwxrwx 1 root root   12 Feb 20 23:55 sar20 -> 201702/sar20
lrwxrwxrwx 1 root root   12 Feb 21 23:55 sar21 -> 201702/sar21
lrwxrwxrwx 1 root root   12 Feb 22 11:55 sar22 -> 201702/sar22

sarXX files:

The files named sarXX show all the gathered metrics in human-readable form. However, they are generated from the saXX files every 6 hours, so if the issue occurred less than 6 hours ago we will not be able to use them unless we generate them manually. Here’s an example:

Linux 3.0.101-0.7.37-default (am81p)    2017-02-16      _x86_64_
 
00:00:01  CPU  %usr  %nice  %sys  %iowait  %steal  %irq  %soft  %guest  %idle
00:01:01  all  0.81  3.47   0.23  0.08     0.00    0.00  0.03   0.00    95.38
00:01:01  0    0.53  4.51   0.15  0.07     0.00    0.00  0.02   0.00    94.72
00:01:01  1    0.53  4.51   0.15  0.07     0.00    0.00  0.02   0.00    94.72

...

Average:  1    0.70  3.57   0.26  0.22     0.00    0.00  0.05   0.00    95.21

saXX files:

The files named saXX are binary files (not human readable). You can only view their contents using the sar command.

For example, you can run the below command to generate a sar file from an sa file, showing the details in human-readable format.

$ sar -f <saXX_FILE> > <sarXX_FILE>
 
e.g:
$ sar -f /var/log/sa/sa22 > sar22

For more information about the sar command you can refer to the sar man page: https://linux.die.net/man/1/sar
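
Here are a few sar invocations that come in handy when reading these files (a sketch; it is guarded so it degrades gracefully when sysstat or the day’s file is missing, and sa22 is just the example day used above):

```shell
#!/bin/sh
SA_FILE=/var/log/sa/sa22    # pick the sa file for the day you care about
if command -v sar >/dev/null 2>&1 && [ -r "$SA_FILE" ]; then
    sar -u -f "$SA_FILE"         # CPU utilization history
    sar -r -f "$SA_FILE"         # memory utilization history
    sar -n DEV -f "$SA_FILE"     # per-interface network statistics
    sar -u -s 09:00:00 -e 10:00:00 -f "$SA_FILE"   # limit to a time window
else
    echo "sar or $SA_FILE not available on this host"
fi
```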

 

Please let me know if you have any comments or suggestions, and if you like this tip please share and follow. Have a nice day 🙂

#7 What is filling up the database?

Introduction:

When the database size increases it may cause performance issues. It may also fill up the filesystem causing the service to crash.

That is why it is a good practice to monitor the sizes of the DB tables, identify the largest ones, and fix the issue by troubleshooting what is causing the growth and/or purging some rows from these tables. Here are some queries that can help you with that for different databases.

PostgreSQL:

We can use the below query to list all tables, and show their size and row count. The output will be ordered by size.

SELECT nspname AS table_schema,
relname AS table_name, 
pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
c.reltuples AS row_estimate
FROM pg_class c
LEFT JOIN pg_namespace n
ON n.oid = c.relnamespace
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
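
If you want to run this check from a shell script, psql can execute the query non-interactively. A sketch (the postgres user and mydb database are placeholders for your environment, and the block is guarded so it degrades gracefully when psql is missing):

```shell
#!/bin/sh
# Top 10 tables by total size, via psql. The connection parameters are
# placeholders -- adjust user/database/host for your environment.
QUERY="SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
       FROM pg_class WHERE relkind = 'r'
       ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;"
if command -v psql >/dev/null 2>&1; then
    psql -U postgres -d mydb -c "$QUERY" || echo "psql could not connect"
else
    echo "psql is not installed on this host"
fi
```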

MySQL:

We can use the below query to list all tables, and show their size and row count. The output will be ordered by size.

SELECT TABLE_NAME, 
TABLE_SCHEMA, 
table_rows, 
data_length, 
index_length,  
round(((data_length + index_length) / 1024 / 1024),2) "Size in MB" 
FROM information_schema.TABLES 
WHERE TABLE_SCHEMA <> 'information_schema' 
ORDER BY (data_length + index_length) DESC;

ORACLE:

We can use the below query to list all tables, and show their size and row count. The output will be ordered by size.

SELECT owner, table_name, 
round((num_rows*avg_row_len)/(1024*1024),2) Size_MB , 
num_rows
FROM all_tables
WHERE owner NOT LIKE 'SYS%'
AND num_rows > 0
ORDER BY Size_MB DESC;

MS-SQL:

We can use the below query to list all tables, and show their size and row count. The output will be ordered by size.

SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID 
AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
     i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    TotalSpaceKB DESC;

Please let me know if you have any comments or suggestions, and if you like this tip please share and follow. Have a nice day 🙂

#6 Is it a permission issue? (Windows)

Introduction:

In the last tip we’ve seen how to monitor file access on Linux to check for “denied access” events which indicate a permission issue. This week’s tip shows how to do the same on Windows.

Enable Audit:

First we need to enable audit to the specific file/directory we need to monitor. This can be done by following the below steps:

1-   Right-click on the file/directory.

2-   Open “Properties”.

3-   Open the “Security” tab.

4-   Click the “Advanced” button.

5-   Open the “Auditing” tab.

0_0_image005.jpg

6-    Click on “Edit”.

0_0_image006.jpg

7-    Click on “Add”, enter “Everyone” and click on “OK”.

0_0_image007.jpg

8-   Select “Full control” for all successful and failed events.

0_0_image008.jpg

9-   Click OK on all the previous windows to save the configuration.

Check for events:

You can use the Event Viewer to check for access events on the file/directory. However, this may be time consuming without the proper filters.

An easier way is to use the PowerShell cmdlet “Get-EventLog”.

For example, you can use the below command to check for the audit events on a file named “install.log” during the last 5 minutes. You can change the file name and the time period in the command to check for a different file or a different period of time.

Get-EventLog -Log Security -source `
"Microsoft-Windows-Security-Auditing" -After `
$(Get-Date).AddMinutes(-5) -message "*install.log*" | 
Format-Table -AutoSize -Wrap `
-property TimeGenerated,EntryType,Message

The output will show the event time, type (success/failure), user, and process.

Here’s an example of an “Access denied” event (EntryType is FailureAudit):

TimeGenerated            EntryType Message
-------------            --------- -------
12/14/2016 3:05:44 AM FailureAudit A handle to an object was requested.
 
                 Subject:
                     Security ID:        S-1-5-21-...
                     Account Name:        test
                     Account Domain:        JUMPHOST
                     Logon ID:        0xc65757a0

                 Object:
                     Object Server:        Security
                     Object Type:        File
                     Object Name:        C:\install.log
                     Handle ID:        0x0

                 Process Information:
                     Process ID:        0xca8
                     Process Name:  C:\Windows\explorer.exe

                 Access Request Information:
                     Transaction ID:    ...                                        

                     Access Reasons:        ...

                     Access Mask:        0x20000
                     Privileges Used for Access Check:    -
                     Restricted SID Count:    0

And here’s an example of an “Access granted” event (EntryType is SuccessAudit):

TimeGenerated            EntryType Message
-------------            --------- -------
12/14/2016 2:51:48 AM SuccessAudit Auditing settings on object were changed.
 
              Subject:
                  Security ID:      S-1-5-..
                  Account Name:      Administrator
                  Account Domain:    JUMPHOST
                  Logon ID:        0x208c1b

              Object:
                  Object Server:    Security
                  Object Type:    File
                  Object Name:    C:\install.log
                  Handle ID:    0x18a0

              Process Information:
                  Process ID:    0x854
                  Process Name:  C:\Windows\explorer.exe

              Auditing Settings:
                  Original Security Descriptor:
                  New Security Descriptor:        ...

 

Note:

You may want to disable auditing after you have found the information you need, as keeping it running may impact performance.