Elasticsearch Query Language (ES|QL) is a new piped query language that lets users filter, transform, aggregate, analyse and display data in a single workspace with a single query. This blog will show you how to get started quickly and gives some example ES|QL queries to try.
If you don’t already have an Elasticsearch deployment, you can sign up for a free 14-day trial HERE. As mentioned in a previous blog, Elasticsearch ships with sample data, which is perfect for this exercise.


To start writing ES|QL queries, open the Discover application, select the data view drop-down and scroll to the very bottom; this reopens Discover in ES|QL mode.

Query 1-1 – STATS – The ‘from’ command selects an index, data view or alias and returns its events in a table. The ‘limit’ command stipulates how many events to return, and the ‘keep’ command tells Elasticsearch which fields you want to return. The ‘stats’ command allows you to aggregate using functions such as AVG, COUNT, COUNT_DISTINCT, MAX, MIN, MEDIAN, SUM, PERCENTILE and many others. In this example ‘stats’ creates a new field, ‘total’, which counts the occurrences of each machine operating system and presents them in a table. Finally, the ‘sort’ command sorts the counts (‘total’) in descending order.
from kibana_sample_data_logs
| limit 10
| keep @timestamp, clientip, machine.os
| stats total = count(machine.os) by machine.os
| sort total desc
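The other aggregation functions listed above follow the same shape. As a quick sketch against the same sample index, this query computes the median and 95th-percentile byte count per operating system:

from kibana_sample_data_logs
| stats median_bytes = median(bytes), p95_bytes = percentile(bytes, 95) by machine.os
| sort p95_bytes desc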

Query 1-2 – STATS – You can change the ‘stats’ command to split the count by machine.os and clientip to give a quick visualisation of operating systems across your clients.
from kibana_sample_data_logs
| limit 10
| keep @timestamp, clientip, machine.os
| stats total = count(machine.os) by machine.os, clientip
| sort total desc
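COUNT_DISTINCT works the same way. As a variation on the query above, this sketch counts the number of unique client IPs seen for each operating system:

from kibana_sample_data_logs
| stats unique_clients = count_distinct(clientip) by machine.os
| sort unique_clients desc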

Geo-enrichment configuration – The demo environment has an enrichment index called ‘geo-data’ pre-configured with an enrichment policy that matches values from the ‘code_2’ field with values from the ‘geo.src’ field. We will use this additional geo data to enrich our query with extra fields. The below search will show you all the data in the enrichment index.
from geo-data
| keep code_2, code_3, continent, country, country_code, iso_3166_2, region_code, sub_region, sub_region_code
// this is a preconfigured enrichment index; its enrichment policy matches code_2 with the field geo.src

Query 2 – ENRICH – We start by pulling documents from the ‘kibana_sample_data_logs’ index. ‘stats’ creates a new field called ‘avg_bytes’ and averages the bytes received from each source country (geo.src). The ‘eval’ command takes ‘avg_bytes’, divides it by 1024 (to convert it to KB), rounds the result to 2 decimal places and names the new field ‘avg_bytes_kb’. The ‘enrich’ command selects ‘geo.src’ as its match field and adds the fields ‘continent’ and ‘country’ from the geo-data enrichment index. The ‘keep’ command limits the result to the four fields seen in the table below.
from kibana_sample_data_logs
| stats avg_bytes = avg(bytes) by geo.src
| eval avg_bytes_kb = round(avg_bytes/1024, 2)
| enrich geo-data on geo.src with continent,country
| keep avg_bytes_kb, geo.src, country, continent
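The enriched fields can also feed further aggregations. As a sketch assuming the same ‘geo-data’ enrichment policy, this variant averages bytes per continent rather than per source country:

from kibana_sample_data_logs
| enrich geo-data on geo.src with continent
| stats avg_bytes = avg(bytes) by continent
| eval avg_bytes_kb = round(avg_bytes/1024, 2)
| keep continent, avg_bytes_kb
| sort avg_bytes_kb desc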

Query 3 – DISSECT – In this query we use the ‘dissect’ command to extract fields from the ‘message’ field. Dissect patterns use variables and separators to extract fields; full details can be found HERE. In this example I have extracted ‘clientip’, ‘date’ and ‘request_type’ from the message field.
from kibana_sample_data_logs
| limit 10
| keep message
| dissect message "%{clientip} - - [%{date}] \"%{request_type} "
| keep message, clientip, date, request_type
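Fields extracted by ‘dissect’ are strings, so the ‘date’ field above can be converted into a proper datetime with ‘eval’. This is a sketch that assumes the timestamps inside the message field are ISO-8601 formatted (as they are in the sample data):

from kibana_sample_data_logs
| limit 10
| dissect message "%{clientip} - - [%{date}] \"%{request_type} "
| eval parsed_date = to_datetime(date)
| keep clientip, parsed_date, request_type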

Query 4 – GROK – As with the ‘dissect’ command, the ‘grok’ command can also be used to extract or parse out new fields. Grok uses a set of patterns with underlying regular expressions to match values in the data. Elasticsearch grok information can be found HERE, and the list of preconfigured patterns that Elasticsearch supports can be found HERE. In the below example I’ve used the ‘%{IPV4}’ pattern to match the clientip, the ‘%{DATA}’ pattern to match everything as far as the first quote mark, and the ‘%{WORD}’ pattern to match the request verb (e.g. ‘GET’).
from kibana_sample_data_logs
| limit 10
| keep message
| grok message "%{IPV4:clientip} %{DATA} \"%{WORD:request_type}"
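Fields extracted with ‘grok’ behave like any other field and can be aggregated directly. For example, this sketch (with no ‘limit’, so it runs over the whole index) counts requests by verb:

from kibana_sample_data_logs
| grok message "%{IPV4:clientip} %{DATA} \"%{WORD:request_type}"
| stats total = count(*) by request_type
| sort total desc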

Summary – ES|QL provides an intuitive, powerful and flexible tool for querying and analysing data within the Elastic Stack. Its ability to handle complex queries within a single interface will be a huge benefit to security engineers and threat hunters trying to detect, investigate and respond to security threats in a timely and effective manner.
Links – ES|QL documentation.