How can I use grep and regex on a stack trace?

I have a stack trace like the following:

17/04/26 15:29:03 INFO HttpMethodDirector: Retrying request
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection refused (Connection refused)
17/04/26 15:29:03 INFO HttpMethodDirector: Retrying request
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 9.0 (TID 4)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 7)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more
17/04/26 15:29:03 INFO CoarseGrainedExecutorBackend: Got assigned task 12
17/04/26 15:29:03 INFO Executor: Running task 0.1 in stage 0.0 (TID 12)
17/04/26 15:29:03 INFO CoarseGrainedExecutorBackend: Got assigned task 13
17/04/26 15:29:03 INFO TorrentBroadcast: Started reading broadcast variable 0
17/04/26 15:29:03 INFO Executor: Running task 0.1 in stage 2.0 (TID 13)

I would like to extract only the relevant lines, like this:

17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 9.0 (TID 4)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 7)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
        at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
        at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
        at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
        at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
        at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
        at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
        at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
        ... 10 more

How can I get the output shown above, i.e. find all the ERROR lines together with their details?

Answer 1

It would probably be enough to simply filter the INFO messages out of the input:

$ grep -v '[0-9] INFO ' file.in

I added the [0-9] and the surrounding spaces to make sure that relevant lines are not thrown away by mistake (in case an ERROR line happens to contain the string INFO somewhere in its message).
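If you want to be even stricter, you could anchor the pattern on the whole leading timestamp instead. This is just a sketch of a variation using extended regular expressions; the simpler command above is usually enough:

$ grep -vE '^[0-9]{2}/[0-9]{2}/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} INFO ' file.in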

If you have several log files in a directory:

$ grep -v '[0-9] INFO ' *.log

where *.log is a filename globbing pattern that matches your log files.

Answer 2

I ran into exactly the same problem.

grep can show context lines with the -A flag, but the number of context lines is fixed, as the example below illustrates. You can try awk instead.
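For example, something like the following always prints exactly 15 trailing context lines after every match, whether a given stack trace is shorter or longer than that (the 15 here is just an arbitrary illustration):

$ grep -A 15 ' ERROR ' test.log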

Here is a snippet I have used before (https://gist.github.com/maoshuai/33113ac457aca7869171942c696f46d3); save it as fullgrep.sh:

#!/bin/bash

full_grep()
{
    # keyword to search for (spliced into the awk regex below)
    local keyword=$1
    # pattern marking the start of a new log record; in these logs it is always a date/time stamp
    local newLinePattern="^[0-9]{2}\/[0-9]{2}\/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}"

    # awk reads from stdin by default, so no cat is needed
    awk '
            BEGIN{
                isFound = "no"
            }

            {
                # a line matching the keyword starts (or continues) a block we want: print it and raise the flag
                if($0~/'"$keyword"'/)
                {
                    print $0
                    isFound="yes"
                }
                # a line starting a new log record that does not match the keyword: reset the flag
                else if($0~/'"$newLinePattern"'/)
                {
                    isFound="no"
                }
                # any other line is a continuation (e.g. a stack trace line): print it while the flag is set
                else if(isFound=="yes")
                {
                    print $0
                }
            }
            '
}

full_grep "$@"

Then running cat test.log | ./fullgrep.sh ERROR gives the desired result.
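For this specific log format there is also a shorter awk alternative. This is just a sketch, assuming every new log record starts with a yy/MM/dd timestamp as in the sample above:

$ awk '/^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9] /{p=/ ERROR /} p' test.log

A timestamped line switches printing on if it contains ERROR and off otherwise; lines without a timestamp (the stack-trace continuations) simply inherit the state of the record they belong to.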
