Splunk indexing queue blocked. We collect data from other clients using syslog.

Basically, there is a series of queues that data passes through when it arrives in Splunk: the parsingQueue, then the aggQueue, then the typingQueue, and finally the indexQueue, from which events are written to disk. When you see a "queue blocked" message, it means the indexers are busy and those queues are full. Increasing a queue's size may add an extra buffer for bursts of incoming log data, but it is unlikely to fix a blocked indexing queue on its own; for example, if you make the incoming TCP input queue bigger, more data can wait there while earlier data is being written to disk, but the underlying bottleneck remains.

The usual symptom is that Splunk runs noticeably slowly and the typing queue and indexing queue sit at 100%. Splunk records blocked queues in its metrics events (blocked=true), so you can simply search for them:

index=_internal host=YOUR_INDEXER sourcetype=splunkd component=Metrics blocked=true

You can also find blocked queues in the Monitoring Console under Settings > Monitoring Console > Indexing > Indexing Performance: Instance, or by installing the Splunk on Splunk (SoS) app on your search head. The same warning can show up on a cluster master node. If the internal queue on a receiving indexer gets blocked, the indexer shuts down its receiving/listening (splunktcp) port after a specified interval of being unable to insert data into the queue; once the queue can accept data again, the port is reopened. On the forwarder side, the corresponding symptom is a repeating splunkd.log entry such as:

WARN TailReader - Could not send data to output queue (parsingQueue), retrying.

In my environment we index ".gz" files directly on the Splunk server (after rsyncing them to it), and I checked splunkd.log without finding anything that seemed related. I now have about 800 blocked events for index=_internal source=*metrics.log — is there a setting for index throughput? We are adding new sources daily and need to sort this out before the lag starts affecting real-time searches. If you are constantly adding new, very large data sets, you probably need to tune the index-time settings rather than the queue sizes.

A few fixes that have helped others: one user had queue = parsingQueue and [default] host = splunk explicitly set in inputs.conf; after removing the queue setting entirely the queue-blocked messages stopped (queues do not have to be explicitly specified in 4.x, as Splunk defaults to the parsing queue for all stanzas). For JSON files, check how big they are and whether the bottleneck is actually on the forwarders; moving the JSON files to a directory that is directly accessible by the Splunk instance doing the indexing helped considerably. Another case of very slow indexing improved after adding 250 MB to the indexes, defining explicit timestamp formats for a mix of Windows, Sourcefire, and Cisco data, and setting SHOULD_LINEMERGE = false because everything arriving from Snare and syslog is single-line — and make sure your regexes are spot on. A related open question: does a Splunk light forwarder use the same archive-processor (AQ) queue mechanism as a full Splunk server when it sits on a log server with a lot of .gz files? We see performance problems specifically while indexing ".gz" files.
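A variant of that metrics search which I find handy for a first look — it groups the blocked events by indexer and queue name (standard metrics.log fields; adjust the index and host filters for your environment):

index=_internal sourcetype=splunkd source=*metrics.log group=queue blocked=true
| stats count by host, name
| sort - count

The queue with the highest count per host is usually where to start digging.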
In our case the internal Splunk logs (such as audit) are disabled on the indexers in order to dedicate all the performance to indexing, yet the queues still block. Looking at the graphs, if the indexing queue is blocking continuously even though its fill percentage is low, you are most likely hitting an IOPS limit on the disks holding the indexes. When that happens the blockage propagates backwards: typingqueue, aggqueue, parsingqueue, indexqueue, and splunktcpin all end up blocked. Once the indexing queue receded, data from the heavy forwarders started flowing to the indexers again and was written to disk.

A second common cause is an output that cannot drain. My indexer receives data from forwarders but also sends data to a non-Splunk device; when that external device became unreachable, all the queues in the indexer became blocked. The indexer pipeline is a single thread of execution, so when it comes across the next chunk of data that needs to be routed to the dead output and cannot add it to that output queue, the whole pipeline blocks waiting, which stalls the data flow towards indexing and every other network output. Using dropEventsOnQueueFull would prevent the output queue from blocking, but it drops data, which can mean data loss; what I really want is for the output queue not to block while data still gets indexed, since the indexers themselves are available. (The same question applies to the Splunk App for Stream: what happens if the indexing queue is blocked or the network connection to the indexer is lost?) Persistent queues can help prevent data loss in these situations.

On the sending side, the symptoms show up as delayed or missing data from the client universal forwarders, with entries like these in splunkd.log:

WARN TailReader - Could not send data to output queue (parsingQueue), retrying.
WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 100 seconds.
Forwarding to host_dest=(ip of indexer) inside output group default-autolb-group from host_src=(ip of source) has been blocked for blocked_seconds=100.

These messages are a consequence of the indexer queues being blocked, not the cause; you have to keep investigating until you find the root cause, which can be many things (more data arriving than the indexers can handle, slow disk, and so on). Check on the indexer that a receiving port is actually set and open. Support also pointed out that a bad regex somewhere, or event breaking that is not working for a particular sourcetype, can make a forwarder lock onto one indexer and refuse to load-balance because it thinks it has not reached the end of an event; a per-sourcetype regex-CPU search against metrics.log shows which sourcetype is consuming the most CPU (the exact steps are covered further down). Another cause of the "forwarding blocked" message is indexer discovery with multisite clustering: every forwarder must have a site assigned. Increasing queue sizes on the forwarder only helps up to the parsing queue, so the same settings can be applied on the indexers and tested there. In any case, work out whether all queues are blocked down to the index queue or whether the blockage is upstream of it. A restart of Splunk clears the condition, but only temporarily.
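To see which queue blocks first and whether the blockage is upstream of the index queue, a simple timechart of the blocked metrics events by queue name works well (a sketch; restrict the host and time range as needed):

index=_internal sourcetype=splunkd source=*metrics.log group=queue blocked=true
| timechart span=10m count by name

If only the indexQueue line rises first and the others follow, the problem is at the indexing/disk end; if the parsingQueue rises alone, look at the inputs instead.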
The Splunk universal forwarder has an in-memory event queue, plus HA capabilities, that are controlled by outputs.conf; in particular, see the maxQueueSize, dropEventsOnQueueFull, and dropClonedEventsOnQueueFull settings.

The index queue is the queue in the data pipeline that holds events that have been parsed and still need to be indexed. Keep in mind that indexing is not necessarily "blocked" even if the event-processing queues are. The Distributed Management Console (DMC) only shows which indexer queues were full in the last 15 minutes, so it is worth having a search that can answer whether the queues were blocked at an arbitrary period in the past. Indexing-queue fill profiles can be grouped into three basic shapes, and identifying which one you have narrows down the cause considerably.

I have also been seeing parsing-queue issues on our heavy forwarders, and yesterday the indexing queue on our indexers was at 90+%. My last step was to switch from a heavy forwarder to a universal forwarder, eliminating all processing activity from the forwarder to keep the load low on the production server; that helped a lot, and rates went to over 20 KB/s, but now the universal forwarder itself reports blocked=true for its parsing queue. Mostly a restart of the services lets it continue for a while. On the indexer I added a TCP listener under Manager > Forwarding and receiving > Configure receiving inputs, and I can see blocked=true in metrics.log. Disk problems are a common cause and should be shown prominently in Splunk; have a look at the very good white paper created by @dpaper_splunk on disk diagnostics.

Ok, to be super clear about the external-device case: in a chain like LWF -> FWD -> ExtDev, it is the outputs.conf stanza on the FWD that defines the FWD -> ExtDev output where I set dropEventsOnQueueFull to 1 — on the forwarder's configuration, not the lightweight forwarder's.
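A minimal outputs.conf sketch for the universal forwarder side, using the settings named above — the group name, servers, and sizes are illustrative, not a recommendation:

# outputs.conf on the universal forwarder
# Group name, servers and sizes below are illustrative.
[tcpout:primary_indexers]
server = indexer1:9997,indexer2:9997
# In-memory output queue size
maxQueueSize = 7MB
# -1 (the default) blocks rather than drops when the queue is full
dropEventsOnQueueFull = -1
# Only relevant when cloning data to multiple output groups
dropClonedEventsOnQueueFull = 5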
A queue becomes unblocked as soon as the code pulling items out of it pulls an item, so the interesting question is always what the downstream consumer is waiting on. Splunk essentially works off a chain of tubes: if one stops, it blocks, and the queues back up the chain. The remaining indexer queues will keep filling until the underlying I/O problem is corrected; once the queue can accept data again, the indexer reopens its receiving port, and because the port was closed, reconnecting is a very fast operation, which explains the small delay you see in the internal logs.

Some concrete numbers from one environment: the indexes are striped across all three indexers, with two search heads in front of them, and I am seeing a lot of blocking — in the range of 500 to 1000 blocked events a day per host. The clients send data to Splunk via syslog and Splunk reads the contents of the folder where it lands. In a ten-minute period, about 75% of my parsingqueue messages from metrics.log are blocked. I do have the Monitoring Console configured on the license master, and you can find blocked queues there under Settings > Monitoring Console > Indexing > Indexing Performance: Instance, or with a search. Also verify that the heavy forwarder's destinations are all up, listening, and reachable — a typical forwarder-side symptom is:

WARN TcpOutputProc - Cooked connection to ip=10.x.x.x:9997 timed out

Other setups showing the same pattern: one heavy forwarder plus a combined indexer/search head; three universal forwarders collecting Windows application and services logs and forwarding to a single indexer; and a design where one set of polling logs goes to Indexer-1, a second set (the same data, polled less frequently) goes to Indexer-2, and the Unix TA logs go to both, so that if Indexer-1 dies, Indexer-2 keeps chugging along with a similar data set. According to my Monitoring Console, even the indexing queues on my search head are pegged at 100% and have been for a long time, although it only forwards its internal logs and runs no summary indexes. If the graphs show the indexing queue blocking continuously while its fill percentage stays low, you are hitting an IOPS limit; tuning the net.core.wmem_max setting on the forwarder can also help on the network side. For Splunk Cloud, see the Indexes page in the Splunk Cloud Platform Admin Manual. A useful dashboard search maps the current percentage of blocked measurements to a human-readable status such as 'Critical' or 'Healthy'.
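A sketch of such a status search, based on the standard metrics.log queue fields; the thresholds are arbitrary and should be tuned to your environment:

index=_internal sourcetype=splunkd source=*metrics.log group=queue
| eval is_blocked=if(blocked=="true", 1, 0)
| stats avg(is_blocked) as blocked_ratio by host, name
| eval pct_blocked=round(blocked_ratio * 100, 1)
| eval status=case(pct_blocked >= 50, "Critical", pct_blocked >= 10, "Warning", true(), "Healthy")
| table host, name, pct_blocked, status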
The queues mentioned by that message are the ones that lead into the data pipelines where splunkd shapes your data into events before indexing them on disk. Queues become blocked when the corresponding pipeline is too slow to keep up with incoming data, so the message indicates a bottleneck in one of those pipelines, which causes the queue that feeds it, and every queue upstream of it, to fill up — all the way back to the queue that accepts data off the network. Once the index queue is full it back-pressures the typing queue, then the aggregation queue, then the parsing queue; in that situation the index pipeline simply cannot send data out as fast as it is coming in. In all likelihood one of a few things is happening: the disk holding the indexes is too slow, an index-time regex or timestamp extraction is too expensive, or an output destination cannot drain. In our metrics the heaviest blocking is on the indexqueue and typingqueue, followed by the aggqueue, while splunktcpin is only in the double-digit range. We bounced the indexers and everything went back to normal, but please correct me if I am wrong that this only treats the symptom.

This can also show up as failed connections between heavy forwarders and indexers: this morning one of our indexers stopped listening on port 9997 — the port is open, but it is probably not accepting data — and you will see "Started listening on tcp ports" in splunkd.log when the socket reopens. At that point the practical questions are: am I receiving any data from the forwarder at all, and is the heavy forwarder itself the bottleneck? First check whether the resources you have given the HF are sufficient for the work it has to do, which you can do from the Monitoring Console, especially the CPU load; in our case all of the queues are getting blocked at the heavy forwarder. I also know Splunk indexes zip files single-threaded, so would adding cores reduce the queue blockage? We cannot guarantee the health of our services or a good user experience without data from our applications, so what can be done to resolve the blockage for the .gz files sitting on the HF?

For completeness, the environment here is four indexers in a cluster serving two stand-alone search heads (no clustering or pooling), one of which acts as a job server for the summary-index searches. A couple of further levers: persistent queues, which in a Splunk Enterprise deployment work for either forwarders or indexers, and a workaround of adjusting serviceMetaPeriod = <nonnegative integer> in indexes.conf, which defines how frequently metadata is synced to disk. There is also a search to identify blocked queues by name (shown earlier) and a limits.conf regex-profiling setting for working out which sourcetype is responsible (covered further down).
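Persistent queues are configured per input; a sketch for a syslog UDP input, where the port and sizes are illustrative:

# inputs.conf on the forwarder or indexer receiving syslog
[udp://514]
sourcetype = syslog
# In-memory portion of the input queue
queueSize = 1MB
# Spill to disk once the in-memory queue is full
persistentQueueSize = 5GB

The disk-backed portion only buys you time during an outage downstream; it does not fix a pipeline that is permanently slower than the input rate.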
So if the communication between FWD->ExtDev goes down, tcpout_connections in FWD get fulled so it starts dropping event, but communication LWD->FWD stay You can just set dropEventsOnQueueFull to 1 or some other positive integer for the output if you don't want it to block. Anyone is having any idea on this issue? Note: this queue blockage is My Indexer is receiving data from a Forwarder but also sending data to non Splunk device. wmem_ max setting on forwarder. . Log flow: [~150 UniversalForwarders] -> [Cental UniversalForwarder] -> [Indexer] with my "Central UF" being the problem child. Many Solved: Blocked auditqueue can cause random skipped searches, scheduler slowness on SH/SHC and slow UI. The splunktcpin queue blocked once in this group. 13:9997 timed out 01-11-2016 14:46:38. Increasing the queue size may work to add an additional buffer for bursts of incoming log data, however it is unlikely to fix your blocked indexing queue. 01-05-2021 18:43:01. and only as long as I need to pull fields from . After that, I am seeing below in splunkd. I am using Heavy forwarder used I am getting multiple queues blocked specially aggqueu But current_size, especially considered in aggregate, across events, can tell you which portions of Splunk indexing are the bottlenecks. core. For example if you made you incoming TCP input queue bigger, then more data can queue there while the data is getting written to disk, you can do this with various other queues to allow a Use persistent queues to help prevent data loss. aggQueue/merginig 3. Are all queues blocked down to the indexer queue or is there blockage upstream of that? queue = parsingQueue. Queue options are the following: Splunk Tcpin Queue; Parsing Queue; Aggregation Queue; Typing Queue; Indexing Queue; Comparing the queues against one another shows you which queue has the lowest latency and is hindering Indexing is very slow - added 250 mb to indices - helped some - going to the customized time stamping formats next due to mixed windows, sourcefire, and cisco data - everything is single line coming from snare and syslog so will turn on Should_linemerge = false - regexes are spot on . connection is established. the other problem may just be that you need a faster machine or faster disk. 6. This keeps the processing load low on the production server that is running the forwarder. Hi inters. Getting Now skipping indexing of internal audit events, because the downstream queue is not accepting data nls7010. Mark as New; Bookmark Message; Subscribe to Message; why did indexing stop? Blocked queues are usually just a symptom of from your host, as the port is now closed, your splunk will try to reconnect. Queues blocked for more than N seconds. I am monitoring (high amount of) zip files in heavy forwarder and parsing it using indexqueue and null queue to reduce number of logs to reduce license cost. A workaround for the issue can be implemented by modifying this setting in indexes. None of these documents are having description of multiple queues in indexer ( such as 2 parsing queue, 2 indexing queue). 0 Karma Reply. conf stanza which defines FWD->ExtDev I set dropEventsOnQueueFull to 1. noun. I have started historical indexing by copying the . Many The thruput will be applied on indexers or forwarders, meaning any splunk instance. Forwarding to host_dest= inside output group group1 from host_src=XXXXP13 has been blocked for blocked_seconds=10. log. Indexing->Performance->Indexing Performance : Instance. btool can show what inputs are enabled. 
Today the system stopped indexing; we had upgraded to 6.x and everything had been working fine until then. Recently, many incidents like this have been caused by blocked queues on Splunk indexers and forwarders, and the key question is always: why did indexing stop? Blocked queues are usually just a symptom of something further down the line not working properly, not the cause. If the blockage is in the indexing queue, it is probably the load on the disk; if it is in the typing queue, look at the index-time work being done there — event processing includes timestamp extraction and line merging. Remember also that a thruput limit (maxKBps) is applied on whichever Splunk instance carries it, forwarder or indexer, and that on a universal forwarder the index queue exists only as the recipient for the TCPout processor, which sends the data on to the configured receivers. From there, data enters the parsing pipeline on the indexer, undergoes event processing, then moves through the indexQueue into the indexing pipeline, where the Splunk software stores the events on disk. If current_size remains near zero across those queues, the indexing system is probably not being taxed in any way.

The symptoms here: repeating warnings such as "Forwarding to host_dest= inside output group group1 from host_src=XXXXP13 has been blocked for blocked_seconds=10"; a banner on the job-server search head complaining about a search peer; and, in the Monitoring Console under Indexing > Indexing Performance: Deployment, about half of my clustered indexers showing close to 100% across all queues from parsing to indexing while the other half sit below 20% (quite a few at 0%). Our search head is also no longer indexing its _internal logs (splunkd.log and so on); searches still run, but they are really slow. I had seen errors like this on Splunk 5 indexers as well. In the Monitoring Console latency view, after you select an Aggregation value, select a Queue value to view the latency performance of each queue in the graph.

If you are experiencing indexing throughput problems, there are a few options: add another indexer; optimize any index-time props and transforms rules on your data, or remove unnecessary ones; and in general Splunk recommends using a universal forwarder and doing the parsing on the indexers. Review the receiving system's health in the Splunk Monitoring Console. If you need to take a receiver out of the picture while you investigate, you can disable receiving from the CLI:

splunk disable listen -port <port> -auth <username>:<password>
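On the thruput point above, it is worth checking the forwarder's limits.conf; a sketch (the universal forwarder ships with a deliberately low default, commonly cited as 256 KBps):

# limits.conf on the forwarder
[thruput]
# 0 removes the bandwidth cap; raise or remove it only if the
# forwarder is genuinely the bottleneck
maxKBps = 0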
Note that dropEventsOnQueueFull defaults to -1, which blocks rather than drops; for the Splunk App for Stream this means the Stream TA's own event queue will back up rather than lose data. On a universal forwarder that monitors a number of directories and forwards to an indexer, the corresponding log entries look like "INFO TailReader - Could not send data to output queue (parsingQueue), retrying" followed by "INFO TailReader - ...continuing", plus warnings such as "Forwarding to output group all_indexers has been blocked for 10 seconds." This can manifest itself in many ways, including no data coming into Splunk at all.

The distribution of the blocked events, and which queue they come from, tells you where to look next. Check Indexing > Indexing Performance: Instance (and Deployment) in the Monitoring Console; in my case the fill ratios were 100% for all four queues (parsing, aggregator, typing, and index) on both indexers, and if I understand the data pipeline hierarchy correctly it is the parsing queue that is actually blocked and causing the others to block — though when every queue is full, the real culprit is usually at the bottom, for example an index queue that is full because the disk holding the indexes is too slow. A blue progress bar above a Monitoring Console panel just means the platform is still generating the data; wait for it to disappear before reading the panel. Review system health overall: make sure downstream indexing and/or forwarding are operating correctly, and check the receiving system's health in the Monitoring Console. I am currently seeing between 500 and 1000 blocked events on each heavy forwarder daily when running index=_internal host=HF blocked=true, a ratio of roughly 1% of all queue measurements. Once you know which source or sourcetype is the issue and where in the queue it becomes blocked, the "Troubleshooting blocked queues" guidance will help you implement a fix.

One more forwarder-side wrinkle: forwarders sometimes stick to certain indexers, and the "magic 8" props help here — in particular, EVENT_BREAKER_ENABLE and EVENT_BREAKER were designed to combat that stickiness, since a forwarder that never sees an event boundary will not switch indexers. Finally, a message like "Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data" simply means that some component in the index-time path cannot service data as fast as it is entering the system.
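A sketch of the forwarder-side props.conf for a single-line sourcetype — the sourcetype name is hypothetical, and the newline breaker matches the single-line syslog/Snare data described earlier:

# props.conf on the universal forwarder
# Sourcetype name is hypothetical.
[client_syslog]
EVENT_BREAKER_ENABLE = true
# Break the stream at newlines so the forwarder can switch indexers
EVENT_BREAKER = ([\r\n]+)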
When every queue above a certain point is full, the processor after the bottom-most blocked queue is the one most likely to blame. Persistent queuing lets you store data in an input queue on disk, so a temporarily blocked pipeline does not force data loss. In my case the devices send their logs via syslog to the heavy forwarder, which receives them, stores them, and tries to send them on to the indexers, and that is where the errors appear — please help me fix this. I also want to know which indexer queues were full in, say, the past two hours, which the DMC alone cannot show. A related configuration question: how do you correctly set the splunktcpin queue size on the indexers? I tried [queue] maxSize = 2MB in server.conf and in inputs.conf.

To find out which sourcetype is responsible for an expensive typing queue, enable regex CPU profiling: add regex_cpu_profiling = true under [default] in limits.conf, restart Splunk, wait for the typing queue to block, then go to the Splunk UI and run the per-sourcetype CPU queries against metrics.log. The results also show up in the Monitoring Console under Performance > Indexing Performance: Advanced, which lets you identify CPU usage by queue, and the section called "How to Find Problematic Queues" may help you figure out the issue from there; the "Indexers - Blocked Queues by Queue Type" panel and an alert on queues blocked for more than N seconds are useful companions.

One more root cause we have hit: the splunk-optimize process was unable to access and keep up with the tsidx files, and the indexer eventually gave up accepting data and waited until the optimizer caught up on the backlog — the log messages make this fairly explicit. Note that if you are on Splunk Cloud Platform you can self-manage your index settings from the Indexes page in the Admin Manual rather than touching these files directly.
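Putting those scattered steps together — a sketch using the limits.conf setting and the metrics.log group named above (adjust the host filter to your indexer or heavy forwarder):

# limits.conf (then restart Splunk)
[default]
regex_cpu_profiling = true

index=_internal host=<your_indexer> source=*metrics.log group=per_sourcetype_regex_cpu
| timechart max(cpu) by series

The series with the consistently highest CPU is the sourcetype whose regexes or timestamp extraction deserve attention first.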
Quite often the real issue, if there is one, is found in splunkd.log itself — for example "Stopping all listening ports", repeated connection warnings, or a note that the instance is slow in processing index data. We index roughly 400 GB per day, so increasing the queue sizes beyond the defaults seems reasonable, but keep in mind that if the underlying problem remains, a bigger queue will eventually fill and block just the same. None of our indexes are 100% full, except one that is over 99%. Indexer discovery with multisite clustering can fail for many reasons, including the ones listed above, so also check the indexer queue status to see where the blocking actually started; in our case the Splunk tcpin queue is fine. We run the .gz files in batch mode — index the files, then delete them — and watch the "Indexing Performance" view while that runs.

There are queues at each level precisely so you can identify where the blocking occurs: data first passes through the parsingQueue and then on to the aggQueue, and so on down the pipeline, while a blocked output simply holds the data and keeps trying. That raises some practical questions from the field: which queue carries files from the heavy forwarder to the indexer, so I can check whether that one is filling? How should JSON data separated with INDEXED_EXTRACTIONS be handled? Can you tell us how many queues are blocked, given that once the indexing queue blocks, back pressure fills the rest? Has anyone done something clever for heavy forwarders at sites with unreliable internet, where an outage currently stops all data from that HF reaching my indexer (persistent queues are the obvious candidate)? One environment has a universal forwarder acting as intermediary for about 200 other UFs; another saw over 1000 events in the parsing queue even at fairly slow times.

A few closing observations from these threads: the indexer queues looked fine in SoS and the 256 KBps forwarder limit was not the problem, yet the forwarder's thruput sat near zero most of the time and then indexed its data in occasional bursts, which nobody could immediately explain; there is a good .conf presentation about troubleshooting queues; you can ignore pulldown_type, since it just controls whether the sourcetype appears in the GUI list; explicit timestamp formats are far cheaper than having Splunk guess, and line-merging rules matter for the same reason; the logs kept arriving in the folder Splunk reads but never showed up in searches; one instance turned out to be forwarding to itself (please check for that); in another case everything pointed to the universal forwarder being the issue; and where data also went to a third party via syslog through a HF, that path simply stopped working one day. I have been troubleshooting blocked queues and gradually eliminating them, but on one system nothing is indexing at all and it takes too long; Splunk Support hinted at two interventions, the first being to reduce the quantity of syslog traffic.
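Since explicit timestamp formats and line-merging rules came up repeatedly, here is a sketch of index-time props for the single-line syslog data described above — the sourcetype name and TIME_FORMAT are illustrative for a classic syslog header:

# props.conf on the indexer or heavy forwarder
# Sourcetype name and timestamp format are illustrative.
[client_syslog]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20

Pinning the timestamp format and lookahead keeps the typing/parsing work per event small, which is exactly where blocked typing queues tend to come from.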
In the end, performance got better only after removing the custom time format and using DATETIME_CONFIG = CURRENT; before that I was ready to call support. To clear the immediate blockage I had to restart the indexers — the metrics log on one of them showed the aggqueue, indexqueue, and typingqueue all blocked even though netstat showed the TCP receive buffer empty, and we wondered whether all the network ports were simply getting used up. We were on 6.x at the time, and this all currently works.

Two final sanity checks for anyone in the same spot: are you sure the heavy forwarder is not forwarding any data? By default it will at least send its own internal logs, so make sure the HF actually has indexers to send to, and confirm the effective configuration on the box with btool, e.g. ./splunk cmd btool check.
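A short checklist of the btool commands mentioned in these threads, plus the analogous outputs check (run from $SPLUNK_HOME/bin on the forwarder):

# Validate the configuration files for syntax problems
./splunk cmd btool check
# Show the effective inputs and outputs, with the file each setting comes from
./splunk btool inputs list --debug
./splunk btool outputs list --debug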