Question

Seeking Guidance on Research Tool and Monitor Performance Issues

  • February 13, 2026
  • 3 replies
  • 61 views

killerkain

I’m looking for explanations, recommendations, or general guidance to help us achieve a more consistent and reliable experience when using the Research Tools and Monitors within Google Threat Intelligence.

We regularly search for intelligence related to specific organizations using Digital Threat Monitoring Research Tools. A simplified example of a query we commonly use is below (in practice, we often include additional AND conditions to narrow the scope):

domain:"company.com" OR group_brand:"Company"

However, we’ve encountered several recurring issues and inconsistencies that make it challenging to reliably search for and analyze intelligence. I’ve outlined the primary concerns below.

Observed Issues

  1. Inconsistent query execution

    • The same query may time out multiple times before eventually succeeding.
    • For example, running the same search five times may result in four timeouts followed by one successful execution.
    • In other cases, a query will work without issue, but when rerun later (with no changes), it times out.
  2. Date range limitations

    • We are sometimes prompted to reduce the date range to resolve timeouts.
    • In practice, this may require limiting the range to as little as one week, even when the intelligence we are looking for is several months old (e.g., three months).
  3. Filtering appears to reduce performance

    • In some cases, a base query returns 200+ results successfully.
    • When we apply additional filters (such as collection type or threat type) to reduce the result set, the query then times out.
    • Logically, we would expect filtering to reduce the workload rather than increase it, which makes this behavior difficult to understand.
  4. Additional query constraints causing timeouts

    • Adding additional monitor fields can also cause timeouts.
    • For example, a query that works initially may time out after adding a condition such as:
      AND group_threats:"CL0P"
    • This behavior seems consistent whether the constraint is an inclusion or an exclusion.
  5. Exclusions and query size

    • When searching for malicious domains, excluding known or owned domains (which we know to be non-malicious) can cause the query to time out.
    • This appears to scale with the size of the exclusion list: a small number of exclusions may work, but larger lists often lead to timeouts.
  6. Monitor behavior vs. research queries

    • If a query consistently times out when used in the Research Tool, will it still function reliably if activated as a Monitor?
    • Since monitors only evaluate new data rather than performing a full historical search, we are wondering whether they are subject to the same limitations.
  7. Query syntax and structure

    • I believe I understand the standard use of parentheses for logical operators (AND / OR), but I’m unsure whether syntax structure affects how the Research Tool processes queries internally.
    • For example, all of the following queries are logically equivalent, yet their behavior can differ (one may time out while another does not):
      domain:"company.com" OR group_brand:"Company"
      (domain:"company.com") OR (group_brand:"Company")
      ((domain:"company.com") OR (group_brand:"Company"))
    • This may simply be related to the broader performance issues described above, but if there are best practices or recommended “query etiquette” to help avoid timeouts, that guidance would be greatly appreciated.

Any insight into these behaviors, recommended query patterns, known limitations, or configuration best practices would be extremely helpful. Thank you in advance for any explanations or suggestions you can share.

3 replies

Rob_P
Staff
  • Staff
  • February 17, 2026

Hello @killerkain

Good to hear from you again, and thanks for reaching out on this topic. As a DTM Expert on the Google Cloud Security Customer Success team, let me address each of your issues in turn. Where I don't have a clear answer just yet, I'll reach out to our backend engineering team to see if I can get additional information.

First, a comment about Research Tools as a whole. This functionality is a good starting point for understanding whether the types of data and entities you are looking for are collected within DTM. While it is the primary method for a preliminary search of the content we've collected, you may see some of the inconsistencies you've mentioned there; you would NOT experience them with Monitors once they are configured.

1. I have also experienced inconsistencies in Research Tools, with searches sometimes returning and sometimes failing. The "Test Monitor" function used when building a Monitor essentially takes the query you've built and runs it through Research Tools to see whether any results would previously have matched that Monitor's parameters. This testing can be frustrating: if the query doesn't return in a timely manner, the system times out, and the Test Monitor function doesn't let you adjust the time frame unless at least one result is returned. I will reach out to engineering to get a better answer on why some searches time out multiple times before a result is actually returned.

 

2. Regarding time ranges, DTM is primarily designed for building Monitors that act as a "snapshot in time moving forward," catching matches as new data arrives. The tool (especially Research Tools) is not designed for heavy retrospective searching of findings that are several months to years old. It is good for determining whether a Monitor would catch anything you are looking for in the future.

 

3. Regarding filtering, it is important to understand how the monitors/queries are built and when the filtering occurs. The timeout limit exists to prevent overly complex queries that would never return in a timely fashion, or that would put too much stress on the DTM system and degrade performance for all other users. When you filter, the system may need to gather all matching results first and then remove the exclusions, which increases processing time and workload and can lead to a timeout.
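As a sketch of that pattern (the doc_type field name below is illustrative; check the field reference for the exact name), a base query such as:

      domain:"company.com" OR group_brand:"Company"

may return 200+ results quickly, while the narrowed form:

      (domain:"company.com" OR group_brand:"Company") AND doc_type:"forum_post"

can time out, because the system may first gather every match for the base query and only then apply the filter, doing strictly more work despite the smaller result set.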
 

4. Grouped fields bundle several conditions into a single searchable field so that you don't have to run multiple searches. The performance trade-off is that using a grouped field effectively runs several searches concurrently. Take the example of some of the grouped fields in our documentation:

https://gtidocs.virustotal.com/docs/monitor-matching-methodology#topic-groups

group_network searches across 5 different data types that our ingestion engine extracts from the text/entities we ingest. As a result, it increases the processing power and time required for a manual query, which can push Research Tools past its timeout.
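Conceptually (the member fields below are illustrative, not the exact list, which is in the documentation linked above), a grouped-field query such as:

      group_network:"company.com"

behaves roughly like several concurrent searches:

      domain:"company.com" OR ip_address:"company.com" OR url:"company.com" OR ...

so each grouped field in a query multiplies the work the query engine has to do.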

 

5. When a query with exclusions and additional NOT statements is passed to the backend, each line in a Monitor or Research Tools query is essentially treated as an additional OR / AND NOT clause. A very long list of these degrades query performance, which likely explains the timeouts you're seeing with Research Tools and the Test Monitor function. If a Monitor's query is complex enough that it takes a large performance hit and doesn't return within the set amount of time, the Monitor can also disable itself, in which case you will receive an email stating it was disabled. This protects the overall performance and stability of the DTM system.
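For example, an exclusion-heavy query (the domain values here are placeholders) is effectively expanded into a chain of AND NOT clauses:

      domain:"company.com" AND NOT domain:"owned-1.company.com" AND NOT domain:"owned-2.company.com" AND NOT domain:"owned-3.company.com" ...

Each additional exclusion adds another clause the backend must evaluate against every candidate document, which is why a short list may work while a long one pushes the query past the timeout threshold.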

6. Monitors have longer timeout and performance thresholds than Research Tools and the Test Monitor function. My comment in #5 about Monitors disabling themselves still applies, but that is typically seen only with very complex queries, poorly written queries, or queries with too many parameters and exclusions.


7. You are correct that all three of your examples should run and return data. One suggestion: depending on how you set up your Monitors, I typically start with the "Search Collection Type" parameter or the Threat Type. Search collection types narrow the search to the bucket/index where the data I'm looking for resides. Threat Type labels (see the entry in that page's table) are also useful when I'm not sure where the data exists but I know I'm looking for ingested documents related to, say, Ransomware or Exploits.
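As a sketch of that ordering (the doc_type field name is illustrative; the exact collection-type and threat-type values come from the tables in the documentation), the idea is to put the cheap, index-narrowing condition first and the more expensive grouped-field conditions after it:

      doc_type:"forum_post" AND group_threats:"CL0P"

That is, narrow to the bucket/index first, then layer on brand or grouped-field conditions, rather than starting from a broad grouped-field search.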

This page of our documentation explains in more detail how we ingest, parse, normalize, and flatten the "documents" we ingest, and how Monitor fields are applied to every piece of data within a document. It may help you understand the process a bit better, though I agree that results not returning consistently is frustrating. In one of your previous posts I shared the Lucene guide for better harnessing Lucene queries in Monitors. Lucene queries also work in Research Tools, so you may have better luck building out specific use cases where you know exactly which JSON fields we map to for a given "document" (our generic term for anything we ingest into DTM).

I hope this helps. Let me check in with engineering on some of your concerns and see if I can get additional specific answers on your reported issues.

Thanks for taking the time to reach out, I’ll be back in touch soon.

Respectfully,

- Rob


Rob_P
Staff
  • Staff
  • February 19, 2026

Hello again @killerkain

I spoke to an engineering team member regarding your concerns, specifically around Question #1. 

This issue is a result of the way the system is designed and the very large dataset that must be searched: sometimes a query finds its matches within the allotted time frame before it times out, and sometimes it doesn't. It is a known system limitation, and the team is actively working to resolve it in the next major release of DTM. They also confirmed that Research Tools queries operate differently from saved Monitors, which match against incoming 'documents' as they arrive, so you would not see this issue once you save a Monitor and it begins matching in the future.

Thanks,

- Rob


killerkain
  • Author
  • Bronze 2
  • February 19, 2026

This is great info, Rob, and it will help us use DTM in a way that's more aligned with what it was built for.

 

Thanks for all the information!