Running agents on endpoint systems is a problem: an agent saps device performance while it runs, opens another set of security vulnerabilities, requires additional maintenance, and may cost money to acquire and operate.
|Why is my computer slowing down?|
Running multiple agents that scan the same things can cause performance problems as the tools conflict with one another.
System administrators would rather collect information once and share it with multiple tools at the back end than deploy yet another collection agent to the end system.
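The collect-once, share-many idea can be sketched as a simple publish pattern: one collector gathers an endpoint snapshot a single time, and every registered back-end consumer receives the same data. This is a minimal illustration, not a real product API; the names (`Collector`, `register`, `collect_once`, `publish`) and the sample inventory fields are all hypothetical.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of collect-once, share-many.
# All names and sample fields here are illustrative assumptions.

class Collector:
    def __init__(self) -> None:
        self.consumers: List[Callable[[Dict[str, str]], None]] = []

    def register(self, consumer: Callable[[Dict[str, str]], None]) -> None:
        """A back-end tool subscribes to the shared data feed."""
        self.consumers.append(consumer)

    def collect_once(self) -> Dict[str, str]:
        """Gather the endpoint inventory a single time (stubbed data)."""
        return {"hostname": "host-01", "os": "Linux", "av_version": "1.2.3"}

    def publish(self) -> None:
        """Share the one collected snapshot with every consumer."""
        snapshot = self.collect_once()
        for consumer in self.consumers:
            consumer(snapshot)

received = []
c = Collector()
c.register(lambda data: received.append(("inventory-db", data)))
c.register(lambda data: received.append(("security-dash", data)))
c.publish()
```

The endpoint is scanned once, yet both back ends (an inventory database and a security dashboard in this sketch) get the same snapshot, which is exactly the trade-off the administrators above are after.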
However, most tools developed to collect data have a specific purpose. Some purposes are broad, such as endpoint management, client automation, and configuration management. Others are narrower, such as anti-virus or security management. Some vendors roll multiple tools into a single agent, which at least reduces the stack of tools from that vendor.
Smaller vendors may be able to plug into other tools, or even re-purpose the collected information without deploying any new agents.
The result is that some compromise may be needed, either in the quality of the data collected or in the performance of the machines being monitored. In an ideal world, a universal agent would gather whatever data the back-end tools require, run on the required schedule, and bring the results back. If this nirvana sounds a bit far-fetched, you have to wonder whose interest it would serve to create such a universal agent.