This is just a quick page to answer some common questions that aren't covered in the official FAQ. It is mainly a repository for answers to questions I keep seeing but don't always remember the answers to. It is not meant to be read all at once, as it has no coherent structure.
Configuration
> I am cross compiling for a PPC/Linux target.
>
> I have not been doing a "make install" because this has no meaning in a
> cross compile environment (or at least mine). I plan on just moving the
> snmpd executable over to the target and executing it. It took me a while,
> but I finally realized that agent/snmpd is a script and not a binary file.

That is correct. The actual binary is created when you run 'make install'.
You cannot simply build, then copy the binaries to the target system.

> In this script it refers to directories on my build system! So in other
> words, my build system needs to have the exact same directory structure as
> my target system!

No, the directory structures do not have to match. You just have to configure
the package to do what you want. You should configure with the --prefix and
--exec-prefix for your target system. Then, after building, you can install
to a temporary location:

  ./configure --prefix=/not/usr/ --with-install-prefix=/tmp
  make
  make install
  cd /tmp
  tar cvf net-snmp-target.tar not

Then copy the tar file to your target, and on your target:

  cd /
  tar xvf net-snmp-target.tar

If you did not specify the correct options to configure, you can override them
on the command line for make install. There are 2 options.

The prefix and exec_prefix can be specified to override where to install the
package. For example, to put the binaries in /usr/bin, but put everything else
in /not/usr:

  make install prefix=/not/usr exec_prefix=/usr

Note that for the most part, prefix and exec_prefix will be the same. Also note
that this will not change where the binaries expect to find the data. The
binaries will use the options specified to configure.

The second option is INSTALL_PREFIX. This is simply a path that is prefixed to
all install paths. So, to install using the hierarchy from above, but to do so
in /tmp so that a tarball can be created and moved to the target system:

  make install prefix=/not/usr exec_prefix=/usr INSTALL_PREFIX=/tmp
You can get the configure options of an existing Net-SNMP installation in two ways:

  1) command line:  net-snmp-config --configure-options
  2) snmp:          snmpget -v 1 -c public localhost UCD-SNMP-MIB::versionConfigureOptions.0
SNMPv3 introduced the concept of 'contexts', which allows an agent to return
different values for an object based on the context in the incoming packet.
This lets the agent implement the NOTIFICATION-LOG-MIB for sent traps, and
snmptrapd implement the same table for received traps. The context determines
which set of data the agent returns: a context of 'snmptrapd' must be used to
get received traps from the agent.

Special setup is required to use contexts for SNMPv1 and SNMPv2c. Unique
community strings must be set up to map to each context. Here is a sample for
context mapping in snmpd.conf:

  view    all_view  included  .1  80
  com2sec -Cn snmptrapd trap_sec default public-traps
  group   trap_grp  v1   trap_sec
  group   trap_grp  v2c  trap_sec
  access  trap_grp  snmptrapd any noauth exact all_view none none

With this setup, you can walk received traps like so:

  snmpwalk -v 2c -c public-traps localhost nlmLog
Set up your snmpd.conf to include a unique community and context for each
device you will be proxying for. Assuming that the remote agent on DEVICE1 is
configured to accept remote_cmty, it would look something like this:

  view    all_view  included  .1  80
  com2sec -Cn device1_ctx device1_sec default device1_cmty
  group   mrtg_grp  v1  device1_sec
  access  mrtg_grp  device1_ctx any noauth exact all_view none none
  proxy   -Cn device1_ctx -v 1 -c remote_cmty DEVICE1 .1.3

Start up snmpd with some debug to verify the results:

  snmpd -f -L -Dresult,proxy,vacm

You should see something like:

  proxy_config: entering
  proxy_args: final args: 0 = snmpd-proxy
  proxy_args: final args: 1 = -Cn
  proxy_args: final args: 2 = device1_ctx
  proxy_args: final args: 3 = -v
  proxy_args: final args: 4 = 1
  proxy_args: final args: 5 = -c
  proxy_args: final args: 6 = test
  proxy_args: final args: 7 = 192.168.1.9
  proxy_args: final args: 8 = .1.3
  proxy_config: parsing args: 9
  proxy_config: done parsing args
  proxy_init: name = .1.3
  proxy_init: registering at: SNMPv2-SMI::org
  NET-SNMP version 5.2.pre3

Issue a command to the local agent:

  snmpget -v 1 -c device1_cmty localhost sysContact.0

And you should see the agent send the request to the remote host:

  vacm:getView: , found
  proxy: proxy handler starting, mode = 160
  proxy: sending pdu
  results: request results (status = 0):
  results:   SNMPv2-MIB::sysContact.0 = No Such Instance currently exists at this OID
  proxy: got response...
  SNMPv2-MIB::sysContact.0
*** NOTE *** If the tcpwrappers aren't working like you expect, and you are
*** NOTE *** using host names, try using IP addresses instead. It appears that
*** NOTE *** the library (or the way we use it) does not resolve host names.
*** NOTE ***
*** NOTE *** Some versions of Net-SNMP have had an inconsistency and use
*** NOTE *** 'snmp' instead of 'snmpd'. For releases prior to 5.2, you may
*** NOTE *** want to use both in your configuration, just to be sure.

1) If both hosts.allow and hosts.deny are empty, access is allowed.

2) If hosts.deny has the following line, all access to snmpd is denied:

     snmpd: ALL

3) If hosts.deny has the following line, all access to snmpd is denied except
   from localhost:

     snmpd: ALL EXCEPT 127.

   Alternatively, leave "snmpd: ALL" in hosts.deny and add this to hosts.allow:

     snmpd: 127.0.0.1
(Updated 2005-09-01)

To run the same executable multiple times, on different ports, and ensure that
the persistent directories don't conflict, use these steps:

1) Determine your current configuration path:

     snmpd -f -Lo -Dread_config -H 2>&1 | grep "config path" | head -1

   This will probably be something like:

     /usr/etc/snmp:/usr/share/snmp:/usr/lib/snmp:/root/.snmp:/var/net-snmp

2) Set the environment variable SNMPCONFPATH to the string from step 1,
   replacing /var/net-snmp with a unique directory and starting with a unique
   directory. You can also remove any directories that are empty, if you want.
   NOTE: any conf files in the non-unique part of this path will be shared by
   ALL agents.

     export SNMPCONFPATH=/usr/share/snmp/agent1:/usr/share/snmp:/var/net-snmp/agent1

3) Set the persistent directory in the unique snmpd.conf:

     echo "[snmp] persistentDir /var/net-snmp/agent1" > /usr/share/snmp/agent1/snmpd.conf

4) Repeat steps 2 and 3 for each agent, using a unique directory each time.

NOTE: if you are using the same shell to start all the agents and you forget
to change one of the environment variables before starting the next agent,
things will get messy. I would recommend creating a script to start each
agent, and have the script set the environment variables for you. The other
option is to specify the environment variable on the command line when
starting snmpd, instead of exporting it. For example:

  env SNMPCONFPATH=/opt/snmp/agent1:/opt/snmp:/var/net-snmp/agent1 snmpd
Net-SNMP can be configured in many different ways, including environment
variables, command line options and configuration files. The order of
precedence is a bit odd, which can lead to some confusion:

  1) short command-line arguments
  2) configuration files
  3) long command-line arguments (which are config file tokens)

To make things even more confusing, Net-SNMP uses a search path to find
configuration files, so multiple files may be read. If you compile Net-SNMP
from source, the default path will generally be something like this:

  /usr/local/etc/snmp:/usr/local/share/snmp:/usr/local/lib/snmp:$HOME/.snmp:/var/net-snmp

Some vendors use different prefixes. As long as Net-SNMP was compiled with
debug support (which it is, by default), you can find out the exact path being
used, and what options are being set via configuration files, by running a
command with '-Dread_config':

  snmptranslate -Dread_config
The default maximum packet size for Net-SNMP is 1472, which is based on the
ethernet frame size. If you want to build an agent or client capable of
sending or receiving larger packets, you will need to change two header
definitions:

  net-snmp/library/snmp.h:41:       #define SNMP_MAX_LEN
  net-snmp/library/snmp_api.h:340:  #define SNMP_MAX_MSG_SIZE
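As a rough sketch, the edit looks like the following. The 8192 value is only
an illustration, not the shipped default; pick whatever limit you need and
keep the two defines consistent:

  /* net-snmp/library/snmp.h -- illustrative value */
  #define SNMP_MAX_LEN        8192

  /* net-snmp/library/snmp_api.h -- keep in sync with SNMP_MAX_LEN */
  #define SNMP_MAX_MSG_SIZE   8192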
Recent Versions (5.x)
---------------------
If you want to *just* change the sysObjectID numbering (and leave the
notifications using the Net-SNMP enterprise OID), then use

  --with-enterprise-sysoid

Or you could just use the snmpd.conf directive 'sysobjectid' to set this at
run time.

If you want to *just* change the enterprise-specific notification OID (and
leave the sysObjectID using the Net-SNMP values), then use

  --with-enterprise-notification-oid

If you want to change *both* of these, then use --with-enterprise-oid. The
name of this option is a little misleading, since what is actually required is
the enterprise *number* rather than a full OID. For example:

  --with-enterprise-oid=18293

Older Versions (4.2.x)
----------------------
In older versions, you have to manually update the version_id in
agent/agent_trap.c:80:

  oid version_id[] = { EXTENSIBLEMIB, AGENTID, OSTYPE };
From the net-snmp INSTALL file:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Specifying the System Type
==========================

There may be some features `configure' can not figure out automatically, but
needs to determine by the type of host the package will run on. Usually
`configure' can figure that out, but if it prints a message saying it can not
guess the host type, give it the `--host=TYPE' option. TYPE can either be a
short name for the system type, such as `sun4', or a canonical name with
three fields:

  CPU-COMPANY-SYSTEM

See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't need to
know the host type.

If you are building compiler tools for cross-compiling, you can also use the
`--target=TYPE' option to select the type of system they will produce code for
and the `--build=TYPE' option to select the type of system on which you are
compiling the package.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The most important configure options are:

  --with-cc=[cross-compiler]
  --with-ld=[cross-linker]
  --target=[target-environment]
  --with-endianness=[big|little]

Other potentially useful options:

  --with-cflags="..."
  --with-ldflags="..."
  --with-ar=/path/ar
  --enable-mini-agent
  --enable-shared="no"
  --without-pic

Some simple examples of cross-compiling:

  ./configure --target=ppc-linux --with-cc=ppc_405-gcc --with-endianness=big

  ./configure --target=powerpc-snmc-linux-gnu --build=i386-redhat-linux \
      --with-endianness=big

  ./configure --host=mips-hardhat-linux 'CFLAGS=-Os -mips2 -mtune=r4600' \
      --with-endianness=big CC=mips-linux-gcc CPP=mips-linux-cpp \
      LDFLAGS= host_alias=mips-hardhat-linux

  ./configure --disable-snmpv2c --enable-mini-agent \
      --with-mib-modules="mibII ip-mib if-mib tcp-mib udp-mib ucd_snmp target \
      agent_mibs notification-log-mib snmpv3mibs notification" \
      --disable-applications --disable-des --disable-privacy --disable-md5 \
      --without-openssl --with-out-transports="Callback Unix TCP" \
      --disable-manuals --disable-shared --disable-mib-loading \
      --with-cflags="-s -static -O2 -Dlinux" \
      --with-cc=/opt/devkit/ppc/82xx/bin/ppc_82xx-gcc \
      --with-ar=/opt/devkit/ppc/82xx/bin/ppc_82xx-ar --with-endianness=big \
      --with-defaults --build=i386-linux --host=ppc-hardhat-linux

A more complex example involves setting environment variables for all the
flags for the tools needed for the cross compile:

  export TOOLPATH=/opt/hardhat/devkit/ppc/405
  export PATH=$TOOLPATH/bin:$PATH
  export CFLAGS=' -g -fPIC -msoft-float -D_SOFT_FLOAT -Dlinux -mcpu=403'
  export CPPFLAGS='-I$TOOLPATH/include -I$TOOLPATH/target/usr/include'
  export ASFLAGS='-g -gstabs'
  export LDFLAGS='-Wl,-soname,-Bdynamic -lc'
  export LIB='ar rcu'
  ./configure --build=i686-pc-linux-gnu --host=powerpc \
      --target=powerpc-hardhat-linux-gnu --with-endianness=big
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The mibgroup section is the most OS specific part of the agent. It's likely
that you will have to remove some of them. You just need to figure out which
ones to remove. For example, agent/mibgroup/mibII/tcpTable.c is a common
module that doesn't work on new platforms. So if it isn't working, omit it
during configure, like so:

  ./configure --with-out-mib-modules=mibII/tcpTable

Alternatively, you could work the other way.
Start with a minimal configuration:

  ./configure --enable-mini-agent

and then start adding MIB modules one-by-one:

  ./configure --enable-mini-agent --with-mib-modules=mibII/system_mib,mibII/sysORTable
  undefined symbol: usmAESPrivProtocol at /usr/lib/perl5/5.8.3/ppc-linux-thread-multi/DynaLoader.pm

This error indicates that the Net-SNMP libraries are not in sync with the perl
libraries. This often happens when you install from source, and a packaged
version of Net-SNMP (like an RPM) is already installed. Remove the packaged
version and re-install the source version.
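One way to confirm the mismatch (a rough sketch; the perl library path below
is just an example, adjust it for your system) is to check which libnetsnmp
the perl bindings and the agent actually link against:

  find /usr/lib/perl5 -name "SNMP.so" | xargs ldd | grep netsnmp
  ldd `which snmpd` | grep netsnmp

If the two commands report different libnetsnmp versions or locations, you
have the library mismatch described above.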
There are two ways to create SNMPv3 users:

1) net-snmp-config --create-snmpv3-user

   See the net-snmp-config help for details on parameters.
   NOTE: You must stop the agent (snmpd) *before* running this command.

2) snmpusm

   See the snmpusm man page for details. This command *requires* a running
   agent.

Also, see the V3 tutorial page.
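For reference, option 1 essentially boils down to writing a 'createUser' line
into the persistent snmpd.conf and granting the user access. A minimal
hand-written equivalent (the user name and passphrases here are only
placeholders) looks something like this:

  # in the persistent snmpd.conf (e.g. /var/net-snmp/snmpd.conf), agent stopped:
  createUser myv3user MD5 "my_auth_passphrase" DES

  # in the regular snmpd.conf, to grant that user read-write access:
  rwuser myv3user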
It is possible to bind outgoing SNMP requests to a specific source address.
Use the 'snmp.conf' directive "clientaddr" to do this.
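For example (the address below is just a placeholder for one of your local
interface addresses), in snmp.conf:

  clientaddr 192.0.2.5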
RMON/DISMAN
-----------
Please note that the RMON code in this package isn't really supported, not
least because it works with random data (which isn't a lot of use!). Also, the
RMON Event group has essentially been superseded by the DisMan Event group
(which is significantly more flexible, and *is* supported).

ifTable
-------
You should not use ifSpecific because it really doesn't provide you with any
useful info. This is a long-known problem, and is described in section 3.1.16
of RFC 2863. So, the value of { 0 0 } is appropriate, and NOT A BUG. The
Net-SNMP code MUST NOT be changed to return another value!
Some of the more common ones are: Agent: agent, agentx, dlmod, handler, helper, snmpd, trap, table MIBs: init_mib, mib_init, parse-file, parse-mibs Apps: read_config Add them to the command line like so: snmpd -Dagent,table Other usefule command line options for debugging: --logTimeStamp=true log time stamps -f don't fork into background -Le log to stderr -Lf /tmp/debug.log log to file /tmp/debug.log or add them your snmp.conf like so: doDebugging 1 debugTokens agentx/config # optionally turn on time stamps logTimestamp true If you put them in a different conf file, like snmpd.conf or myapp.conf, prefix each with '[snmp]', like so: [snmp] doDebugging 1 This list was generated by running the following command in the main CVS branch: find . -name \"*.c\" -print | xargs grep DEBUGMSGT | grep \" | cut -f 2 -d\" | sort -u add agent_handler agent_registry agent_set agentx agentx_build agentx_build_varbind agentx/config agentx/config/retries agentx/config/timeout agentx/master agentx/subagent agentx/subgaent asn_realloc auto_nlist build_oid_noalloc build_oid_segment callback callback_clear check_getnext_results clear_nsap_list compare:index comparex container container_iterator container_iterator:results container:null: container:null:find container:null:find_next container:null:for_each container:null:free container:null:get_null container:null:get_null_factory container:null:get_null_noalloc container:null:insert container:null:remove container:null:size container_registry daemonize deinit_usm_post_config delayed_instance dlmod dump_etimelist encode_keychange example example_data_set example_notification example_scalar_int fixup_mib_directory generate_Ku generate_kul get_mib_directory handler:calling handler:inject handler::register handler_registry handler:returned header_complex header_complex_add_data header_complex_dump header_complex_extract_entry header_complex_generate_oid header_complex_generate_varoid header_complex_parse_oid header_complex_test helper:baby_steps helper:cache_handler helper:debug helper:instance helper:mfd helper:null helper:read_only helper:row_merge helper:scalar helper:scalar_group helper:serialize helper:stash_cache helper:table helper:watcher helper:watcher:spinlock helper:watcher:timestamp host/hr_device host/hr_disk host/hr_filesys host/hr_inst host/hr_network host/hr_partition host/hr_print host/hr_proc host/hr_storage host/hr_swinst host/hr_swrun host/hr_swrun::GetNextHR_SWRun host/hr_system hr_proc initialize_table_ipCidrRouteTable initialize_table_mteEventNotificationTable initialize_table_mteEventTable initialize_table_netSnmpHostsTable initialize_table_nlmLogTable initialize_table_nlmLogVariableTable initialize_table_nsModuleTable initialize_table_nsTransactionTable init_mib init_usm injectHandler kernel_sunos5 ksm lcd_get_enginetime lcd_get_enginetime_ex lcd_set_enginetime log_notification md5 mfd mibII/at mibII/icmp mibII/interfaces mibII/ip mibII/ipv6 mibII/mta_sendmail.c:mta_sendmail_parse_config mibII/mta_sendmail.c:open_sendmailst mibII/mta_sendmail.c:read_sendmailcf mibII/snmp_mib mibII/sysORTable mibII/tcpScalar mibII/tcpTable mibII/udpScalar mibII/udpTable mibII/vacm_vars mibII/var_route mib_init mte_disco mteEventTable:send_events mteObjectsTable mteTriggerBooleanTable mteTriggerDeltaTable mteTriggerExistenceTable mteTriggertable mteTriggerTable mteTriggerTest mteTriggerTest:send_mte_trap mteTriggerThresholdTable netsnmp_aal5pvc netsnmp_agent_check_packet netsnmp_deregister_agent_nsap netsnmp_ds_handle_config netsnmp_ds_set_boolean 
netsnmp_ds_set_int netsnmp_ds_set_string netsnmp_ds_set_void netsnmp_ds_toggle_boolean netsnmp_instance_counter32_handler netsnmp_instance_int_handler netsnmp_instance_long_handler netsnmp_instance_ulong_handler netsnmp_ipx netsnmp_register_agent_nsap netsnmp_register_mib_table_row netsnmp_sockaddr_in netsnmp_sockaddr_in6 netsnmp_sockaddr_ipx netsnmp_table_data_set netsnmp_tcp netsnmp_tcp6 netsnmp_udp netsnmp_udp6 netsnmp_udp6_getSecName netsnmp_udp6_parse_security netsnmp_udp_getSecName netsnmp_udp_parse_security netsnmp_unix netsnmp_unix_getSecName netsnmp_unix_parse_security netsnmp_unix_transport netstat:if notification_log nsCacheScalars nsDebugScalars object_monitor old_api output override parse-file parse-mibs parse_oid parse_oid_indexes perl proc proxy proxy_args proxy_config proxy_init read_config read_config_copy_word read_config_files read_config:forward read_config:initmib read_config_read_data read_config_read_memory read_config_read_objid read_config_read_octet_string read_config_store_data_prefix read_config:traphandle register_exceptfd register_index register_mib register_readfd register_signal register_writefd report results scalar_int scapi scopedPDU_parse send_notifications sess_async_send _sess_open sess_process_packet sess_read sess_resend sess_select setting auth type: \ signal smux smux_conf smux_init smux/snmp_bgp smux/snmp_ospf smux/snmp_rip2 snmp_agent snmp_alarm snmp_api snmp_build snmp_clean_persistent snmp_config snmpd snmpd/main snmpd_ports snmpd_register_app_config_handler snmpd/select snmpEngine snmpNotifyFilterProfileTable snmpNotifyFilterTable snmpNotifyTable snmp_parse snmp_parse_args snmp_parse_oid snmp_pdu_realloc_rbuild snmp_save_persistent snmp_send snmp_sess_add snmp_sess_close snmp_sess_open snmpSetSerialNo snmp_store snmpTargetAddrEntry snmpTargetParamsEntry snmptrapd snmpv3 snmpv3_parse sprint_by_type stash_cache subtree table_array table_array:get table_array:group table_data_add_data table_iterator table_set_add_row take_snapshot target_counters target_sessions tdomain testhandler testhandler_table transport_callback trap trapsess tunnel ucdDemoPublic ucd-snmp/disk ucd-snmp/disk: ucd-snmp/memory ucd-snmp/pass ucd-snmp/pass_persist ucd-snmp/proc ucd-snmp/versioninfo ucd-snmp/vmstat_aix4.c:update_stats ucd-snmp/vmstat_dynix.c:update_stats ucd-snmp/vmstat_hpux.c:update_stats ucd-snmp/vmstat_solaris2.c:update_stats unlink_tree unload-mib unregister_exceptfd unregister_readfd unregister_signal unregister_writefd usm usmUser util_funcs vacm:checkSubtree vacm:getView versioninfo vmstat yyyinjectHandler
MRTG is a tool for monitoring devices and creating graphs. More information is
available at the MRTG web site. A simple tutorial for configuring server
monitoring (including using Net-SNMP and MRTG) can be found here.
The definitive guide is "Understanding SNMP MIBs" by David T. Perkins, Evan McGinnis (ISBN: 0134377087).
Cisco has a tool for locating MIBs here.
Note: Many CISCO MIBs contain syntax errors that cause Net-SNMP errors. These
errors have to be fixed by hand. There used to be a patch to do it, but I
can't find it these days. Let me know if you have a reference.
The trap examples on the tutorial pages require you to load the tutorial MIBs.
Here is a simple example of using snmptrap with the IF-MIB, which should be
loaded by default. This simulates a linkDown trap for ifIndex 1:

  snmptrap -v 2c -c public localhost '' IF-MIB::linkDown \
      IF-MIB::ifIndex.1 i 1 IF-MIB::ifAdminStatus.1 i down ifOperStatus.1 i up
The new API for agent module development that was introduced in release 5.0 is
a little different than the pre-5.0 API, now referred to as the 'old API'. The
basic idea is that when an OID is registered, a function pointer is provided.
This function is called a handler, and it is called whenever the agent needs
to query an OID beneath the OID that the handler was registered with. A
handler's registration has a 'next' pointer, which is a function pointer that
(optionally) will be called after the original, or root, handler is called.
Thus an OID is associated with a chain of handlers.

Since a good bit of request processing is independent of the actual objects
and data for the request, the agent contains several handlers designed to do a
specific job, so that handlers further down the chain have less to worry
about. These handlers are referred to as 'helpers'. For example, the handler
chain for the tcp group of scalars is:

  cache_handler -> bulk_to_next -> serialize -> scalar_group -> scalar -> instance -> tcp

More details on various helpers can be found in the Net-SNMP API documentation
on handlers.
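To give a feel for how much work the helpers absorb, here is a minimal sketch
(the OID and names are made up for illustration) that exposes a single integer
as a read-only scalar. The instance helper and its companions in the chain
take care of GETNEXT, serialization and bounds checking, so no handler code is
needed at all:

  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>
  #include <net-snmp/agent/net-snmp-agent-includes.h>

  static int my_counter;   /* the value being exposed (hypothetical) */
  /* illustrative OID under the Net-SNMP enterprise; not a registered assignment */
  static oid my_counter_oid[] = { 1, 3, 6, 1, 4, 1, 8072, 9999, 1, 0 };

  void
  init_my_counter(void)
  {
      /* the instance helper builds the whole handler chain for us */
      netsnmp_register_read_only_int_instance("myCounter",
                                              my_counter_oid,
                                              OID_LENGTH(my_counter_oid),
                                              &my_counter, NULL);
  }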
Q: What is the benefit of switching to the new Net-SNMP style coding for
   modules, compared to the old ucd-snmp style?

A: The benefit is that (we think) it is a little easier to understand, and you
   can take advantage of some 'helpers' to do some grunt work for you.

Q: Any performance differences?

A: I don't think any benchmarks have been done. There are multiple new style
   helpers, too. Some of the 'new' helpers have huge performance gains in
   certain circumstances (really large tables; eg the route table), but that's
   mostly due to data storage/access changes. And of course the tradeoff is
   run-time memory use.

Q: If an SNMP query PDU contains 3 (eg) varbinds, all from the same table, in
   the ucd code we know the agent sends 3 requests to the individual set funcs
   of the varbinds. If we go to the new style would the agent "collect" these
   and make a single call?

A: Yes, the new style will collect them into one request to the handler. There
   is even a 'row merge' helper which will break/merge it into one call per
   row, so all the requests for a certain row will come in together.

Q: Are get and set requests handled differently?

A: Get and set are handled in the same way now. The same handler is called for
   both, and the handler can switch on the request mode. Though there is a
   helper that will split each mode into its own function call. Even then, all
   the parameters are the same.
For comparison, the old-style implementation of the interfaces group can be
found in agent/mibgroup/mibII/interfaces.c.
The intent of the new methodology is to separate the MIB module implementation from the data access code. A data structure is defined which contains the data required by the MIB module, and a simple interface is defined to manipulate that data structure.
Another decision that was made for this new methodology was to try and separate the code for the various architectures into separate files. This reduces the number of ifdefs needed in each file, and hopefully will make the code easier to maintain.
The data access code for the new if-mib implementation lives in
agent/mibgroup/if-mib/data_access. Currently, there is a file for common code,
linux code, and ioctl code. In the future, more architectures will be ported
(I hope), and there will likely be a file for Solaris, HP-UX, BSD, etc. These
may use some of the routines in the ioctl file, or maybe some other new file
for some mechanism that is shared on multiple architectures (kstats, maybe?).
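As a purely illustrative sketch (these names are hypothetical, not the actual
Net-SNMP if-mib headers), the shape of such a data-access interface is
roughly a plain data structure holding what the MIB module needs, plus
load/free routines whose implementations vary per architecture:

  /* hypothetical data-access interface sketch */
  typedef struct my_interface_entry_s {
      int           index;            /* ifIndex            */
      char          name[64];         /* ifDescr / ifName   */
      unsigned long in_octets;        /* ifInOctets         */
      unsigned long out_octets;       /* ifOutOctets        */
  } my_interface_entry;

  /* implemented once per architecture (linux, solaris, ioctl, ...) */
  int  my_interface_load(my_interface_entry **entries, size_t *count);
  void my_interface_free(my_interface_entry *entries, size_t count);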
Selecting modules to include in the agent goes something like this:
When the ifTable MIB module needs interface data, the chain of events is something like this:
There are 3 ways to add a module coded in C to the Net-SNMP agent.

EMBEDDED IN snmpd
-----------------------------------------------------------------------------
Integrating into the agent directly is the simplest method, and will work on
every platform where the agent is supported.

It has the following advantages:
 - slightly faster than a sub-agent, since there is no need to send packets
   to another process
 - easier to debug, since you can trace through the agent code and the module
   code

It has the following disadvantages:
 - bugs in your module could cause instability of the agent as a whole
 - updates/changes to your module require shipping a whole new executable

DYNAMICALLY LOADED MODULE
-----------------------------------------------------------------------------
A dynamically loadable module is built separately from the Net-SNMP agent, and
loaded at run-time.

It has the following advantages:
 - the module can be updated independently of the Net-SNMP agent
 - the master agent can be upgraded to a newer version without rebuilding the
   module

It has the following disadvantages:
 - not all platforms support dynamically loaded modules

SUBAGENT
-----------------------------------------------------------------------------
A sub-agent connects to a master agent and registers to handle requests for
certain objects and/or tables. Net-SNMP supports two sub-agent protocols:
AgentX and SMUX. SMUX came first, but never made it onto the standards track.
AgentX was the successor that was adopted by the IETF. We strongly recommend
the use of AgentX for sub-agents.

A sub-agent has the following advantages:
 - can run against master agents from other vendors
 - can be updated independently of the Net-SNMP agent
 - the master agent can be upgraded to a newer version without rebuilding the
   sub-agent
 - instability of the sub-agent does not affect the master agent

It has the following disadvantages:
 - slightly longer response times because of the overhead of the protocol and
   inter-process communication with the master agent
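For orientation, here is roughly how each of the three approaches is wired up
(the module name and paths below are placeholders):

  # Embedded: compile the module in when building the agent
  ./configure --with-mib-modules=myModule

  # Dynamically loaded: build myModule.so separately, then in snmpd.conf:
  dlmod myModule /usr/local/lib/snmp/myModule.so

  # AgentX sub-agent: enable the master in snmpd.conf...
  master agentx
  # ...and run your sub-agent binary, which connects to the master's
  # AgentX socket (/var/agentx/master by default on many builds).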
Q: Is there an API call to get the target and source IP addresses from a
   packet?

A: No, and there is a reason... (technically, actually, it can be done but it
   is not recommended). You want to support things independently of the
   transport that a packet was received over, if at all possible. First of
   all, relying on an unsupported method to determine the IP information will
   mean that your module must be embedded in the agent, and can't run as an
   AgentX sub-agent. Second, it won't work for any non-IP based transport.

Q: Is there an alternative to have different behavior for an object based on
   the incoming packet?

A: Yes.

   1) SNMPv3 actually has "contexts" which let you do things the way you want.
      You register each MIB implementation under a different context, and you
      let the context distinguish which thing you're talking to. If you're
      using SNMPv1, Net-SNMP 5.2 and later will have the support to map SNMPv1
      communities into SNMPv3 contexts so you can have a different community
      name per serial device. Or if you're using SNMPv3 (which we recommend)
      you simply have a different context name per device. See the 'Multiple
      device' question for more details.

   2) Put the device "name" or other identifier into every table index set
      that you implement. That way when you implement a table, it'll show you
      the information about all your devices, not just one (each row in the
      table would be assigned to a particular device).
If you want a single agent to return data for the same MIB for multiple
devices or programs, there are several ways to do it.

Proxy
-----
If the other devices have their own SNMP agents, you can simply proxy requests
directly to the device. See the 'proxy' section of the snmpd.conf man page for
more information (if you need to support SNMPv1/SNMPv2c proxy requests, see
the 'SNMPv1 proxy for multiple devices' section of this FAQ).

Tables
------
If you want a user to be able to 'walk' the agent and see all the data for all
the devices, then you should define your MIB to use tables for all data. In
each table, every device will have its own row. The row index can be arbitrary
(1-N) or something more useful, like a text identifier. One 'gotcha' to be
careful of when using (1-N) style indexing is that the index for a particular
device should not change while the agent is running. That is, if you have
devices (1,2,3) and device 2 goes away for some reason, you must make sure
that the indexes for the remaining rows remain (1,3), and not (1,2). Otherwise
a user requesting information for row 2 will suddenly see different data and
be very confused.

Contexts
--------
SNMPv3 introduced the concept of 'contexts', which allows an agent to return
different values for an object based on the context in the incoming packet. By
default, Net-SNMP will register all objects with the default (NULL) context.
However, you can register a table or scalar object with as many different
contexts as you would like. Your handler can use the context to determine
which device should be used to answer the request (a minimal registration
sketch follows the configuration sample below).

Note that a user wanting to retrieve data for a particular device must
explicitly specify the context in their request. There is no way to 'walk'
through contexts. If there are 3 devices, three separate walks must be
performed.

Special setup is required to use contexts for SNMPv1 and SNMPv2c. Unique
community strings must be set up to map to each context. Here is a sample for
context mapping in snmpd.conf:

  view    all_view  included  .1  80
  com2sec -Cn device1_ctx device1_sec default device1_cmty
  com2sec -Cn device2_ctx device2_sec default device2_cmty
  group   my_grp  v1  device1_sec
  group   my_grp  v1  device2_sec
  access  my_grp  device1_ctx any noauth exact all_view none none
  access  my_grp  device2_ctx any noauth exact all_view none none
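Here is a minimal sketch of registering the same subtree under a per-device
context. The OID, context name and handler are placeholders; this is only
meant to show where contextName fits in, not a complete module:

  #include <string.h>
  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>
  #include <net-snmp/agent/net-snmp-agent-includes.h>

  /* hypothetical handler and OID, for illustration only */
  extern int my_device_handler(netsnmp_mib_handler *,
                               netsnmp_handler_registration *,
                               netsnmp_agent_request_info *,
                               netsnmp_request_info *);
  static oid my_oid[] = { 1, 3, 6, 1, 4, 1, 8072, 9999, 2 };

  void
  register_for_device(const char *context)   /* e.g. "device1_ctx" */
  {
      netsnmp_handler_registration *reginfo =
          netsnmp_handler_registration_create("myDevice",
              netsnmp_create_handler("myDevice", my_device_handler),
              my_oid, OID_LENGTH(my_oid), HANDLER_CAN_RONLY);

      /* register this subtree under the given SNMPv3 context */
      reginfo->contextName = strdup(context);
      netsnmp_register_handler(reginfo);
  }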
There are two basic methods of caching in the agent. One is the cache helper,
the other is the stash cache. There is no reason the two could not be combined.

STASH CACHE
-----------------------------------------------------------------------------
The stash cache basically does a walk of your handler, and stores the results.
Future requests, until the time expires, won't even call your handler. Used by
itself, the first hit of your table could be very expensive (as it will walk
your whole table). For example, with an iterator helper, the iteration would
still be done for each row.

CACHE HELPER
-----------------------------------------------------------------------------
The cache helper is much simpler. It simply calls a load_cache routine, which
you implement, to set up your cache. Then your handler will be called (in your
case, the iterator code), which can use the cache you set up to speed the
iterator lookups. Your handler will still be called, as usual. For examples,
see the following mib modules (or grep through the code for 'netsnmp_cache'):

  agent/mibgroup/agent/nsCache.c
  agent/mibgroup/mibII/tcpTable.c
  agent/mibgroup/mibII/tcp.c

DIRECT ACCESS USING THE CACHE HELPER (5.2 and later)
-----------------------------------------------------------------------------
If you don't mind being on the bleeding edge, in 5.2 the cache helper can
provide a 'hint' that you can use in your load_cache routine to only load the
data you need into the cache. Thus, when your handler is called, it will only
have to deal with the required rows, which should be much faster. (This mode
would not be very beneficial when used in conjunction with the stash cache,
however.) The other caveat is that you have to be able to deal with GETNEXT,
which most helpers don't usually have to deal with. This is even trickier if
your table is sparse.

If you don't actually want to move to 5.2, I believe the cache helper file
could be transplanted back to a 5.1.x base without any problem.
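For a feel of the cache helper, here is a minimal sketch of injecting one into
an existing registration. The timeout, OID and load/free routines are
placeholders; the real modules listed above are the authoritative examples:

  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>
  #include <net-snmp/agent/net-snmp-agent-includes.h>

  static oid my_table_oid[] = { 1, 3, 6, 1, 4, 1, 8072, 9999, 3 };

  /* hypothetical hooks: load fills whatever structure your handler reads */
  static int  my_cache_load(netsnmp_cache *cache, void *magic) { return 0; }
  static void my_cache_free(netsnmp_cache *cache, void *magic) { }

  void
  attach_cache(netsnmp_handler_registration *reginfo)
  {
      netsnmp_cache *cache =
          netsnmp_cache_create(30 /* seconds */, my_cache_load, my_cache_free,
                               my_table_oid, OID_LENGTH(my_table_oid));

      /* put the cache helper ahead of the existing handler chain */
      netsnmp_inject_handler(reginfo, netsnmp_cache_handler_get(cache));
  }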
From RFC 3416:

  noError(0),
  tooBig(1),
  noSuchName(2),      -- for proxy compatibility
  badValue(3),        -- for proxy compatibility
  readOnly(4),        -- for proxy compatibility
  genErr(5),
  noAccess(6),
  wrongType(7),
  wrongLength(8),
  wrongEncoding(9),
  wrongValue(10),
  noCreation(11),
  inconsistentValue(12),
  resourceUnavailable(13),
  commitFailed(14),
  undoFailed(15),
  authorizationError(16),
  notWritable(17),
  inconsistentName(18)
SNMPv1 used BIT STRINGs (RFC 1212, 5.1.1):

  (3)  An object with BIT STRING syntax containing no more than 32 bits
       becomes an INTEGER defined as a sum; otherwise if more than 32 bits are
       present, the object becomes an OCTET STRING, with the bits numbered
       from left-to-right, in which the least significant bits of the last
       octet may be "reserved for future use".

SNMPv2 and beyond uses BITS (RFC 3417, 8.1):

  (3)  When encoding an object whose syntax is described using the BITS
       construct, the value is encoded as an OCTET STRING, in which all the
       named bits in (the definition of) the bitstring, commencing with the
       first bit and proceeding to the last bit, are placed in bits 8 (high
       order bit) to 1 (low order bit) of the first octet, followed by bits 8
       to 1 of each subsequent octet in turn, followed by as many bits as are
       needed of the final subsequent octet, commencing with bit 8. Remaining
       bits, if any, of the final octet are set to zero on generation and
       ignored on receipt.

In other words, SNMP BITS are reversed compared to the usual integer bit
numbering: a value with only bit 0 set (0x01 as an integer) is encoded as the
single octet '10000000' (0x80), and a value with bits 0-8 set (0x1ff) is
represented in 2 bytes, '11111111' and '10000000'.
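A minimal sketch of the mapping described above (nothing Net-SNMP specific;
just the RFC 3417 bit placement):

  /* Set named bit 'bit' in an SNMP BITS octet string:
   * bit 0 goes to the high-order bit (0x80) of the first octet,
   * bit 8 to the high-order bit of the second octet, and so on. */
  void
  set_snmp_bit(unsigned char *octets, unsigned int bit)
  {
      octets[bit / 8] |= (unsigned char)(0x80 >> (bit % 8));
  }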
Add the following line to snmpd.conf:

  perl do "/path/to/perl_module.pl"

To test:

1) Try starting up snmpd like so: 'snmpd -Dperl'. Check your log files for:

     perl: initializing perl (/tmp/snmp_perl.pl)
     starting perl_module.pl
     perl_module.pl loaded ok
     registering at netSnmp.999

   If you get:

     perl: initializing perl (/usr/local/share/snmp/snmp_perl.pl)
     Can't open perl script "/usr/local/share/snmp/snmp_perl.pl": No such file or directory
     embedded perl support failed to initalize

   then you need to locate snmp_perl.pl and put it in the correct path, OR put
   the path in your snmpd.conf:

     perlInitFile /tmp/snmp_perl.pl

2) Once it's loaded ok, try a walk:

     snmpwalk -v2c -c public localhost netSnmp.999

   You should see something like this in your logs:

     refs: NetSNMP::agent::netsnmp_mib_handler, NetSNMP::agent::reginfo, NetSNMP::agent::netsnmp_agent_request_info, NetSNMP::agent::netsnmp_request_infoPtr
     processing a request of type 161
     processing request of nsTransactionEntry.3
     .1.3.6.1.4.1.8072.999.1.2.1 -> hello world
     finished processing

3) If not, maybe check /path/to/perl_module.pl and make sure it is executable.

Hope that helps.
For auto-dependencies, add the following to your Makefile: # # Build rules # %.d : %.c @echo "Generating makefile $@ ..." @set -e; $(CC) -M $(COPTS) $(CFLAGS) $(CPPFLAGS) $< \ | sed 's/\($*\)\.o[ :]*/\1.o $@ : /g' > $@; \ [ -s $@ ] || $(RM) $(RMFLAGS) $@ include $(SOURCES:.c=.d)
If you are not using the current release and run into what you think might be
a bug, it is very helpful to us to know if the problem exists in a current
version. Even if you are unable to upgrade to a new release for some reason,
you can test a newer version without installing it. To do so, follow these
simple steps:

1) Get the tarball, optionally verify the gpg signature/md5 sum, and unpack
   the tarball.

2) Configure the package to build using static libraries, and any other flags
   you need. Try running 'net-snmp-config --configure-options' to see how the
   currently installed version was configured. I strongly recommend that you
   *do not* build shared libraries, to make sure that there is no confusion
   with any installed versions. Specify --disable-shared and --enable-static
   to configure. eg:

     ./configure --with-defaults --disable-shared --enable-static \
         --with-mib-modules=host

3) Run 'make' to build the package.

4) Run the agent for testing. If you are not root, then you may need to use a
   different port for testing. This example runs the agent in the foreground,
   logging to the terminal window, ignoring any error associated with
   accessing data that requires root access, and on the non-privileged port
   1161:

     agent/snmpd -r -f -Le udp:1161

   and

     snmpwalk -v 2c -c public localhost:1161 system
For net-snmp executables that statically link in the net-snmp libraries
(system libraries will still use shared libraries), use

  configure --enable-static --disable-shared

For totally static net-snmp executables, try

  configure --with-ldflags=-Bstatic

To compile your application with static libraries (eg for easier debugging),
and to link to a non-installed build directory, I use this in my Makefile:

  NETSNMPDIR=/usr/local/build/snmp/full-clean-cvs-V5-1-patches
  NETSNMPCONFIG=$(NETSNMPDIR)/net-snmp-config

  NETSNMPBASECFLAGS := $(shell $(NETSNMPCONFIG) --base-cflags)
  NETSNMPINCLUDES   := $(shell $(NETSNMPCONFIG) --build-includes $(NETSNMPDIR))
  # base flags after build/src include, in case it has /usr/local/include
  NETSNMPCFLAGS=$(NETSNMPINCLUDES) $(NETSNMPBASECFLAGS)

  NETSNMPBASELIBS := $(shell $(NETSNMPCONFIG) --base-agent-libs)
  NETSNMPEXTLIBS  := $(shell $(NETSNMPCONFIG) --external-agent-libs)
  NETSNMPLIBDIRS  := $(shell $(NETSNMPCONFIG) --build-lib-dirs $(NETSNMPDIR))
  NETSNMPLIBDEPS  := $(shell $(NETSNMPCONFIG) --build-lib-deps $(NETSNMPDIR))
  LIB_DEPS=$(NETSNMPLIBDEPS)
  LIBS=$(NETSNMPLIBDIRS) -Wl,-Bstatic $(NETSNMPBASELIBS) -Wl,-Bdynamic $(NETSNMPEXTLIBS)

  STRICT_FLAGS = -Wall -Wstrict-prototypes
  CFLAGS=-I. $(NETSNMPCFLAGS) $(STRICT_FLAGS)

  SRCS = myfile.c
  OBJS = myfile.o
  TARGETS=myfile

  all: $(TARGETS)

  $(TARGETS): $(LIB_DEPS)

  myfile: $(OBJS) Makefile
	$(CC) -o myfile $(OBJS) $(LIBS)

  clean:
	rm -f $(OBJS) $(TARGETS)

If you want to use a non-installed version of net-snmp and want to use shared
libraries, you could also play around with the environment variables
LD_RUN_PATH and LD_LIBRARY_PATH (check if your OS loader supports them). Set
them to include the directories with the shared libraries and use ldd to see
where libraries are getting picked up from.
> can you please tell me the meaning of the third argument. obviously the
> first parameter to register_readfd is a file descriptor and second is the
> callback function.

Correct. The third parameter is arbitrary data that you can specify when you
register the file descriptor, and will then be passed to the callback function
when it's invoked. So if you were listening on two separate sockets, for very
similar types of data, then you could use the same callback for both and use
this third parameter to distinguish between them:

  void who_broke_the_rattle( int fd, void *data )
  {
      char *he_did = (char *)data;
      printf("%s broke the rattle!\n", he_did );
  }

  fd1 = open( "/proc/tweedledum", O_RDONLY );
  fd2 = open( "/proc/tweedledee", O_RDONLY );
  register_readfd( fd1, who_broke_the_rattle, "tweedledee" );
  register_readfd( fd2, who_broke_the_rattle, "tweedledum" );
If the two modules are compiled into the same agent/sub-agent, then just use a
normal C global variable. There is no need to involve Net-SNMP in the process.

If they are in different sub-agents, you are better off using traditional IPC
mechanisms; there is no Net-SNMP API for sharing data between sub-agents.

That said, *IF* you only need the data during processing of GET requests, or
when you aren't processing a request at all, you can send an SNMP request to
the master, which will query the other sub-agent. (If you are processing a GET
request and are compiled into the master, you will have to delegate the
current request.) It's a little inefficient, compared to direct communication
with the other sub-agent, and it *will not* work during SET processing.
To register a handler for a branch, instead of an instance or a table:

  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>
  #include <net-snmp/agent/net-snmp-agent-includes.h>

  int _dummy_handler(netsnmp_mib_handler *handler,
                     netsnmp_handler_registration *reginfo,
                     netsnmp_agent_request_info *agtreq_info,
                     netsnmp_request_info *requests);

  void init_branch()
  {
      static oid dummy_oid[] = { 1, 3, 99 };
      static int dummy_oid_size = sizeof(dummy_oid)/sizeof(oid);
      netsnmp_mib_handler *handler;
      netsnmp_handler_registration *reginfo;

      handler = netsnmp_create_handler("dummy", _dummy_handler);
      reginfo = netsnmp_handler_registration_create("dummy", handler,
                                                    dummy_oid, dummy_oid_size,
                                                    HANDLER_CAN_RONLY);
      netsnmp_register_handler(reginfo);
  }

  int _dummy_handler(netsnmp_mib_handler *handler,
                     netsnmp_handler_registration *reginfo,
                     netsnmp_agent_request_info *agtreq_info,
                     netsnmp_request_info *requests)
  {
      /*
       * look at agtreq_info->mode to figure out what to do.
       * look at the request varbinds to figure out which OID to do it to.
       */
      while(requests) {
          DEBUGMSGTL(("dummy", "Got request oid:"));
          DEBUGMSGOID(("dummy", requests->requestvb->name,
                       requests->requestvb->name_len));
          DEBUGMSG(("dummy", "\n"));
          requests = requests->next;
      }
      return SNMP_ERR_NOERROR;
  }
From RFC 3416 (Protocol Operations for SNMP), section 4.2.5, "The SetRequest-PDU":

A SetRequest-PDU is generated and transmitted at the request of an
application.

Upon receipt of a SetRequest-PDU, the receiving SNMP entity determines the
size of a message encapsulating a Response-PDU having the same values in its
request-id and variable-bindings fields as the received SetRequest-PDU, and
the largest possible sizes of the error-status and error-index fields. If the
determined message size is greater than either a local constraint or the
maximum message size of the originator, then an alternate Response-PDU is
generated, transmitted to the originator of the SetRequest-PDU, and processing
of the SetRequest-PDU terminates immediately thereafter. This alternate
Response-PDU is formatted with the same values in its request-id field as the
received SetRequest-PDU, with the value of its error-status field set to
"tooBig", the value of its error-index field set to zero, and an empty
variable-bindings field. This alternate Response-PDU is then encapsulated into
a message. If the size of the resultant message is less than or equal to both
a local constraint and the maximum message size of the originator, it is
transmitted to the originator of the SetRequest-PDU. Otherwise, the
snmpSilentDrops [RFC3418] counter is incremented and the resultant message is
discarded. Regardless, processing of the SetRequest-PDU terminates.

Otherwise, the receiving SNMP entity processes each variable binding in the
variable-binding list to produce a Response-PDU. All fields of the
Response-PDU have the same values as the corresponding fields of the received
request except as indicated below.

The variable bindings are conceptually processed as a two phase operation. In
the first phase, each variable binding is validated; if all validations are
successful, then each variable is altered in the second phase. Of course,
implementors are at liberty to implement either the first, or second, or both,
of these conceptual phases as multiple implementation phases. Indeed, such
multiple implementation phases may be necessary in some cases to ensure
consistency.

The following validations are performed in the first phase on each variable
binding until they are all successful, or until one fails:

(1) If the variable binding's name specifies an existing or non-existent
    variable to which this request is/would be denied access because it
    is/would not be in the appropriate MIB view, then the value of the
    Response-PDU's error-status field is set to "noAccess", and the value of
    its error-index field is set to the index of the failed variable binding.

(2) Otherwise, if there are no variables which share the same OBJECT
    IDENTIFIER prefix as the variable binding's name, and which are able to be
    created or modified no matter what new value is specified, then the value
    of the Response-PDU's error-status field is set to "notWritable", and the
    value of its error-index field is set to the index of the failed variable
    binding.

(3) Otherwise, if the variable binding's value field specifies, according to
    the ASN.1 language, a type which is inconsistent with that required for
    all variables which share the same OBJECT IDENTIFIER prefix as the
    variable binding's name, then the value of the Response-PDU's error-status
    field is set to "wrongType", and the value of its error-index field is set
    to the index of the failed variable binding.
(4) Otherwise, if the variable binding's value field specifies, according to
    the ASN.1 language, a length which is inconsistent with that required for
    all variables which share the same OBJECT IDENTIFIER prefix as the
    variable binding's name, then the value of the Response-PDU's error-status
    field is set to "wrongLength", and the value of its error-index field is
    set to the index of the failed variable binding.

(5) Otherwise, if the variable binding's value field contains an ASN.1
    encoding which is inconsistent with that field's ASN.1 tag, then the value
    of the Response-PDU's error-status field is set to "wrongEncoding", and
    the value of its error-index field is set to the index of the failed
    variable binding. (Note that not all implementation strategies will
    generate this error.)

(6) Otherwise, if the variable binding's value field specifies a value which
    could under no circumstances be assigned to the variable, then the value
    of the Response-PDU's error-status field is set to "wrongValue", and the
    value of its error-index field is set to the index of the failed variable
    binding.

(7) Otherwise, if the variable binding's name specifies a variable which does
    not exist and could not ever be created (even though some variables
    sharing the same OBJECT IDENTIFIER prefix might under some circumstances
    be able to be created), then the value of the Response-PDU's error-status
    field is set to "noCreation", and the value of its error-index field is
    set to the index of the failed variable binding.

(8) Otherwise, if the variable binding's name specifies a variable which does
    not exist but can not be created under the present circumstances (even
    though it could be created under other circumstances), then the value of
    the Response-PDU's error-status field is set to "inconsistentName", and
    the value of its error-index field is set to the index of the failed
    variable binding.

(9) Otherwise, if the variable binding's name specifies a variable which
    exists but can not be modified no matter what new value is specified, then
    the value of the Response-PDU's error-status field is set to
    "notWritable", and the value of its error-index field is set to the index
    of the failed variable binding.

(10) Otherwise, if the variable binding's value field specifies a value that
     could under other circumstances be held by the variable, but is presently
     inconsistent or otherwise unable to be assigned to the variable, then the
     value of the Response-PDU's error-status field is set to
     "inconsistentValue", and the value of its error-index field is set to the
     index of the failed variable binding.

(11) When, during the above steps, the assignment of the value specified by
     the variable binding's value field to the specified variable requires the
     allocation of a resource which is presently unavailable, then the value
     of the Response-PDU's error-status field is set to
     "resourceUnavailable", and the value of its error-index field is set to
     the index of the failed variable binding.

(12) If the processing of the variable binding fails for a reason other than
     listed above, then the value of the Response-PDU's error-status field is
     set to "genErr", and the value of its error-index field is set to the
     index of the failed variable binding.

(13) Otherwise, the validation of the variable binding succeeds.
At the end of the first phase, if the validation of all variable bindings
succeeded, then the value of the Response-PDU's error-status field is set to
"noError" and the value of its error-index field is zero, and processing
continues as follows.

For each variable binding in the request, the named variable is created if
necessary, and the specified value is assigned to it. Each of these variable
assignments occurs as if simultaneously with respect to all other assignments
specified in the same request. However, if the same variable is named more
than once in a single request, with different associated values, then the
actual assignment made to that variable is implementation-specific.

If any of these assignments fail (even after all the previous validations),
then all other assignments are undone, and the Response-PDU is modified to
have the value of its error-status field set to "commitFailed", and the value
of its error-index field set to the index of the failed variable binding. If
and only if it is not possible to undo all the assignments, then the
Response-PDU is modified to have the value of its error-status field set to
"undoFailed", and the value of its error-index field is set to zero. Note that
implementations are strongly encouraged to take all possible measures to avoid
use of either "commitFailed" or "undoFailed" - these two error-status codes
are not to be taken as license to take the easy way out in an implementation.

Finally, the generated Response-PDU is encapsulated into a message, and
transmitted to the originator of the SetRequest-PDU.
RFC 2578 (STD 58; Structure of Management Information Version 2 (SMIv2)),
section 10, defines the allowable changes to an existing MIB module:

10. Extending an Information Module

As experience is gained with an information module, it may be desirable to
revise that information module. However, changes are not allowed if they have
any potential to cause interoperability problems "over the wire" between an
implementation using an original specification and an implementation using an
updated specification(s).

For any change, the invocation of the MODULE-IDENTITY macro must be updated to
include information about the revision: specifically, updating the
LAST-UPDATED clause, adding a pair of REVISION and DESCRIPTION clauses (see
section 5.5), and making any necessary changes to existing clauses, including
the ORGANIZATION and CONTACT-INFO clauses.

Note that any definition contained in an information module is available to be
IMPORT-ed by any other information module, and is referenced in an IMPORTS
clause via the module name. Thus, a module name should not be changed.
Specifically, the module name (e.g., "FIZBIN-MIB" in the example of Section
5.7) should not be changed when revising an information module (except to
correct typographical errors), and definitions should not be moved from one
information module to another.

Also note that obsolete definitions must not be removed from MIB modules since
their descriptors may still be referenced by other information modules, and
the OBJECT IDENTIFIERs used to name them must never be re-assigned.

10.1. Object Assignments

If any non-editorial change is made to any clause of an object assignment,
then the OBJECT IDENTIFIER value associated with that object assignment must
also be changed, along with its associated descriptor.

10.2. Object Definitions

An object definition may be revised in any of the following ways:

(1) A SYNTAX clause containing an enumerated INTEGER may have new enumerations
    added or existing labels changed. Similarly, named bits may be added or
    existing labels changed for the BITS construct.

(2) The value of a SYNTAX clause may be replaced by a textual convention,
    providing the textual convention is defined to use the same primitive
    ASN.1 type, has the same set of values, and has identical semantics.

(3) A STATUS clause value of "current" may be revised as "deprecated" or
    "obsolete". Similarly, a STATUS clause value of "deprecated" may be
    revised as "obsolete". When making such a change, the DESCRIPTION clause
    should be updated to explain the rationale.

(4) A DEFVAL clause may be added or updated.

(5) A REFERENCE clause may be added or updated.

(6) A UNITS clause may be added.

(7) A conceptual row may be augmented by adding new columnar objects at the
    end of the row, and making the corresponding update to the SEQUENCE
    definition.

(8) Clarifications and additional information may be included in the
    DESCRIPTION clause.

(9) Entirely new objects may be defined, named with previously unassigned
    OBJECT IDENTIFIER values.

Otherwise, if the semantics of any previously defined object are changed
(i.e., if a non-editorial change is made to any clause other than those
specifically allowed above), then the OBJECT IDENTIFIER value associated with
that object must also be changed. Note that changing the descriptor associated
with an existing object is considered a semantic change, as these strings may
be used in an IMPORTS statement.
10.3. Notification Definitions

A notification definition may be revised in any of the following ways:

(1) A REFERENCE clause may be added or updated.

(2) A STATUS clause value of "current" may be revised as "deprecated" or
    "obsolete". Similarly, a STATUS clause value of "deprecated" may be
    revised as "obsolete". When making such a change, the DESCRIPTION clause
    should be updated to explain the rationale.

(3) A DESCRIPTION clause may be clarified.

Otherwise, if the semantics of any previously defined notification are changed
(i.e., if a non-editorial change is made to any clause other than those
specifically allowed above), then the OBJECT IDENTIFIER value associated with
that notification must also be changed. Note that changing the descriptor
associated with an existing notification is considered a semantic change, as
these strings may be used in an IMPORTS statement.
/* * NOTE: if you update this chart, please update the versions in * local/mib2c-conf.d/parent-set.m2i * agent/mibgroup/helpers/baby_steps.c * while you're at it. */ /* *********************************************************************** * Baby Steps Flow Chart (2004.06.05) * * * * +--------------+ +================+ U = unconditional path * * |optional state| ||required state|| S = path for success * * +--------------+ +================+ E = path for error * *********************************************************************** * * +--------------+ * | pre | * | request | * +--------------+ * | U * +-------------+ +==============+ * | row |f|<-------|| object || * | create |1| E || lookup || * +-------------+ +==============+ * E | | S | S * | +------------------>| * | +==============+ * | E || check || * |<---------------|| values || * | +==============+ * | | S * | +==============+ * | +<-------|| undo || * | | E || setup || * | | +==============+ * | | | S * | | +==============+ * | | || set ||-------------------------->+ * | | || value || E | * | | +==============+ | * | | | S | * | | +--------------+ | * | | | check |-------------------------->| * | | | consistency | E | * | | +--------------+ | * | | | S | * | | +==============+ +==============+ | * | | || commit ||-------->|| undo || | * | | || || E || commit || | * | | +==============+ +==============+ | * | | | S U |<--------+ * | | +--------------+ +==============+ * | | | irreversible | || undo || * | | | commit | || set || * | | +--------------+ +==============+ * | | | U U | * | +-------------->|<------------------------+ * | +==============+ * | || undo || * | || cleanup || * | +==============+ * +---------------------->| U * | * (err && f1)------------------->+ * | | * +--------------+ +--------------+ * | post |<--------| row | * | request | U | release | * +--------------+ +--------------+ * */
Page Last modified: Wed Apr 19 15:06:28 EDT 2006