0. Introduction

This is the design specification for the next hop tracking feature in
Quagga.

1. Background

Recursive routes are of the form:

    p/m --> n
    [Ex: 1.1.0.0/16 --> 2.2.2.2]

where 'n' itself is resolved through another route as follows:

    p2/m --> h, interface
    [Ex: 2.2.2.0/24 --> 3.3.3.3, eth0]

Usually, BGP routes are recursive in nature and BGP nexthops get
resolved through an IGP route. An IGP usually adds its routes pointing
to an interface (these are called non-recursive routes).

When BGP receives a recursive route from a peer, it needs to validate
the nexthop. The path is marked valid or invalid based on the
reachability status of the nexthop. Nexthop validation is also
important for the BGP decision process, as the metric to reach the
nexthop is a parameter in the best path selection process.

As it goes with routing, this is a dynamic process. The route to the
nexthop can change, and the nexthop can become unreachable or
reachable. In the current BGP implementation, nexthop validation is
done periodically in the scanner run. The default scanner run interval
is one minute: every minute, the scanner task walks the entire BGP
table and checks the validity of each nexthop with Zebra (the routing
table manager) through a request and response message exchange between
the BGP and Zebra processes. The BGP process is blocked for that
duration. The mechanism has two major drawbacks:

(1) The scanner task runs to completion. That can potentially starve
    the other tasks for long periods of time, based on the BGP table
    size and number of nexthops.

(2) Convergence around routing changes that affect the nexthops can be
    long (around a minute with the default intervals). The interval
    can be shortened to achieve faster reaction time, but it makes the
    first problem worse, with the scanner task consuming most of the
    CPU resources.

48"Next hop tracking" feature makes this process event-driven. It
49eliminates periodic nexthop validation and introduces an asynchronous
50communication path between BGP and Zebra for route change notifications
51that can then be acted upon.

2. Goal

Stating the obvious, the main goal is to remove the two limitations we
discussed in the previous section. The goals, in a constructive tone,
are the following:

- fairness: the scanner run should not consume an unjustly high amount
  of CPU time. This should give overall good performance and response
  time to other events (route changes, session events, IO/user
  interface).

- convergence: BGP must react to nexthop changes instantly and provide
  sub-second convergence. This may involve diverting the routes from
  one nexthop to another.

3. Overview of the changes

The changes are in both the BGP and Zebra modules. The short summary
is the following:

- Zebra implements a registration mechanism by which clients can
  register for next hop notification. Consequently, it maintains a
  separate table, per (VRF, AF) pair, of next hops and an interested
  client list per next hop (see the sketch after this list).

- When the main routing table changes in Zebra, it evaluates the next
  hop table: for each next hop, it checks if the route table
  modifications have changed its state. If so, it notifies the
  interested clients.

- BGP is one such client. It registers the next hops corresponding to
  all of its received routes/paths. It also threads the paths against
  each nexthop structure.

- When BGP receives a next hop notification from Zebra, it walks the
  corresponding path list. It makes the paths valid or invalid
  depending on the next hop notification. It then re-computes the best
  path for the corresponding destination. This may result in
  re-announcing those destinations to peers.

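To make this bookkeeping concrete, the sketch below shows one plausible
shape for the tracking state Zebra keeps. It is illustrative only: the
type, field, and table names here are assumptions made for this
document, not the actual definitions in Zebra's "rnh" module.

/* Illustrative sketch; names and fields are assumptions. */
struct tracked_nexthop
{
  struct rib *state;          /* how the nexthop currently resolves
                                 (NULL if unresolved/unreachable) */
  struct list *client_list;   /* zserv clients (e.g. bgpd) to notify
                                 when the resolution changes */
};

/* One table per (VRF, AF) pair, keyed by the nexthop prefix.  A
 * radix tree (struct route_table) fits naturally, since tracked
 * nexthops are prefixes, just like routes. */
struct route_table *tracked_nexthop_table[VRF_MAX][AFI_MAX];
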
4. Design

4.1. Modules

The core design introduces an "nht" (next hop tracking) module in BGP
and an "rnh" (recursive nexthop) module in Zebra. The "nht" module
provides the following APIs:

bgp_find_or_add_nexthop()  : find or add a nexthop in the BGP nexthop
                             table
bgp_find_nexthop()         : find a nexthop in the BGP nexthop table
bgp_parse_nexthop_update() : parse a nexthop update message coming
                             from zebra
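
As an illustration, the receive path can use the first API roughly as
follows when a path is created or changed. This is a hedged sketch:
the helper name is made up, and the exact signature and flag handling
in bgpd differ in detail.

/* Sketch: validate a received path via NHT instead of the periodic
 * scanner.  Helper name, signature and flags are approximations. */
static void
bgp_path_check_nexthop (afi_t afi, struct bgp_node *rn,
                        struct bgp_info *ri, struct peer *peer)
{
  /* Register the nexthop with Zebra (through the nht module) and
   * pick up the currently known reachability. */
  if (bgp_find_or_add_nexthop (afi, ri, peer, 0))
    bgp_info_set_flag (rn, ri, BGP_INFO_VALID);
  else
    bgp_info_unset_flag (rn, ri, BGP_INFO_VALID);
}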

The "rnh" module provides the following APIs:

zebra_add_rnh()           : add a recursive nexthop
zebra_delete_rnh()        : delete a recursive nexthop
zebra_lookup_rnh()        : lookup a recursive nexthop

zebra_add_rnh_client()    : register a client for nexthop notifications
                            against a recursive nexthop

zebra_remove_rnh_client() : remove the client registration for a
                            recursive nexthop

zebra_evaluate_rnh_table(): (re)evaluate the recursive nexthop table
                            (most probably because the main routing
                            table has changed)

zebra_cleanup_rnh_client(): clean up a client from the "rnh" module
                            data structures (most probably because the
                            client is going away)
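
Taken together, a registration on the Zebra side reduces to a
find-or-create of the rnh entry plus a client-list append. A minimal
sketch, using the function names above but with argument lists that
are assumptions rather than the actual prototypes:

/* Sketch of zserv-side registration handling; argument lists are
 * illustrative. */
static void
zserv_nexthop_register (struct zserv *client, struct prefix *p)
{
  struct rnh *rnh;

  rnh = zebra_add_rnh (p, 0);            /* find or create the entry */
  zebra_add_rnh_client (rnh, client);    /* notify this client on
                                            future state changes */
}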

4.2. Control flow

The next hop registration control flow is the following:

<==== BGP Process ====>|<==== Zebra Process ====>
                       |
receive module    nht module      |  zserv module    rnh module
----------------------------------------------------------------------
              |                   |                 |
bgp_update_   |                   |                 |
    main()    | bgp_find_or_add_  |                 |
              |    nexthop()      |                 |
              |                   |                 |
              |                   | zserv_nexthop_  |
              |                   |    register()   |
              |                   |                 | zebra_add_rnh()
              |                   |                 |

The next hop notification control flow is the following:

<==== Zebra Process ====>|<==== BGP Process ====>
                         |
rib module     rnh module         |  zebra module      nht module
----------------------------------------------------------------------
              |                   |                    |
meta_queue_   |                   |                    |
   process()  | zebra_evaluate_   |                    |
              |    rnh_table()    |                    |
              |                   |                    |
              |                   | bgp_read_nexthop_  |
              |                   |     update()       |
              |                   |                    | bgp_parse_
              |                   |                    |   nexthop_update()
              |                   |                    |

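On the BGP side, the zebra module only needs to hand the notification
to the nht module. A sketch of that glue is below; the callback field
name and the reader's signature are assumptions about the zclient
interface, not confirmed by this document.

/* Sketch: wire the zclient callback to the nht parser.  Field name
 * and signature are assumptions. */
static int
bgp_read_nexthop_update (int command, struct zclient *zclient,
                         zebra_size_t length)
{
  bgp_parse_nexthop_update ();
  return 0;
}

/* In BGP's zclient initialization: */
zclient->nexthop_update = bgp_read_nexthop_update;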

4.3. zclient message format

ZEBRA_NEXTHOP_REGISTER and ZEBRA_NEXTHOP_UNREGISTER messages are
encoded in the following way:

/*
 * 0                   1                   2                   3
 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |              AF               |  prefix len   |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                         Nexthop prefix                         .
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                                                                .
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |              AF               |  prefix len   |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                         Nexthop prefix                         .
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 */

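As a sketch, a client can append one (AF, prefix len, prefix) tuple to
an outgoing registration message with the libzebra stream primitives,
as below. The helper name is made up for illustration, and message
header/length handling is elided.

#include "prefix.h"
#include "stream.h"

/* Sketch: append one nexthop registration to an outgoing stream.
 * Helper name is illustrative; header handling elided. */
static void
nexthop_register_put (struct stream *s, struct prefix *p)
{
  stream_putw (s, PREFIX_FAMILY (p));     /* AF */
  stream_putc (s, p->prefixlen);          /* prefix len */
  stream_put (s, &p->u.prefix,            /* nexthop prefix, padded */
              PSIZE (p->prefixlen));      /*   to a whole byte count */
}
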
ZEBRA_NEXTHOP_UPDATE message is encoded as follows:

/*
 * 0                   1                   2                   3
 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |              AF               |  prefix len   |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                Nexthop prefix getting resolved                 .
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |                             metric                             |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |   #nexthops   |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |  nexthop type |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                    resolving Nexthop details                   .
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                                                                .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * |  nexthop type |
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 * .                    resolving Nexthop details                   .
 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 */

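A hedged sketch of decoding this message for the IPv4 case with the
libzebra stream getters follows; the real parser also handles IPv6 and
every nexthop type, so treat this as illustrative only.

#include <string.h>
#include "prefix.h"
#include "stream.h"

/* Sketch: decode a ZEBRA_NEXTHOP_UPDATE body (IPv4 only).  The
 * helper name is illustrative. */
static void
nexthop_update_get (struct stream *s)
{
  struct prefix p;
  u_int32_t metric;
  u_char nexthop_num, nexthop_type;
  int i;

  memset (&p, 0, sizeof (p));
  p.family = stream_getw (s);                /* AF */
  p.prefixlen = stream_getc (s);             /* prefix len */
  p.u.prefix4.s_addr = stream_get_ipv4 (s);  /* prefix being resolved */

  metric = stream_getl (s);                  /* IGP metric */
  nexthop_num = stream_getc (s);             /* #nexthops */

  for (i = 0; i < nexthop_num; i++)
    {
      nexthop_type = stream_getc (s);        /* nexthop type */
      /* ...read the type-specific resolving nexthop details here,
         e.g. gateway address and/or ifindex... */
    }
}
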
4.4. BGP data structure

Legend:

/\   struct bgp_node: a BGP destination/route/prefix
\/

[ ]  struct bgp_info: a BGP path (e.g. route received from a peer)

 _
(_)  struct bgp_nexthop_cache: a BGP nexthop


      /\          NULL
      \/--+         ^
          |         :
          +--[ ]--[ ]--[ ]--> NULL
      /\            :
      \/--+         :
          |         :
          +--[ ]--[ ]--> NULL
                    :
           _        :
          (_)........

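The nexthop cache entry in the diagram corresponds to a structure of
roughly the following shape. This is a sketch: the real struct
bgp_nexthop_cache in bgpd carries more fields than shown here.

/* Sketch of a nexthop cache entry (LIST_HEAD as in <sys/queue.h>);
 * the actual bgpd structure has additional fields. */
struct bgp_nexthop_cache
{
  u_char flags;                   /* e.g. valid / registered bits */
  u_int32_t metric;               /* IGP metric to the nexthop */
  u_int32_t path_count;           /* number of threaded paths */
  time_t last_update;             /* last notification from Zebra */
  LIST_HEAD (path_list, bgp_info) paths;  /* the dotted line above:
                                             paths using this nexthop */
};
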
4.5. Zebra data structure

rnh table:

          O
         / \
        O   O
       / \
      O   O

    struct rnh
    {
      u_char flags;
      struct rib *state;
      struct list *client_list;
      struct route_node *node;
    };

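The rnh table is an ordinary radix tree keyed by the nexthop prefix, so
lookups reuse the generic table API. A sketch of the lookup, where the
per-(VRF, AF) table accessor name is an assumption:

#include "prefix.h"
#include "table.h"

/* Sketch: look up a tracked nexthop.  get_rnh_table() is assumed
 * to return the per-(VRF, AF) table described earlier. */
struct rnh *
zebra_lookup_rnh (struct prefix *p, u_int32_t vrf_id)
{
  struct route_table *table;
  struct route_node *rn;

  table = get_rnh_table (vrf_id, PREFIX_FAMILY (p));
  if (!table)
    return NULL;

  rn = route_node_lookup (table, p);    /* exact-match lookup */
  if (!rn)
    return NULL;

  route_unlock_node (rn);               /* drop the lookup reference */
  return rn->info;                      /* the struct rnh, if any */
}
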
5. User interface changes

quagga# show ip nht
3.3.3.3
 resolved via kernel
 via 11.0.0.6, swp1
 Client list: bgp(fd 12)
11.0.0.10
 resolved via connected
 is directly connected, swp2
 Client list: bgp(fd 12)
11.0.0.18
 resolved via connected
 is directly connected, swp4
 Client list: bgp(fd 12)
11.11.11.11
 resolved via kernel
 via 10.0.1.2, eth0
 Client list: bgp(fd 12)

quagga# show ip bgp nexthop
Current BGP nexthop cache:
 3.3.3.3 valid [IGP metric 0], #paths 3
  Last update: Wed Oct 16 04:43:49 2013

 11.0.0.10 valid [IGP metric 1], #paths 1
  Last update: Wed Oct 16 04:43:51 2013

 11.0.0.18 valid [IGP metric 1], #paths 2
  Last update: Wed Oct 16 04:43:47 2013

 11.11.11.11 valid [IGP metric 0], #paths 1
  Last update: Wed Oct 16 04:43:47 2013

quagga# show ipv6 nht
quagga# show ip bgp nexthop detail

quagga# debug bgp nht
quagga# debug zebra nht

6. Sample test cases

     r2----r3
    /  \   /
   r1----r4

- Verify that a change in IGP cost triggers NHT
  + shutdown the r1-r4 and r2-r4 links
  + no shut the r1-r4 and r2-r4 links and wait for OSPF to come back
    up
  + we should be back to the original nexthop via r4 now
- Verify that a NH becoming unreachable triggers NHT
  + shutdown all links to r4
- Verify that a NH becoming reachable triggers NHT
  + no shut all links to r4

7. Future work

- route-policy for next hop validation (e.g. ignore default route)
- damping for rapid next hop changes
- prioritized handling of nexthop changes ((un)reachability vs. metric
  changes)
- handling recursion loops, e.g.
    11.11.11.11/32 -> 12.12.12.12
    12.12.12.12/32 -> 11.11.11.11
    11.0.0.0/8 -> <interface>
- better statistics