| author | Jakub Kicinski <kuba@kernel.org> | 2020-10-29 18:39:49 -0700 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2020-10-29 18:39:49 -0700 |
| commit | 6e2b243db4dc8c0fd51570244b6fd7810e16261a (patch) | |
| tree | 6e11b7a491bd444ae8e9b06aaf4b53235174cc0e /net | |
| parent | Merge branch 'vsock-minor-clean-up-of-ioctl-error-handling' (diff) | |
| parent | bridge: cfm: Netlink Notifications. (diff) | |
| download | linux-6e2b243db4dc8c0fd51570244b6fd7810e16261a.tar.gz linux-6e2b243db4dc8c0fd51570244b6fd7810e16261a.zip | |
Merge branch 'net-bridge-cfm-add-support-for-connectivity-fault-management-cfm'
Henrik Bjoernlund says:
====================
net: bridge: cfm: Add support for Connectivity Fault Management (CFM)
Connectivity Fault Management (CFM) is defined in 802.1Q
section 12.14.
CFM comprises capabilities for detecting, verifying, and
isolating connectivity failures in Virtual Bridged Networks.
These capabilities can be used in networks operated by multiple
independent organizations, each with restricted management access
to each other's equipment.
CFM functions are partitioned as follows:
— Path discovery
— Fault detection
— Fault verification and isolation
— Fault notification
— Fault recovery
The primary CFM protocol shims are called Maintenance Points (MPs).
An MP can be either a MEP or an MHF.
The MEP:
- It is the Maintenance association End Point
  described in 802.1Q section 19.2.
- It is created on a specific level (1-7) and ensures
  that no CFM frames pass through this MEP on lower levels.
- It initiates and terminates/validates CFM frames on its level.
- It can only exist on a port that is related to a bridge.
The MHF:
- It is the Maintenance Domain Intermediate Point
  (MIP) Half Function (MHF) described in 802.1Q section 19.3.
- It is created on a specific level (1-7).
- It extracts/injects certain CFM frames on this level.
- It can only exist on a port that is related to a bridge.
- It is currently not supported.
The following CFM protocol functions are defined:
- Continuity Check (see the timing sketch after this list)
- Loopback (currently not supported)
- Linktrace (currently not supported)
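The Continuity Check receive logic in br_cfm.c below polls at 1/4 of the
configured CC 'expected_interval' and declares a CCM defect once 13 such
polls pass without a received CCM, i.e. after roughly 3.25 intervals. A
small stand-alone sketch of that arithmetic for the 1 second interval
(the values mirror interval_to_us(); the program is illustrative only and
not part of the patch set):

```c
/* Illustrative only: Continuity Check defect-detection timing for the
 * 1 second CCM interval, using the same numbers as interval_to_us() and
 * ccm_rx_work_expired() in br_cfm.c.
 */
#include <stdio.h>

int main(void)
{
	unsigned int interval_us = 1000 * 1000;	/* BR_CFM_CCM_INTERVAL_1_SEC */
	unsigned int poll_us = interval_us / 4;	/* ccm_rx_timer_start() period */
	unsigned int defect_us = 13 * poll_us;	/* 13 missed polls */

	printf("rx poll period : %u us\n", poll_us);	/* 250000 us */
	printf("defect declared: %u us (%.2f intervals)\n",
	       defect_us, (double)defect_us / interval_us);	/* 3.25 */
	return 0;
}
```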
This CFM component supports creating/deleting MEP instances and
configuring the different CFM protocols. Status information can
also be fetched, and a notification is delivered when the defect
status changes.
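The configuration and status nests are carried over the standard bridge
netlink paths (br_getlink()/br_setlink() with IFLA_AF_SPEC / IFLA_BRIDGE_CFM,
see br_netlink.c below). As an illustration, here is a minimal libmnl sketch
that requests the CFM config/status dump; it assumes the new
RTEXT_FILTER_CFM_* bits from this series are exported through
<linux/rtnetlink.h> and leaves out the receive-side parsing of the nested
attributes:

```c
/* Illustrative only: dump bridge CFM config/status. Assumes the
 * RTEXT_FILTER_CFM_CONFIG/_STATUS bits from this series are available
 * in <linux/rtnetlink.h>. Parsing of the returned IFLA_AF_SPEC /
 * IFLA_BRIDGE_CFM nests is omitted.
 */
#include <stdio.h>
#include <time.h>
#include <sys/socket.h>
#include <libmnl/libmnl.h>
#include <linux/rtnetlink.h>

int main(void)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct mnl_socket *nl;
	struct ifinfomsg *ifi;
	struct nlmsghdr *nlh;
	unsigned int seq;
	int ret;

	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_GETLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	nlh->nlmsg_seq = seq = (unsigned int)time(NULL);

	ifi = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifi));
	ifi->ifi_family = PF_BRIDGE;

	/* filter_mask for br_getlink(): include the CFM nests */
	mnl_attr_put_u32(nlh, IFLA_EXT_MASK,
			 RTEXT_FILTER_CFM_CONFIG | RTEXT_FILTER_CFM_STATUS);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
		perror("mnl_socket");
		return 1;
	}
	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
		perror("sendto");
		return 1;
	}
	while ((ret = mnl_socket_recvfrom(nl, buf, sizeof(buf))) > 0) {
		ret = mnl_cb_run(buf, ret, seq, mnl_socket_get_portid(nl),
				 NULL, NULL);
		if (ret <= MNL_CB_STOP)
			break;
	}
	mnl_socket_close(nl);
	return ret < 0 ? 1 : 0;
}
```

Note that br_fill_ifinfo() below only includes the IFLA_BRIDGE_CFM nest for
the bridge device itself (not for ports), so the interesting messages in
the dump are those for the bridge.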
The user interacts with CFM using the 'cfm' user space client
program; the client talks to the kernel using netlink.
Any notification emitted by CFM from the kernel can be monitored
in user space by starting the 'cfm_server' program.
Currently the 'cfm' and 'cfm_server' programs are standalone, hosted
in the cfm repository at https://github.com/microchip-ung/cfm, but
integrating them into 'iproute2' is being considered.
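As a rough picture of what such a client sends, the sketch below creates
MEP instance 1 as a Port domain, Down-MEP using libmnl. It is not the
actual 'cfm' tool: the names 'br0' and 'eth0' are placeholders, and the
IFLA_BRIDGE_CFM_* and BR_CFM_* uapi definitions from this series are
assumed to be exported via <linux/if_bridge.h> and <linux/cfm_bridge.h>:

```c
/* Illustrative only, not the real 'cfm' tool: create CFM MEP instance 1
 * as a Port domain, Down-MEP via RTM_SETLINK -> IFLA_AF_SPEC ->
 * IFLA_BRIDGE_CFM -> IFLA_BRIDGE_CFM_MEP_CREATE.
 */
#include <stdio.h>
#include <time.h>
#include <net/if.h>
#include <sys/socket.h>
#include <libmnl/libmnl.h>
#include <linux/rtnetlink.h>
#include <linux/if_bridge.h>	/* IFLA_BRIDGE_CFM_* (assumed location) */
#include <linux/cfm_bridge.h>	/* BR_CFM_PORT, BR_CFM_MEP_DIRECTION_DOWN */

int main(void)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct nlattr *afspec, *cfm, *create;
	struct mnl_socket *nl;
	struct ifinfomsg *ifi;
	struct nlmsghdr *nlh;
	unsigned int seq;
	int ret;

	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_SETLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = seq = (unsigned int)time(NULL);

	ifi = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifi));
	ifi->ifi_family = PF_BRIDGE;
	ifi->ifi_index = if_nametoindex("br0");		/* placeholder bridge */

	afspec = mnl_attr_nest_start(nlh, IFLA_AF_SPEC);
	mnl_attr_put_u16(nlh, IFLA_BRIDGE_FLAGS, BRIDGE_FLAGS_SELF);
	cfm = mnl_attr_nest_start(nlh, IFLA_BRIDGE_CFM);
	create = mnl_attr_nest_start(nlh, IFLA_BRIDGE_CFM_MEP_CREATE);
	mnl_attr_put_u32(nlh, IFLA_BRIDGE_CFM_MEP_CREATE_INSTANCE, 1);
	mnl_attr_put_u32(nlh, IFLA_BRIDGE_CFM_MEP_CREATE_DOMAIN, BR_CFM_PORT);
	mnl_attr_put_u32(nlh, IFLA_BRIDGE_CFM_MEP_CREATE_DIRECTION,
			 BR_CFM_MEP_DIRECTION_DOWN);
	mnl_attr_put_u32(nlh, IFLA_BRIDGE_CFM_MEP_CREATE_IFINDEX,
			 if_nametoindex("eth0"));	/* placeholder port */
	mnl_attr_nest_end(nlh, create);
	mnl_attr_nest_end(nlh, cfm);
	mnl_attr_nest_end(nlh, afspec);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
		perror("mnl_socket");
		return 1;
	}
	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
		perror("sendto");
		return 1;
	}
	ret = mnl_socket_recvfrom(nl, buf, sizeof(buf));
	if (ret > 0)	/* kernel ACK or extack error */
		ret = mnl_cb_run(buf, ret, seq, mnl_socket_get_portid(nl),
				 NULL, NULL);
	mnl_socket_close(nl);
	return ret < 0 ? 1 : 0;
}
```

The same IFLA_BRIDGE_CFM nest can also be sent against a bridge port;
br_cfm_parse() then resolves the bridge from the port (see br_netlink.c
and br_cfm_netlink.c below).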
v1 -> v2
Added the CFM switchdev interface and used it from the kernel CFM
implementation in an attempt to offload CFM functionality to HW.
This offload (CFM driver) is currently not implemented.
Corrections based on RFC comments:
- The single CFM kernel implementation patch is broken up into
  three patches.
- Changed the list of MEP instances from list_head to
  hlist_head.
- Removed unnecessary RCU list traversing.
- Solved an RCU unlocking problem.
- Removed unnecessary comments.
- Added ASSERT_RTNL() where required.
- Improved error messages.
- Corrected NETLINK br_fill_ifinfo() to handle a 'filter_mask'
  with multiple flags asserted.
v2 -> v3
- The switchdev definition and utilization have been removed, as
  there was no switchdev implementation.
- Fixed some compile issues reported by the kernel test robot
  <lkp@intel.com>.
v3 -> v4
- Fixed a potential crash during an hlist walk where elements are
  removed.
- Gave all commits unique titles.
- Split the NETLINK implementation into three commits.
- Merged the commit "bridge: cfm: Bridge port remove" into the
  commit "bridge: cfm: Kernel space implementation of CFM. MEP
  create/delete."
v4 -> v5
- Reordered members in struct net_bridge to bring the member
  frame_type_list into the first cache line.
- Removed the helper functions nla_get_mac() and nla_get_maid().
- Used the NLA_POLICY_NESTED() macro to initialize the
  br_cfm_policy array.
- Fixed reverse xmas tree ordering.
v5 -> v6
- Fixed an SKB leak on an error handling return path.
- Removed an unused struct definition.
- Changed bool members to u8 bitfields to save space.
- Used the NETLINK policy validation feature.
v6 -> v7
- Removed the parameter checks in br_cfm_mep_config_set() and
  br_cfm_cc_peer_mep_add() in the first commit of the MEP
  implementation (patch 4 out of 10).
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Henrik Bjoernlund <henrik.bjoernlund@microchip.com>
====================
Link: https://lore.kernel.org/r/20201027100251.3241719-1-henrik.bjoernlund@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net')
| -rw-r--r-- | net/bridge/Kconfig | 11 |
| -rw-r--r-- | net/bridge/Makefile | 2 |
| -rw-r--r-- | net/bridge/br_cfm.c | 867 |
| -rw-r--r-- | net/bridge/br_cfm_netlink.c | 726 |
| -rw-r--r-- | net/bridge/br_device.c | 4 |
| -rw-r--r-- | net/bridge/br_if.c | 1 |
| -rw-r--r-- | net/bridge/br_input.c | 33 |
| -rw-r--r-- | net/bridge/br_mrp.c | 19 |
| -rw-r--r-- | net/bridge/br_netlink.c | 115 |
| -rw-r--r-- | net/bridge/br_private.h | 77 |
| -rw-r--r-- | net/bridge/br_private_cfm.h | 147 |
11 files changed, 1979 insertions, 23 deletions
diff --git a/net/bridge/Kconfig b/net/bridge/Kconfig index 80879196560c..3c8ded7d3e84 100644 --- a/net/bridge/Kconfig +++ b/net/bridge/Kconfig @@ -73,3 +73,14 @@ config BRIDGE_MRP Say N to exclude this support and reduce the binary size. If unsure, say N. + +config BRIDGE_CFM + bool "CFM protocol" + depends on BRIDGE + help + If you say Y here, then the Ethernet bridge will be able to run CFM + protocol according to 802.1Q section 12.14 + + Say N to exclude this support and reduce the binary size. + + If unsure, say N. diff --git a/net/bridge/Makefile b/net/bridge/Makefile index ccb394236fbd..4702702a74d3 100644 --- a/net/bridge/Makefile +++ b/net/bridge/Makefile @@ -27,3 +27,5 @@ bridge-$(CONFIG_NET_SWITCHDEV) += br_switchdev.o obj-$(CONFIG_NETFILTER) += netfilter/ bridge-$(CONFIG_BRIDGE_MRP) += br_mrp_switchdev.o br_mrp.o br_mrp_netlink.o + +bridge-$(CONFIG_BRIDGE_CFM) += br_cfm.o br_cfm_netlink.o diff --git a/net/bridge/br_cfm.c b/net/bridge/br_cfm.c new file mode 100644 index 000000000000..001064f7583d --- /dev/null +++ b/net/bridge/br_cfm.c @@ -0,0 +1,867 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/cfm_bridge.h> +#include <uapi/linux/cfm_bridge.h> +#include "br_private_cfm.h" + +static struct br_cfm_mep *br_mep_find(struct net_bridge *br, u32 instance) +{ + struct br_cfm_mep *mep; + + hlist_for_each_entry(mep, &br->mep_list, head) + if (mep->instance == instance) + return mep; + + return NULL; +} + +static struct br_cfm_mep *br_mep_find_ifindex(struct net_bridge *br, + u32 ifindex) +{ + struct br_cfm_mep *mep; + + hlist_for_each_entry_rcu(mep, &br->mep_list, head, + lockdep_rtnl_is_held()) + if (mep->create.ifindex == ifindex) + return mep; + + return NULL; +} + +static struct br_cfm_peer_mep *br_peer_mep_find(struct br_cfm_mep *mep, + u32 mepid) +{ + struct br_cfm_peer_mep *peer_mep; + + hlist_for_each_entry_rcu(peer_mep, &mep->peer_mep_list, head, + lockdep_rtnl_is_held()) + if (peer_mep->mepid == mepid) + return peer_mep; + + return NULL; +} + +static struct net_bridge_port *br_mep_get_port(struct net_bridge *br, + u32 ifindex) +{ + struct net_bridge_port *port; + + list_for_each_entry(port, &br->port_list, list) + if (port->dev->ifindex == ifindex) + return port; + + return NULL; +} + +/* Calculate the CCM interval in us. */ +static u32 interval_to_us(enum br_cfm_ccm_interval interval) +{ + switch (interval) { + case BR_CFM_CCM_INTERVAL_NONE: + return 0; + case BR_CFM_CCM_INTERVAL_3_3_MS: + return 3300; + case BR_CFM_CCM_INTERVAL_10_MS: + return 10 * 1000; + case BR_CFM_CCM_INTERVAL_100_MS: + return 100 * 1000; + case BR_CFM_CCM_INTERVAL_1_SEC: + return 1000 * 1000; + case BR_CFM_CCM_INTERVAL_10_SEC: + return 10 * 1000 * 1000; + case BR_CFM_CCM_INTERVAL_1_MIN: + return 60 * 1000 * 1000; + case BR_CFM_CCM_INTERVAL_10_MIN: + return 10 * 60 * 1000 * 1000; + } + return 0; +} + +/* Convert the interface interval to CCM PDU value. */ +static u32 interval_to_pdu(enum br_cfm_ccm_interval interval) +{ + switch (interval) { + case BR_CFM_CCM_INTERVAL_NONE: + return 0; + case BR_CFM_CCM_INTERVAL_3_3_MS: + return 1; + case BR_CFM_CCM_INTERVAL_10_MS: + return 2; + case BR_CFM_CCM_INTERVAL_100_MS: + return 3; + case BR_CFM_CCM_INTERVAL_1_SEC: + return 4; + case BR_CFM_CCM_INTERVAL_10_SEC: + return 5; + case BR_CFM_CCM_INTERVAL_1_MIN: + return 6; + case BR_CFM_CCM_INTERVAL_10_MIN: + return 7; + } + return 0; +} + +/* Convert the CCM PDU value to interval on interface. 
*/ +static u32 pdu_to_interval(u32 value) +{ + switch (value) { + case 0: + return BR_CFM_CCM_INTERVAL_NONE; + case 1: + return BR_CFM_CCM_INTERVAL_3_3_MS; + case 2: + return BR_CFM_CCM_INTERVAL_10_MS; + case 3: + return BR_CFM_CCM_INTERVAL_100_MS; + case 4: + return BR_CFM_CCM_INTERVAL_1_SEC; + case 5: + return BR_CFM_CCM_INTERVAL_10_SEC; + case 6: + return BR_CFM_CCM_INTERVAL_1_MIN; + case 7: + return BR_CFM_CCM_INTERVAL_10_MIN; + } + return BR_CFM_CCM_INTERVAL_NONE; +} + +static void ccm_rx_timer_start(struct br_cfm_peer_mep *peer_mep) +{ + u32 interval_us; + + interval_us = interval_to_us(peer_mep->mep->cc_config.exp_interval); + /* Function ccm_rx_dwork must be called with 1/4 + * of the configured CC 'expected_interval' + * in order to detect CCM defect after 3.25 interval. + */ + queue_delayed_work(system_wq, &peer_mep->ccm_rx_dwork, + usecs_to_jiffies(interval_us / 4)); +} + +static void br_cfm_notify(int event, const struct net_bridge_port *port) +{ + u32 filter = RTEXT_FILTER_CFM_STATUS; + + return br_info_notify(event, port->br, NULL, filter); +} + +static void cc_peer_enable(struct br_cfm_peer_mep *peer_mep) +{ + memset(&peer_mep->cc_status, 0, sizeof(peer_mep->cc_status)); + peer_mep->ccm_rx_count_miss = 0; + + ccm_rx_timer_start(peer_mep); +} + +static void cc_peer_disable(struct br_cfm_peer_mep *peer_mep) +{ + cancel_delayed_work_sync(&peer_mep->ccm_rx_dwork); +} + +static struct sk_buff *ccm_frame_build(struct br_cfm_mep *mep, + const struct br_cfm_cc_ccm_tx_info *const tx_info) + +{ + struct br_cfm_common_hdr *common_hdr; + struct net_bridge_port *b_port; + struct br_cfm_maid *maid; + u8 *itu_reserved, *e_tlv; + struct ethhdr *eth_hdr; + struct sk_buff *skb; + __be32 *status_tlv; + __be32 *snumber; + __be16 *mepid; + + skb = dev_alloc_skb(CFM_CCM_MAX_FRAME_LENGTH); + if (!skb) + return NULL; + + rcu_read_lock(); + b_port = rcu_dereference(mep->b_port); + if (!b_port) { + kfree_skb(skb); + rcu_read_unlock(); + return NULL; + } + skb->dev = b_port->dev; + rcu_read_unlock(); + /* The device cannot be deleted until the work_queue functions has + * completed. This function is called from ccm_tx_work_expired() + * that is a work_queue functions. + */ + + skb->protocol = htons(ETH_P_CFM); + skb->priority = CFM_FRAME_PRIO; + + /* Ethernet header */ + eth_hdr = skb_put(skb, sizeof(*eth_hdr)); + ether_addr_copy(eth_hdr->h_dest, tx_info->dmac.addr); + ether_addr_copy(eth_hdr->h_source, mep->config.unicast_mac.addr); + eth_hdr->h_proto = htons(ETH_P_CFM); + + /* Common CFM Header */ + common_hdr = skb_put(skb, sizeof(*common_hdr)); + common_hdr->mdlevel_version = mep->config.mdlevel << 5; + common_hdr->opcode = BR_CFM_OPCODE_CCM; + common_hdr->flags = (mep->rdi << 7) | + interval_to_pdu(mep->cc_config.exp_interval); + common_hdr->tlv_offset = CFM_CCM_TLV_OFFSET; + + /* Sequence number */ + snumber = skb_put(skb, sizeof(*snumber)); + if (tx_info->seq_no_update) { + *snumber = cpu_to_be32(mep->ccm_tx_snumber); + mep->ccm_tx_snumber += 1; + } else { + *snumber = 0; + } + + mepid = skb_put(skb, sizeof(*mepid)); + *mepid = cpu_to_be16((u16)mep->config.mepid); + + maid = skb_put(skb, sizeof(*maid)); + memcpy(maid->data, mep->cc_config.exp_maid.data, sizeof(maid->data)); + + /* ITU reserved (CFM_CCM_ITU_RESERVED_SIZE octets) */ + itu_reserved = skb_put(skb, CFM_CCM_ITU_RESERVED_SIZE); + memset(itu_reserved, 0, CFM_CCM_ITU_RESERVED_SIZE); + + /* Generel CFM TLV format: + * TLV type: one byte + * TLV value length: two bytes + * TLV value: 'TLV value length' bytes + */ + + /* Port status TLV. 
The value length is 1. Total of 4 bytes. */ + if (tx_info->port_tlv) { + status_tlv = skb_put(skb, sizeof(*status_tlv)); + *status_tlv = cpu_to_be32((CFM_PORT_STATUS_TLV_TYPE << 24) | + (1 << 8) | /* Value length */ + (tx_info->port_tlv_value & 0xFF)); + } + + /* Interface status TLV. The value length is 1. Total of 4 bytes. */ + if (tx_info->if_tlv) { + status_tlv = skb_put(skb, sizeof(*status_tlv)); + *status_tlv = cpu_to_be32((CFM_IF_STATUS_TLV_TYPE << 24) | + (1 << 8) | /* Value length */ + (tx_info->if_tlv_value & 0xFF)); + } + + /* End TLV */ + e_tlv = skb_put(skb, sizeof(*e_tlv)); + *e_tlv = CFM_ENDE_TLV_TYPE; + + return skb; +} + +static void ccm_frame_tx(struct sk_buff *skb) +{ + skb_reset_network_header(skb); + dev_queue_xmit(skb); +} + +/* This function is called with the configured CC 'expected_interval' + * in order to drive CCM transmission when enabled. + */ +static void ccm_tx_work_expired(struct work_struct *work) +{ + struct delayed_work *del_work; + struct br_cfm_mep *mep; + struct sk_buff *skb; + u32 interval_us; + + del_work = to_delayed_work(work); + mep = container_of(del_work, struct br_cfm_mep, ccm_tx_dwork); + + if (time_before_eq(mep->ccm_tx_end, jiffies)) { + /* Transmission period has ended */ + mep->cc_ccm_tx_info.period = 0; + return; + } + + skb = ccm_frame_build(mep, &mep->cc_ccm_tx_info); + if (skb) + ccm_frame_tx(skb); + + interval_us = interval_to_us(mep->cc_config.exp_interval); + queue_delayed_work(system_wq, &mep->ccm_tx_dwork, + usecs_to_jiffies(interval_us)); +} + +/* This function is called with 1/4 of the configured CC 'expected_interval' + * in order to detect CCM defect after 3.25 interval. + */ +static void ccm_rx_work_expired(struct work_struct *work) +{ + struct br_cfm_peer_mep *peer_mep; + struct net_bridge_port *b_port; + struct delayed_work *del_work; + + del_work = to_delayed_work(work); + peer_mep = container_of(del_work, struct br_cfm_peer_mep, ccm_rx_dwork); + + /* After 13 counts (4 * 3,25) then 3.25 intervals are expired */ + if (peer_mep->ccm_rx_count_miss < 13) { + /* 3.25 intervals are NOT expired without CCM reception */ + peer_mep->ccm_rx_count_miss++; + + /* Start timer again */ + ccm_rx_timer_start(peer_mep); + } else { + /* 3.25 intervals are expired without CCM reception. + * CCM defect detected + */ + peer_mep->cc_status.ccm_defect = true; + + /* Change in CCM defect status - notify */ + rcu_read_lock(); + b_port = rcu_dereference(peer_mep->mep->b_port); + if (b_port) + br_cfm_notify(RTM_NEWLINK, b_port); + rcu_read_unlock(); + } +} + +static u32 ccm_tlv_extract(struct sk_buff *skb, u32 index, + struct br_cfm_peer_mep *peer_mep) +{ + __be32 *s_tlv; + __be32 _s_tlv; + u32 h_s_tlv; + u8 *e_tlv; + u8 _e_tlv; + + e_tlv = skb_header_pointer(skb, index, sizeof(_e_tlv), &_e_tlv); + if (!e_tlv) + return 0; + + /* TLV is present - get the status TLV */ + s_tlv = skb_header_pointer(skb, + index, + sizeof(_s_tlv), &_s_tlv); + if (!s_tlv) + return 0; + + h_s_tlv = ntohl(*s_tlv); + if ((h_s_tlv >> 24) == CFM_IF_STATUS_TLV_TYPE) { + /* Interface status TLV */ + peer_mep->cc_status.tlv_seen = true; + peer_mep->cc_status.if_tlv_value = (h_s_tlv & 0xFF); + } + + if ((h_s_tlv >> 24) == CFM_PORT_STATUS_TLV_TYPE) { + /* Port status TLV */ + peer_mep->cc_status.tlv_seen = true; + peer_mep->cc_status.port_tlv_value = (h_s_tlv & 0xFF); + } + + /* The Sender ID TLV is not handled */ + /* The Organization-Specific TLV is not handled */ + + /* Return the length of this tlv. 
+ * This is the length of the value field plus 3 bytes for size of type + * field and length field + */ + return ((h_s_tlv >> 8) & 0xFFFF) + 3; +} + +/* note: already called with rcu_read_lock */ +static int br_cfm_frame_rx(struct net_bridge_port *port, struct sk_buff *skb) +{ + u32 mdlevel, interval, size, index, max; + const struct br_cfm_common_hdr *hdr; + struct br_cfm_peer_mep *peer_mep; + const struct br_cfm_maid *maid; + struct br_cfm_common_hdr _hdr; + struct br_cfm_maid _maid; + struct br_cfm_mep *mep; + struct net_bridge *br; + __be32 *snumber; + __be32 _snumber; + __be16 *mepid; + __be16 _mepid; + + if (port->state == BR_STATE_DISABLED) + return 0; + + hdr = skb_header_pointer(skb, 0, sizeof(_hdr), &_hdr); + if (!hdr) + return 1; + + br = port->br; + mep = br_mep_find_ifindex(br, port->dev->ifindex); + if (unlikely(!mep)) + /* No MEP on this port - must be forwarded */ + return 0; + + mdlevel = hdr->mdlevel_version >> 5; + if (mdlevel > mep->config.mdlevel) + /* The level is above this MEP level - must be forwarded */ + return 0; + + if ((hdr->mdlevel_version & 0x1F) != 0) { + /* Invalid version */ + mep->status.version_unexp_seen = true; + return 1; + } + + if (mdlevel < mep->config.mdlevel) { + /* The level is below this MEP level */ + mep->status.rx_level_low_seen = true; + return 1; + } + + if (hdr->opcode == BR_CFM_OPCODE_CCM) { + /* CCM PDU received. */ + /* MA ID is after common header + sequence number + MEP ID */ + maid = skb_header_pointer(skb, + CFM_CCM_PDU_MAID_OFFSET, + sizeof(_maid), &_maid); + if (!maid) + return 1; + if (memcmp(maid->data, mep->cc_config.exp_maid.data, + sizeof(maid->data))) + /* MA ID not as expected */ + return 1; + + /* MEP ID is after common header + sequence number */ + mepid = skb_header_pointer(skb, + CFM_CCM_PDU_MEPID_OFFSET, + sizeof(_mepid), &_mepid); + if (!mepid) + return 1; + peer_mep = br_peer_mep_find(mep, (u32)ntohs(*mepid)); + if (!peer_mep) + return 1; + + /* Interval is in common header flags */ + interval = hdr->flags & 0x07; + if (mep->cc_config.exp_interval != pdu_to_interval(interval)) + /* Interval not as expected */ + return 1; + + /* A valid CCM frame is received */ + if (peer_mep->cc_status.ccm_defect) { + peer_mep->cc_status.ccm_defect = false; + + /* Change in CCM defect status - notify */ + br_cfm_notify(RTM_NEWLINK, port); + + /* Start CCM RX timer */ + ccm_rx_timer_start(peer_mep); + } + + peer_mep->cc_status.seen = true; + peer_mep->ccm_rx_count_miss = 0; + + /* RDI is in common header flags */ + peer_mep->cc_status.rdi = (hdr->flags & 0x80) ? 
true : false; + + /* Sequence number is after common header */ + snumber = skb_header_pointer(skb, + CFM_CCM_PDU_SEQNR_OFFSET, + sizeof(_snumber), &_snumber); + if (!snumber) + return 1; + if (ntohl(*snumber) != (mep->ccm_rx_snumber + 1)) + /* Unexpected sequence number */ + peer_mep->cc_status.seq_unexp_seen = true; + + mep->ccm_rx_snumber = ntohl(*snumber); + + /* TLV end is after common header + sequence number + MEP ID + + * MA ID + ITU reserved + */ + index = CFM_CCM_PDU_TLV_OFFSET; + max = 0; + do { /* Handle all TLVs */ + size = ccm_tlv_extract(skb, index, peer_mep); + index += size; + max += 1; + } while (size != 0 && max < 4); /* Max four TLVs possible */ + + return 1; + } + + mep->status.opcode_unexp_seen = true; + + return 1; +} + +static struct br_frame_type cfm_frame_type __read_mostly = { + .type = cpu_to_be16(ETH_P_CFM), + .frame_handler = br_cfm_frame_rx, +}; + +int br_cfm_mep_create(struct net_bridge *br, + const u32 instance, + struct br_cfm_mep_create *const create, + struct netlink_ext_ack *extack) +{ + struct net_bridge_port *p; + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + if (create->domain == BR_CFM_VLAN) { + NL_SET_ERR_MSG_MOD(extack, + "VLAN domain not supported"); + return -EINVAL; + } + if (create->domain != BR_CFM_PORT) { + NL_SET_ERR_MSG_MOD(extack, + "Invalid domain value"); + return -EINVAL; + } + if (create->direction == BR_CFM_MEP_DIRECTION_UP) { + NL_SET_ERR_MSG_MOD(extack, + "Up-MEP not supported"); + return -EINVAL; + } + if (create->direction != BR_CFM_MEP_DIRECTION_DOWN) { + NL_SET_ERR_MSG_MOD(extack, + "Invalid direction value"); + return -EINVAL; + } + p = br_mep_get_port(br, create->ifindex); + if (!p) { + NL_SET_ERR_MSG_MOD(extack, + "Port is not related to bridge"); + return -EINVAL; + } + mep = br_mep_find(br, instance); + if (mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance already exists"); + return -EEXIST; + } + + /* In PORT domain only one instance can be created per port */ + if (create->domain == BR_CFM_PORT) { + mep = br_mep_find_ifindex(br, create->ifindex); + if (mep) { + NL_SET_ERR_MSG_MOD(extack, + "Only one Port MEP on a port allowed"); + return -EINVAL; + } + } + + mep = kzalloc(sizeof(*mep), GFP_KERNEL); + if (!mep) + return -ENOMEM; + + mep->create = *create; + mep->instance = instance; + rcu_assign_pointer(mep->b_port, p); + + INIT_HLIST_HEAD(&mep->peer_mep_list); + INIT_DELAYED_WORK(&mep->ccm_tx_dwork, ccm_tx_work_expired); + + if (hlist_empty(&br->mep_list)) + br_add_frame(br, &cfm_frame_type); + + hlist_add_tail_rcu(&mep->head, &br->mep_list); + + return 0; +} + +static void mep_delete_implementation(struct net_bridge *br, + struct br_cfm_mep *mep) +{ + struct br_cfm_peer_mep *peer_mep; + struct hlist_node *n_store; + + ASSERT_RTNL(); + + /* Empty and free peer MEP list */ + hlist_for_each_entry_safe(peer_mep, n_store, &mep->peer_mep_list, head) { + cancel_delayed_work_sync(&peer_mep->ccm_rx_dwork); + hlist_del_rcu(&peer_mep->head); + kfree_rcu(peer_mep, rcu); + } + + cancel_delayed_work_sync(&mep->ccm_tx_dwork); + + RCU_INIT_POINTER(mep->b_port, NULL); + hlist_del_rcu(&mep->head); + kfree_rcu(mep, rcu); + + if (hlist_empty(&br->mep_list)) + br_del_frame(br, &cfm_frame_type); +} + +int br_cfm_mep_delete(struct net_bridge *br, + const u32 instance, + struct netlink_ext_ack *extack) +{ + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + mep_delete_implementation(br, mep); + + return 0; +} + 
+int br_cfm_mep_config_set(struct net_bridge *br, + const u32 instance, + const struct br_cfm_mep_config *const config, + struct netlink_ext_ack *extack) +{ + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + mep->config = *config; + + return 0; +} + +int br_cfm_cc_config_set(struct net_bridge *br, + const u32 instance, + const struct br_cfm_cc_config *const config, + struct netlink_ext_ack *extack) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + /* Check for no change in configuration */ + if (memcmp(config, &mep->cc_config, sizeof(*config)) == 0) + return 0; + + if (config->enable && !mep->cc_config.enable) + /* CC is enabled */ + hlist_for_each_entry(peer_mep, &mep->peer_mep_list, head) + cc_peer_enable(peer_mep); + + if (!config->enable && mep->cc_config.enable) + /* CC is disabled */ + hlist_for_each_entry(peer_mep, &mep->peer_mep_list, head) + cc_peer_disable(peer_mep); + + mep->cc_config = *config; + mep->ccm_rx_snumber = 0; + mep->ccm_tx_snumber = 1; + + return 0; +} + +int br_cfm_cc_peer_mep_add(struct net_bridge *br, const u32 instance, + u32 mepid, + struct netlink_ext_ack *extack) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + peer_mep = br_peer_mep_find(mep, mepid); + if (peer_mep) { + NL_SET_ERR_MSG_MOD(extack, + "Peer MEP-ID already exists"); + return -EEXIST; + } + + peer_mep = kzalloc(sizeof(*peer_mep), GFP_KERNEL); + if (!peer_mep) + return -ENOMEM; + + peer_mep->mepid = mepid; + peer_mep->mep = mep; + INIT_DELAYED_WORK(&peer_mep->ccm_rx_dwork, ccm_rx_work_expired); + + if (mep->cc_config.enable) + cc_peer_enable(peer_mep); + + hlist_add_tail_rcu(&peer_mep->head, &mep->peer_mep_list); + + return 0; +} + +int br_cfm_cc_peer_mep_remove(struct net_bridge *br, const u32 instance, + u32 mepid, + struct netlink_ext_ack *extack) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + peer_mep = br_peer_mep_find(mep, mepid); + if (!peer_mep) { + NL_SET_ERR_MSG_MOD(extack, + "Peer MEP-ID does not exists"); + return -ENOENT; + } + + cc_peer_disable(peer_mep); + + hlist_del_rcu(&peer_mep->head); + kfree_rcu(peer_mep, rcu); + + return 0; +} + +int br_cfm_cc_rdi_set(struct net_bridge *br, const u32 instance, + const bool rdi, struct netlink_ext_ack *extack) +{ + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + mep->rdi = rdi; + + return 0; +} + +int br_cfm_cc_ccm_tx(struct net_bridge *br, const u32 instance, + const struct br_cfm_cc_ccm_tx_info *const tx_info, + struct netlink_ext_ack *extack) +{ + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + mep = br_mep_find(br, instance); + if (!mep) { + NL_SET_ERR_MSG_MOD(extack, + "MEP instance does not exists"); + return -ENOENT; + } + + if (memcmp(tx_info, &mep->cc_ccm_tx_info, sizeof(*tx_info)) == 0) { + /* No change in tx_info. 
*/ + if (mep->cc_ccm_tx_info.period == 0) + /* Transmission is not enabled - just return */ + return 0; + + /* Transmission is ongoing, the end time is recalculated */ + mep->ccm_tx_end = jiffies + + usecs_to_jiffies(tx_info->period * 1000000); + return 0; + } + + if (tx_info->period == 0 && mep->cc_ccm_tx_info.period == 0) + /* Some change in info and transmission is not ongoing */ + goto save; + + if (tx_info->period != 0 && mep->cc_ccm_tx_info.period != 0) { + /* Some change in info and transmission is ongoing + * The end time is recalculated + */ + mep->ccm_tx_end = jiffies + + usecs_to_jiffies(tx_info->period * 1000000); + + goto save; + } + + if (tx_info->period == 0 && mep->cc_ccm_tx_info.period != 0) { + cancel_delayed_work_sync(&mep->ccm_tx_dwork); + goto save; + } + + /* Start delayed work to transmit CCM frames. It is done with zero delay + * to send first frame immediately + */ + mep->ccm_tx_end = jiffies + usecs_to_jiffies(tx_info->period * 1000000); + queue_delayed_work(system_wq, &mep->ccm_tx_dwork, 0); + +save: + mep->cc_ccm_tx_info = *tx_info; + + return 0; +} + +int br_cfm_mep_count(struct net_bridge *br, u32 *count) +{ + struct br_cfm_mep *mep; + + *count = 0; + + rcu_read_lock(); + hlist_for_each_entry_rcu(mep, &br->mep_list, head) + *count += 1; + rcu_read_unlock(); + + return 0; +} + +int br_cfm_peer_mep_count(struct net_bridge *br, u32 *count) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + + *count = 0; + + rcu_read_lock(); + hlist_for_each_entry_rcu(mep, &br->mep_list, head) + hlist_for_each_entry_rcu(peer_mep, &mep->peer_mep_list, head) + *count += 1; + rcu_read_unlock(); + + return 0; +} + +bool br_cfm_created(struct net_bridge *br) +{ + return !hlist_empty(&br->mep_list); +} + +/* Deletes the CFM instances on a specific bridge port + */ +void br_cfm_port_del(struct net_bridge *br, struct net_bridge_port *port) +{ + struct hlist_node *n_store; + struct br_cfm_mep *mep; + + ASSERT_RTNL(); + + hlist_for_each_entry_safe(mep, n_store, &br->mep_list, head) + if (mep->create.ifindex == port->dev->ifindex) + mep_delete_implementation(br, mep); +} diff --git a/net/bridge/br_cfm_netlink.c b/net/bridge/br_cfm_netlink.c new file mode 100644 index 000000000000..5c4c369f8536 --- /dev/null +++ b/net/bridge/br_cfm_netlink.c @@ -0,0 +1,726 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <net/genetlink.h> + +#include "br_private.h" +#include "br_private_cfm.h" + +static const struct nla_policy +br_cfm_mep_create_policy[IFLA_BRIDGE_CFM_MEP_CREATE_MAX + 1] = { + [IFLA_BRIDGE_CFM_MEP_CREATE_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_MEP_CREATE_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_MEP_CREATE_DOMAIN] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_MEP_CREATE_DIRECTION] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_MEP_CREATE_IFINDEX] = { .type = NLA_U32 }, +}; + +static const struct nla_policy +br_cfm_mep_delete_policy[IFLA_BRIDGE_CFM_MEP_DELETE_MAX + 1] = { + [IFLA_BRIDGE_CFM_MEP_DELETE_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_MEP_DELETE_INSTANCE] = { .type = NLA_U32 }, +}; + +static const struct nla_policy +br_cfm_mep_config_policy[IFLA_BRIDGE_CFM_MEP_CONFIG_MAX + 1] = { + [IFLA_BRIDGE_CFM_MEP_CONFIG_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_MEP_CONFIG_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_MEP_CONFIG_UNICAST_MAC] = NLA_POLICY_ETH_ADDR, + [IFLA_BRIDGE_CFM_MEP_CONFIG_MDLEVEL] = NLA_POLICY_MAX(NLA_U32, 7), + [IFLA_BRIDGE_CFM_MEP_CONFIG_MEPID] = NLA_POLICY_MAX(NLA_U32, 0x1FFF), +}; + +static const struct 
nla_policy +br_cfm_cc_config_policy[IFLA_BRIDGE_CFM_CC_CONFIG_MAX + 1] = { + [IFLA_BRIDGE_CFM_CC_CONFIG_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_CC_CONFIG_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CONFIG_ENABLE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CONFIG_EXP_INTERVAL] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CONFIG_EXP_MAID] = { + .type = NLA_BINARY, .len = CFM_MAID_LENGTH }, +}; + +static const struct nla_policy +br_cfm_cc_peer_mep_policy[IFLA_BRIDGE_CFM_CC_PEER_MEP_MAX + 1] = { + [IFLA_BRIDGE_CFM_CC_PEER_MEP_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_PEER_MEPID] = NLA_POLICY_MAX(NLA_U32, 0x1FFF), +}; + +static const struct nla_policy +br_cfm_cc_rdi_policy[IFLA_BRIDGE_CFM_CC_RDI_MAX + 1] = { + [IFLA_BRIDGE_CFM_CC_RDI_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_CC_RDI_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_RDI_RDI] = { .type = NLA_U32 }, +}; + +static const struct nla_policy +br_cfm_cc_ccm_tx_policy[IFLA_BRIDGE_CFM_CC_CCM_TX_MAX + 1] = { + [IFLA_BRIDGE_CFM_CC_CCM_TX_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_INSTANCE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_DMAC] = NLA_POLICY_ETH_ADDR, + [IFLA_BRIDGE_CFM_CC_CCM_TX_SEQ_NO_UPDATE] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_PERIOD] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV_VALUE] = { .type = NLA_U8 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV] = { .type = NLA_U32 }, + [IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV_VALUE] = { .type = NLA_U8 }, +}; + +static const struct nla_policy +br_cfm_policy[IFLA_BRIDGE_CFM_MAX + 1] = { + [IFLA_BRIDGE_CFM_UNSPEC] = { .type = NLA_REJECT }, + [IFLA_BRIDGE_CFM_MEP_CREATE] = + NLA_POLICY_NESTED(br_cfm_mep_create_policy), + [IFLA_BRIDGE_CFM_MEP_DELETE] = + NLA_POLICY_NESTED(br_cfm_mep_delete_policy), + [IFLA_BRIDGE_CFM_MEP_CONFIG] = + NLA_POLICY_NESTED(br_cfm_mep_config_policy), + [IFLA_BRIDGE_CFM_CC_CONFIG] = + NLA_POLICY_NESTED(br_cfm_cc_config_policy), + [IFLA_BRIDGE_CFM_CC_PEER_MEP_ADD] = + NLA_POLICY_NESTED(br_cfm_cc_peer_mep_policy), + [IFLA_BRIDGE_CFM_CC_PEER_MEP_REMOVE] = + NLA_POLICY_NESTED(br_cfm_cc_peer_mep_policy), + [IFLA_BRIDGE_CFM_CC_RDI] = + NLA_POLICY_NESTED(br_cfm_cc_rdi_policy), + [IFLA_BRIDGE_CFM_CC_CCM_TX] = + NLA_POLICY_NESTED(br_cfm_cc_ccm_tx_policy), +}; + +static int br_mep_create_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_MEP_CREATE_MAX + 1]; + struct br_cfm_mep_create create; + u32 instance; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_MEP_CREATE_MAX, attr, + br_cfm_mep_create_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_MEP_CREATE_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CREATE_DOMAIN]) { + NL_SET_ERR_MSG_MOD(extack, "Missing DOMAIN attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CREATE_DIRECTION]) { + NL_SET_ERR_MSG_MOD(extack, "Missing DIRECTION attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CREATE_IFINDEX]) { + NL_SET_ERR_MSG_MOD(extack, "Missing IFINDEX attribute"); + return -EINVAL; + } + + memset(&create, 0, sizeof(create)); + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CREATE_INSTANCE]); + create.domain = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CREATE_DOMAIN]); + create.direction = 
nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CREATE_DIRECTION]); + create.ifindex = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CREATE_IFINDEX]); + + return br_cfm_mep_create(br, instance, &create, extack); +} + +static int br_mep_delete_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_MEP_DELETE_MAX + 1]; + u32 instance; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_MEP_DELETE_MAX, attr, + br_cfm_mep_delete_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_MEP_DELETE_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, + "Missing INSTANCE attribute"); + return -EINVAL; + } + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_DELETE_INSTANCE]); + + return br_cfm_mep_delete(br, instance, extack); +} + +static int br_mep_config_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_MEP_CONFIG_MAX + 1]; + struct br_cfm_mep_config config; + u32 instance; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_MEP_CONFIG_MAX, attr, + br_cfm_mep_config_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_MEP_CONFIG_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CONFIG_UNICAST_MAC]) { + NL_SET_ERR_MSG_MOD(extack, "Missing UNICAST_MAC attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CONFIG_MDLEVEL]) { + NL_SET_ERR_MSG_MOD(extack, "Missing MDLEVEL attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_MEP_CONFIG_MEPID]) { + NL_SET_ERR_MSG_MOD(extack, "Missing MEPID attribute"); + return -EINVAL; + } + + memset(&config, 0, sizeof(config)); + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CONFIG_INSTANCE]); + nla_memcpy(&config.unicast_mac.addr, + tb[IFLA_BRIDGE_CFM_MEP_CONFIG_UNICAST_MAC], + sizeof(config.unicast_mac.addr)); + config.mdlevel = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CONFIG_MDLEVEL]); + config.mepid = nla_get_u32(tb[IFLA_BRIDGE_CFM_MEP_CONFIG_MEPID]); + + return br_cfm_mep_config_set(br, instance, &config, extack); +} + +static int br_cc_config_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_CC_CONFIG_MAX + 1]; + struct br_cfm_cc_config config; + u32 instance; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_CC_CONFIG_MAX, attr, + br_cfm_cc_config_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_CC_CONFIG_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CONFIG_ENABLE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing ENABLE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CONFIG_EXP_INTERVAL]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INTERVAL attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CONFIG_EXP_MAID]) { + NL_SET_ERR_MSG_MOD(extack, "Missing MAID attribute"); + return -EINVAL; + } + + memset(&config, 0, sizeof(config)); + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CONFIG_INSTANCE]); + config.enable = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CONFIG_ENABLE]); + config.exp_interval = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CONFIG_EXP_INTERVAL]); + nla_memcpy(&config.exp_maid.data, tb[IFLA_BRIDGE_CFM_CC_CONFIG_EXP_MAID], + sizeof(config.exp_maid.data)); + + return br_cfm_cc_config_set(br, instance, &config, extack); +} + +static int br_cc_peer_mep_add_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + 
struct nlattr *tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_MAX + 1]; + u32 instance, peer_mep_id; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_CC_PEER_MEP_MAX, attr, + br_cfm_cc_peer_mep_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_PEER_MEPID]) { + NL_SET_ERR_MSG_MOD(extack, "Missing PEER_MEP_ID attribute"); + return -EINVAL; + } + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE]); + peer_mep_id = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_PEER_MEPID]); + + return br_cfm_cc_peer_mep_add(br, instance, peer_mep_id, extack); +} + +static int br_cc_peer_mep_remove_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_MAX + 1]; + u32 instance, peer_mep_id; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_CC_PEER_MEP_MAX, attr, + br_cfm_cc_peer_mep_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_PEER_MEPID]) { + NL_SET_ERR_MSG_MOD(extack, "Missing PEER_MEP_ID attribute"); + return -EINVAL; + } + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE]); + peer_mep_id = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_PEER_MEPID]); + + return br_cfm_cc_peer_mep_remove(br, instance, peer_mep_id, extack); +} + +static int br_cc_rdi_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_CC_RDI_MAX + 1]; + u32 instance, rdi; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_CC_RDI_MAX, attr, + br_cfm_cc_rdi_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_CC_RDI_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_RDI_RDI]) { + NL_SET_ERR_MSG_MOD(extack, "Missing RDI attribute"); + return -EINVAL; + } + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_RDI_INSTANCE]); + rdi = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_RDI_RDI]); + + return br_cfm_cc_rdi_set(br, instance, rdi, extack); +} + +static int br_cc_ccm_tx_parse(struct net_bridge *br, struct nlattr *attr, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_CC_CCM_TX_MAX + 1]; + struct br_cfm_cc_ccm_tx_info tx_info; + u32 instance; + int err; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_CC_CCM_TX_MAX, attr, + br_cfm_cc_ccm_tx_policy, extack); + if (err) + return err; + + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_INSTANCE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing INSTANCE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_DMAC]) { + NL_SET_ERR_MSG_MOD(extack, "Missing DMAC attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_SEQ_NO_UPDATE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing SEQ_NO_UPDATE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PERIOD]) { + NL_SET_ERR_MSG_MOD(extack, "Missing PERIOD attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV]) { + NL_SET_ERR_MSG_MOD(extack, "Missing IF_TLV attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV_VALUE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing IF_TLV_VALUE attribute"); + return -EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV]) { + NL_SET_ERR_MSG_MOD(extack, "Missing PORT_TLV attribute"); + return 
-EINVAL; + } + if (!tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV_VALUE]) { + NL_SET_ERR_MSG_MOD(extack, "Missing PORT_TLV_VALUE attribute"); + return -EINVAL; + } + + memset(&tx_info, 0, sizeof(tx_info)); + + instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_RDI_INSTANCE]); + nla_memcpy(&tx_info.dmac.addr, + tb[IFLA_BRIDGE_CFM_CC_CCM_TX_DMAC], + sizeof(tx_info.dmac.addr)); + tx_info.seq_no_update = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_SEQ_NO_UPDATE]); + tx_info.period = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PERIOD]); + tx_info.if_tlv = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV]); + tx_info.if_tlv_value = nla_get_u8(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV_VALUE]); + tx_info.port_tlv = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV]); + tx_info.port_tlv_value = nla_get_u8(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV_VALUE]); + + return br_cfm_cc_ccm_tx(br, instance, &tx_info, extack); +} + +int br_cfm_parse(struct net_bridge *br, struct net_bridge_port *p, + struct nlattr *attr, int cmd, struct netlink_ext_ack *extack) +{ + struct nlattr *tb[IFLA_BRIDGE_CFM_MAX + 1]; + int err; + + /* When this function is called for a port then the br pointer is + * invalid, therefor set the br to point correctly + */ + if (p) + br = p->br; + + err = nla_parse_nested(tb, IFLA_BRIDGE_CFM_MAX, attr, + br_cfm_policy, extack); + if (err) + return err; + + if (tb[IFLA_BRIDGE_CFM_MEP_CREATE]) { + err = br_mep_create_parse(br, tb[IFLA_BRIDGE_CFM_MEP_CREATE], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_MEP_DELETE]) { + err = br_mep_delete_parse(br, tb[IFLA_BRIDGE_CFM_MEP_DELETE], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_MEP_CONFIG]) { + err = br_mep_config_parse(br, tb[IFLA_BRIDGE_CFM_MEP_CONFIG], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_CC_CONFIG]) { + err = br_cc_config_parse(br, tb[IFLA_BRIDGE_CFM_CC_CONFIG], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_ADD]) { + err = br_cc_peer_mep_add_parse(br, tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_ADD], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_REMOVE]) { + err = br_cc_peer_mep_remove_parse(br, tb[IFLA_BRIDGE_CFM_CC_PEER_MEP_REMOVE], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_CC_RDI]) { + err = br_cc_rdi_parse(br, tb[IFLA_BRIDGE_CFM_CC_RDI], + extack); + if (err) + return err; + } + + if (tb[IFLA_BRIDGE_CFM_CC_CCM_TX]) { + err = br_cc_ccm_tx_parse(br, tb[IFLA_BRIDGE_CFM_CC_CCM_TX], + extack); + if (err) + return err; + } + + return 0; +} + +int br_cfm_config_fill_info(struct sk_buff *skb, struct net_bridge *br) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + struct nlattr *tb; + + hlist_for_each_entry_rcu(mep, &br->mep_list, head) { + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_MEP_CREATE_INFO); + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CREATE_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CREATE_DOMAIN, + mep->create.domain)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CREATE_DIRECTION, + mep->create.direction)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CREATE_IFINDEX, + mep->create.ifindex)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_MEP_CONFIG_INFO); + + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CONFIG_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put(skb, 
IFLA_BRIDGE_CFM_MEP_CONFIG_UNICAST_MAC, + sizeof(mep->config.unicast_mac.addr), + mep->config.unicast_mac.addr)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CONFIG_MDLEVEL, + mep->config.mdlevel)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_CONFIG_MEPID, + mep->config.mepid)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_CC_CONFIG_INFO); + + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CONFIG_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CONFIG_ENABLE, + mep->cc_config.enable)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CONFIG_EXP_INTERVAL, + mep->cc_config.exp_interval)) + goto nla_put_failure; + + if (nla_put(skb, IFLA_BRIDGE_CFM_CC_CONFIG_EXP_MAID, + sizeof(mep->cc_config.exp_maid.data), + mep->cc_config.exp_maid.data)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_CC_RDI_INFO); + + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_RDI_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_RDI_RDI, + mep->rdi)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_INFO); + + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_DMAC, + sizeof(mep->cc_ccm_tx_info.dmac), + mep->cc_ccm_tx_info.dmac.addr)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_SEQ_NO_UPDATE, + mep->cc_ccm_tx_info.seq_no_update)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_PERIOD, + mep->cc_ccm_tx_info.period)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV, + mep->cc_ccm_tx_info.if_tlv)) + goto nla_put_failure; + + if (nla_put_u8(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_IF_TLV_VALUE, + mep->cc_ccm_tx_info.if_tlv_value)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV, + mep->cc_ccm_tx_info.port_tlv)) + goto nla_put_failure; + + if (nla_put_u8(skb, IFLA_BRIDGE_CFM_CC_CCM_TX_PORT_TLV_VALUE, + mep->cc_ccm_tx_info.port_tlv_value)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + + hlist_for_each_entry_rcu(peer_mep, &mep->peer_mep_list, head) { + tb = nla_nest_start(skb, + IFLA_BRIDGE_CFM_CC_PEER_MEP_INFO); + + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_MEP_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_PEER_MEPID, + peer_mep->mepid)) + goto nla_put_failure; + + nla_nest_end(skb, tb); + } + } + + return 0; + +nla_put_failure: + nla_nest_cancel(skb, tb); + +nla_info_failure: + return -EMSGSIZE; +} + +int br_cfm_status_fill_info(struct sk_buff *skb, + struct net_bridge *br, + bool getlink) +{ + struct br_cfm_peer_mep *peer_mep; + struct br_cfm_mep *mep; + struct nlattr *tb; + + hlist_for_each_entry_rcu(mep, &br->mep_list, head) { + tb = nla_nest_start(skb, IFLA_BRIDGE_CFM_MEP_STATUS_INFO); + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_MEP_STATUS_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_MEP_STATUS_OPCODE_UNEXP_SEEN, + mep->status.opcode_unexp_seen)) + goto nla_put_failure; + + if (nla_put_u32(skb, + 
IFLA_BRIDGE_CFM_MEP_STATUS_VERSION_UNEXP_SEEN, + mep->status.version_unexp_seen)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_MEP_STATUS_RX_LEVEL_LOW_SEEN, + mep->status.rx_level_low_seen)) + goto nla_put_failure; + + /* Only clear if this is a GETLINK */ + if (getlink) { + /* Clear all 'seen' indications */ + mep->status.opcode_unexp_seen = false; + mep->status.version_unexp_seen = false; + mep->status.rx_level_low_seen = false; + } + + nla_nest_end(skb, tb); + + hlist_for_each_entry_rcu(peer_mep, &mep->peer_mep_list, head) { + tb = nla_nest_start(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_INFO); + if (!tb) + goto nla_info_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_INSTANCE, + mep->instance)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_PEER_MEPID, + peer_mep->mepid)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_CCM_DEFECT, + peer_mep->cc_status.ccm_defect)) + goto nla_put_failure; + + if (nla_put_u32(skb, IFLA_BRIDGE_CFM_CC_PEER_STATUS_RDI, + peer_mep->cc_status.rdi)) + goto nla_put_failure; + + if (nla_put_u8(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_PORT_TLV_VALUE, + peer_mep->cc_status.port_tlv_value)) + goto nla_put_failure; + + if (nla_put_u8(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_IF_TLV_VALUE, + peer_mep->cc_status.if_tlv_value)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_SEEN, + peer_mep->cc_status.seen)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_TLV_SEEN, + peer_mep->cc_status.tlv_seen)) + goto nla_put_failure; + + if (nla_put_u32(skb, + IFLA_BRIDGE_CFM_CC_PEER_STATUS_SEQ_UNEXP_SEEN, + peer_mep->cc_status.seq_unexp_seen)) + goto nla_put_failure; + + if (getlink) { /* Only clear if this is a GETLINK */ + /* Clear all 'seen' indications */ + peer_mep->cc_status.seen = false; + peer_mep->cc_status.tlv_seen = false; + peer_mep->cc_status.seq_unexp_seen = false; + } + + nla_nest_end(skb, tb); + } + } + + return 0; + +nla_put_failure: + nla_nest_cancel(skb, tb); + +nla_info_failure: + return -EMSGSIZE; +} diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index 6f742fee874a..9b5d62744acc 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -454,9 +454,13 @@ void br_dev_setup(struct net_device *dev) spin_lock_init(&br->lock); INIT_LIST_HEAD(&br->port_list); INIT_HLIST_HEAD(&br->fdb_list); + INIT_HLIST_HEAD(&br->frame_type_list); #if IS_ENABLED(CONFIG_BRIDGE_MRP) INIT_LIST_HEAD(&br->mrp_list); #endif +#if IS_ENABLED(CONFIG_BRIDGE_CFM) + INIT_HLIST_HEAD(&br->mep_list); +#endif spin_lock_init(&br->hash_lock); br->bridge_id.prio[0] = 0x80; diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c index a0e9a7937412..f7d2f472ae24 100644 --- a/net/bridge/br_if.c +++ b/net/bridge/br_if.c @@ -334,6 +334,7 @@ static void del_nbp(struct net_bridge_port *p) spin_unlock_bh(&br->lock); br_mrp_port_del(br, p); + br_cfm_port_del(br, p); br_ifinfo_notify(RTM_DELLINK, NULL, p); diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 59a318b9f646..bece03bf83c4 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -254,6 +254,21 @@ frame_finish: return RX_HANDLER_CONSUMED; } +/* Return 0 if the frame was not processed otherwise 1 + * note: already called with rcu_read_lock + */ +static int br_process_frame_type(struct net_bridge_port *p, + struct sk_buff *skb) +{ + struct br_frame_type *tmp; + + hlist_for_each_entry_rcu(tmp, &p->br->frame_type_list, list) + if 
(unlikely(tmp->type == skb->protocol)) + return tmp->frame_handler(p, skb); + + return 0; +} + /* * Return NULL if skb is handled * note: already called with rcu_read_lock @@ -343,7 +358,7 @@ static rx_handler_result_t br_handle_frame(struct sk_buff **pskb) } } - if (unlikely(br_mrp_process(p, skb))) + if (unlikely(br_process_frame_type(p, skb))) return RX_HANDLER_PASS; forward: @@ -380,3 +395,19 @@ rx_handler_func_t *br_get_rx_handler(const struct net_device *dev) return br_handle_frame; } + +void br_add_frame(struct net_bridge *br, struct br_frame_type *ft) +{ + hlist_add_head_rcu(&ft->list, &br->frame_type_list); +} + +void br_del_frame(struct net_bridge *br, struct br_frame_type *ft) +{ + struct br_frame_type *tmp; + + hlist_for_each_entry(tmp, &br->frame_type_list, list) + if (ft == tmp) { + hlist_del_rcu(&ft->list); + return; + } +} diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c index b36689e6e7cb..f94d72bb7c32 100644 --- a/net/bridge/br_mrp.c +++ b/net/bridge/br_mrp.c @@ -6,6 +6,13 @@ static const u8 mrp_test_dmac[ETH_ALEN] = { 0x1, 0x15, 0x4e, 0x0, 0x0, 0x1 }; static const u8 mrp_in_test_dmac[ETH_ALEN] = { 0x1, 0x15, 0x4e, 0x0, 0x0, 0x3 }; +static int br_mrp_process(struct net_bridge_port *p, struct sk_buff *skb); + +static struct br_frame_type mrp_frame_type __read_mostly = { + .type = cpu_to_be16(ETH_P_MRP), + .frame_handler = br_mrp_process, +}; + static bool br_mrp_is_ring_port(struct net_bridge_port *p_port, struct net_bridge_port *s_port, struct net_bridge_port *port) @@ -445,6 +452,9 @@ static void br_mrp_del_impl(struct net_bridge *br, struct br_mrp *mrp) list_del_rcu(&mrp->list); kfree_rcu(mrp, rcu); + + if (list_empty(&br->mrp_list)) + br_del_frame(br, &mrp_frame_type); } /* Adds a new MRP instance. @@ -493,6 +503,9 @@ int br_mrp_add(struct net_bridge *br, struct br_mrp_instance *instance) spin_unlock_bh(&br->lock); rcu_assign_pointer(mrp->s_port, p); + if (list_empty(&br->mrp_list)) + br_add_frame(br, &mrp_frame_type); + INIT_DELAYED_WORK(&mrp->test_work, br_mrp_test_work_expired); INIT_DELAYED_WORK(&mrp->in_test_work, br_mrp_in_test_work_expired); list_add_tail_rcu(&mrp->list, &br->mrp_list); @@ -1172,15 +1185,13 @@ no_forward: * normal forwarding. 
* note: already called with rcu_read_lock */ -int br_mrp_process(struct net_bridge_port *p, struct sk_buff *skb) +static int br_mrp_process(struct net_bridge_port *p, struct sk_buff *skb) { /* If there is no MRP instance do normal forwarding */ if (likely(!(p->flags & BR_MRP_AWARE))) goto out; - if (unlikely(skb->protocol == htons(ETH_P_MRP))) - return br_mrp_rcv(p, skb, p->dev); - + return br_mrp_rcv(p, skb, p->dev); out: return 0; } diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index 92d64abffa87..6952d4852942 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -16,6 +16,7 @@ #include "br_private.h" #include "br_private_stp.h" +#include "br_private_cfm.h" #include "br_private_tunnel.h" static int __get_num_vlan_infos(struct net_bridge_vlan_group *vg, @@ -93,9 +94,11 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev, { struct net_bridge_vlan_group *vg = NULL; struct net_bridge_port *p = NULL; - struct net_bridge *br; - int num_vlan_infos; + struct net_bridge *br = NULL; + u32 num_cfm_peer_mep_infos; + u32 num_cfm_mep_infos; size_t vinfo_sz = 0; + int num_vlan_infos; rcu_read_lock(); if (netif_is_bridge_port(dev)) { @@ -114,6 +117,49 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev, /* Each VLAN is returned in bridge_vlan_info along with flags */ vinfo_sz += num_vlan_infos * nla_total_size(sizeof(struct bridge_vlan_info)); + if (!(filter_mask & RTEXT_FILTER_CFM_STATUS)) + return vinfo_sz; + + if (!br) + return vinfo_sz; + + /* CFM status info must be added */ + br_cfm_mep_count(br, &num_cfm_mep_infos); + br_cfm_peer_mep_count(br, &num_cfm_peer_mep_infos); + + vinfo_sz += nla_total_size(0); /* IFLA_BRIDGE_CFM */ + /* For each status struct the MEP instance (u32) is added */ + /* MEP instance (u32) + br_cfm_mep_status */ + vinfo_sz += num_cfm_mep_infos * + /*IFLA_BRIDGE_CFM_MEP_STATUS_INSTANCE */ + (nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_MEP_STATUS_OPCODE_UNEXP_SEEN */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_MEP_STATUS_VERSION_UNEXP_SEEN */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_MEP_STATUS_RX_LEVEL_LOW_SEEN */ + + nla_total_size(sizeof(u32))); + /* MEP instance (u32) + br_cfm_cc_peer_status */ + vinfo_sz += num_cfm_peer_mep_infos * + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_INSTANCE */ + (nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_PEER_MEPID */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_CCM_DEFECT */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_RDI */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_PORT_TLV_VALUE */ + + nla_total_size(sizeof(u8)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_IF_TLV_VALUE */ + + nla_total_size(sizeof(u8)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_SEEN */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_TLV_SEEN */ + + nla_total_size(sizeof(u32)) + /* IFLA_BRIDGE_CFM_CC_PEER_STATUS_SEQ_UNEXP_SEEN */ + + nla_total_size(sizeof(u32))); + return vinfo_sz; } @@ -377,7 +423,8 @@ nla_put_failure: static int br_fill_ifinfo(struct sk_buff *skb, const struct net_bridge_port *port, u32 pid, u32 seq, int event, unsigned int flags, - u32 filter_mask, const struct net_device *dev) + u32 filter_mask, const struct net_device *dev, + bool getlink) { u8 operstate = netif_running(dev) ? 
dev->operstate : IF_OPER_DOWN; struct nlattr *af = NULL; @@ -426,7 +473,9 @@ static int br_fill_ifinfo(struct sk_buff *skb, if (filter_mask & (RTEXT_FILTER_BRVLAN | RTEXT_FILTER_BRVLAN_COMPRESSED | - RTEXT_FILTER_MRP)) { + RTEXT_FILTER_MRP | + RTEXT_FILTER_CFM_CONFIG | + RTEXT_FILTER_CFM_STATUS)) { af = nla_nest_start_noflag(skb, IFLA_AF_SPEC); if (!af) goto nla_put_failure; @@ -475,6 +524,36 @@ static int br_fill_ifinfo(struct sk_buff *skb, goto nla_put_failure; } + if (filter_mask & (RTEXT_FILTER_CFM_CONFIG | RTEXT_FILTER_CFM_STATUS)) { + struct nlattr *cfm_nest = NULL; + int err; + + if (!br_cfm_created(br) || port) + goto done; + + cfm_nest = nla_nest_start(skb, IFLA_BRIDGE_CFM); + if (!cfm_nest) + goto nla_put_failure; + + if (filter_mask & RTEXT_FILTER_CFM_CONFIG) { + rcu_read_lock(); + err = br_cfm_config_fill_info(skb, br); + rcu_read_unlock(); + if (err) + goto nla_put_failure; + } + + if (filter_mask & RTEXT_FILTER_CFM_STATUS) { + rcu_read_lock(); + err = br_cfm_status_fill_info(skb, br, getlink); + rcu_read_unlock(); + if (err) + goto nla_put_failure; + } + + nla_nest_end(skb, cfm_nest); + } + done: if (af) nla_nest_end(skb, af); @@ -486,11 +565,9 @@ nla_put_failure: return -EMSGSIZE; } -/* Notify listeners of a change in bridge or port information */ -void br_ifinfo_notify(int event, const struct net_bridge *br, - const struct net_bridge_port *port) +void br_info_notify(int event, const struct net_bridge *br, + const struct net_bridge_port *port, u32 filter) { - u32 filter = RTEXT_FILTER_BRVLAN_COMPRESSED; struct net_device *dev; struct sk_buff *skb; int err = -ENOBUFS; @@ -515,7 +592,7 @@ void br_ifinfo_notify(int event, const struct net_bridge *br, if (skb == NULL) goto errout; - err = br_fill_ifinfo(skb, port, 0, 0, event, 0, filter, dev); + err = br_fill_ifinfo(skb, port, 0, 0, event, 0, filter, dev, false); if (err < 0) { /* -EMSGSIZE implies BUG in br_nlmsg_size() */ WARN_ON(err == -EMSGSIZE); @@ -528,6 +605,15 @@ errout: rtnl_set_sk_err(net, RTNLGRP_LINK, err); } +/* Notify listeners of a change in bridge or port information */ +void br_ifinfo_notify(int event, const struct net_bridge *br, + const struct net_bridge_port *port) +{ + u32 filter = RTEXT_FILTER_BRVLAN_COMPRESSED; + + return br_info_notify(event, br, port, filter); +} + /* * Dump information about all ports, in response to GETLINK */ @@ -538,11 +624,13 @@ int br_getlink(struct sk_buff *skb, u32 pid, u32 seq, if (!port && !(filter_mask & RTEXT_FILTER_BRVLAN) && !(filter_mask & RTEXT_FILTER_BRVLAN_COMPRESSED) && - !(filter_mask & RTEXT_FILTER_MRP)) + !(filter_mask & RTEXT_FILTER_MRP) && + !(filter_mask & RTEXT_FILTER_CFM_CONFIG) && + !(filter_mask & RTEXT_FILTER_CFM_STATUS)) return 0; return br_fill_ifinfo(skb, port, pid, seq, RTM_NEWLINK, nlflags, - filter_mask, dev); + filter_mask, dev, true); } static int br_vlan_info(struct net_bridge *br, struct net_bridge_port *p, @@ -700,6 +788,11 @@ static int br_afspec(struct net_bridge *br, if (err) return err; break; + case IFLA_BRIDGE_CFM: + err = br_cfm_parse(br, p, attr, cmd, extack); + if (err) + return err; + break; } } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 345118e35c42..905d406a2fc7 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -383,7 +383,7 @@ enum net_bridge_opts { struct net_bridge { spinlock_t lock; spinlock_t hash_lock; - struct list_head port_list; + struct hlist_head frame_type_list; struct net_device *dev; struct pcpu_sw_netstats __percpu *stats; unsigned long options; @@ -395,6 +395,7 @@ struct 
 #endif

 	struct rhashtable		fdb_hash_tbl;
+	struct list_head		port_list;
 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
 	union {
 		struct rtable		fake_rtable;
@@ -483,6 +484,9 @@ struct net_bridge {
 #if IS_ENABLED(CONFIG_BRIDGE_MRP)
 	struct list_head		mrp_list;
 #endif
+#if IS_ENABLED(CONFIG_BRIDGE_CFM)
+	struct hlist_head		mep_list;
+#endif
 };

 struct br_input_skb_cb {
@@ -755,6 +759,16 @@ int nbp_backup_change(struct net_bridge_port *p, struct net_device *backup_dev);
 int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb);
 rx_handler_func_t *br_get_rx_handler(const struct net_device *dev);

+struct br_frame_type {
+	__be16			type;
+	int			(*frame_handler)(struct net_bridge_port *port,
+						 struct sk_buff *skb);
+	struct hlist_node	list;
+};
+
+void br_add_frame(struct net_bridge *br, struct br_frame_type *ft);
+void br_del_frame(struct net_bridge *br, struct br_frame_type *ft);
+
 static inline bool br_rx_handler_check_rcu(const struct net_device *dev)
 {
 	return rcu_dereference(dev->rx_handler) == br_get_rx_handler(dev);
@@ -1417,7 +1431,6 @@ extern int (*br_fdb_test_addr_hook)(struct net_device *dev, unsigned char *addr)
 #if IS_ENABLED(CONFIG_BRIDGE_MRP)
 int br_mrp_parse(struct net_bridge *br, struct net_bridge_port *p,
 		 struct nlattr *attr, int cmd, struct netlink_ext_ack *extack);
-int br_mrp_process(struct net_bridge_port *p, struct sk_buff *skb);
 bool br_mrp_enabled(struct net_bridge *br);
 void br_mrp_port_del(struct net_bridge *br, struct net_bridge_port *p);
 int br_mrp_fill_info(struct sk_buff *skb, struct net_bridge *br);
@@ -1429,11 +1442,6 @@ static inline int br_mrp_parse(struct net_bridge *br, struct net_bridge_port *p,
 	return -EOPNOTSUPP;
 }

-static inline int br_mrp_process(struct net_bridge_port *p, struct sk_buff *skb)
-{
-	return 0;
-}
-
 static inline bool br_mrp_enabled(struct net_bridge *br)
 {
 	return false;
@@ -1451,12 +1459,67 @@ static inline int br_mrp_fill_info(struct sk_buff *skb, struct net_bridge *br)

 #endif

+/* br_cfm.c */
+#if IS_ENABLED(CONFIG_BRIDGE_CFM)
+int br_cfm_parse(struct net_bridge *br, struct net_bridge_port *p,
+		 struct nlattr *attr, int cmd, struct netlink_ext_ack *extack);
+bool br_cfm_created(struct net_bridge *br);
+void br_cfm_port_del(struct net_bridge *br, struct net_bridge_port *p);
+int br_cfm_config_fill_info(struct sk_buff *skb, struct net_bridge *br);
+int br_cfm_status_fill_info(struct sk_buff *skb,
+			    struct net_bridge *br,
+			    bool getlink);
+int br_cfm_mep_count(struct net_bridge *br, u32 *count);
+int br_cfm_peer_mep_count(struct net_bridge *br, u32 *count);
+#else
+static inline int br_cfm_parse(struct net_bridge *br, struct net_bridge_port *p,
+			       struct nlattr *attr, int cmd,
+			       struct netlink_ext_ack *extack)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline bool br_cfm_created(struct net_bridge *br)
+{
+	return false;
+}
+
+static inline void br_cfm_port_del(struct net_bridge *br,
+				   struct net_bridge_port *p)
+{
+}
+
+static inline int br_cfm_config_fill_info(struct sk_buff *skb, struct net_bridge *br)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int br_cfm_status_fill_info(struct sk_buff *skb,
+					  struct net_bridge *br,
+					  bool getlink)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int br_cfm_mep_count(struct net_bridge *br, u32 *count)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int br_cfm_peer_mep_count(struct net_bridge *br, u32 *count)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
 /* br_netlink.c */
 extern struct rtnl_link_ops br_link_ops;

 int br_netlink_init(void);
 void br_netlink_fini(void);
 void br_ifinfo_notify(int event, const struct net_bridge *br,
 		      const struct net_bridge_port *port);
+void br_info_notify(int event, const struct net_bridge *br,
+		    const struct net_bridge_port *port, u32 filter);
 int br_setlink(struct net_device *dev, struct nlmsghdr *nlmsg, u16 flags,
 	       struct netlink_ext_ack *extack);
 int br_dellink(struct net_device *dev, struct nlmsghdr *nlmsg, u16 flags);
diff --git a/net/bridge/br_private_cfm.h b/net/bridge/br_private_cfm.h
new file mode 100644
index 000000000000..a43a5e7fa2c3
--- /dev/null
+++ b/net/bridge/br_private_cfm.h
@@ -0,0 +1,147 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef _BR_PRIVATE_CFM_H_
+#define _BR_PRIVATE_CFM_H_
+
+#include "br_private.h"
+#include <uapi/linux/cfm_bridge.h>
+
+struct br_cfm_mep_create {
+	enum br_cfm_domain domain; /* Domain for this MEP */
+	enum br_cfm_mep_direction direction; /* Up or Down MEP direction */
+	u32 ifindex; /* Residence port */
+};
+
+int br_cfm_mep_create(struct net_bridge *br,
+		      const u32 instance,
+		      struct br_cfm_mep_create *const create,
+		      struct netlink_ext_ack *extack);
+
+int br_cfm_mep_delete(struct net_bridge *br,
+		      const u32 instance,
+		      struct netlink_ext_ack *extack);
+
+struct br_cfm_mep_config {
+	u32 mdlevel;
+	u32 mepid; /* MEPID for this MEP */
+	struct mac_addr unicast_mac; /* The MEP unicast MAC */
+};
+
+int br_cfm_mep_config_set(struct net_bridge *br,
+			  const u32 instance,
+			  const struct br_cfm_mep_config *const config,
+			  struct netlink_ext_ack *extack);
+
+struct br_cfm_maid {
+	u8 data[CFM_MAID_LENGTH];
+};
+
+struct br_cfm_cc_config {
+	/* Expected received CCM PDU MAID. */
+	struct br_cfm_maid exp_maid;
+
+	/* Expected received CCM PDU interval. */
+	/* Transmitting CCM PDU interval when CCM tx is enabled. */
+	enum br_cfm_ccm_interval exp_interval;
+
+	bool enable; /* Enable/disable CCM PDU handling */
+};
+
+int br_cfm_cc_config_set(struct net_bridge *br,
+			 const u32 instance,
+			 const struct br_cfm_cc_config *const config,
+			 struct netlink_ext_ack *extack);
+
+int br_cfm_cc_peer_mep_add(struct net_bridge *br, const u32 instance,
+			   u32 peer_mep_id,
+			   struct netlink_ext_ack *extack);
+int br_cfm_cc_peer_mep_remove(struct net_bridge *br, const u32 instance,
+			      u32 peer_mep_id,
+			      struct netlink_ext_ack *extack);
+
+/* Transmitted CCM Remote Defect Indication status set.
+ * This RDI is inserted in transmitted CCM PDUs if CCM transmission is enabled.
+ * See br_cfm_cc_ccm_tx() with interval != BR_CFM_CCM_INTERVAL_NONE
+ */
+int br_cfm_cc_rdi_set(struct net_bridge *br, const u32 instance,
+		      const bool rdi, struct netlink_ext_ack *extack);
+
+/* OAM PDU Tx information */
+struct br_cfm_cc_ccm_tx_info {
+	struct mac_addr dmac;
+	/* The CCM will be transmitted for this period in seconds.
+	 * Call br_cfm_cc_ccm_tx before timeout to keep transmission alive.
+	 * When period is zero any ongoing transmission will be stopped.
+	 */
+	u32 period;
+
+	bool seq_no_update; /* Update Tx CCM sequence number */
+	bool if_tlv; /* Insert Interface Status TLV */
+	u8 if_tlv_value; /* Interface Status TLV value */
+	bool port_tlv; /* Insert Port Status TLV */
+	u8 port_tlv_value; /* Port Status TLV value */
+	/* Sender ID TLV ??
+	 * Organization-Specific TLV ??
+	 */
+};
+
+int br_cfm_cc_ccm_tx(struct net_bridge *br, const u32 instance,
+		     const struct br_cfm_cc_ccm_tx_info *const tx_info,
+		     struct netlink_ext_ack *extack);
+
+struct br_cfm_mep_status {
+	/* Indications that an OAM PDU has been seen. */
+	bool opcode_unexp_seen; /* RX of OAM PDU with unexpected opcode */
+	bool version_unexp_seen; /* RX of OAM PDU with unexpected version */
+	bool rx_level_low_seen; /* Rx of OAM PDU with level low */
+};
+
+struct br_cfm_cc_peer_status {
+	/* This CCM related status is based on the latest received CCM PDU. */
+	u8 port_tlv_value; /* Port Status TLV value */
+	u8 if_tlv_value; /* Interface Status TLV value */
+
+	/* CCM has not been received for 3.25 intervals */
+	u8 ccm_defect:1;
+
+	/* (RDI == 1) for last received CCM PDU */
+	u8 rdi:1;
+
+	/* Indications that a CCM PDU has been seen. */
+	u8 seen:1; /* CCM PDU received */
+	u8 tlv_seen:1; /* CCM PDU with TLV received */
+	/* CCM PDU with unexpected sequence number received */
+	u8 seq_unexp_seen:1;
+};
+
+struct br_cfm_mep {
+	/* list header of MEP instances */
+	struct hlist_node head;
+	u32 instance;
+	struct br_cfm_mep_create create;
+	struct br_cfm_mep_config config;
+	struct br_cfm_cc_config cc_config;
+	struct br_cfm_cc_ccm_tx_info cc_ccm_tx_info;
+	/* List of multiple peer MEPs */
+	struct hlist_head peer_mep_list;
+	struct net_bridge_port __rcu *b_port;
+	unsigned long ccm_tx_end;
+	struct delayed_work ccm_tx_dwork;
+	u32 ccm_tx_snumber;
+	u32 ccm_rx_snumber;
+	struct br_cfm_mep_status status;
+	bool rdi;
+	struct rcu_head rcu;
+};
+
+struct br_cfm_peer_mep {
+	struct hlist_node head;
+	struct br_cfm_mep *mep;
+	struct delayed_work ccm_rx_dwork;
+	u32 mepid;
+	struct br_cfm_cc_peer_status cc_status;
+	u32 ccm_rx_count_miss;
+	struct rcu_head rcu;
+};
+
+#endif /* _BR_PRIVATE_CFM_H_ */
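The br_private.h hunk above replaces the hard-wired br_mrp_process() hook with a generic frame-type registration interface: struct br_frame_type plus br_add_frame()/br_del_frame(), chained on the bridge's new frame_type_list. The kernel-side sketch below only illustrates how a component might plug into that interface; it is not code from this series, and the handler body and registration points are assumptions modelled on the br_mrp.c hunk shown earlier (br_mrp_rcv(), BR_MRP_AWARE and ETH_P_MRP are taken from that hunk).

```c
/* Illustrative sketch only, assuming the br_private.h declarations above.
 * Not the registration code added by this series.
 */
static int example_mrp_handler(struct net_bridge_port *p, struct sk_buff *skb)
{
	/* 0 means "frame not consumed", i.e. normal bridge forwarding */
	if (likely(!(p->flags & BR_MRP_AWARE)))
		return 0;

	return br_mrp_rcv(p, skb, p->dev);
}

static struct br_frame_type example_frame_type __read_mostly = {
	.type		= cpu_to_be16(ETH_P_MRP),
	.frame_handler	= example_mrp_handler,
};

/* Assumed call sites: hook the EtherType when the first protocol instance
 * is created on the bridge, and unhook it when the last one is removed.
 */
static void example_register(struct net_bridge *br)
{
	br_add_frame(br, &example_frame_type);
}

static void example_unregister(struct net_bridge *br)
{
	br_del_frame(br, &example_frame_type);
}
```

The design point of the list is visible in the diff itself: instead of br_handle_frame() calling one protocol-specific helper, any registered EtherType (MRP today, CFM with this series) can claim frames on a per-bridge basis.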
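On the netlink side, the br_getlink()/br_fill_ifinfo() changes only emit the IFLA_BRIDGE_CFM nest when userspace asks for it through IFLA_EXT_MASK (RTEXT_FILTER_CFM_CONFIG and/or RTEXT_FILTER_CFM_STATUS), with getlink=true distinguishing a GETLINK dump from a notification. Below is a minimal userspace sketch of such a request, assuming the RTEXT_FILTER_CFM_* bits from this series' uapi change are present in the installed headers; reply parsing and error handling are omitted.

```c
/* Illustrative sketch only: an AF_BRIDGE RTM_GETLINK dump requesting the
 * IFLA_BRIDGE_CFM nest via IFLA_EXT_MASK. Assumes kernel headers that
 * already define RTEXT_FILTER_CFM_CONFIG/RTEXT_FILTER_CFM_STATUS.
 */
#include <linux/rtnetlink.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int request_bridge_cfm_dump(void)
{
	struct {
		struct nlmsghdr nlh;
		struct ifinfomsg ifm;
		/* attribute must be NLMSG aligned */
		struct rtattr ext_req __attribute__((aligned(NLMSG_ALIGNTO)));
		__u32 ext_filter_mask;
	} req;
	int fd, ret;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = sizeof(req);
	req.nlh.nlmsg_type = RTM_GETLINK;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	req.ifm.ifi_family = AF_BRIDGE;	/* bridge GETLINK dump path */

	/* IFLA_EXT_MASK carries the RTEXT_FILTER_* bits tested in br_getlink() */
	req.ext_req.rta_type = IFLA_EXT_MASK;
	req.ext_req.rta_len = RTA_LENGTH(sizeof(__u32));
	req.ext_filter_mask = RTEXT_FILTER_CFM_CONFIG | RTEXT_FILTER_CFM_STATUS;

	ret = send(fd, &req, sizeof(req), 0);
	close(fd);
	/* Each bridge entry in the dump reply then carries
	 * IFLA_AF_SPEC -> IFLA_BRIDGE_CFM with config and/or status nests.
	 */
	return ret < 0 ? -1 : 0;
}
```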
