author    Michal Marek <mmarek@suse.cz>  2011-11-25 11:53:29 +0100
committer Michal Marek <mmarek@suse.cz>  2011-11-25 11:53:29 +0100
commit    3ac468e83c5dd043fe60e7ebec98a722531676ff (patch) (tag: rpm-3.0.10-0.7)
tree      693143662090d4cbe964072645895d0119940321
parent    c8abb96ff245e93bef471058d8f20f4857eb0b15 (diff)
- patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch: ixgbe: consolidate MRQC and MTQC handling (bnc#714789).
- patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch: ixgbe: dcbnl reduce duplicated code and indentation (bnc#714789).
- patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch: ixgbe: configure minimal packet buffers to support TC (bnc#714789).
- patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch: ixgbe: consolidate packet buffer allocation (bnc#714789).
- patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch: ixgbe: add basic support for setting and getting nfc controls (bnc#714789).
- patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch: ixgbe: DCB use existing TX and RX queues (bnc#714789).
- patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch: ixgbe: add support for displaying ntuple filters via the nfc interface (bnc#714789).
- patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch: ixgbe: fix flags relating to perfect filters to support coexistence (bnc#714789).
- patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch: ixgbe: remove ntuple filtering (bnc#714789).
- patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch: ixgbe: update perfect filter framework to support retaining filters (bnc#714789).
- patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch: ixgbe: add support for nfc addition and removal of filters (bnc#714789).
- patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch: ixgbe: remove redundant configuration of tx_sample_rate (bnc#714789).
- patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch: ixgbe: PFC not cleared on X540 devices (bnc#714789).
- patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch: ixgbe: DCB and perfect filters can coexist (bnc#714789).
- patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch: ixgbe: consolidate, setup for multiple traffic classes (bnc#714789).
- patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch: v2 ixgbe: Update packet buffer reservation to correct fdir headroom size (bnc#714789).
- patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch: ixgbe: update {P}FC thresholds to account for X540 and loopback (bnc#714789).
- patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch: ixgbe: Make certain to initialize the fdir_perfect_lock in all cases (bnc#714789).
- patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch: ixgbe: complete FCoE initialization from setapp() routine (bnc#714789).
- patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch: ixgbe: DCB, do not call set_state() from IEEE mode (bnc#714789).
- patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch: ixgbe: add structure for containing RX/TX rings to q_vector (bnc#714789).
- patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch: ixgbe: Move interrupt related values out of ring and into q_vector (bnc#714789).
- patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch: ixgbe: DCB X540 devices support max traffic class of 4 (bnc#714789).
- patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch: ixgbe: Move all values that deal with count, next_to_use, next_to_clean to u16 (bnc#714789).
- patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch: ixgbe: Update method used for determining descriptor count for an skb (bnc#714789).
- patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch: ixgbe: Add one function that handles most of context descriptor setup (bnc#714789).
- patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch: ixgbe: Refactor transmit map and cleanup routines (bnc#714789).
- patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch: ixgbe: Cleanup FCOE and VLAN handling in xmit_frame_ring (bnc#714789).
- patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch: ixgbe: Always tag VLAN tagged packets (bnc#714789).
- patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch: ixgbe: fixup hard dependencies on supporting 8 traffic classes (bnc#714789).
-rw-r--r--  kernel-source.changes                                                         |  85
-rw-r--r--  patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch            | 131
-rw-r--r--  patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch  |  88
-rw-r--r--  patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch | 409
-rw-r--r--  patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch          | 569
-rw-r--r--  patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch |  54
-rw-r--r--  patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch             | 227
-rw-r--r--  patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch | 161
-rw-r--r--  patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch | 209
-rw-r--r--  patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch                       | 173
-rw-r--r--  patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch | 890
-rw-r--r--  patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch | 357
-rw-r--r--  patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch |  58
-rw-r--r--  patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch               |  48
-rw-r--r--  patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch           |  29
-rw-r--r--  patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch | 129
-rw-r--r--  patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch |  44
-rw-r--r--  patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch | 429
-rw-r--r--  patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch |  42
-rw-r--r--  patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch | 109
-rw-r--r--  patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch      | 239
-rw-r--r--  patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch | 447
-rw-r--r--  patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch | 479
-rw-r--r--  patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch |  75
-rw-r--r--  patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch |  71
-rw-r--r--  patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch | 124
-rw-r--r--  patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch | 547
-rw-r--r--  patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch    | 643
-rw-r--r--  patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch | 217
-rw-r--r--  patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch                |  38
-rw-r--r--  patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch | 417
-rw-r--r--  series.conf                                                                   |  31
32 files changed, 7569 insertions(+), 0 deletions(-)
diff --git a/kernel-source.changes b/kernel-source.changes
index 7cbb9c1d7e..b8edf9de6b 100644
--- a/kernel-source.changes
+++ b/kernel-source.changes
@@ -1,4 +1,89 @@
-------------------------------------------------------------------
+Fri Nov 25 11:53:23 CET 2011 - mmarek@suse.cz
+
+- patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch:
+ ixgbe: consolidate MRQC and MTQC handling (bnc#714789).
+- patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch:
+ ixgbe: dcbnl reduce duplicated code and indentation
+ (bnc#714789).
+- patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch:
+ ixgbe: configure minimal packet buffers to support TC
+ (bnc#714789).
+- patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch:
+ ixgbe: consolidate packet buffer allocation (bnc#714789).
+- patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch:
+ ixgbe: add basic support for setting and getting nfc controls
+ (bnc#714789).
+- patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch:
+ ixgbe: DCB use existing TX and RX queues (bnc#714789).
+- patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch:
+ ixgbe: add support for displaying ntuple filters via the nfc
+ interface (bnc#714789).
+- patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch:
+ ixgbe: fix flags relating to perfect filters to support
+ coexistence (bnc#714789).
+- patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch:
+ ixgbe: remove ntuple filtering (bnc#714789).
+- patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch:
+ ixgbe: update perfect filter framework to support retaining
+ filters (bnc#714789).
+- patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch:
+ ixgbe: add support for nfc addition and removal of filters
+ (bnc#714789).
+- patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch:
+ ixgbe: remove redundant configuration of tx_sample_rate
+ (bnc#714789).
+- patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch:
+ ixgbe: PFC not cleared on X540 devices (bnc#714789).
+- patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch:
+ ixgbe: DCB and perfect filters can coexist (bnc#714789).
+- patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch:
+ ixgbe: consolidate, setup for multiple traffic classes
+ (bnc#714789).
+- patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch:
+ v2 ixgbe: Update packet buffer reservation to correct fdir
+ headroom size (bnc#714789).
+- patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch:
+ ixgbe: update {P}FC thresholds to account for X540 and loopback
+ (bnc#714789).
+- patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch:
+ ixgbe: Make certain to initialize the fdir_perfect_lock in
+ all cases (bnc#714789).
+- patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch:
+ ixgbe: complete FCoE initialization from setapp() routine
+ (bnc#714789).
+- patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch:
+ ixgbe: DCB, do not call set_state() from IEEE mode (bnc#714789).
+- patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch:
+ ixgbe: add structure for containing RX/TX rings to q_vector
+ (bnc#714789).
+- patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch:
+ ixgbe: Move interrupt related values out of ring and into
+ q_vector (bnc#714789).
+- patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch:
+ ixgbe: DCB X540 devices support max traffic class of 4
+ (bnc#714789).
+- patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch:
+ ixgbe: Move all values that deal with count, next_to_use,
+ next_to_clean to u16 (bnc#714789).
+- patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch:
+ ixgbe: Update method used for determining descriptor count
+ for an skb (bnc#714789).
+- patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch:
+ ixgbe: Add one function that handles most of context descriptor
+ setup (bnc#714789).
+- patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch:
+ ixgbe: Refactor transmit map and cleanup routines (bnc#714789).
+- patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch:
+ ixgbe: Cleanup FCOE and VLAN handling in xmit_frame_ring
+ (bnc#714789).
+- patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch:
+ ixgbe: Always tag VLAN tagged packets (bnc#714789).
+- patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch:
+ ixgbe: fixup hard dependencies on supporting 8 traffic classes
+ (bnc#714789).
+
+-------------------------------------------------------------------
Fri Nov 25 07:02:20 CET 2011 - jlee@suse.de
- patches.drivers/0001-qlge-fix-size-of-external-list-for-TX-address-descri.patch:
diff --git a/patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch b/patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch
new file mode 100644
index 0000000000..e7896bc63b
--- /dev/null
+++ b/patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch
@@ -0,0 +1,131 @@
+From 72a32f1f3f68b7d95e7151b5f88831fb9906416e Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 26 Apr 2011 07:25:58 +0000
+Subject: [PATCH 3/32] ixgbe: consolidate MRQC and MTQC handling
+Git-commit: 72a32f1f3f68b7d95e7151b5f88831fb9906416e
+Patch-mainline: v3.1-rc1~316^2~392
+References: bnc#714789
+
+The MRQC and MTQC registers are configured in the main
+setup path but are also reconfigured in the DCB setup
+path. The DCB path fixes the DCB configuration by configuring
+the SECTXMINIFG gap which is required for DCB pause
+to operate correctly.
+
+This patch reduces the duplicate code and does all setup
+in ixgbe_setup_mtqc() and ixgbe_setup_mrqc().
+
+Additionally, this removes the IXGBE_QDE. This write never
+set the WRITE bit in the register so the write was not
+actually doing anything. Also this was to clear the register
+but, it is never set and defaults to zero. If this is
+needed for SRIOV it should be added correctly in a follow
+up patch. But it's never been working so removing it here
+should be OK.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_82599.c | 57 ------------------------------------
+ drivers/net/ixgbe/ixgbe_main.c | 7 ++++
+ 2 files changed, 7 insertions(+), 57 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
+@@ -376,62 +376,6 @@ static s32 ixgbe_dcb_config_tc_stats_825
+ }
+
+ /**
+- * ixgbe_dcb_config_82599 - Configure general DCB parameters
+- * @hw: pointer to hardware structure
+- *
+- * Configure general DCB parameters.
+- */
+-static s32 ixgbe_dcb_config_82599(struct ixgbe_hw *hw)
+-{
+- u32 reg;
+- u32 q;
+-
+- /* Disable the Tx desc arbiter so that MTQC can be changed */
+- reg = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
+- reg |= IXGBE_RTTDCS_ARBDIS;
+- IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, reg);
+-
+- /* Enable DCB for Rx with 8 TCs */
+- reg = IXGBE_READ_REG(hw, IXGBE_MRQC);
+- switch (reg & IXGBE_MRQC_MRQE_MASK) {
+- case 0:
+- case IXGBE_MRQC_RT4TCEN:
+- /* RSS disabled cases */
+- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RT8TCEN;
+- break;
+- case IXGBE_MRQC_RSSEN:
+- case IXGBE_MRQC_RTRSS4TCEN:
+- /* RSS enabled cases */
+- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RTRSS8TCEN;
+- break;
+- default:
+- /* Unsupported value, assume stale data, overwrite no RSS */
+- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RT8TCEN;
+- }
+- IXGBE_WRITE_REG(hw, IXGBE_MRQC, reg);
+-
+- /* Enable DCB for Tx with 8 TCs */
+- reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
+- IXGBE_WRITE_REG(hw, IXGBE_MTQC, reg);
+-
+- /* Disable drop for all queues */
+- for (q = 0; q < 128; q++)
+- IXGBE_WRITE_REG(hw, IXGBE_QDE, q << IXGBE_QDE_IDX_SHIFT);
+-
+- /* Enable the Tx desc arbiter */
+- reg = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
+- reg &= ~IXGBE_RTTDCS_ARBDIS;
+- IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, reg);
+-
+- /* Enable Security TX Buffer IFG for DCB */
+- reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+- reg |= IXGBE_SECTX_DCB;
+- IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+-
+- return 0;
+-}
+-
+-/**
+ * ixgbe_dcb_hw_config_82599 - Configure and enable DCB
+ * @hw: pointer to hardware structure
+ * @rx_pba: method to distribute packet buffer
+@@ -448,7 +392,6 @@ s32 ixgbe_dcb_hw_config_82599(struct ixg
+ u16 *max, u8 *bwg_id, u8 *prio_type, u8 *prio_tc)
+ {
+ ixgbe_dcb_config_packet_buffers_82599(hw, rx_pba);
+- ixgbe_dcb_config_82599(hw);
+ ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max, bwg_id,
+ prio_type, prio_tc);
+ ixgbe_dcb_config_tx_desc_arbiter_82599(hw, refill, max,
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -2833,6 +2833,7 @@ static void ixgbe_setup_mtqc(struct ixgb
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 rttdcs;
+ u32 mask;
++ u32 reg;
+
+ if (hw->mac.type == ixgbe_mac_82598EB)
+ return;
+@@ -2855,6 +2856,12 @@ static void ixgbe_setup_mtqc(struct ixgb
+ /* We enable 8 traffic classes, DCB only */
+ IXGBE_WRITE_REG(hw, IXGBE_MTQC,
+ (IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ));
++
++ /* Enable Security TX Buffer IFG for DCB */
++ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
++ reg |= IXGBE_SECTX_DCB;
++ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
++
+ break;
+
+ default:
diff --git a/patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch b/patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch
new file mode 100644
index 0000000000..1f083c344d
--- /dev/null
+++ b/patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch
@@ -0,0 +1,88 @@
+From 1fcd86b51179518f7e69164e37353fb59cd6301e Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 26 Apr 2011 10:05:14 +0000
+Subject: [PATCH 4/32] ixgbe: dcbnl reduce duplicated code and indentation
+Git-commit: 1fcd86b51179518f7e69164e37353fb59cd6301e
+Patch-mainline: v3.1-rc1~316^2~394
+References: bnc#714789
+
+Replace duplicated code in if/else branches with single
+check and ixgbe_init_interrupt_scheme().
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 51 ++++++++++++++++++--------------------
+ 1 files changed, 24 insertions(+), 27 deletions(-)
+
+diff --git a/drivers/net/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+index 5e7ed22..293ff06 100644
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -146,37 +146,34 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
+ adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
+ if (!netdev_get_num_tc(netdev))
+ ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
+-
+- ixgbe_init_interrupt_scheme(adapter);
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_open(netdev);
+ } else {
+ /* Turn off DCB */
+- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_stop(netdev);
+- ixgbe_clear_interrupt_scheme(adapter);
++ if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ goto out;
+
+- adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+- adapter->temp_dcb_cfg.pfc_mode_enable = false;
+- adapter->dcb_cfg.pfc_mode_enable = false;
+- adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+- switch (adapter->hw.mac.type) {
+- case ixgbe_mac_82599EB:
+- case ixgbe_mac_X540:
+- adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- break;
+- default:
+- break;
+- }
+-
+- ixgbe_setup_tc(netdev, 0);
+-
+- ixgbe_init_interrupt_scheme(adapter);
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_open(netdev);
++ if (netif_running(netdev))
++ netdev->netdev_ops->ndo_stop(netdev);
++ ixgbe_clear_interrupt_scheme(adapter);
++
++ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
++ adapter->temp_dcb_cfg.pfc_mode_enable = false;
++ adapter->dcb_cfg.pfc_mode_enable = false;
++ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
++ switch (adapter->hw.mac.type) {
++ case ixgbe_mac_82599EB:
++ case ixgbe_mac_X540:
++ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
++ break;
++ default:
++ break;
+ }
++
++ ixgbe_setup_tc(netdev, 0);
+ }
++
++ ixgbe_init_interrupt_scheme(adapter);
++ if (netif_running(netdev))
++ netdev->netdev_ops->ndo_open(netdev);
+ out:
+ return err;
+ }
+--
+1.7.3.4
+
diff --git a/patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch b/patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch
new file mode 100644
index 0000000000..af62e6069c
--- /dev/null
+++ b/patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch
@@ -0,0 +1,409 @@
+From 8b1c0b24d9afd4a59a8aa9c778253bcff949395a Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 3 May 2011 02:26:48 +0000
+Subject: [PATCH 5/32] ixgbe: configure minimal packet buffers to support TC
+Git-commit: 8b1c0b24d9afd4a59a8aa9c778253bcff949395a
+Patch-mainline: v3.1-rc1~316^2~391
+References: bnc#714789
+
+ixgbe devices support different numbers of packet buffers either
+8 or 4. Here we only allocate the minimal number of packet
+buffers required to implement the net_devices number of traffic
+classes.
+
+Fewer traffic classes allows for larger packet buffers in
+hardware. Also more Tx/Rx queues can be given to each
+traffic class.
+
+This patch is mostly about propagating the number of traffic
+classes through the init path. Specifically this adds the 4TC
+cases to the MRQC and MTQC setup routines. Also ixgbe_setup_tc()
+was sanitized to handle other traffic class value.
+
+Finally changing the number of packet buffers in the hardware
+requires the device to reinit. So this moves the reinit work
+from DCB into the main ixgbe_setup_tc() routine to consolidate
+the reset code. Now dcbnl_xxx ops call ixgbe_setup_tc() to
+configure packet buffers if needed.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 16 --
+ drivers/net/ixgbe/ixgbe_main.c | 235 ++++++++++++++++++++++-----------------
+ drivers/net/ixgbe/ixgbe_type.h | 1
+ 3 files changed, 138 insertions(+), 114 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -125,9 +125,7 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ goto out;
+ }
+
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_stop(netdev);
+- ixgbe_clear_interrupt_scheme(adapter);
++ adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
+
+ switch (adapter->hw.mac.type) {
+ case ixgbe_mac_82598EB:
+@@ -143,18 +141,12 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ break;
+ }
+
+- adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
+- if (!netdev_get_num_tc(netdev))
+- ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
++ ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
+ } else {
+ /* Turn off DCB */
+ if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ goto out;
+
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_stop(netdev);
+- ixgbe_clear_interrupt_scheme(adapter);
+-
+ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+ adapter->temp_dcb_cfg.pfc_mode_enable = false;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+@@ -167,13 +159,9 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ default:
+ break;
+ }
+-
+ ixgbe_setup_tc(netdev, 0);
+ }
+
+- ixgbe_init_interrupt_scheme(adapter);
+- if (netif_running(netdev))
+- netdev->netdev_ops->ndo_open(netdev);
+ out:
+ return err;
+ }
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -2832,8 +2832,8 @@ static void ixgbe_setup_mtqc(struct ixgb
+ {
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 rttdcs;
+- u32 mask;
+ u32 reg;
++ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ if (hw->mac.type == ixgbe_mac_82598EB)
+ return;
+@@ -2844,28 +2844,27 @@ static void ixgbe_setup_mtqc(struct ixgb
+ IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, rttdcs);
+
+ /* set transmit pool layout */
+- mask = (IXGBE_FLAG_SRIOV_ENABLED | IXGBE_FLAG_DCB_ENABLED);
+- switch (adapter->flags & mask) {
+-
++ switch (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) {
+ case (IXGBE_FLAG_SRIOV_ENABLED):
+ IXGBE_WRITE_REG(hw, IXGBE_MTQC,
+ (IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF));
+ break;
++ default:
++ if (!tcs)
++ reg = IXGBE_MTQC_64Q_1PB;
++ else if (tcs <= 4)
++ reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
++ else
++ reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
+
+- case (IXGBE_FLAG_DCB_ENABLED):
+- /* We enable 8 traffic classes, DCB only */
+- IXGBE_WRITE_REG(hw, IXGBE_MTQC,
+- (IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ));
+-
+- /* Enable Security TX Buffer IFG for DCB */
+- reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+- reg |= IXGBE_SECTX_DCB;
+- IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+-
+- break;
++ IXGBE_WRITE_REG(hw, IXGBE_MTQC, reg);
+
+- default:
+- IXGBE_WRITE_REG(hw, IXGBE_MTQC, IXGBE_MTQC_64Q_1PB);
++ /* Enable Security TX Buffer IFG for multiple pb */
++ if (tcs) {
++ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
++ reg |= IXGBE_SECTX_DCB;
++ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
++ }
+ break;
+ }
+
+@@ -2956,7 +2955,7 @@ static void ixgbe_setup_mrqc(struct ixgb
+ u32 mrqc = 0, reta = 0;
+ u32 rxcsum;
+ int i, j;
+- int mask;
++ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ /* Fill out hash function seeds */
+ for (i = 0; i < 10; i++)
+@@ -2978,33 +2977,28 @@ static void ixgbe_setup_mrqc(struct ixgb
+ rxcsum |= IXGBE_RXCSUM_PCSD;
+ IXGBE_WRITE_REG(hw, IXGBE_RXCSUM, rxcsum);
+
+- if (adapter->hw.mac.type == ixgbe_mac_82598EB)
+- mask = adapter->flags & IXGBE_FLAG_RSS_ENABLED;
+- else
+- mask = adapter->flags & (IXGBE_FLAG_RSS_ENABLED
+-#ifdef CONFIG_IXGBE_DCB
+- | IXGBE_FLAG_DCB_ENABLED
+-#endif
+- | IXGBE_FLAG_SRIOV_ENABLED
+- );
+-
+- switch (mask) {
+-#ifdef CONFIG_IXGBE_DCB
+- case (IXGBE_FLAG_DCB_ENABLED | IXGBE_FLAG_RSS_ENABLED):
+- mrqc = IXGBE_MRQC_RTRSS8TCEN;
+- break;
+- case (IXGBE_FLAG_DCB_ENABLED):
+- mrqc = IXGBE_MRQC_RT8TCEN;
+- break;
+-#endif /* CONFIG_IXGBE_DCB */
+- case (IXGBE_FLAG_RSS_ENABLED):
++ if (adapter->hw.mac.type == ixgbe_mac_82598EB &&
++ (adapter->flags & IXGBE_FLAG_RSS_ENABLED)) {
+ mrqc = IXGBE_MRQC_RSSEN;
+- break;
+- case (IXGBE_FLAG_SRIOV_ENABLED):
+- mrqc = IXGBE_MRQC_VMDQEN;
+- break;
+- default:
+- break;
++ } else {
++ int mask = adapter->flags & (IXGBE_FLAG_RSS_ENABLED
++ | IXGBE_FLAG_SRIOV_ENABLED);
++
++ switch (mask) {
++ case (IXGBE_FLAG_RSS_ENABLED):
++ if (!tcs)
++ mrqc = IXGBE_MRQC_RSSEN;
++ else if (tcs <= 4)
++ mrqc = IXGBE_MRQC_RTRSS4TCEN;
++ else
++ mrqc = IXGBE_MRQC_RTRSS8TCEN;
++ break;
++ case (IXGBE_FLAG_SRIOV_ENABLED):
++ mrqc = IXGBE_MRQC_VMDQEN;
++ break;
++ default:
++ break;
++ }
+ }
+
+ /* Perform hash on these packet types */
+@@ -4463,14 +4457,17 @@ static inline bool ixgbe_set_dcb_queues(
+ {
+ bool ret = false;
+ struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_DCB];
+- int i, q;
++ int tcs = netdev_get_num_tc(adapter->netdev);
++ int max_q, i, q;
+
+- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) || !tcs)
+ return ret;
+
++ max_q = adapter->netdev->num_tx_queues / tcs;
++
+ f->indices = 0;
+- for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+- q = min((int)num_online_cpus(), MAX_TRAFFIC_CLASS);
++ for (i = 0; i < tcs; i++) {
++ q = min((int)num_online_cpus(), max_q);
+ f->indices += q;
+ }
+
+@@ -4682,55 +4679,6 @@ static void ixgbe_get_first_reg_idx(stru
+ }
+ }
+
+-#define IXGBE_MAX_Q_PER_TC (IXGBE_MAX_DCB_INDICES / MAX_TRAFFIC_CLASS)
+-
+-/* ixgbe_setup_tc - routine to configure net_device for multiple traffic
+- * classes.
+- *
+- * @netdev: net device to configure
+- * @tc: number of traffic classes to enable
+- */
+-int ixgbe_setup_tc(struct net_device *dev, u8 tc)
+-{
+- int i;
+- unsigned int q, offset = 0;
+-
+- if (!tc) {
+- netdev_reset_tc(dev);
+- } else {
+- struct ixgbe_adapter *adapter = netdev_priv(dev);
+-
+- /* Hardware supports up to 8 traffic classes */
+- if (tc > MAX_TRAFFIC_CLASS || netdev_set_num_tc(dev, tc))
+- return -EINVAL;
+-
+- /* Partition Tx queues evenly amongst traffic classes */
+- for (i = 0; i < tc; i++) {
+- q = min((int)num_online_cpus(), IXGBE_MAX_Q_PER_TC);
+- netdev_set_prio_tc_map(dev, i, i);
+- netdev_set_tc_queue(dev, i, q, offset);
+- offset += q;
+- }
+-
+- /* This enables multiple traffic class support in the hardware
+- * which defaults to strict priority transmission by default.
+- * If traffic classes are already enabled perhaps through DCB
+- * code path then existing configuration will be used.
+- */
+- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
+- dev->dcbnl_ops && dev->dcbnl_ops->setdcbx) {
+- struct ieee_ets ets = {
+- .prio_tc = {0, 1, 2, 3, 4, 5, 6, 7},
+- };
+- u8 mode = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
+-
+- dev->dcbnl_ops->setdcbx(dev, mode);
+- dev->dcbnl_ops->ieee_setets(dev, &ets);
+- }
+- }
+- return 0;
+-}
+-
+ /**
+ * ixgbe_cache_ring_dcb - Descriptor ring to register mapping for DCB
+ * @adapter: board private structure to initialize
+@@ -4744,7 +4692,7 @@ static inline bool ixgbe_cache_ring_dcb(
+ int i, j, k;
+ u8 num_tcs = netdev_get_num_tc(dev);
+
+- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ if (!num_tcs)
+ return false;
+
+ for (i = 0, k = 0; i < num_tcs; i++) {
+@@ -7223,6 +7171,95 @@ static struct rtnl_link_stats64 *ixgbe_g
+ return stats;
+ }
+
++/* ixgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid.
++ * #adapter: pointer to ixgbe_adapter
++ * @tc: number of traffic classes currently enabled
++ *
++ * Configure a valid 802.1Qp to Rx packet buffer mapping ie confirm
++ * 802.1Q priority maps to a packet buffer that exists.
++ */
++static void ixgbe_validate_rtr(struct ixgbe_adapter *adapter, u8 tc)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ u32 reg, rsave;
++ int i;
++
++ /* 82598 have a static priority to TC mapping that can not
++ * be changed so no validation is needed.
++ */
++ if (hw->mac.type == ixgbe_mac_82598EB)
++ return;
++
++ reg = IXGBE_READ_REG(hw, IXGBE_RTRUP2TC);
++ rsave = reg;
++
++ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
++ u8 up2tc = reg >> (i * IXGBE_RTRUP2TC_UP_SHIFT);
++
++ /* If up2tc is out of bounds default to zero */
++ if (up2tc > tc)
++ reg &= ~(0x7 << IXGBE_RTRUP2TC_UP_SHIFT);
++ }
++
++ if (reg != rsave)
++ IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, reg);
++
++ return;
++}
++
++
++/* ixgbe_setup_tc - routine to configure net_device for multiple traffic
++ * classes.
++ *
++ * @netdev: net device to configure
++ * @tc: number of traffic classes to enable
++ */
++int ixgbe_setup_tc(struct net_device *dev, u8 tc)
++{
++ unsigned int q, i, offset = 0;
++ struct ixgbe_adapter *adapter = netdev_priv(dev);
++ struct ixgbe_hw *hw = &adapter->hw;
++ int max_q = adapter->netdev->num_tx_queues / tc;
++
++ /* If DCB is anabled do not remove traffic classes, multiple
++ * traffic classes are required to implement DCB
++ */
++ if (!tc && (adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ return 0;
++
++ /* Hardware supports up to 8 traffic classes */
++ if (tc > MAX_TRAFFIC_CLASS ||
++ (hw->mac.type == ixgbe_mac_82598EB && tc < MAX_TRAFFIC_CLASS))
++ return -EINVAL;
++
++ /* Hardware has to reinitialize queues and interrupts to
++ * match packet buffer alignment. Unfortunantly, the
++ * hardware is not flexible enough to do this dynamically.
++ */
++ if (netif_running(dev))
++ ixgbe_close(dev);
++ ixgbe_clear_interrupt_scheme(adapter);
++
++ if (tc)
++ netdev_set_num_tc(dev, tc);
++ else
++ netdev_reset_tc(dev);
++
++ /* Partition Tx queues evenly amongst traffic classes */
++ for (i = 0; i < tc; i++) {
++ q = min((int)num_online_cpus(), max_q);
++ netdev_set_prio_tc_map(dev, i, i);
++ netdev_set_tc_queue(dev, i, q, offset);
++ offset += q;
++ }
++
++ ixgbe_init_interrupt_scheme(adapter);
++ ixgbe_validate_rtr(adapter, tc);
++ if (netif_running(dev))
++ ixgbe_open(dev);
++
++ return 0;
++}
+
+ static const struct net_device_ops ixgbe_netdev_ops = {
+ .ndo_open = ixgbe_open,
+@@ -7243,9 +7280,7 @@ static const struct net_device_ops ixgbe
+ .ndo_set_vf_tx_rate = ixgbe_ndo_set_vf_bw,
+ .ndo_get_vf_config = ixgbe_ndo_get_vf_config,
+ .ndo_get_stats64 = ixgbe_get_stats64,
+-#ifdef CONFIG_IXGBE_DCB
+ .ndo_setup_tc = ixgbe_setup_tc,
+-#endif
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = ixgbe_netpoll,
+ #endif
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -1864,6 +1864,7 @@
+ #define IXGBE_MTQC_32VF 0x8 /* 4 TX Queues per pool w/32VF's */
+ #define IXGBE_MTQC_64VF 0x4 /* 2 TX Queues per pool w/64VF's */
+ #define IXGBE_MTQC_8TC_8TQ 0xC /* 8 TC if RT_ENA or 8 TQ if VT_ENA */
++#define IXGBE_MTQC_4TC_4TQ 0x8 /* 4 TC if RT_ENA or 4 TQ if VT_ENA */
+
+ /* Receive Descriptor bit definitions */
+ #define IXGBE_RXD_STAT_DD 0x01 /* Descriptor Done */
diff --git a/patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch b/patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch
new file mode 100644
index 0000000000..2acbf3eda7
--- /dev/null
+++ b/patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch
@@ -0,0 +1,569 @@
+From 80605c6513207344d00b32e8d1e64bd34fdf1358 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Mon, 2 May 2011 12:34:10 +0000
+Subject: [PATCH 6/32] ixgbe: consolidate packet buffer allocation
+Git-commit: 80605c6513207344d00b32e8d1e64bd34fdf1358
+Patch-mainline: v3.1-rc1~316^2~393
+References: bnc#714789
+
+Consolidate packet buffer allocation currently being
+done in the DCB path and main path. This allows the
+feature set and packet buffer requirements to be done
+once.
+
+This is prep work to allow DCB to coexist with other
+features namely, flow director.
+
+Cc: Alexander Duyck <alexander.h.duyck@intel.com>
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_82598.c | 42 ++++++++++++++++++++++
+ drivers/net/ixgbe/ixgbe_82599.c | 39 ---------------------
+ drivers/net/ixgbe/ixgbe_common.c | 66 ++++++++++++++++++++++++++++++++++++
+ drivers/net/ixgbe/ixgbe_common.h | 3 +
+ drivers/net/ixgbe/ixgbe_dcb.c | 10 ++---
+ drivers/net/ixgbe/ixgbe_dcb.h | 7 ---
+ drivers/net/ixgbe/ixgbe_dcb_82598.c | 43 -----------------------
+ drivers/net/ixgbe/ixgbe_dcb_82598.h | 3 -
+ drivers/net/ixgbe/ixgbe_dcb_82599.c | 62 ---------------------------------
+ drivers/net/ixgbe/ixgbe_dcb_82599.h | 14 -------
+ drivers/net/ixgbe/ixgbe_main.c | 16 ++++++++
+ drivers/net/ixgbe/ixgbe_type.h | 24 +++++++++++++
+ drivers/net/ixgbe/ixgbe_x540.c | 1
+ 13 files changed, 160 insertions(+), 170 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_82598.c
++++ b/drivers/net/ixgbe/ixgbe_82598.c
+@@ -1242,6 +1242,47 @@ static void ixgbe_set_lan_id_multi_port_
+ }
+ }
+
++/**
++ * ixgbe_set_rxpba_82598 - Configure packet buffers
++ * @hw: pointer to hardware structure
++ * @dcb_config: pointer to ixgbe_dcb_config structure
++ *
++ * Configure packet buffers.
++ */
++static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, u32 headroom,
++ int strategy)
++{
++ u32 rxpktsize = IXGBE_RXPBSIZE_64KB;
++ u8 i = 0;
++
++ if (!num_pb)
++ return;
++
++ /* Setup Rx packet buffer sizes */
++ switch (strategy) {
++ case PBA_STRATEGY_WEIGHTED:
++ /* Setup the first four at 80KB */
++ rxpktsize = IXGBE_RXPBSIZE_80KB;
++ for (; i < 4; i++)
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
++ /* Setup the last four at 48KB...don't re-init i */
++ rxpktsize = IXGBE_RXPBSIZE_48KB;
++ /* Fall Through */
++ case PBA_STRATEGY_EQUAL:
++ default:
++ /* Divide the remaining Rx packet buffer evenly among the TCs */
++ for (; i < IXGBE_MAX_PACKET_BUFFERS; i++)
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
++ break;
++ }
++
++ /* Setup Tx packet buffer sizes */
++ for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++)
++ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), IXGBE_TXPBSIZE_40KB);
++
++ return;
++}
++
+ static struct ixgbe_mac_operations mac_ops_82598 = {
+ .init_hw = &ixgbe_init_hw_generic,
+ .reset_hw = &ixgbe_reset_hw_82598,
+@@ -1257,6 +1298,7 @@ static struct ixgbe_mac_operations mac_o
+ .read_analog_reg8 = &ixgbe_read_analog_reg8_82598,
+ .write_analog_reg8 = &ixgbe_write_analog_reg8_82598,
+ .setup_link = &ixgbe_setup_mac_link_82598,
++ .set_rxpba = &ixgbe_set_rxpba_82598,
+ .check_link = &ixgbe_check_mac_link_82598,
+ .get_link_capabilities = &ixgbe_get_link_capabilities_82598,
+ .led_on = &ixgbe_led_on_generic,
+--- a/drivers/net/ixgbe/ixgbe_82599.c
++++ b/drivers/net/ixgbe/ixgbe_82599.c
+@@ -1114,27 +1114,8 @@ s32 ixgbe_reinit_fdir_tables_82599(struc
+ s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 pballoc)
+ {
+ u32 fdirctrl = 0;
+- u32 pbsize;
+ int i;
+
+- /*
+- * Before enabling Flow Director, the Rx Packet Buffer size
+- * must be reduced. The new value is the current size minus
+- * flow director memory usage size.
+- */
+- pbsize = (1 << (IXGBE_FDIR_PBALLOC_SIZE_SHIFT + pballoc));
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0),
+- (IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(0)) - pbsize));
+-
+- /*
+- * The defaults in the HW for RX PB 1-7 are not zero and so should be
+- * initialized to zero for non DCB mode otherwise actual total RX PB
+- * would be bigger than programmed and filter space would run into
+- * the PB 0 region.
+- */
+- for (i = 1; i < 8; i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
+-
+ /* Send interrupt when 64 filters are left */
+ fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
+
+@@ -1202,27 +1183,8 @@ s32 ixgbe_init_fdir_signature_82599(stru
+ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 pballoc)
+ {
+ u32 fdirctrl = 0;
+- u32 pbsize;
+ int i;
+
+- /*
+- * Before enabling Flow Director, the Rx Packet Buffer size
+- * must be reduced. The new value is the current size minus
+- * flow director memory usage size.
+- */
+- pbsize = (1 << (IXGBE_FDIR_PBALLOC_SIZE_SHIFT + pballoc));
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0),
+- (IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(0)) - pbsize));
+-
+- /*
+- * The defaults in the HW for RX PB 1-7 are not zero and so should be
+- * initialized to zero for non DCB mode otherwise actual total RX PB
+- * would be bigger than programmed and filter space would run into
+- * the PB 0 region.
+- */
+- for (i = 1; i < 8; i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
+-
+ /* Send interrupt when 64 filters are left */
+ fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
+
+@@ -2146,6 +2108,7 @@ static struct ixgbe_mac_operations mac_o
+ .read_analog_reg8 = &ixgbe_read_analog_reg8_82599,
+ .write_analog_reg8 = &ixgbe_write_analog_reg8_82599,
+ .setup_link = &ixgbe_setup_mac_link_82599,
++ .set_rxpba = &ixgbe_set_rxpba_generic,
+ .check_link = &ixgbe_check_mac_link_generic,
+ .get_link_capabilities = &ixgbe_get_link_capabilities_82599,
+ .led_on = &ixgbe_led_on_generic,
+--- a/drivers/net/ixgbe/ixgbe_common.c
++++ b/drivers/net/ixgbe/ixgbe_common.c
+@@ -3267,3 +3267,69 @@ s32 ixgbe_get_device_caps_generic(struct
+
+ return 0;
+ }
++
++/**
++ * ixgbe_set_rxpba_generic - Initialize RX packet buffer
++ * @hw: pointer to hardware structure
++ * @num_pb: number of packet buffers to allocate
++ * @headroom: reserve n KB of headroom
++ * @strategy: packet buffer allocation strategy
++ **/
++void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw,
++ int num_pb,
++ u32 headroom,
++ int strategy)
++{
++ u32 pbsize = hw->mac.rx_pb_size;
++ int i = 0;
++ u32 rxpktsize, txpktsize, txpbthresh;
++
++ /* Reserve headroom */
++ pbsize -= headroom;
++
++ if (!num_pb)
++ num_pb = 1;
++
++ /* Divide remaining packet buffer space amongst the number
++ * of packet buffers requested using supplied strategy.
++ */
++ switch (strategy) {
++ case (PBA_STRATEGY_WEIGHTED):
++ /* pba_80_48 strategy weight first half of packet buffer with
++ * 5/8 of the packet buffer space.
++ */
++ rxpktsize = ((pbsize * 5 * 2) / (num_pb * 8));
++ pbsize -= rxpktsize * (num_pb / 2);
++ rxpktsize <<= IXGBE_RXPBSIZE_SHIFT;
++ for (; i < (num_pb / 2); i++)
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
++ /* Fall through to configure remaining packet buffers */
++ case (PBA_STRATEGY_EQUAL):
++ /* Divide the remaining Rx packet buffer evenly among the TCs */
++ rxpktsize = (pbsize / (num_pb - i)) << IXGBE_RXPBSIZE_SHIFT;
++ for (; i < num_pb; i++)
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
++ break;
++ default:
++ break;
++ }
++
++ /*
++ * Setup Tx packet buffer and threshold equally for all TCs
++ * TXPBTHRESH register is set in K so divide by 1024 and subtract
++ * 10 since the largest packet we support is just over 9K.
++ */
++ txpktsize = IXGBE_TXPBSIZE_MAX / num_pb;
++ txpbthresh = (txpktsize / 1024) - IXGBE_TXPKT_SIZE_MAX;
++ for (i = 0; i < num_pb; i++) {
++ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), txpktsize);
++ IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
++ }
++
++ /* Clear unused TCs, if any, to zero buffer size*/
++ for (; i < IXGBE_MAX_PB; i++) {
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
++ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
++ IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
++ }
++}
+--- a/drivers/net/ixgbe/ixgbe_common.h
++++ b/drivers/net/ixgbe/ixgbe_common.h
+@@ -100,6 +100,9 @@ void ixgbe_set_mac_anti_spoofing(struct
+ void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
+ s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps);
+
++void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw, int num_pb,
++ u32 headroom, int strategy);
++
+ #define IXGBE_WRITE_REG(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
+
+ #ifndef writeq
+--- a/drivers/net/ixgbe/ixgbe_dcb.c
++++ b/drivers/net/ixgbe/ixgbe_dcb.c
+@@ -258,15 +258,13 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw
+
+ switch (hw->mac.type) {
+ case ixgbe_mac_82598EB:
+- ret = ixgbe_dcb_hw_config_82598(hw, dcb_config->rx_pba_cfg,
+- pfc_en, refill, max, bwgid,
+- ptype);
++ ret = ixgbe_dcb_hw_config_82598(hw, pfc_en, refill, max,
++ bwgid, ptype);
+ break;
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X540:
+- ret = ixgbe_dcb_hw_config_82599(hw, dcb_config->rx_pba_cfg,
+- pfc_en, refill, max, bwgid,
+- ptype, prio_tc);
++ ret = ixgbe_dcb_hw_config_82599(hw, pfc_en, refill, max,
++ bwgid, ptype, prio_tc);
+ break;
+ default:
+ break;
+--- a/drivers/net/ixgbe/ixgbe_dcb.h
++++ b/drivers/net/ixgbe/ixgbe_dcb.h
+@@ -123,11 +123,6 @@ struct tc_configuration {
+ u8 tc; /* Traffic class (TC) */
+ };
+
+-enum dcb_rx_pba_cfg {
+- pba_equal, /* PBA[0-7] each use 64KB FIFO */
+- pba_80_48 /* PBA[0-3] each use 80KB, PBA[4-7] each use 48KB */
+-};
+-
+ struct dcb_num_tcs {
+ u8 pg_tcs;
+ u8 pfc_tcs;
+@@ -140,8 +135,6 @@ struct ixgbe_dcb_config {
+ u8 bw_percentage[2][MAX_BW_GROUP]; /* One each for Tx/Rx */
+ bool pfc_mode_enable;
+
+- enum dcb_rx_pba_cfg rx_pba_cfg;
+-
+ u32 dcb_cfg_version; /* Not used...OS-specific? */
+ u32 link_speed; /* For bandwidth allocation validation purpose */
+ };
+--- a/drivers/net/ixgbe/ixgbe_dcb_82598.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82598.c
+@@ -32,45 +32,6 @@
+ #include "ixgbe_dcb_82598.h"
+
+ /**
+- * ixgbe_dcb_config_packet_buffers_82598 - Configure packet buffers
+- * @hw: pointer to hardware structure
+- * @dcb_config: pointer to ixgbe_dcb_config structure
+- *
+- * Configure packet buffers for DCB mode.
+- */
+-static s32 ixgbe_dcb_config_packet_buffers_82598(struct ixgbe_hw *hw, u8 rx_pba)
+-{
+- s32 ret_val = 0;
+- u32 value = IXGBE_RXPBSIZE_64KB;
+- u8 i = 0;
+-
+- /* Setup Rx packet buffer sizes */
+- switch (rx_pba) {
+- case pba_80_48:
+- /* Setup the first four at 80KB */
+- value = IXGBE_RXPBSIZE_80KB;
+- for (; i < 4; i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), value);
+- /* Setup the last four at 48KB...don't re-init i */
+- value = IXGBE_RXPBSIZE_48KB;
+- /* Fall Through */
+- case pba_equal:
+- default:
+- for (; i < IXGBE_MAX_PACKET_BUFFERS; i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), value);
+-
+- /* Setup Tx packet buffer sizes */
+- for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++) {
+- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i),
+- IXGBE_TXPBSIZE_40KB);
+- }
+- break;
+- }
+-
+- return ret_val;
+-}
+-
+-/**
+ * ixgbe_dcb_config_rx_arbiter_82598 - Config Rx data arbiter
+ * @hw: pointer to hardware structure
+ * @dcb_config: pointer to ixgbe_dcb_config structure
+@@ -321,11 +282,9 @@ static s32 ixgbe_dcb_config_tc_stats_825
+ *
+ * Configure dcb settings and enable dcb mode.
+ */
+-s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw,
+- u8 rx_pba, u8 pfc_en, u16 *refill,
++s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
+ u16 *max, u8 *bwg_id, u8 *prio_type)
+ {
+- ixgbe_dcb_config_packet_buffers_82598(hw, rx_pba);
+ ixgbe_dcb_config_rx_arbiter_82598(hw, refill, max, prio_type);
+ ixgbe_dcb_config_tx_desc_arbiter_82598(hw, refill, max,
+ bwg_id, prio_type);
+--- a/drivers/net/ixgbe/ixgbe_dcb_82598.h
++++ b/drivers/net/ixgbe/ixgbe_dcb_82598.h
+@@ -91,8 +91,7 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ u8 *bwg_id,
+ u8 *prio_type);
+
+-s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw,
+- u8 rx_pba, u8 pfc_en, u16 *refill,
++s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
+ u16 *max, u8 *bwg_id, u8 *prio_type);
+
+ #endif /* _DCB_82598_CONFIG_H */
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
+@@ -31,63 +31,6 @@
+ #include "ixgbe_dcb_82599.h"
+
+ /**
+- * ixgbe_dcb_config_packet_buffers_82599 - Configure DCB packet buffers
+- * @hw: pointer to hardware structure
+- * @rx_pba: method to distribute packet buffer
+- *
+- * Configure packet buffers for DCB mode.
+- */
+-static s32 ixgbe_dcb_config_packet_buffers_82599(struct ixgbe_hw *hw, u8 rx_pba)
+-{
+- int num_tcs = IXGBE_MAX_PACKET_BUFFERS;
+- u32 rx_pb_size = hw->mac.rx_pb_size << IXGBE_RXPBSIZE_SHIFT;
+- u32 rxpktsize;
+- u32 txpktsize;
+- u32 txpbthresh;
+- u8 i = 0;
+-
+- /*
+- * This really means configure the first half of the TCs
+- * (Traffic Classes) to use 5/8 of the Rx packet buffer
+- * space. To determine the size of the buffer for each TC,
+- * we are multiplying the average size by 5/4 and applying
+- * it to half of the traffic classes.
+- */
+- if (rx_pba == pba_80_48) {
+- rxpktsize = (rx_pb_size * 5) / (num_tcs * 4);
+- rx_pb_size -= rxpktsize * (num_tcs / 2);
+- for (; i < (num_tcs / 2); i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+- }
+-
+- /* Divide the remaining Rx packet buffer evenly among the TCs */
+- rxpktsize = rx_pb_size / (num_tcs - i);
+- for (; i < num_tcs; i++)
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+-
+- /*
+- * Setup Tx packet buffer and threshold equally for all TCs
+- * TXPBTHRESH register is set in K so divide by 1024 and subtract
+- * 10 since the largest packet we support is just over 9K.
+- */
+- txpktsize = IXGBE_TXPBSIZE_MAX / num_tcs;
+- txpbthresh = (txpktsize / 1024) - IXGBE_TXPKT_SIZE_MAX;
+- for (i = 0; i < num_tcs; i++) {
+- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), txpktsize);
+- IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
+- }
+-
+- /* Clear unused TCs, if any, to zero buffer size*/
+- for (; i < MAX_TRAFFIC_CLASS; i++) {
+- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
+- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
+- IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
+- }
+-
+- return 0;
+-}
+-
+-/**
+ * ixgbe_dcb_config_rx_arbiter_82599 - Config Rx Data arbiter
+ * @hw: pointer to hardware structure
+ * @refill: refill credits index by traffic class
+@@ -378,7 +321,6 @@ static s32 ixgbe_dcb_config_tc_stats_825
+ /**
+ * ixgbe_dcb_hw_config_82599 - Configure and enable DCB
+ * @hw: pointer to hardware structure
+- * @rx_pba: method to distribute packet buffer
+ * @refill: refill credits index by traffic class
+ * @max: max credits index by traffic class
+ * @bwg_id: bandwidth grouping indexed by traffic class
+@@ -387,11 +329,9 @@ static s32 ixgbe_dcb_config_tc_stats_825
+ *
+ * Configure dcb settings and enable dcb mode.
+ */
+-s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw,
+- u8 rx_pba, u8 pfc_en, u16 *refill,
++s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
+ u16 *max, u8 *bwg_id, u8 *prio_type, u8 *prio_tc)
+ {
+- ixgbe_dcb_config_packet_buffers_82599(hw, rx_pba);
+ ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max, bwg_id,
+ prio_type, prio_tc);
+ ixgbe_dcb_config_tx_desc_arbiter_82599(hw, refill, max,
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.h
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.h
+@@ -86,17 +86,6 @@
+ #define IXGBE_RTTPCS_ARBD_SHIFT 22
+ #define IXGBE_RTTPCS_ARBD_DCB 0x4 /* Arbitration delay in DCB mode */
+
+-#define IXGBE_TXPBSIZE_20KB 0x00005000 /* 20KB Packet Buffer */
+-#define IXGBE_TXPBSIZE_40KB 0x0000A000 /* 40KB Packet Buffer */
+-#define IXGBE_RXPBSIZE_48KB 0x0000C000 /* 48KB Packet Buffer */
+-#define IXGBE_RXPBSIZE_64KB 0x00010000 /* 64KB Packet Buffer */
+-#define IXGBE_RXPBSIZE_80KB 0x00014000 /* 80KB Packet Buffer */
+-#define IXGBE_RXPBSIZE_128KB 0x00020000 /* 128KB Packet Buffer */
+-#define IXGBE_TXPBSIZE_MAX 0x00028000 /* 160KB Packet Buffer*/
+-
+-#define IXGBE_TXPBTHRESH_DCB 0xA /* THRESH value for DCB mode */
+-#define IXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
+-
+ /* SECTXMINIFG DCB */
+ #define IXGBE_SECTX_DCB 0x00001F00 /* DCB TX Buffer IFG */
+
+@@ -127,8 +116,7 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ u8 *prio_type,
+ u8 *prio_tc);
+
+-s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw,
+- u8 rx_pba, u8 pfc_en, u16 *refill,
++s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
+ u16 *max, u8 *bwg_id, u8 *prio_type,
+ u8 *prio_tc);
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3798,12 +3798,27 @@ static void ixgbe_configure_dcb(struct i
+ }
+
+ #endif
++
++static void ixgbe_configure_pb(struct ixgbe_adapter *adapter)
++{
++ int hdrm = 0;
++ int num_tc = netdev_get_num_tc(adapter->netdev);
++ struct ixgbe_hw *hw = &adapter->hw;
++
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
++ adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)
++ hdrm = 64 << adapter->fdir_pballoc;
++
++ hw->mac.ops.set_rxpba(&adapter->hw, num_tc, hdrm, PBA_STRATEGY_EQUAL);
++}
++
+ static void ixgbe_configure(struct ixgbe_adapter *adapter)
+ {
+ struct net_device *netdev = adapter->netdev;
+ struct ixgbe_hw *hw = &adapter->hw;
+ int i;
+
++ ixgbe_configure_pb(adapter);
+ #ifdef CONFIG_IXGBE_DCB
+ ixgbe_configure_dcb(adapter);
+ #endif
+@@ -5223,7 +5238,6 @@ static int __devinit ixgbe_sw_init(struc
+ }
+ adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
+ adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
+- adapter->dcb_cfg.rx_pba_cfg = pba_equal;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+ adapter->dcb_set_bitmap = 0x00;
+ adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -1118,6 +1118,27 @@
+ #define IXGBE_GPIE_VTMODE_32 0x00008000 /* 32 VFs 4 queues per VF */
+ #define IXGBE_GPIE_VTMODE_64 0x0000C000 /* 64 VFs 2 queues per VF */
+
++/* Packet Buffer Initialization */
++#define IXGBE_TXPBSIZE_20KB 0x00005000 /* 20KB Packet Buffer */
++#define IXGBE_TXPBSIZE_40KB 0x0000A000 /* 40KB Packet Buffer */
++#define IXGBE_RXPBSIZE_48KB 0x0000C000 /* 48KB Packet Buffer */
++#define IXGBE_RXPBSIZE_64KB 0x00010000 /* 64KB Packet Buffer */
++#define IXGBE_RXPBSIZE_80KB 0x00014000 /* 80KB Packet Buffer */
++#define IXGBE_RXPBSIZE_128KB 0x00020000 /* 128KB Packet Buffer */
++#define IXGBE_RXPBSIZE_MAX 0x00080000 /* 512KB Packet Buffer*/
++#define IXGBE_TXPBSIZE_MAX 0x00028000 /* 160KB Packet Buffer*/
++
++#define IXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
++#define IXGBE_MAX_PB 8
++
++/* Packet buffer allocation strategies */
++enum {
++ PBA_STRATEGY_EQUAL = 0, /* Distribute PB space equally */
++#define PBA_STRATEGY_EQUAL PBA_STRATEGY_EQUAL
++ PBA_STRATEGY_WEIGHTED = 1, /* Weight front half of TCs */
++#define PBA_STRATEGY_WEIGHTED PBA_STRATEGY_WEIGHTED
++};
++
+ /* Transmit Flow Control status */
+ #define IXGBE_TFCS_TXOFF 0x00000001
+ #define IXGBE_TFCS_TXOFF0 0x00000100
+@@ -2620,6 +2641,9 @@ struct ixgbe_mac_operations {
+ s32 (*get_link_capabilities)(struct ixgbe_hw *, ixgbe_link_speed *,
+ bool *);
+
++ /* Packet Buffer Manipulation */
++ void (*set_rxpba)(struct ixgbe_hw *, int, u32, int);
++
+ /* LED */
+ s32 (*led_on)(struct ixgbe_hw *, u32);
+ s32 (*led_off)(struct ixgbe_hw *, u32);
+--- a/drivers/net/ixgbe/ixgbe_x540.c
++++ b/drivers/net/ixgbe/ixgbe_x540.c
+@@ -876,6 +876,7 @@ static struct ixgbe_mac_operations mac_o
+ .read_analog_reg8 = NULL,
+ .write_analog_reg8 = NULL,
+ .setup_link = &ixgbe_setup_mac_link_X540,
++ .set_rxpba = &ixgbe_set_rxpba_generic,
+ .check_link = &ixgbe_check_mac_link_generic,
+ .get_link_capabilities = &ixgbe_get_copper_link_capabilities_generic,
+ .led_on = &ixgbe_led_on_generic,
diff --git a/patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch b/patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch
new file mode 100644
index 0000000000..0294b79505
--- /dev/null
+++ b/patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch
@@ -0,0 +1,54 @@
+From 91cd94bfe4f00fccf692e32dfa86a9fad0d61280 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 11 May 2011 07:18:41 +0000
+Subject: [PATCH 7/32] ixgbe: add basic support for setting and getting nfc controls
+Git-commit: 91cd94bfe4f00fccf692e32dfa86a9fad0d61280
+Patch-mainline: v3.1-rc1~316^2~359^2~7
+References: bnc#714789
+
+This change adds basic support for the obtaining of RSS ring counts.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_ethtool.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2483,6 +2483,24 @@ static int ixgbe_set_rx_ntuple(struct ne
+ return err ? -1 : 0;
+ }
+
++static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
++ void *rule_locs)
++{
++ struct ixgbe_adapter *adapter = netdev_priv(dev);
++ int ret = -EOPNOTSUPP;
++
++ switch (cmd->cmd) {
++ case ETHTOOL_GRXRINGS:
++ cmd->data = adapter->num_rx_queues;
++ ret = 0;
++ break;
++ default:
++ break;
++ }
++
++ return ret;
++}
++
+ static const struct ethtool_ops ixgbe_ethtool_ops = {
+ .get_settings = ixgbe_get_settings,
+ .set_settings = ixgbe_set_settings,
+@@ -2519,6 +2537,7 @@ static const struct ethtool_ops ixgbe_et
+ .get_flags = ethtool_op_get_flags,
+ .set_flags = ixgbe_set_flags,
+ .set_rx_ntuple = ixgbe_set_rx_ntuple,
++ .get_rxnfc = ixgbe_get_rxnfc,
+ };
+
+ void ixgbe_set_ethtool_ops(struct net_device *netdev)
diff --git a/patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch b/patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch
new file mode 100644
index 0000000000..fa3024a427
--- /dev/null
+++ b/patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch
@@ -0,0 +1,227 @@
+From e901acd6fa5538436e08e8a862dd2c080297f852 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 26 Apr 2011 07:26:08 +0000
+Subject: [PATCH 8/32] ixgbe: DCB use existing TX and RX queues
+Git-commit: e901acd6fa5538436e08e8a862dd2c080297f852
+Patch-mainline: v3.1-rc1~316^2~390
+References: bnc#714789
+
+The number of TX and RX queues allocated depends on the device
+type, the current features set, online CPUs, and various
+compile flags.
+
+To enable DCB with multiple queues and allow it to coexist with
+all the features currently implemented it has to setup a valid
+queue count. This is done at init time using the FDIR and RSS
+max queue counts and allowing each TC to allocate a queue per
+CPU.
+
+DCB will now use available queues up to (8 x TCs) this is somewhat
+arbitrary cap but allows DCB to use up to 64 queues. Its easy to
+increase this later if that is needed.
+
+This is prep work to enable Flow Director with DCB. After this
+DCB can easily coexist with existing features and no longer
+needs its own DCB feature ring.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 2
+ drivers/net/ixgbe/ixgbe_main.c | 107 ++++++++++++++++++-----------------------
+ 2 files changed, 49 insertions(+), 60 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -244,7 +244,6 @@ struct ixgbe_ring {
+
+ enum ixgbe_ring_f_enum {
+ RING_F_NONE = 0,
+- RING_F_DCB,
+ RING_F_VMDQ, /* SR-IOV uses the same ring feature */
+ RING_F_RSS,
+ RING_F_FDIR,
+@@ -255,7 +254,6 @@ enum ixgbe_ring_f_enum {
+ RING_F_ARRAY_SIZE /* must be last in enum set */
+ };
+
+-#define IXGBE_MAX_DCB_INDICES 64
+ #define IXGBE_MAX_RSS_INDICES 16
+ #define IXGBE_MAX_VMDQ_INDICES 64
+ #define IXGBE_MAX_FDIR_INDICES 64
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -4434,72 +4434,72 @@ static inline bool ixgbe_set_fcoe_queues
+ if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
+ return false;
+
+- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
+-#ifdef CONFIG_IXGBE_DCB
+- int tc;
+- struct net_device *dev = adapter->netdev;
+-
+- tc = netdev_get_prio_tc_map(dev, adapter->fcoe.up);
+- f->indices = dev->tc_to_txq[tc].count;
+- f->mask = dev->tc_to_txq[tc].offset;
+-#endif
+- } else {
+- f->indices = min((int)num_online_cpus(), f->indices);
++ f->indices = min((int)num_online_cpus(), f->indices);
+
+- adapter->num_rx_queues = 1;
+- adapter->num_tx_queues = 1;
++ adapter->num_rx_queues = 1;
++ adapter->num_tx_queues = 1;
+
+- if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
+- e_info(probe, "FCoE enabled with RSS\n");
+- if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+- ixgbe_set_fdir_queues(adapter);
+- else
+- ixgbe_set_rss_queues(adapter);
+- }
+- /* adding FCoE rx rings to the end */
+- f->mask = adapter->num_rx_queues;
+- adapter->num_rx_queues += f->indices;
+- adapter->num_tx_queues += f->indices;
++ if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
++ e_info(probe, "FCoE enabled with RSS\n");
++ if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
++ (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
++ ixgbe_set_fdir_queues(adapter);
++ else
++ ixgbe_set_rss_queues(adapter);
+ }
++ /* adding FCoE rx rings to the end */
++ f->mask = adapter->num_rx_queues;
++ adapter->num_rx_queues += f->indices;
++ adapter->num_tx_queues += f->indices;
+
+ return true;
+ }
+ #endif /* IXGBE_FCOE */
+
++/* Artificial max queue cap per traffic class in DCB mode */
++#define DCB_QUEUE_CAP 8
++
+ #ifdef CONFIG_IXGBE_DCB
+ static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
+ {
+- bool ret = false;
+- struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_DCB];
+- int tcs = netdev_get_num_tc(adapter->netdev);
+- int max_q, i, q;
++ int per_tc_q, q, i, offset = 0;
++ struct net_device *dev = adapter->netdev;
++ int tcs = netdev_get_num_tc(dev);
+
+- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) || !tcs)
+- return ret;
++ if (!tcs)
++ return false;
+
+- max_q = adapter->netdev->num_tx_queues / tcs;
++ /* Map queue offset and counts onto allocated tx queues */
++ per_tc_q = min(dev->num_tx_queues / tcs, (unsigned int)DCB_QUEUE_CAP);
++ q = min((int)num_online_cpus(), per_tc_q);
+
+- f->indices = 0;
+ for (i = 0; i < tcs; i++) {
+- q = min((int)num_online_cpus(), max_q);
+- f->indices += q;
++ netdev_set_prio_tc_map(dev, i, i);
++ netdev_set_tc_queue(dev, i, q, offset);
++ offset += q;
+ }
+
+- f->mask = 0x7 << 3;
+- adapter->num_rx_queues = f->indices;
+- adapter->num_tx_queues = f->indices;
+- ret = true;
++ adapter->num_tx_queues = q * tcs;
++ adapter->num_rx_queues = q * tcs;
+
+ #ifdef IXGBE_FCOE
+- /* FCoE enabled queues require special configuration done through
+- * configure_fcoe() and others. Here we map FCoE indices onto the
+- * DCB queue pairs allowing FCoE to own configuration later.
++ /* FCoE enabled queues require special configuration indexed
++ * by feature specific indices and mask. Here we map FCoE
++ * indices onto the DCB queue pairs allowing FCoE to own
++ * configuration later.
+ */
+- ixgbe_set_fcoe_queues(adapter);
++ if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {
++ int tc;
++ struct ixgbe_ring_feature *f =
++ &adapter->ring_feature[RING_F_FCOE];
++
++ tc = netdev_get_prio_tc_map(dev, adapter->fcoe.up);
++ f->indices = dev->tc_to_txq[tc].count;
++ f->mask = dev->tc_to_txq[tc].offset;
++ }
+ #endif
+
+- return ret;
++ return true;
+ }
+ #endif
+
+@@ -5189,7 +5189,6 @@ static int __devinit ixgbe_sw_init(struc
+ rss = min(IXGBE_MAX_RSS_INDICES, (int)num_online_cpus());
+ adapter->ring_feature[RING_F_RSS].indices = rss;
+ adapter->flags |= IXGBE_FLAG_RSS_ENABLED;
+- adapter->ring_feature[RING_F_DCB].indices = IXGBE_MAX_DCB_INDICES;
+ switch (hw->mac.type) {
+ case ixgbe_mac_82598EB:
+ if (hw->device_id == IXGBE_DEV_ID_82598AT)
+@@ -7230,10 +7229,8 @@ static void ixgbe_validate_rtr(struct ix
+ */
+ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
+ {
+- unsigned int q, i, offset = 0;
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+ struct ixgbe_hw *hw = &adapter->hw;
+- int max_q = adapter->netdev->num_tx_queues / tc;
+
+ /* If DCB is anabled do not remove traffic classes, multiple
+ * traffic classes are required to implement DCB
+@@ -7259,14 +7256,6 @@ int ixgbe_setup_tc(struct net_device *de
+ else
+ netdev_reset_tc(dev);
+
+- /* Partition Tx queues evenly amongst traffic classes */
+- for (i = 0; i < tc; i++) {
+- q = min((int)num_online_cpus(), max_q);
+- netdev_set_prio_tc_map(dev, i, i);
+- netdev_set_tc_queue(dev, i, q, offset);
+- offset += q;
+- }
+-
+ ixgbe_init_interrupt_scheme(adapter);
+ ixgbe_validate_rtr(adapter, tc);
+ if (netif_running(dev))
+@@ -7454,14 +7443,16 @@ static int __devinit ixgbe_probe(struct
+ pci_set_master(pdev);
+ pci_save_state(pdev);
+
++#ifdef CONFIG_IXGBE_DCB
++ indices *= MAX_TRAFFIC_CLASS;
++#endif
++
+ if (ii->mac == ixgbe_mac_82598EB)
+ indices = min_t(unsigned int, indices, IXGBE_MAX_RSS_INDICES);
+ else
+ indices = min_t(unsigned int, indices, IXGBE_MAX_FDIR_INDICES);
+
+-#if defined(CONFIG_DCB)
+- indices = max_t(unsigned int, indices, IXGBE_MAX_DCB_INDICES);
+-#elif defined(IXGBE_FCOE)
++#ifdef IXGBE_FCOE
+ indices += min_t(unsigned int, num_possible_cpus(),
+ IXGBE_MAX_FCOE_INDICES);
+ #endif
diff --git a/patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch b/patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch
new file mode 100644
index 0000000000..d594b89785
--- /dev/null
+++ b/patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch
@@ -0,0 +1,161 @@
+From 3e05334f8be83e8529f1cbf4f4dea06a4d51d676 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 11 May 2011 07:18:47 +0000
+Subject: [PATCH 9/32] ixgbe: add support for displaying ntuple filters via the nfc interface
+Git-commit: 3e05334f8be83e8529f1cbf4f4dea06a4d51d676
+Patch-mainline: v3.1-rc1~316^2~359^2~6
+References: bnc#714789
+
+This code adds support for displaying the filters that were added via the
+nfc interface. This is primarily to test the interface for now, but I am
+also looking into the feasibility of moving all of the ntuple filter code
+in ixgbe over to the nfc interface since it seems to be better implemented.
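+
+For illustration only (the interface name is a placeholder, not part
+of this patch), these are the paths behind ethtool's rule display
+commands, e.g.:
+
+  ethtool -n eth2          # list the installed Rx classification rules
+  ethtool -n eth2 rule 1   # show the rule stored at software index 1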
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 11 ++++
+ drivers/net/ixgbe/ixgbe_ethtool.c | 102 ++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 113 insertions(+)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -483,6 +483,17 @@ struct ixgbe_adapter {
+ struct vf_macvlans vf_mvs;
+ struct vf_macvlans *mv_list;
+ bool antispoofing_enabled;
++
++ struct hlist_head fdir_filter_list;
++ union ixgbe_atr_input fdir_mask;
++ int fdir_filter_count;
++};
++
++struct ixgbe_fdir_filter {
++ struct hlist_node fdir_node;
++ union ixgbe_atr_input filter;
++ u16 sw_idx;
++ u16 action;
+ };
+
+ enum ixbge_state_t {
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2483,6 +2483,97 @@ static int ixgbe_set_rx_ntuple(struct ne
+ return err ? -1 : 0;
+ }
+
++static int ixgbe_get_ethtool_fdir_entry(struct ixgbe_adapter *adapter,
++ struct ethtool_rxnfc *cmd)
++{
++ union ixgbe_atr_input *mask = &adapter->fdir_mask;
++ struct ethtool_rx_flow_spec *fsp =
++ (struct ethtool_rx_flow_spec *)&cmd->fs;
++ struct hlist_node *node, *node2;
++ struct ixgbe_fdir_filter *rule = NULL;
++
++ /* report total rule count */
++ cmd->data = (1024 << adapter->fdir_pballoc) - 2;
++
++ hlist_for_each_entry_safe(rule, node, node2,
++ &adapter->fdir_filter_list, fdir_node) {
++ if (fsp->location <= rule->sw_idx)
++ break;
++ }
++
++ if (!rule || fsp->location != rule->sw_idx)
++ return -EINVAL;
++
++ /* fill out the flow spec entry */
++
++ /* set flow type field */
++ switch (rule->filter.formatted.flow_type) {
++ case IXGBE_ATR_FLOW_TYPE_TCPV4:
++ fsp->flow_type = TCP_V4_FLOW;
++ break;
++ case IXGBE_ATR_FLOW_TYPE_UDPV4:
++ fsp->flow_type = UDP_V4_FLOW;
++ break;
++ case IXGBE_ATR_FLOW_TYPE_SCTPV4:
++ fsp->flow_type = SCTP_V4_FLOW;
++ break;
++ case IXGBE_ATR_FLOW_TYPE_IPV4:
++ fsp->flow_type = IP_USER_FLOW;
++ fsp->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4;
++ fsp->h_u.usr_ip4_spec.proto = 0;
++ fsp->m_u.usr_ip4_spec.proto = 0;
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ fsp->h_u.tcp_ip4_spec.psrc = rule->filter.formatted.src_port;
++ fsp->m_u.tcp_ip4_spec.psrc = mask->formatted.src_port;
++ fsp->h_u.tcp_ip4_spec.pdst = rule->filter.formatted.dst_port;
++ fsp->m_u.tcp_ip4_spec.pdst = mask->formatted.dst_port;
++ fsp->h_u.tcp_ip4_spec.ip4src = rule->filter.formatted.src_ip[0];
++ fsp->m_u.tcp_ip4_spec.ip4src = mask->formatted.src_ip[0];
++ fsp->h_u.tcp_ip4_spec.ip4dst = rule->filter.formatted.dst_ip[0];
++ fsp->m_u.tcp_ip4_spec.ip4dst = mask->formatted.dst_ip[0];
++ fsp->h_ext.vlan_tci = rule->filter.formatted.vlan_id;
++ fsp->m_ext.vlan_tci = mask->formatted.vlan_id;
++ fsp->h_ext.vlan_etype = rule->filter.formatted.flex_bytes;
++ fsp->m_ext.vlan_etype = mask->formatted.flex_bytes;
++ fsp->h_ext.data[1] = htonl(rule->filter.formatted.vm_pool);
++ fsp->m_ext.data[1] = htonl(mask->formatted.vm_pool);
++ fsp->flow_type |= FLOW_EXT;
++
++ /* record action */
++ if (rule->action == IXGBE_FDIR_DROP_QUEUE)
++ fsp->ring_cookie = RX_CLS_FLOW_DISC;
++ else
++ fsp->ring_cookie = rule->action;
++
++ return 0;
++}
++
++static int ixgbe_get_ethtool_fdir_all(struct ixgbe_adapter *adapter,
++ struct ethtool_rxnfc *cmd,
++ u32 *rule_locs)
++{
++ struct hlist_node *node, *node2;
++ struct ixgbe_fdir_filter *rule;
++ int cnt = 0;
++
++ /* report total rule count */
++ cmd->data = (1024 << adapter->fdir_pballoc) - 2;
++
++ hlist_for_each_entry_safe(rule, node, node2,
++ &adapter->fdir_filter_list, fdir_node) {
++ if (cnt == cmd->rule_cnt)
++ return -EMSGSIZE;
++ rule_locs[cnt] = rule->sw_idx;
++ cnt++;
++ }
++
++ return 0;
++}
++
+ static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ void *rule_locs)
+ {
+@@ -2494,6 +2585,17 @@ static int ixgbe_get_rxnfc(struct net_de
+ cmd->data = adapter->num_rx_queues;
+ ret = 0;
+ break;
++ case ETHTOOL_GRXCLSRLCNT:
++ cmd->rule_cnt = adapter->fdir_filter_count;
++ ret = 0;
++ break;
++ case ETHTOOL_GRXCLSRULE:
++ ret = ixgbe_get_ethtool_fdir_entry(adapter, cmd);
++ break;
++ case ETHTOOL_GRXCLSRLALL:
++ ret = ixgbe_get_ethtool_fdir_all(adapter, cmd,
++ (u32 *)rule_locs);
++ break;
+ default:
+ break;
+ }
diff --git a/patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch b/patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch
new file mode 100644
index 0000000000..d3bbd3982e
--- /dev/null
+++ b/patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch
@@ -0,0 +1,209 @@
+From 03ecf91aae757eeb70763a3393227c4597c87b23 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Fri, 20 May 2011 07:36:17 +0000
+Subject: [PATCH 10/32] ixgbe: fix flags relating to perfect filters to support coexistence
+Git-commit: 03ecf91aae757eeb70763a3393227c4597c87b23
+Patch-mainline: v3.1-rc1~316^2~359^2~9
+References: bnc#714789
+
+I am removing the requirement that Ntuple filters have the same
+number of queues and other requirements as ATR. As a result, for now
+the only thing the Ntuple flag does is disable ATR.
+
+This change fixes an issue in which we were incorrectly re-enabling ATR
+when we exited perfect filter mode. This was because the logic assumed
+RSS and DCB were mutually exclusive, which is no longer the case.
+
+To correct this, we just need to add a check to guarantee DCB is disabled
+before re-enabling ATR.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 13 ++++++-------
+ drivers/net/ixgbe/ixgbe_ethtool.c | 24 ++++++++++++------------
+ drivers/net/ixgbe/ixgbe_main.c | 34 ++++++++++++----------------------
+ 3 files changed, 30 insertions(+), 41 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -114,11 +114,12 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ u8 err = 0;
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
++ /* verify there is something to do, if not then exit */
++ if (!!state != !(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ return err;
++
+ if (state > 0) {
+ /* Turn on DCB */
+- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
+- goto out;
+-
+ if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) {
+ e_err(drv, "Enable failed, needs MSI-X\n");
+ err = 1;
+@@ -144,9 +145,6 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
+ } else {
+ /* Turn off DCB */
+- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+- goto out;
+-
+ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+ adapter->temp_dcb_cfg.pfc_mode_enable = false;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+@@ -154,7 +152,8 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ switch (adapter->hw.mac.type) {
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X540:
+- adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
++ if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
++ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+ break;
+ default:
+ break;
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2322,20 +2322,20 @@ static int ixgbe_set_flags(struct net_de
+ * Check if Flow Director n-tuple support was enabled or disabled. If
+ * the state changed, we need to reset.
+ */
+- if ((adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) &&
+- (!(data & ETH_FLAG_NTUPLE))) {
+- /* turn off Flow Director perfect, set hash and reset */
++ if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) {
++ /* turn off ATR, enable perfect filters and reset */
++ if (data & ETH_FLAG_NTUPLE) {
++ adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
++ adapter->flags |= IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
++ need_reset = true;
++ }
++ } else if (!(data & ETH_FLAG_NTUPLE)) {
++ /* turn off Flow Director, set ATR and reset */
+ adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+- adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
++ if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) &&
++ !(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
++ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+ need_reset = true;
+- } else if ((!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) &&
+- (data & ETH_FLAG_NTUPLE)) {
+- /* turn off Flow Director hash, enable perfect and reset */
+- adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- adapter->flags |= IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+- need_reset = true;
+- } else {
+- /* no state change */
+ }
+
+ if (need_reset) {
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -1617,9 +1617,8 @@ static void ixgbe_configure_msix(struct
+ q_vector->eitr = adapter->rx_eitr_param;
+
+ ixgbe_write_eitr(q_vector);
+- /* If Flow Director is enabled, set interrupt affinity */
+- if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) {
++ /* If ATR is enabled, set interrupt affinity */
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
+ /*
+ * Allocate the affinity_hint cpumask, assign the mask
+ * for this vector, and set our affinity_hint for
+@@ -2536,8 +2535,7 @@ static inline void ixgbe_irq_enable(stru
+ default:
+ break;
+ }
+- if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
+- adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)
+ mask |= IXGBE_EIMS_FLOW_DIR;
+
+ IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMS, mask);
+@@ -3836,8 +3834,6 @@ static void ixgbe_configure(struct ixgbe
+ adapter->tx_ring[i]->atr_sample_rate =
+ adapter->atr_sample_rate;
+ ixgbe_init_fdir_signature_82599(hw, adapter->fdir_pballoc);
+- } else if (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) {
+- ixgbe_init_fdir_perfect_82599(hw, adapter->fdir_pballoc);
+ }
+ ixgbe_configure_virtualization(adapter);
+
+@@ -4403,15 +4399,13 @@ static inline bool ixgbe_set_fdir_queues
+ f_fdir->mask = 0;
+
+ /* Flow Director must have RSS enabled */
+- if (adapter->flags & IXGBE_FLAG_RSS_ENABLED &&
+- ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)))) {
++ if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) &&
++ (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)) {
+ adapter->num_tx_queues = f_fdir->indices;
+ adapter->num_rx_queues = f_fdir->indices;
+ ret = true;
+ } else {
+ adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+ }
+ return ret;
+ }
+@@ -4441,12 +4435,12 @@ static inline bool ixgbe_set_fcoe_queues
+
+ if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
+ e_info(probe, "FCoE enabled with RSS\n");
+- if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)
+ ixgbe_set_fdir_queues(adapter);
+ else
+ ixgbe_set_rss_queues(adapter);
+ }
++
+ /* adding FCoE rx rings to the end */
+ f->mask = adapter->num_rx_queues;
+ adapter->num_rx_queues += f->indices;
+@@ -4739,9 +4733,8 @@ static inline bool ixgbe_cache_ring_fdir
+ int i;
+ bool ret = false;
+
+- if (adapter->flags & IXGBE_FLAG_RSS_ENABLED &&
+- ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))) {
++ if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) &&
++ (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)) {
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ adapter->rx_ring[i]->reg_idx = i;
+ for (i = 0; i < adapter->num_tx_queues; i++)
+@@ -4770,8 +4763,7 @@ static inline bool ixgbe_cache_ring_fcoe
+ return false;
+
+ if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
+- if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)
+ ixgbe_cache_ring_fdir(adapter);
+ else
+ ixgbe_cache_ring_rss(adapter);
+@@ -4951,14 +4943,12 @@ static int ixgbe_set_interrupt_capabilit
+
+ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+ adapter->flags &= ~IXGBE_FLAG_RSS_ENABLED;
+- if (adapter->flags & (IXGBE_FLAG_FDIR_HASH_CAPABLE |
+- IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) {
++ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
+ e_err(probe,
+- "Flow Director is not supported while multiple "
++ "ATR is not supported while multiple "
+ "queues are disabled. Disabling Flow Director\n");
+ }
+ adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+ adapter->atr_sample_rate = 0;
+ if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
+ ixgbe_disable_sriov(adapter);
diff --git a/patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch b/patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch
new file mode 100644
index 0000000000..53c1247cba
--- /dev/null
+++ b/patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch
@@ -0,0 +1,173 @@
+From b29a21694f7d12e40537e1e587ec47725849769b Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 11 May 2011 07:18:26 +0000
+Subject: [PATCH 11/32] ixgbe: remove ntuple filtering
+Git-commit: b29a21694f7d12e40537e1e587ec47725849769b
+Patch-mainline: v3.1-rc1~316^2~359^2~10
+References: bnc#714789
+
+Due to numerous issues in ntuple filters, it has been decided to move the
+interface over to the network flow classification interface. As a first
+step to achieving this, I need to remove the old ntuple interface.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_ethtool.c | 136 --------------------------------------
+ 1 file changed, 136 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2348,141 +2348,6 @@ static int ixgbe_set_flags(struct net_de
+ return 0;
+ }
+
+-static int ixgbe_set_rx_ntuple(struct net_device *dev,
+- struct ethtool_rx_ntuple *cmd)
+-{
+- struct ixgbe_adapter *adapter = netdev_priv(dev);
+- struct ethtool_rx_ntuple_flow_spec *fs = &cmd->fs;
+- union ixgbe_atr_input input_struct;
+- struct ixgbe_atr_input_masks input_masks;
+- int target_queue;
+- int err;
+-
+- if (adapter->hw.mac.type == ixgbe_mac_82598EB)
+- return -EOPNOTSUPP;
+-
+- /*
+- * Don't allow programming if the action is a queue greater than
+- * the number of online Tx queues.
+- */
+- if ((fs->action >= adapter->num_tx_queues) ||
+- (fs->action < ETHTOOL_RXNTUPLE_ACTION_DROP))
+- return -EINVAL;
+-
+- memset(&input_struct, 0, sizeof(union ixgbe_atr_input));
+- memset(&input_masks, 0, sizeof(struct ixgbe_atr_input_masks));
+-
+- /* record flow type */
+- switch (fs->flow_type) {
+- case IPV4_FLOW:
+- input_struct.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_IPV4;
+- break;
+- case TCP_V4_FLOW:
+- input_struct.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_TCPV4;
+- break;
+- case UDP_V4_FLOW:
+- input_struct.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_UDPV4;
+- break;
+- case SCTP_V4_FLOW:
+- input_struct.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_SCTPV4;
+- break;
+- default:
+- return -1;
+- }
+-
+- /* copy vlan tag minus the CFI bit */
+- if ((fs->vlan_tag & 0xEFFF) || (~fs->vlan_tag_mask & 0xEFFF)) {
+- input_struct.formatted.vlan_id = htons(fs->vlan_tag & 0xEFFF);
+- if (!fs->vlan_tag_mask) {
+- input_masks.vlan_id_mask = htons(0xEFFF);
+- } else {
+- switch (~fs->vlan_tag_mask & 0xEFFF) {
+- /* all of these are valid vlan-mask values */
+- case 0xEFFF:
+- case 0xE000:
+- case 0x0FFF:
+- case 0x0000:
+- input_masks.vlan_id_mask =
+- htons(~fs->vlan_tag_mask);
+- break;
+- /* exit with error if vlan-mask is invalid */
+- default:
+- e_err(drv, "Partial VLAN ID or "
+- "priority mask in vlan-mask is not "
+- "supported by hardware\n");
+- return -1;
+- }
+- }
+- }
+-
+- /* make sure we only use the first 2 bytes of user data */
+- if ((fs->data & 0xFFFF) || (~fs->data_mask & 0xFFFF)) {
+- input_struct.formatted.flex_bytes = htons(fs->data & 0xFFFF);
+- if (!(fs->data_mask & 0xFFFF)) {
+- input_masks.flex_mask = 0xFFFF;
+- } else if (~fs->data_mask & 0xFFFF) {
+- e_err(drv, "Partial user-def-mask is not "
+- "supported by hardware\n");
+- return -1;
+- }
+- }
+-
+- /*
+- * Copy input into formatted structures
+- *
+- * These assignments are based on the following logic
+- * If neither input or mask are set assume value is masked out.
+- * If input is set, but mask is not mask should default to accept all.
+- * If input is not set, but mask is set then mask likely results in 0.
+- * If input is set and mask is set then assign both.
+- */
+- if (fs->h_u.tcp_ip4_spec.ip4src || ~fs->m_u.tcp_ip4_spec.ip4src) {
+- input_struct.formatted.src_ip[0] = fs->h_u.tcp_ip4_spec.ip4src;
+- if (!fs->m_u.tcp_ip4_spec.ip4src)
+- input_masks.src_ip_mask[0] = 0xFFFFFFFF;
+- else
+- input_masks.src_ip_mask[0] =
+- ~fs->m_u.tcp_ip4_spec.ip4src;
+- }
+- if (fs->h_u.tcp_ip4_spec.ip4dst || ~fs->m_u.tcp_ip4_spec.ip4dst) {
+- input_struct.formatted.dst_ip[0] = fs->h_u.tcp_ip4_spec.ip4dst;
+- if (!fs->m_u.tcp_ip4_spec.ip4dst)
+- input_masks.dst_ip_mask[0] = 0xFFFFFFFF;
+- else
+- input_masks.dst_ip_mask[0] =
+- ~fs->m_u.tcp_ip4_spec.ip4dst;
+- }
+- if (fs->h_u.tcp_ip4_spec.psrc || ~fs->m_u.tcp_ip4_spec.psrc) {
+- input_struct.formatted.src_port = fs->h_u.tcp_ip4_spec.psrc;
+- if (!fs->m_u.tcp_ip4_spec.psrc)
+- input_masks.src_port_mask = 0xFFFF;
+- else
+- input_masks.src_port_mask = ~fs->m_u.tcp_ip4_spec.psrc;
+- }
+- if (fs->h_u.tcp_ip4_spec.pdst || ~fs->m_u.tcp_ip4_spec.pdst) {
+- input_struct.formatted.dst_port = fs->h_u.tcp_ip4_spec.pdst;
+- if (!fs->m_u.tcp_ip4_spec.pdst)
+- input_masks.dst_port_mask = 0xFFFF;
+- else
+- input_masks.dst_port_mask = ~fs->m_u.tcp_ip4_spec.pdst;
+- }
+-
+- /* determine if we need to drop or route the packet */
+- if (fs->action == ETHTOOL_RXNTUPLE_ACTION_DROP)
+- target_queue = MAX_RX_QUEUES - 1;
+- else
+- target_queue = fs->action;
+-
+- spin_lock(&adapter->fdir_perfect_lock);
+- err = ixgbe_fdir_add_perfect_filter_82599(&adapter->hw,
+- &input_struct,
+- &input_masks, 0,
+- target_queue);
+- spin_unlock(&adapter->fdir_perfect_lock);
+-
+- return err ? -1 : 0;
+-}
+-
+ static int ixgbe_get_ethtool_fdir_entry(struct ixgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd)
+ {
+@@ -2638,7 +2503,6 @@ static const struct ethtool_ops ixgbe_et
+ .set_coalesce = ixgbe_set_coalesce,
+ .get_flags = ethtool_op_get_flags,
+ .set_flags = ixgbe_set_flags,
+- .set_rx_ntuple = ixgbe_set_rx_ntuple,
+ .get_rxnfc = ixgbe_get_rxnfc,
+ };
+
diff --git a/patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch b/patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch
new file mode 100644
index 0000000000..1613ac086f
--- /dev/null
+++ b/patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch
@@ -0,0 +1,890 @@
+From c04f6ca84866ef207e009a08e4c34ca241df7aa2 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 11 May 2011 07:18:36 +0000
+Subject: [PATCH 12/32] ixgbe: update perfect filter framework to support retaining filters
+Git-commit: c04f6ca84866ef207e009a08e4c34ca241df7aa2
+Patch-mainline: v3.1-rc1~316^2~359^2~8
+References: bnc#714789
+
+This change is meant to update the internal framework of ixgbe so that
+perfect filters can be stored and tracked in software.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 18 -
+ drivers/net/ixgbe/ixgbe_82599.c | 607 ++++++++++++++++++++++------------------
+ drivers/net/ixgbe/ixgbe_main.c | 2
+ drivers/net/ixgbe/ixgbe_type.h | 25 -
+ 4 files changed, 370 insertions(+), 282 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -555,16 +555,22 @@ extern void ixgbe_alloc_rx_buffers(struc
+ extern void ixgbe_write_eitr(struct ixgbe_q_vector *);
+ extern int ethtool_ioctl(struct ifreq *ifr);
+ extern s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw);
+-extern s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 pballoc);
+-extern s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 pballoc);
++extern s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 fdirctrl);
++extern s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl);
+ extern s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
+ union ixgbe_atr_hash_dword input,
+ union ixgbe_atr_hash_dword common,
+ u8 queue);
+-extern s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw,
+- union ixgbe_atr_input *input,
+- struct ixgbe_atr_input_masks *input_masks,
+- u16 soft_id, u8 queue);
++extern s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input_mask);
++extern s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input,
++ u16 soft_id, u8 queue);
++extern s32 ixgbe_fdir_erase_perfect_filter_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input,
++ u16 soft_id);
++extern void ixgbe_atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
++ union ixgbe_atr_input *mask);
+ extern void ixgbe_configure_rscctl(struct ixgbe_adapter *adapter,
+ struct ixgbe_ring *ring);
+ extern void ixgbe_clear_rscctl(struct ixgbe_adapter *adapter,
+--- a/drivers/net/ixgbe/ixgbe_82599.c
++++ b/drivers/net/ixgbe/ixgbe_82599.c
+@@ -1107,115 +1107,87 @@ s32 ixgbe_reinit_fdir_tables_82599(struc
+ }
+
+ /**
+- * ixgbe_init_fdir_signature_82599 - Initialize Flow Director signature filters
++ * ixgbe_set_fdir_rxpba_82599 - Initialize Flow Director Rx packet buffer
+ * @hw: pointer to hardware structure
+ * @pballoc: which mode to allocate filters with
+ **/
+-s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 pballoc)
++static s32 ixgbe_set_fdir_rxpba_82599(struct ixgbe_hw *hw, const u32 pballoc)
+ {
+- u32 fdirctrl = 0;
++ u32 fdir_pbsize = hw->mac.rx_pb_size << IXGBE_RXPBSIZE_SHIFT;
++ u32 current_rxpbsize = 0;
+ int i;
+
+- /* Send interrupt when 64 filters are left */
+- fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
+-
+- /* Set the maximum length per hash bucket to 0xA filters */
+- fdirctrl |= 0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT;
+-
++ /* reserve space for Flow Director filters */
+ switch (pballoc) {
+- case IXGBE_FDIR_PBALLOC_64K:
+- /* 8k - 1 signature filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
++ case IXGBE_FDIR_PBALLOC_256K:
++ fdir_pbsize -= 256 << IXGBE_RXPBSIZE_SHIFT;
+ break;
+ case IXGBE_FDIR_PBALLOC_128K:
+- /* 16k - 1 signature filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
++ fdir_pbsize -= 128 << IXGBE_RXPBSIZE_SHIFT;
+ break;
+- case IXGBE_FDIR_PBALLOC_256K:
+- /* 32k - 1 signature filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
++ case IXGBE_FDIR_PBALLOC_64K:
++ fdir_pbsize -= 64 << IXGBE_RXPBSIZE_SHIFT;
+ break;
++ case IXGBE_FDIR_PBALLOC_NONE:
+ default:
+- /* bad value */
+- return IXGBE_ERR_CONFIG;
++ return IXGBE_ERR_PARAM;
+ };
+
+- /* Move the flexible bytes to use the ethertype - shift 6 words */
+- fdirctrl |= (0x6 << IXGBE_FDIRCTRL_FLEX_SHIFT);
+-
+-
+- /* Prime the keys for hashing */
+- IXGBE_WRITE_REG(hw, IXGBE_FDIRHKEY, IXGBE_ATR_BUCKET_HASH_KEY);
+- IXGBE_WRITE_REG(hw, IXGBE_FDIRSKEY, IXGBE_ATR_SIGNATURE_HASH_KEY);
+-
+- /*
+- * Poll init-done after we write the register. Estimated times:
+- * 10G: PBALLOC = 11b, timing is 60us
+- * 1G: PBALLOC = 11b, timing is 600us
+- * 100M: PBALLOC = 11b, timing is 6ms
+- *
+- * Multiple these timings by 4 if under full Rx load
+- *
+- * So we'll poll for IXGBE_FDIR_INIT_DONE_POLL times, sleeping for
+- * 1 msec per poll time. If we're at line rate and drop to 100M, then
+- * this might not finish in our poll time, but we can live with that
+- * for now.
+- */
+- IXGBE_WRITE_REG(hw, IXGBE_FDIRCTRL, fdirctrl);
+- IXGBE_WRITE_FLUSH(hw);
+- for (i = 0; i < IXGBE_FDIR_INIT_DONE_POLL; i++) {
+- if (IXGBE_READ_REG(hw, IXGBE_FDIRCTRL) &
+- IXGBE_FDIRCTRL_INIT_DONE)
+- break;
+- usleep_range(1000, 2000);
++ /* determine current RX packet buffer size */
++ for (i = 0; i < 8; i++)
++ current_rxpbsize += IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
++
++ /* if there is already room for the filters do nothing */
++ if (current_rxpbsize <= fdir_pbsize)
++ return 0;
++
++ if (current_rxpbsize > hw->mac.rx_pb_size) {
++ /*
++ * if rxpbsize is greater than max then HW max the Rx buffer
++ * sizes are unconfigured or misconfigured since HW default is
++ * to give the full buffer to each traffic class resulting in
++ * the total size being buffer size 8x actual size
++ *
++ * This assumes no DCB since the RXPBSIZE registers appear to
++ * be unconfigured.
++ */
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0), fdir_pbsize);
++ for (i = 1; i < 8; i++)
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
++ } else {
++ /*
++ * Since the Rx packet buffer appears to have already been
++ * configured we need to shrink each packet buffer by enough
++ * to make room for the filters. As such we take each rxpbsize
++ * value and multiply it by a fraction representing the size
++ * needed over the size we currently have.
++ *
++ * We need to reduce fdir_pbsize and current_rxpbsize to
++ * 1/1024 of their original values in order to avoid
++ * overflowing the u32 being used to store rxpbsize.
++ */
++ fdir_pbsize >>= IXGBE_RXPBSIZE_SHIFT;
++ current_rxpbsize >>= IXGBE_RXPBSIZE_SHIFT;
++ for (i = 0; i < 8; i++) {
++ u32 rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
++ rxpbsize *= fdir_pbsize;
++ rxpbsize /= current_rxpbsize;
++ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
++ }
+ }
+- if (i >= IXGBE_FDIR_INIT_DONE_POLL)
+- hw_dbg(hw, "Flow Director Signature poll time exceeded!\n");
+
+ return 0;
+ }
+
+ /**
+- * ixgbe_init_fdir_perfect_82599 - Initialize Flow Director perfect filters
++ * ixgbe_fdir_enable_82599 - Initialize Flow Director control registers
+ * @hw: pointer to hardware structure
+- * @pballoc: which mode to allocate filters with
++ * @fdirctrl: value to write to flow director control register
+ **/
+-s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 pballoc)
++static void ixgbe_fdir_enable_82599(struct ixgbe_hw *hw, u32 fdirctrl)
+ {
+- u32 fdirctrl = 0;
+ int i;
+
+- /* Send interrupt when 64 filters are left */
+- fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
+-
+- /* Initialize the drop queue to Rx queue 127 */
+- fdirctrl |= (127 << IXGBE_FDIRCTRL_DROP_Q_SHIFT);
+-
+- switch (pballoc) {
+- case IXGBE_FDIR_PBALLOC_64K:
+- /* 2k - 1 perfect filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
+- break;
+- case IXGBE_FDIR_PBALLOC_128K:
+- /* 4k - 1 perfect filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
+- break;
+- case IXGBE_FDIR_PBALLOC_256K:
+- /* 8k - 1 perfect filters */
+- fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
+- break;
+- default:
+- /* bad value */
+- return IXGBE_ERR_CONFIG;
+- };
+-
+- /* Turn perfect match filtering on */
+- fdirctrl |= IXGBE_FDIRCTRL_PERFECT_MATCH;
+- fdirctrl |= IXGBE_FDIRCTRL_REPORT_STATUS;
+-
+- /* Move the flexible bytes to use the ethertype - shift 6 words */
+- fdirctrl |= (0x6 << IXGBE_FDIRCTRL_FLEX_SHIFT);
+-
+ /* Prime the keys for hashing */
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRHKEY, IXGBE_ATR_BUCKET_HASH_KEY);
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRSKEY, IXGBE_ATR_SIGNATURE_HASH_KEY);
+@@ -1233,10 +1205,6 @@ s32 ixgbe_init_fdir_perfect_82599(struct
+ * this might not finish in our poll time, but we can live with that
+ * for now.
+ */
+-
+- /* Set the maximum length per hash bucket to 0xA filters */
+- fdirctrl |= (0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT);
+-
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRCTRL, fdirctrl);
+ IXGBE_WRITE_FLUSH(hw);
+ for (i = 0; i < IXGBE_FDIR_INIT_DONE_POLL; i++) {
+@@ -1245,101 +1213,77 @@ s32 ixgbe_init_fdir_perfect_82599(struct
+ break;
+ usleep_range(1000, 2000);
+ }
+- if (i >= IXGBE_FDIR_INIT_DONE_POLL)
+- hw_dbg(hw, "Flow Director Perfect poll time exceeded!\n");
+
+- return 0;
++ if (i >= IXGBE_FDIR_INIT_DONE_POLL)
++ hw_dbg(hw, "Flow Director poll time exceeded!\n");
+ }
+
+-
+ /**
+- * ixgbe_atr_compute_hash_82599 - Compute the hashes for SW ATR
+- * @stream: input bitstream to compute the hash on
+- * @key: 32-bit hash key
++ * ixgbe_init_fdir_signature_82599 - Initialize Flow Director signature filters
++ * @hw: pointer to hardware structure
++ * @fdirctrl: value to write to flow director control register, initially
++ * contains just the value of the Rx packet buffer allocation
+ **/
+-static u32 ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
+- u32 key)
++s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 fdirctrl)
+ {
+- /*
+- * The algorithm is as follows:
+- * Hash[15:0] = Sum { S[n] x K[n+16] }, n = 0...350
+- * where Sum {A[n]}, n = 0...n is bitwise XOR of A[0], A[1]...A[n]
+- * and A[n] x B[n] is bitwise AND between same length strings
+- *
+- * K[n] is 16 bits, defined as:
+- * for n modulo 32 >= 15, K[n] = K[n % 32 : (n % 32) - 15]
+- * for n modulo 32 < 15, K[n] =
+- * K[(n % 32:0) | (31:31 - (14 - (n % 32)))]
+- *
+- * S[n] is 16 bits, defined as:
+- * for n >= 15, S[n] = S[n:n - 15]
+- * for n < 15, S[n] = S[(n:0) | (350:350 - (14 - n))]
+- *
+- * To simplify for programming, the algorithm is implemented
+- * in software this way:
+- *
+- * key[31:0], hi_hash_dword[31:0], lo_hash_dword[31:0], hash[15:0]
+- *
+- * for (i = 0; i < 352; i+=32)
+- * hi_hash_dword[31:0] ^= Stream[(i+31):i];
+- *
+- * lo_hash_dword[15:0] ^= Stream[15:0];
+- * lo_hash_dword[15:0] ^= hi_hash_dword[31:16];
+- * lo_hash_dword[31:16] ^= hi_hash_dword[15:0];
+- *
+- * hi_hash_dword[31:0] ^= Stream[351:320];
+- *
+- * if(key[0])
+- * hash[15:0] ^= Stream[15:0];
+- *
+- * for (i = 0; i < 16; i++) {
+- * if (key[i])
+- * hash[15:0] ^= lo_hash_dword[(i+15):i];
+- * if (key[i + 16])
+- * hash[15:0] ^= hi_hash_dword[(i+15):i];
+- * }
+- *
+- */
+- __be32 common_hash_dword = 0;
+- u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+- u32 hash_result = 0;
+- u8 i;
++ s32 err;
+
+- /* record the flow_vm_vlan bits as they are a key part to the hash */
+- flow_vm_vlan = ntohl(atr_input->dword_stream[0]);
++ /* Before enabling Flow Director, verify the Rx Packet Buffer size */
++ err = ixgbe_set_fdir_rxpba_82599(hw, fdirctrl);
++ if (err)
++ return err;
+
+- /* generate common hash dword */
+- for (i = 10; i; i -= 2)
+- common_hash_dword ^= atr_input->dword_stream[i] ^
+- atr_input->dword_stream[i - 1];
++ /*
++ * Continue setup of fdirctrl register bits:
++ * Move the flexible bytes to use the ethertype - shift 6 words
++ * Set the maximum length per hash bucket to 0xA filters
++ * Send interrupt when 64 filters are left
++ */
++ fdirctrl |= (0x6 << IXGBE_FDIRCTRL_FLEX_SHIFT) |
++ (0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT) |
++ (4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT);
+
+- hi_hash_dword = ntohl(common_hash_dword);
++ /* write hashes and fdirctrl register, poll for completion */
++ ixgbe_fdir_enable_82599(hw, fdirctrl);
+
+- /* low dword is word swapped version of common */
+- lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
++ return 0;
++}
+
+- /* apply flow ID/VM pool/VLAN ID bits to hash words */
+- hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
++/**
++ * ixgbe_init_fdir_perfect_82599 - Initialize Flow Director perfect filters
++ * @hw: pointer to hardware structure
++ * @fdirctrl: value to write to flow director control register, initially
++ * contains just the value of the Rx packet buffer allocation
++ **/
++s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl)
++{
++ s32 err;
+
+- /* Process bits 0 and 16 */
+- if (key & 0x0001) hash_result ^= lo_hash_dword;
+- if (key & 0x00010000) hash_result ^= hi_hash_dword;
++ /* Before enabling Flow Director, verify the Rx Packet Buffer size */
++ err = ixgbe_set_fdir_rxpba_82599(hw, fdirctrl);
++ if (err)
++ return err;
+
+ /*
+- * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+- * delay this because bit 0 of the stream should not be processed
+- * so we do not add the vlan until after bit 0 was processed
++ * Continue setup of fdirctrl register bits:
++ * Turn perfect match filtering on
++ * Report hash in RSS field of Rx wb descriptor
++ * Initialize the drop queue
++ * Move the flexible bytes to use the ethertype - shift 6 words
++ * Set the maximum length per hash bucket to 0xA filters
++ * Send interrupt when 64 (0x4 * 16) filters are left
+ */
+- lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
++ fdirctrl |= IXGBE_FDIRCTRL_PERFECT_MATCH |
++ IXGBE_FDIRCTRL_REPORT_STATUS |
++ (IXGBE_FDIR_DROP_QUEUE << IXGBE_FDIRCTRL_DROP_Q_SHIFT) |
++ (0x6 << IXGBE_FDIRCTRL_FLEX_SHIFT) |
++ (0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT) |
++ (4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT);
+
++ /* write hashes and fdirctrl register, poll for completion */
++ ixgbe_fdir_enable_82599(hw, fdirctrl);
+
+- /* process the remaining 30 bits in the key 2 bits at a time */
+- for (i = 15; i; i-- ) {
+- if (key & (0x0001 << i)) hash_result ^= lo_hash_dword >> i;
+- if (key & (0x00010000 << i)) hash_result ^= hi_hash_dword >> i;
+- }
+-
+- return hash_result & IXGBE_ATR_HASH_MASK;
++ return 0;
+ }
+
+ /*
+@@ -1476,7 +1420,6 @@ s32 ixgbe_fdir_add_signature_filter_8259
+ */
+ fdirhashcmd = (u64)fdircmd << 32;
+ fdirhashcmd |= ixgbe_atr_compute_sig_hash_82599(input, common);
+-
+ IXGBE_WRITE_REG64(hw, IXGBE_FDIRHASH, fdirhashcmd);
+
+ hw_dbg(hw, "Tx Queue=%x hash=%x\n", queue, (u32)fdirhashcmd);
+@@ -1484,6 +1427,101 @@ s32 ixgbe_fdir_add_signature_filter_8259
+ return 0;
+ }
+
++#define IXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \
++do { \
++ u32 n = (_n); \
++ if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
++ bucket_hash ^= lo_hash_dword >> n; \
++ if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
++ bucket_hash ^= hi_hash_dword >> n; \
++} while (0);
++
++/**
++ * ixgbe_atr_compute_perfect_hash_82599 - Compute the perfect filter hash
++ * @atr_input: input bitstream to compute the hash on
++ * @input_mask: mask for the input bitstream
++ *
++ * This function serves two main purposes. First it applys the input_mask
++ * to the atr_input resulting in a cleaned up atr_input data stream.
++ * Secondly it computes the hash and stores it in the bkt_hash field at
++ * the end of the input byte stream. This way it will be available for
++ * future use without needing to recompute the hash.
++ **/
++void ixgbe_atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
++ union ixgbe_atr_input *input_mask)
++{
++
++ u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
++ u32 bucket_hash = 0;
++
++ /* Apply masks to input data */
++ input->dword_stream[0] &= input_mask->dword_stream[0];
++ input->dword_stream[1] &= input_mask->dword_stream[1];
++ input->dword_stream[2] &= input_mask->dword_stream[2];
++ input->dword_stream[3] &= input_mask->dword_stream[3];
++ input->dword_stream[4] &= input_mask->dword_stream[4];
++ input->dword_stream[5] &= input_mask->dword_stream[5];
++ input->dword_stream[6] &= input_mask->dword_stream[6];
++ input->dword_stream[7] &= input_mask->dword_stream[7];
++ input->dword_stream[8] &= input_mask->dword_stream[8];
++ input->dword_stream[9] &= input_mask->dword_stream[9];
++ input->dword_stream[10] &= input_mask->dword_stream[10];
++
++ /* record the flow_vm_vlan bits as they are a key part to the hash */
++ flow_vm_vlan = ntohl(input->dword_stream[0]);
++
++ /* generate common hash dword */
++ hi_hash_dword = ntohl(input->dword_stream[1] ^
++ input->dword_stream[2] ^
++ input->dword_stream[3] ^
++ input->dword_stream[4] ^
++ input->dword_stream[5] ^
++ input->dword_stream[6] ^
++ input->dword_stream[7] ^
++ input->dword_stream[8] ^
++ input->dword_stream[9] ^
++ input->dword_stream[10]);
++
++ /* low dword is word swapped version of common */
++ lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
++
++ /* apply flow ID/VM pool/VLAN ID bits to hash words */
++ hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
++
++ /* Process bits 0 and 16 */
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(0);
++
++ /*
++ * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
++ * delay this because bit 0 of the stream should not be processed
++ * so we do not add the vlan until after bit 0 was processed
++ */
++ lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
++
++ /* Process remaining 30 bit of the key */
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(1);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(2);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(3);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(4);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(5);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(6);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(7);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(8);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(9);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(10);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(11);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(12);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(13);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(14);
++ IXGBE_COMPUTE_BKT_HASH_ITERATION(15);
++
++ /*
++ * Limit hash to 13 bits since max bucket count is 8K.
++ * Store result at the end of the input stream.
++ */
++ input->formatted.bkt_hash = bucket_hash & 0x1FFF;
++}
++
+ /**
+ * ixgbe_get_fdirtcpm_82599 - generate a tcp port from atr_input_masks
+ * @input_mask: mask to be bit swapped
+@@ -1493,11 +1531,11 @@ s32 ixgbe_fdir_add_signature_filter_8259
+ * generate a correctly swapped value we need to bit swap the mask and that
+ * is what is accomplished by this function.
+ **/
+-static u32 ixgbe_get_fdirtcpm_82599(struct ixgbe_atr_input_masks *input_masks)
++static u32 ixgbe_get_fdirtcpm_82599(union ixgbe_atr_input *input_mask)
+ {
+- u32 mask = ntohs(input_masks->dst_port_mask);
++ u32 mask = ntohs(input_mask->formatted.dst_port);
+ mask <<= IXGBE_FDIRTCPM_DPORTM_SHIFT;
+- mask |= ntohs(input_masks->src_port_mask);
++ mask |= ntohs(input_mask->formatted.src_port);
+ mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+ mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+ mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+@@ -1519,52 +1557,14 @@ static u32 ixgbe_get_fdirtcpm_82599(stru
+ IXGBE_WRITE_REG((a), (reg), IXGBE_STORE_AS_BE32(ntohl(value)))
+
+ #define IXGBE_STORE_AS_BE16(_value) \
+- (((u16)(_value) >> 8) | ((u16)(_value) << 8))
++ ntohs(((u16)(_value) >> 8) | ((u16)(_value) << 8))
+
+-/**
+- * ixgbe_fdir_add_perfect_filter_82599 - Adds a perfect filter
+- * @hw: pointer to hardware structure
+- * @input: input bitstream
+- * @input_masks: bitwise masks for relevant fields
+- * @soft_id: software index into the silicon hash tables for filter storage
+- * @queue: queue index to direct traffic to
+- *
+- * Note that the caller to this function must lock before calling, since the
+- * hardware writes must be protected from one another.
+- **/
+-s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw,
+- union ixgbe_atr_input *input,
+- struct ixgbe_atr_input_masks *input_masks,
+- u16 soft_id, u8 queue)
++s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input_mask)
+ {
+- u32 fdirhash;
+- u32 fdircmd;
+- u32 fdirport, fdirtcpm;
+- u32 fdirvlan;
+- /* start with VLAN, flex bytes, VM pool, and IPv6 destination masked */
+- u32 fdirm = IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP | IXGBE_FDIRM_FLEX |
+- IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6;
+-
+- /*
+- * Check flow_type formatting, and bail out before we touch the hardware
+- * if there's a configuration issue
+- */
+- switch (input->formatted.flow_type) {
+- case IXGBE_ATR_FLOW_TYPE_IPV4:
+- /* use the L4 protocol mask for raw IPv4/IPv6 traffic */
+- fdirm |= IXGBE_FDIRM_L4P;
+- case IXGBE_ATR_FLOW_TYPE_SCTPV4:
+- if (input_masks->dst_port_mask || input_masks->src_port_mask) {
+- hw_dbg(hw, " Error on src/dst port mask\n");
+- return IXGBE_ERR_CONFIG;
+- }
+- case IXGBE_ATR_FLOW_TYPE_TCPV4:
+- case IXGBE_ATR_FLOW_TYPE_UDPV4:
+- break;
+- default:
+- hw_dbg(hw, " Error on flow type input\n");
+- return IXGBE_ERR_CONFIG;
+- }
++ /* mask IPv6 since it is currently not supported */
++ u32 fdirm = IXGBE_FDIRM_DIPv6;
++ u32 fdirtcpm;
+
+ /*
+ * Program the relevant mask registers. If src/dst_port or src/dst_addr
+@@ -1576,41 +1576,71 @@ s32 ixgbe_fdir_add_perfect_filter_82599(
+ * point in time.
+ */
+
+- /* Program FDIRM */
+- switch (ntohs(input_masks->vlan_id_mask) & 0xEFFF) {
+- case 0xEFFF:
+- /* Unmask VLAN ID - bit 0 and fall through to unmask prio */
+- fdirm &= ~IXGBE_FDIRM_VLANID;
+- case 0xE000:
+- /* Unmask VLAN prio - bit 1 */
+- fdirm &= ~IXGBE_FDIRM_VLANP;
++ /* verify bucket hash is cleared on hash generation */
++ if (input_mask->formatted.bkt_hash)
++ hw_dbg(hw, " bucket hash should always be 0 in mask\n");
++
++ /* Program FDIRM and verify partial masks */
++ switch (input_mask->formatted.vm_pool & 0x7F) {
++ case 0x0:
++ fdirm |= IXGBE_FDIRM_POOL;
++ case 0x7F:
+ break;
+- case 0x0FFF:
+- /* Unmask VLAN ID - bit 0 */
+- fdirm &= ~IXGBE_FDIRM_VLANID;
++ default:
++ hw_dbg(hw, " Error on vm pool mask\n");
++ return IXGBE_ERR_CONFIG;
++ }
++
++ switch (input_mask->formatted.flow_type & IXGBE_ATR_L4TYPE_MASK) {
++ case 0x0:
++ fdirm |= IXGBE_FDIRM_L4P;
++ if (input_mask->formatted.dst_port ||
++ input_mask->formatted.src_port) {
++ hw_dbg(hw, " Error on src/dst port mask\n");
++ return IXGBE_ERR_CONFIG;
++ }
++ case IXGBE_ATR_L4TYPE_MASK:
+ break;
++ default:
++ hw_dbg(hw, " Error on flow type mask\n");
++ return IXGBE_ERR_CONFIG;
++ }
++
++ switch (ntohs(input_mask->formatted.vlan_id) & 0xEFFF) {
+ case 0x0000:
+- /* do nothing, vlans already masked */
++ /* mask VLAN ID, fall through to mask VLAN priority */
++ fdirm |= IXGBE_FDIRM_VLANID;
++ case 0x0FFF:
++ /* mask VLAN priority */
++ fdirm |= IXGBE_FDIRM_VLANP;
++ break;
++ case 0xE000:
++ /* mask VLAN ID only, fall through */
++ fdirm |= IXGBE_FDIRM_VLANID;
++ case 0xEFFF:
++ /* no VLAN fields masked */
+ break;
+ default:
+ hw_dbg(hw, " Error on VLAN mask\n");
+ return IXGBE_ERR_CONFIG;
+ }
+
+- if (input_masks->flex_mask & 0xFFFF) {
+- if ((input_masks->flex_mask & 0xFFFF) != 0xFFFF) {
+- hw_dbg(hw, " Error on flexible byte mask\n");
+- return IXGBE_ERR_CONFIG;
+- }
+- /* Unmask Flex Bytes - bit 4 */
+- fdirm &= ~IXGBE_FDIRM_FLEX;
++ switch (input_mask->formatted.flex_bytes & 0xFFFF) {
++ case 0x0000:
++ /* Mask Flex Bytes, fall through */
++ fdirm |= IXGBE_FDIRM_FLEX;
++ case 0xFFFF:
++ break;
++ default:
++ hw_dbg(hw, " Error on flexible byte mask\n");
++ return IXGBE_ERR_CONFIG;
+ }
+
+ /* Now mask VM pool and destination IPv6 - bits 5 and 2 */
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
+
+ /* store the TCP/UDP port masks, bit reversed from port layout */
+- fdirtcpm = ixgbe_get_fdirtcpm_82599(input_masks);
++ fdirtcpm = ixgbe_get_fdirtcpm_82599(input_mask);
+
+ /* write both the same so that UDP and TCP use the same mask */
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
+@@ -1618,24 +1648,32 @@ s32 ixgbe_fdir_add_perfect_filter_82599(
+
+ /* store source and destination IP masks (big-enian) */
+ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRSIP4M,
+- ~input_masks->src_ip_mask[0]);
++ ~input_mask->formatted.src_ip[0]);
+ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRDIP4M,
+- ~input_masks->dst_ip_mask[0]);
++ ~input_mask->formatted.dst_ip[0]);
+
+- /* Apply masks to input data */
+- input->formatted.vlan_id &= input_masks->vlan_id_mask;
+- input->formatted.flex_bytes &= input_masks->flex_mask;
+- input->formatted.src_port &= input_masks->src_port_mask;
+- input->formatted.dst_port &= input_masks->dst_port_mask;
+- input->formatted.src_ip[0] &= input_masks->src_ip_mask[0];
+- input->formatted.dst_ip[0] &= input_masks->dst_ip_mask[0];
++ return 0;
++}
+
+- /* record vlan (little-endian) and flex_bytes(big-endian) */
+- fdirvlan =
+- IXGBE_STORE_AS_BE16(ntohs(input->formatted.flex_bytes));
+- fdirvlan <<= IXGBE_FDIRVLAN_FLEX_SHIFT;
+- fdirvlan |= ntohs(input->formatted.vlan_id);
+- IXGBE_WRITE_REG(hw, IXGBE_FDIRVLAN, fdirvlan);
++s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input,
++ u16 soft_id, u8 queue)
++{
++ u32 fdirport, fdirvlan, fdirhash, fdircmd;
++
++ /* currently IPv6 is not supported, must be programmed with 0 */
++ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRSIPv6(0),
++ input->formatted.src_ip[0]);
++ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRSIPv6(1),
++ input->formatted.src_ip[1]);
++ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRSIPv6(2),
++ input->formatted.src_ip[2]);
++
++ /* record the source address (big-endian) */
++ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRIPSA, input->formatted.src_ip[0]);
++
++ /* record the first 32 bits of the destination address (big-endian) */
++ IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRIPDA, input->formatted.dst_ip[0]);
+
+ /* record source and destination port (little-endian)*/
+ fdirport = ntohs(input->formatted.dst_port);
+@@ -1643,29 +1681,80 @@ s32 ixgbe_fdir_add_perfect_filter_82599(
+ fdirport |= ntohs(input->formatted.src_port);
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRPORT, fdirport);
+
+- /* record the first 32 bits of the destination address (big-endian) */
+- IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRIPDA, input->formatted.dst_ip[0]);
++ /* record vlan (little-endian) and flex_bytes(big-endian) */
++ fdirvlan = IXGBE_STORE_AS_BE16(input->formatted.flex_bytes);
++ fdirvlan <<= IXGBE_FDIRVLAN_FLEX_SHIFT;
++ fdirvlan |= ntohs(input->formatted.vlan_id);
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRVLAN, fdirvlan);
+
+- /* record the source address (big-endian) */
+- IXGBE_WRITE_REG_BE32(hw, IXGBE_FDIRIPSA, input->formatted.src_ip[0]);
++ /* configure FDIRHASH register */
++ fdirhash = input->formatted.bkt_hash;
++ fdirhash |= soft_id << IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRHASH, fdirhash);
++
++ /*
++ * flush all previous writes to make certain registers are
++ * programmed prior to issuing the command
++ */
++ IXGBE_WRITE_FLUSH(hw);
+
+ /* configure FDIRCMD register */
+ fdircmd = IXGBE_FDIRCMD_CMD_ADD_FLOW | IXGBE_FDIRCMD_FILTER_UPDATE |
+ IXGBE_FDIRCMD_LAST | IXGBE_FDIRCMD_QUEUE_EN;
++ if (queue == IXGBE_FDIR_DROP_QUEUE)
++ fdircmd |= IXGBE_FDIRCMD_DROP;
+ fdircmd |= input->formatted.flow_type << IXGBE_FDIRCMD_FLOW_TYPE_SHIFT;
+ fdircmd |= (u32)queue << IXGBE_FDIRCMD_RX_QUEUE_SHIFT;
++ fdircmd |= (u32)input->formatted.vm_pool << IXGBE_FDIRCMD_VT_POOL_SHIFT;
+
+- /* we only want the bucket hash so drop the upper 16 bits */
+- fdirhash = ixgbe_atr_compute_hash_82599(input,
+- IXGBE_ATR_BUCKET_HASH_KEY);
+- fdirhash |= soft_id << IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
+-
+- IXGBE_WRITE_REG(hw, IXGBE_FDIRHASH, fdirhash);
+ IXGBE_WRITE_REG(hw, IXGBE_FDIRCMD, fdircmd);
+
+ return 0;
+ }
+
++s32 ixgbe_fdir_erase_perfect_filter_82599(struct ixgbe_hw *hw,
++ union ixgbe_atr_input *input,
++ u16 soft_id)
++{
++ u32 fdirhash;
++ u32 fdircmd = 0;
++ u32 retry_count;
++ s32 err = 0;
++
++ /* configure FDIRHASH register */
++ fdirhash = input->formatted.bkt_hash;
++ fdirhash |= soft_id << IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRHASH, fdirhash);
++
++ /* flush hash to HW */
++ IXGBE_WRITE_FLUSH(hw);
++
++ /* Query if filter is present */
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRCMD, IXGBE_FDIRCMD_CMD_QUERY_REM_FILT);
++
++ for (retry_count = 10; retry_count; retry_count--) {
++ /* allow 10us for query to process */
++ udelay(10);
++ /* verify query completed successfully */
++ fdircmd = IXGBE_READ_REG(hw, IXGBE_FDIRCMD);
++ if (!(fdircmd & IXGBE_FDIRCMD_CMD_MASK))
++ break;
++ }
++
++ if (!retry_count)
++ err = IXGBE_ERR_FDIR_REINIT_FAILED;
++
++ /* if filter exists in hardware then remove it */
++ if (fdircmd & IXGBE_FDIRCMD_FILTER_VALID) {
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRHASH, fdirhash);
++ IXGBE_WRITE_FLUSH(hw);
++ IXGBE_WRITE_REG(hw, IXGBE_FDIRCMD,
++ IXGBE_FDIRCMD_CMD_REMOVE_FLOW);
++ }
++
++ return err;
++}
++
+ /**
+ * ixgbe_read_analog_reg8_82599 - Reads 8 bit Omer analog register
+ * @hw: pointer to hardware structure
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -5199,7 +5199,7 @@ static int __devinit ixgbe_sw_init(struc
+ adapter->atr_sample_rate = 20;
+ adapter->ring_feature[RING_F_FDIR].indices =
+ IXGBE_MAX_FDIR_INDICES;
+- adapter->fdir_pballoc = 0;
++ adapter->fdir_pballoc = IXGBE_FDIR_PBALLOC_64K;
+ #ifdef IXGBE_FCOE
+ adapter->flags |= IXGBE_FLAG_FCOE_CAPABLE;
+ adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED;
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -2053,9 +2053,10 @@ enum {
+ #define IXGBE_VFLREC(_i) (0x00700 + (_i * 4))
+
+ enum ixgbe_fdir_pballoc_type {
+- IXGBE_FDIR_PBALLOC_64K = 0,
+- IXGBE_FDIR_PBALLOC_128K,
+- IXGBE_FDIR_PBALLOC_256K,
++ IXGBE_FDIR_PBALLOC_NONE = 0,
++ IXGBE_FDIR_PBALLOC_64K = 1,
++ IXGBE_FDIR_PBALLOC_128K = 2,
++ IXGBE_FDIR_PBALLOC_256K = 3,
+ };
+ #define IXGBE_FDIR_PBALLOC_SIZE_SHIFT 16
+
+@@ -2109,7 +2110,7 @@ enum ixgbe_fdir_pballoc_type {
+ #define IXGBE_FDIRCMD_CMD_ADD_FLOW 0x00000001
+ #define IXGBE_FDIRCMD_CMD_REMOVE_FLOW 0x00000002
+ #define IXGBE_FDIRCMD_CMD_QUERY_REM_FILT 0x00000003
+-#define IXGBE_FDIRCMD_CMD_QUERY_REM_HASH 0x00000007
++#define IXGBE_FDIRCMD_FILTER_VALID 0x00000004
+ #define IXGBE_FDIRCMD_FILTER_UPDATE 0x00000008
+ #define IXGBE_FDIRCMD_IPv6DMATCH 0x00000010
+ #define IXGBE_FDIRCMD_L4TYPE_UDP 0x00000020
+@@ -2128,6 +2129,8 @@ enum ixgbe_fdir_pballoc_type {
+ #define IXGBE_FDIR_INIT_DONE_POLL 10
+ #define IXGBE_FDIRCMD_CMD_POLL 10
+
++#define IXGBE_FDIR_DROP_QUEUE 127
++
+ /* Transmit Descriptor - Advanced */
+ union ixgbe_adv_tx_desc {
+ struct {
+@@ -2312,7 +2315,7 @@ union ixgbe_atr_input {
+ * src_port - 2 bytes
+ * dst_port - 2 bytes
+ * flex_bytes - 2 bytes
+- * rsvd0 - 2 bytes - space reserved must be 0.
++ * bkt_hash - 2 bytes
+ */
+ struct {
+ u8 vm_pool;
+@@ -2323,7 +2326,7 @@ union ixgbe_atr_input {
+ __be16 src_port;
+ __be16 dst_port;
+ __be16 flex_bytes;
+- __be16 rsvd0;
++ __be16 bkt_hash;
+ } formatted;
+ __be32 dword_stream[11];
+ };
+@@ -2344,16 +2347,6 @@ union ixgbe_atr_hash_dword {
+ __be32 dword;
+ };
+
+-struct ixgbe_atr_input_masks {
+- __be16 rsvd0;
+- __be16 vlan_id_mask;
+- __be32 dst_ip_mask[4];
+- __be32 src_ip_mask[4];
+- __be16 src_port_mask;
+- __be16 dst_port_mask;
+- __be16 flex_mask;
+-};
+-
+ enum ixgbe_eeprom_type {
+ ixgbe_eeprom_uninitialized = 0,
+ ixgbe_eeprom_spi,
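
For reference, a minimal standalone C sketch (not part of the patch; the full
field list is assumed from the driver's ixgbe_type.h) of why the reserved
field can be repurposed as bkt_hash without disturbing the layout: the
formatted view still packs into exactly the 11 dwords exposed by
dword_stream.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* simplified stand-in for union ixgbe_atr_input; __be16/__be32 become
 * plain uint16_t/uint32_t since byte order does not affect the size */
union atr_input {
	struct {
		uint8_t  vm_pool;
		uint8_t  flow_type;
		uint16_t vlan_id;
		uint32_t dst_ip[4];
		uint32_t src_ip[4];
		uint16_t src_port;
		uint16_t dst_port;
		uint16_t flex_bytes;
		uint16_t bkt_hash;	/* was rsvd0 before this patch */
	} formatted;
	uint32_t dword_stream[11];
};

int main(void)
{
	/* both views must describe the same 44 bytes */
	assert(sizeof(((union atr_input *)0)->formatted) == 44);
	assert(sizeof(((union atr_input *)0)->dword_stream) == 44);
	printf("union size: %zu bytes\n", sizeof(union atr_input));
	return 0;
}
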
diff --git a/patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch b/patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch
new file mode 100644
index 0000000000..45c1e2c46a
--- /dev/null
+++ b/patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch
@@ -0,0 +1,357 @@
+From e4911d57a45ca30771c64b56e552891fcd105070 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 11 May 2011 07:18:52 +0000
+Subject: [PATCH 13/32] ixgbe: add support for nfc addition and removal of filters
+Git-commit: e4911d57a45ca30771c64b56e552891fcd105070
+Patch-mainline: v3.1-rc1~316^2~359^2~5
+References: bnc#714789
+
+This change is meant to allow nfc to insert and remove filters in order
+to test the ethtool interface, which includes its own rules manager.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_ethtool.c | 245 ++++++++++++++++++++++++++++++++++++++
+ drivers/net/ixgbe/ixgbe_main.c | 45 ++++++
+ 2 files changed, 290 insertions(+)
+
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2468,6 +2468,250 @@ static int ixgbe_get_rxnfc(struct net_de
+ return ret;
+ }
+
++static int ixgbe_update_ethtool_fdir_entry(struct ixgbe_adapter *adapter,
++ struct ixgbe_fdir_filter *input,
++ u16 sw_idx)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ struct hlist_node *node, *node2, *parent;
++ struct ixgbe_fdir_filter *rule;
++ int err = -EINVAL;
++
++ parent = NULL;
++ rule = NULL;
++
++ hlist_for_each_entry_safe(rule, node, node2,
++ &adapter->fdir_filter_list, fdir_node) {
++ /* hash found, or no matching entry */
++ if (rule->sw_idx >= sw_idx)
++ break;
++ parent = node;
++ }
++
++ /* if there is an old rule occupying our place remove it */
++ if (rule && (rule->sw_idx == sw_idx)) {
++ if (!input || (rule->filter.formatted.bkt_hash !=
++ input->filter.formatted.bkt_hash)) {
++ err = ixgbe_fdir_erase_perfect_filter_82599(hw,
++ &rule->filter,
++ sw_idx);
++ }
++
++ hlist_del(&rule->fdir_node);
++ kfree(rule);
++ adapter->fdir_filter_count--;
++ }
++
++ /*
++ * If no input this was a delete, err should be 0 if a rule was
++ * successfully found and removed from the list else -EINVAL
++ */
++ if (!input)
++ return err;
++
++ /* initialize node and set software index */
++ INIT_HLIST_NODE(&input->fdir_node);
++
++ /* add filter to the list */
++ if (parent)
++ hlist_add_after(parent, &input->fdir_node);
++ else
++ hlist_add_head(&input->fdir_node,
++ &adapter->fdir_filter_list);
++
++ /* update counts */
++ adapter->fdir_filter_count++;
++
++ return 0;
++}
++
++static int ixgbe_flowspec_to_flow_type(struct ethtool_rx_flow_spec *fsp,
++ u8 *flow_type)
++{
++ switch (fsp->flow_type & ~FLOW_EXT) {
++ case TCP_V4_FLOW:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_TCPV4;
++ break;
++ case UDP_V4_FLOW:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_UDPV4;
++ break;
++ case SCTP_V4_FLOW:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_SCTPV4;
++ break;
++ case IP_USER_FLOW:
++ switch (fsp->h_u.usr_ip4_spec.proto) {
++ case IPPROTO_TCP:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_TCPV4;
++ break;
++ case IPPROTO_UDP:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_UDPV4;
++ break;
++ case IPPROTO_SCTP:
++ *flow_type = IXGBE_ATR_FLOW_TYPE_SCTPV4;
++ break;
++ case 0:
++ if (!fsp->m_u.usr_ip4_spec.proto) {
++ *flow_type = IXGBE_ATR_FLOW_TYPE_IPV4;
++ break;
++ }
++ default:
++ return 0;
++ }
++ break;
++ default:
++ return 0;
++ }
++
++ return 1;
++}
++
++static int ixgbe_add_ethtool_fdir_entry(struct ixgbe_adapter *adapter,
++ struct ethtool_rxnfc *cmd)
++{
++ struct ethtool_rx_flow_spec *fsp =
++ (struct ethtool_rx_flow_spec *)&cmd->fs;
++ struct ixgbe_hw *hw = &adapter->hw;
++ struct ixgbe_fdir_filter *input;
++ union ixgbe_atr_input mask;
++ int err;
++
++ if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
++ return -EOPNOTSUPP;
++
++ /*
++ * Don't allow programming if the action is a queue greater than
++ * the number of online Rx queues.
++ */
++ if ((fsp->ring_cookie != RX_CLS_FLOW_DISC) &&
++ (fsp->ring_cookie >= adapter->num_rx_queues))
++ return -EINVAL;
++
++ /* Don't allow indexes to exist outside of available space */
++ if (fsp->location >= ((1024 << adapter->fdir_pballoc) - 2)) {
++ e_err(drv, "Location out of range\n");
++ return -EINVAL;
++ }
++
++ input = kzalloc(sizeof(*input), GFP_ATOMIC);
++ if (!input)
++ return -ENOMEM;
++
++ memset(&mask, 0, sizeof(union ixgbe_atr_input));
++
++ /* set SW index */
++ input->sw_idx = fsp->location;
++
++ /* record flow type */
++ if (!ixgbe_flowspec_to_flow_type(fsp,
++ &input->filter.formatted.flow_type)) {
++ e_err(drv, "Unrecognized flow type\n");
++ goto err_out;
++ }
++
++ mask.formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK |
++ IXGBE_ATR_L4TYPE_MASK;
++
++ if (input->filter.formatted.flow_type == IXGBE_ATR_FLOW_TYPE_IPV4)
++ mask.formatted.flow_type &= IXGBE_ATR_L4TYPE_IPV6_MASK;
++
++ /* Copy input into formatted structures */
++ input->filter.formatted.src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4src;
++ mask.formatted.src_ip[0] = fsp->m_u.tcp_ip4_spec.ip4src;
++ input->filter.formatted.dst_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst;
++ mask.formatted.dst_ip[0] = fsp->m_u.tcp_ip4_spec.ip4dst;
++ input->filter.formatted.src_port = fsp->h_u.tcp_ip4_spec.psrc;
++ mask.formatted.src_port = fsp->m_u.tcp_ip4_spec.psrc;
++ input->filter.formatted.dst_port = fsp->h_u.tcp_ip4_spec.pdst;
++ mask.formatted.dst_port = fsp->m_u.tcp_ip4_spec.pdst;
++
++ if (fsp->flow_type & FLOW_EXT) {
++ input->filter.formatted.vm_pool =
++ (unsigned char)ntohl(fsp->h_ext.data[1]);
++ mask.formatted.vm_pool =
++ (unsigned char)ntohl(fsp->m_ext.data[1]);
++ input->filter.formatted.vlan_id = fsp->h_ext.vlan_tci;
++ mask.formatted.vlan_id = fsp->m_ext.vlan_tci;
++ input->filter.formatted.flex_bytes =
++ fsp->h_ext.vlan_etype;
++ mask.formatted.flex_bytes = fsp->m_ext.vlan_etype;
++ }
++
++ /* determine if we need to drop or route the packet */
++ if (fsp->ring_cookie == RX_CLS_FLOW_DISC)
++ input->action = IXGBE_FDIR_DROP_QUEUE;
++ else
++ input->action = fsp->ring_cookie;
++
++ spin_lock(&adapter->fdir_perfect_lock);
++
++ if (hlist_empty(&adapter->fdir_filter_list)) {
++ /* save mask and program input mask into HW */
++ memcpy(&adapter->fdir_mask, &mask, sizeof(mask));
++ err = ixgbe_fdir_set_input_mask_82599(hw, &mask);
++ if (err) {
++ e_err(drv, "Error writing mask\n");
++ goto err_out_w_lock;
++ }
++ } else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) {
++ e_err(drv, "Only one mask supported per port\n");
++ goto err_out_w_lock;
++ }
++
++ /* apply mask and compute/store hash */
++ ixgbe_atr_compute_perfect_hash_82599(&input->filter, &mask);
++
++ /* program filters to filter memory */
++ err = ixgbe_fdir_write_perfect_filter_82599(hw,
++ &input->filter, input->sw_idx,
++ adapter->rx_ring[input->action]->reg_idx);
++ if (err)
++ goto err_out_w_lock;
++
++ ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
++
++ spin_unlock(&adapter->fdir_perfect_lock);
++
++ return err;
++err_out_w_lock:
++ spin_unlock(&adapter->fdir_perfect_lock);
++err_out:
++ kfree(input);
++ return -EINVAL;
++}
++
++static int ixgbe_del_ethtool_fdir_entry(struct ixgbe_adapter *adapter,
++ struct ethtool_rxnfc *cmd)
++{
++ struct ethtool_rx_flow_spec *fsp =
++ (struct ethtool_rx_flow_spec *)&cmd->fs;
++ int err;
++
++ spin_lock(&adapter->fdir_perfect_lock);
++ err = ixgbe_update_ethtool_fdir_entry(adapter, NULL, fsp->location);
++ spin_unlock(&adapter->fdir_perfect_lock);
++
++ return err;
++}
++
++static int ixgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
++{
++ struct ixgbe_adapter *adapter = netdev_priv(dev);
++ int ret = -EOPNOTSUPP;
++
++ switch (cmd->cmd) {
++ case ETHTOOL_SRXCLSRLINS:
++ ret = ixgbe_add_ethtool_fdir_entry(adapter, cmd);
++ break;
++ case ETHTOOL_SRXCLSRLDEL:
++ ret = ixgbe_del_ethtool_fdir_entry(adapter, cmd);
++ break;
++ default:
++ break;
++ }
++
++ return ret;
++}
++
+ static const struct ethtool_ops ixgbe_ethtool_ops = {
+ .get_settings = ixgbe_get_settings,
+ .set_settings = ixgbe_set_settings,
+@@ -2504,6 +2748,7 @@ static const struct ethtool_ops ixgbe_et
+ .get_flags = ethtool_op_get_flags,
+ .set_flags = ixgbe_set_flags,
+ .get_rxnfc = ixgbe_get_rxnfc,
++ .set_rxnfc = ixgbe_set_rxnfc,
+ };
+
+ void ixgbe_set_ethtool_ops(struct net_device *netdev)
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3810,6 +3810,28 @@ static void ixgbe_configure_pb(struct ix
+ hw->mac.ops.set_rxpba(&adapter->hw, num_tc, hdrm, PBA_STRATEGY_EQUAL);
+ }
+
++static void ixgbe_fdir_filter_restore(struct ixgbe_adapter *adapter)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ struct hlist_node *node, *node2;
++ struct ixgbe_fdir_filter *filter;
++
++ spin_lock(&adapter->fdir_perfect_lock);
++
++ if (!hlist_empty(&adapter->fdir_filter_list))
++ ixgbe_fdir_set_input_mask_82599(hw, &adapter->fdir_mask);
++
++ hlist_for_each_entry_safe(filter, node, node2,
++ &adapter->fdir_filter_list, fdir_node) {
++ ixgbe_fdir_write_perfect_filter_82599(hw,
++ &filter->filter,
++ filter->sw_idx,
++ filter->action);
++ }
++
++ spin_unlock(&adapter->fdir_perfect_lock);
++}
++
+ static void ixgbe_configure(struct ixgbe_adapter *adapter)
+ {
+ struct net_device *netdev = adapter->netdev;
+@@ -3834,6 +3856,10 @@ static void ixgbe_configure(struct ixgbe
+ adapter->tx_ring[i]->atr_sample_rate =
+ adapter->atr_sample_rate;
+ ixgbe_init_fdir_signature_82599(hw, adapter->fdir_pballoc);
++ } else if (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) {
++ ixgbe_init_fdir_perfect_82599(&adapter->hw,
++ adapter->fdir_pballoc);
++ ixgbe_fdir_filter_restore(adapter);
+ }
+ ixgbe_configure_virtualization(adapter);
+
+@@ -4210,6 +4236,23 @@ static void ixgbe_clean_all_tx_rings(str
+ ixgbe_clean_tx_ring(adapter->tx_ring[i]);
+ }
+
++static void ixgbe_fdir_filter_exit(struct ixgbe_adapter *adapter)
++{
++ struct hlist_node *node, *node2;
++ struct ixgbe_fdir_filter *filter;
++
++ spin_lock(&adapter->fdir_perfect_lock);
++
++ hlist_for_each_entry_safe(filter, node, node2,
++ &adapter->fdir_filter_list, fdir_node) {
++ hlist_del(&filter->fdir_node);
++ kfree(filter);
++ }
++ adapter->fdir_filter_count = 0;
++
++ spin_unlock(&adapter->fdir_perfect_lock);
++}
++
+ void ixgbe_down(struct ixgbe_adapter *adapter)
+ {
+ struct net_device *netdev = adapter->netdev;
+@@ -5596,6 +5639,8 @@ static int ixgbe_close(struct net_device
+ ixgbe_down(adapter);
+ ixgbe_free_irq(adapter);
+
++ ixgbe_fdir_filter_exit(adapter);
++
+ ixgbe_free_all_tx_resources(adapter);
+ ixgbe_free_all_rx_resources(adapter);
+
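
As a side note (not part of the patch), the location bound checked in
ixgbe_add_ethtool_fdir_entry() above, (1024 << fdir_pballoc) - 2, ties the
number of programmable perfect filters to the packet buffer allocation chosen
in patch 0012. A small standalone sketch of the resulting capacities:

#include <stdio.h>

/* values from the ixgbe_fdir_pballoc_type enum reworked in patch 0012 */
enum fdir_pballoc { PBALLOC_NONE = 0, PBALLOC_64K, PBALLOC_128K, PBALLOC_256K };

int main(void)
{
	static const char *name[] = { "none", "64K", "128K", "256K" };
	int pb;

	for (pb = PBALLOC_64K; pb <= PBALLOC_256K; pb++)
		/* same bound as the fsp->location check above */
		printf("fdir_pballoc=%s: up to %d perfect filters\n",
		       name[pb], (1024 << pb) - 2);
	return 0;
}
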
diff --git a/patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch b/patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch
new file mode 100644
index 0000000000..3f52cf0799
--- /dev/null
+++ b/patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch
@@ -0,0 +1,58 @@
+From 4c1d7b4b5dec6b8a97202d88538c06733173b1c5 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Thu, 21 Jul 2011 00:40:30 +0000
+Subject: [PATCH 14/32] ixgbe: remove redundant configuration of tx_sample_rate
+Git-commit: 4c1d7b4b5dec6b8a97202d88538c06733173b1c5
+Patch-mainline: v3.2-rc1~129^2~268
+References: bnc#714789
+
+This change fixes a minor redundancy in that tx_sample_rate was set twice.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 13 ++++---------
+ 1 file changed, 4 insertions(+), 9 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3834,16 +3834,12 @@ static void ixgbe_fdir_filter_restore(st
+
+ static void ixgbe_configure(struct ixgbe_adapter *adapter)
+ {
+- struct net_device *netdev = adapter->netdev;
+- struct ixgbe_hw *hw = &adapter->hw;
+- int i;
+-
+ ixgbe_configure_pb(adapter);
+ #ifdef CONFIG_IXGBE_DCB
+ ixgbe_configure_dcb(adapter);
+ #endif
+
+- ixgbe_set_rx_mode(netdev);
++ ixgbe_set_rx_mode(adapter->netdev);
+ ixgbe_restore_vlan(adapter);
+
+ #ifdef IXGBE_FCOE
+@@ -3852,15 +3848,14 @@ static void ixgbe_configure(struct ixgbe
+
+ #endif /* IXGBE_FCOE */
+ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
+- for (i = 0; i < adapter->num_tx_queues; i++)
+- adapter->tx_ring[i]->atr_sample_rate =
+- adapter->atr_sample_rate;
+- ixgbe_init_fdir_signature_82599(hw, adapter->fdir_pballoc);
++ ixgbe_init_fdir_signature_82599(&adapter->hw,
++ adapter->fdir_pballoc);
+ } else if (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) {
+ ixgbe_init_fdir_perfect_82599(&adapter->hw,
+ adapter->fdir_pballoc);
+ ixgbe_fdir_filter_restore(adapter);
+ }
++
+ ixgbe_configure_virtualization(adapter);
+
+ ixgbe_configure_tx(adapter);
diff --git a/patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch b/patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch
new file mode 100644
index 0000000000..4d6f688c3a
--- /dev/null
+++ b/patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch
@@ -0,0 +1,48 @@
+From 634cdca5637475b74dbc7bd72208f5fdc5904d38 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Thu, 21 Jul 2011 22:43:29 +0000
+Subject: [PATCH 15/32] ixgbe: PFC not cleared on X540 devices
+Git-commit: 634cdca5637475b74dbc7bd72208f5fdc5904d38
+Patch-mainline: v3.2-rc1~129^2~359
+References: bnc#714789
+
+X540 devices do not clear the PFC configuration before applying a new
+one. This can leave the device responding to PFC frames for priorities
+the user has disabled (although it would also be wrong for the peer to
+be transmitting these frames). Now we clear the register before
+setting it.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_82599.c | 4 +++-
+ drivers/net/ixgbe/ixgbe_type.h | 1 +
+ 2 files changed, 4 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
+@@ -252,8 +252,10 @@ s32 ixgbe_dcb_config_pfc_82599(struct ix
+ reg &= ~IXGBE_MFLCN_RFCE;
+ reg |= IXGBE_MFLCN_RPFCE | IXGBE_MFLCN_DPF;
+
+- if (hw->mac.type == ixgbe_mac_X540)
++ if (hw->mac.type == ixgbe_mac_X540) {
++ reg &= ~IXGBE_MFLCN_RPFCE_MASK;
+ reg |= pfc_en << IXGBE_MFLCN_RPFCE_SHIFT;
++ }
+
+ IXGBE_WRITE_REG(hw, IXGBE_MFLCN, reg);
+
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -1831,6 +1831,7 @@ enum {
+ #define IXGBE_MFLCN_DPF 0x00000002 /* Discard Pause Frame */
+ #define IXGBE_MFLCN_RPFCE 0x00000004 /* Receive Priority FC Enable */
+ #define IXGBE_MFLCN_RFCE 0x00000008 /* Receive FC Enable */
++#define IXGBE_MFLCN_RPFCE_MASK 0x00000FE0 /* Receive FC Mask */
+
+ #define IXGBE_MFLCN_RPFCE_SHIFT 4
+
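
A minimal sketch (standalone, example values assumed, not driver code) of the
read-modify-write this fix performs on MFLCN: without first clearing
IXGBE_MFLCN_RPFCE_MASK, per-priority enable bits from an earlier
configuration would survive the new write.

#include <stdint.h>
#include <stdio.h>

#define IXGBE_MFLCN_RPFCE_MASK	0x00000FE0	/* Receive FC Mask */
#define IXGBE_MFLCN_RPFCE_SHIFT	4

int main(void)
{
	uint32_t reg = 0x00000FE0;	/* stale bits from a previous PFC config */
	uint8_t pfc_en = 0x0C;		/* user now enables only priorities 2 and 3 */
	uint32_t stale, fixed;

	/* old behaviour: OR on top of whatever was already in the register */
	stale = reg | ((uint32_t)pfc_en << IXGBE_MFLCN_RPFCE_SHIFT);

	/* fixed behaviour: clear the field, then set the requested bits */
	fixed = reg & ~IXGBE_MFLCN_RPFCE_MASK;
	fixed |= (uint32_t)pfc_en << IXGBE_MFLCN_RPFCE_SHIFT;

	printf("without clear: 0x%08x, with clear: 0x%08x\n", stale, fixed);
	return 0;
}
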
diff --git a/patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch b/patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch
new file mode 100644
index 0000000000..da47dca3a0
--- /dev/null
+++ b/patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch
@@ -0,0 +1,29 @@
+From b1bbdb206a52b7eb13c2e57ee794b90618f61002 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 26 Apr 2011 07:26:30 +0000
+Subject: [PATCH 16/32] ixgbe: DCB and perfect filters can coexist
+Git-commit: b1bbdb206a52b7eb13c2e57ee794b90618f61002
+Patch-mainline: v3.1-rc1~316^2~386
+References: bnc#714789
+
+Now the flow director's perfect filter feature can coexist with DCB.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -136,7 +136,6 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X540:
+ adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+ break;
+ default:
+ break;
diff --git a/patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch b/patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch
new file mode 100644
index 0000000000..3b59f05a6d
--- /dev/null
+++ b/patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch
@@ -0,0 +1,129 @@
+From e7589eab92919483d624eb3356cf3ac80efc0790 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Mon, 18 Jul 2011 22:38:36 +0000
+Subject: [PATCH 17/32] ixgbe: consolidate, setup for multiple traffic classes
+Git-commit: e7589eab92919483d624eb3356cf3ac80efc0790
+Patch-mainline: v3.2-rc1~129^2~360
+References: bnc#714789
+
+This consolidates setup code for multiple traffic classes in
+the setup_tc routine.
+
+Prep work to allow IEEE DCBX to optimize for the number of traffic
+classes. This also simplifies the code paths.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 46 +++------------------------------------
+ drivers/net/ixgbe/ixgbe_main.c | 30 +++++++++++++++++++------
+ 2 files changed, 27 insertions(+), 49 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -118,49 +118,11 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ if (!!state != !(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ return err;
+
+- if (state > 0) {
+- /* Turn on DCB */
+- if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) {
+- e_err(drv, "Enable failed, needs MSI-X\n");
+- err = 1;
+- goto out;
+- }
++ if (state > 0)
++ err = ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
++ else
++ err = ixgbe_setup_tc(netdev, 0);
+
+- adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
+-
+- switch (adapter->hw.mac.type) {
+- case ixgbe_mac_82598EB:
+- adapter->last_lfc_mode = adapter->hw.fc.current_mode;
+- adapter->hw.fc.requested_mode = ixgbe_fc_none;
+- break;
+- case ixgbe_mac_82599EB:
+- case ixgbe_mac_X540:
+- adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- break;
+- default:
+- break;
+- }
+-
+- ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
+- } else {
+- /* Turn off DCB */
+- adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+- adapter->temp_dcb_cfg.pfc_mode_enable = false;
+- adapter->dcb_cfg.pfc_mode_enable = false;
+- adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+- switch (adapter->hw.mac.type) {
+- case ixgbe_mac_82599EB:
+- case ixgbe_mac_X540:
+- if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+- adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+- break;
+- default:
+- break;
+- }
+- ixgbe_setup_tc(netdev, 0);
+- }
+-
+-out:
+ return err;
+ }
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -7262,11 +7262,11 @@ int ixgbe_setup_tc(struct net_device *de
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+ struct ixgbe_hw *hw = &adapter->hw;
+
+- /* If DCB is anabled do not remove traffic classes, multiple
+- * traffic classes are required to implement DCB
+- */
+- if (!tc && (adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+- return 0;
++ /* Multiple traffic classes requires multiple queues */
++ if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) {
++ e_err(drv, "Enable failed, needs MSI-X\n");
++ return -EINVAL;
++ }
+
+ /* Hardware supports up to 8 traffic classes */
+ if (tc > MAX_TRAFFIC_CLASS ||
+@@ -7281,11 +7281,27 @@ int ixgbe_setup_tc(struct net_device *de
+ ixgbe_close(dev);
+ ixgbe_clear_interrupt_scheme(adapter);
+
+- if (tc)
++ if (tc) {
+ netdev_set_num_tc(dev, tc);
+- else
++ adapter->last_lfc_mode = adapter->hw.fc.current_mode;
++
++ adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
++ adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
++
++ if (adapter->hw.mac.type == ixgbe_mac_82598EB)
++ adapter->hw.fc.requested_mode = ixgbe_fc_none;
++ } else {
+ netdev_reset_tc(dev);
+
++ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
++
++ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
++ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
++
++ adapter->temp_dcb_cfg.pfc_mode_enable = false;
++ adapter->dcb_cfg.pfc_mode_enable = false;
++ }
++
+ ixgbe_init_interrupt_scheme(adapter);
+ ixgbe_validate_rtr(adapter, tc);
+ if (netif_running(dev))
diff --git a/patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch b/patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch
new file mode 100644
index 0000000000..aa46a8d3bf
--- /dev/null
+++ b/patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch
@@ -0,0 +1,44 @@
+From f7e1027f61c40eca1acc36e806b8db4cad01f221 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Thu, 21 Jul 2011 00:40:35 +0000
+Subject: [PATCH 18/32] v2 ixgbe: Update packet buffer reservation to correct fdir headroom size
+Git-commit: f7e1027f61c40eca1acc36e806b8db4cad01f221
+Patch-mainline: v3.2-rc1~129^2~267
+References: bnc#714789
+
+This change fixes an issue in which the incorrect amount of headroom was
+being reserved for flow director filters.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 10 ++++++----
+ 1 file changed, 6 insertions(+), 4 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3799,15 +3799,17 @@ static void ixgbe_configure_dcb(struct i
+
+ static void ixgbe_configure_pb(struct ixgbe_adapter *adapter)
+ {
+- int hdrm = 0;
+- int num_tc = netdev_get_num_tc(adapter->netdev);
+ struct ixgbe_hw *hw = &adapter->hw;
++ int hdrm;
++ u8 tc = netdev_get_num_tc(adapter->netdev);
+
+ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
+ adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+- hdrm = 64 << adapter->fdir_pballoc;
++ hdrm = 32 << adapter->fdir_pballoc;
++ else
++ hdrm = 0;
+
+- hw->mac.ops.set_rxpba(&adapter->hw, num_tc, hdrm, PBA_STRATEGY_EQUAL);
++ hw->mac.ops.set_rxpba(hw, tc, hdrm, PBA_STRATEGY_EQUAL);
+ }
+
+ static void ixgbe_fdir_filter_restore(struct ixgbe_adapter *adapter)
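
A standalone sketch (the shift values follow directly from the hunk above) of
the old versus corrected flow director headroom, in the KB units consumed by
set_rxpba(), for each fdir_pballoc setting:

#include <stdio.h>

int main(void)
{
	int pballoc;

	/* fdir_pballoc: 1 = 64KB, 2 = 128KB, 3 = 256KB (enum from patch 0012) */
	for (pballoc = 1; pballoc <= 3; pballoc++)
		printf("pballoc=%d: old headroom %d KB, corrected %d KB\n",
		       pballoc, 64 << pballoc, 32 << pballoc);
	return 0;
}
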
diff --git a/patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch b/patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch
new file mode 100644
index 0000000000..e2cca86316
--- /dev/null
+++ b/patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch
@@ -0,0 +1,429 @@
+From 9da712d2ede7e3e3a0da180351505310ee271773 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 23 Aug 2011 03:14:22 +0000
+Subject: [PATCH 19/32] ixgbe: update {P}FC thresholds to account for X540 and loopback
+Git-commit: 9da712d2ede7e3e3a0da180351505310ee271773
+Patch-mainline: v3.2-rc1~129^2~191^2~3
+References: bnc#714789
+
+Revise high and low threshold marks wrt flow control to account
+for the X540 devices and latency introduced by the loopback
+switch.
+
+Without this it was in theory possible to drop frames on a
+supposedly lossless link with X540 or SR-IOV enabled.
+
+Previously we used a magic number in a define to calculate the
+threshold values. This made it difficult to sort out exactly
+which latencies were or were not being accounted for. Here
+I was overly explicit and tried to used #define names that would
+be recognizable after reading the IEEE 802.1Qbb specification.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_82598.c | 8 --
+ drivers/net/ixgbe/ixgbe_common.c | 12 +--
+ drivers/net/ixgbe/ixgbe_dcb.h | 1
+ drivers/net/ixgbe/ixgbe_dcb_82598.c | 9 +-
+ drivers/net/ixgbe/ixgbe_dcb_82599.c | 8 --
+ drivers/net/ixgbe/ixgbe_main.c | 128 ++++++++++++++++++++++++++++++++++--
+ drivers/net/ixgbe/ixgbe_type.h | 64 +++++++++++++++---
+ 7 files changed, 191 insertions(+), 39 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_82598.c
++++ b/drivers/net/ixgbe/ixgbe_82598.c
+@@ -358,7 +358,6 @@ static s32 ixgbe_fc_enable_82598(struct
+ u32 fctrl_reg;
+ u32 rmcs_reg;
+ u32 reg;
+- u32 rx_pba_size;
+ u32 link_speed = 0;
+ bool link_up;
+
+@@ -461,16 +460,13 @@ static s32 ixgbe_fc_enable_82598(struct
+
+ /* Set up and enable Rx high/low water mark thresholds, enable XON. */
+ if (hw->fc.current_mode & ixgbe_fc_tx_pause) {
+- rx_pba_size = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(packetbuf_num));
+- rx_pba_size >>= IXGBE_RXPBSIZE_SHIFT;
+-
+- reg = (rx_pba_size - hw->fc.low_water) << 6;
++ reg = hw->fc.low_water << 6;
+ if (hw->fc.send_xon)
+ reg |= IXGBE_FCRTL_XONE;
+
+ IXGBE_WRITE_REG(hw, IXGBE_FCRTL(packetbuf_num), reg);
+
+- reg = (rx_pba_size - hw->fc.high_water) << 6;
++ reg = hw->fc.high_water[packetbuf_num] << 6;
+ reg |= IXGBE_FCRTH_FCEN;
+
+ IXGBE_WRITE_REG(hw, IXGBE_FCRTH(packetbuf_num), reg);
+--- a/drivers/net/ixgbe/ixgbe_common.c
++++ b/drivers/net/ixgbe/ixgbe_common.c
+@@ -1931,7 +1931,6 @@ s32 ixgbe_fc_enable_generic(struct ixgbe
+ s32 ret_val = 0;
+ u32 mflcn_reg, fccfg_reg;
+ u32 reg;
+- u32 rx_pba_size;
+ u32 fcrtl, fcrth;
+
+ #ifdef CONFIG_DCB
+@@ -2011,11 +2010,8 @@ s32 ixgbe_fc_enable_generic(struct ixgbe
+ IXGBE_WRITE_REG(hw, IXGBE_MFLCN, mflcn_reg);
+ IXGBE_WRITE_REG(hw, IXGBE_FCCFG, fccfg_reg);
+
+- rx_pba_size = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(packetbuf_num));
+- rx_pba_size >>= IXGBE_RXPBSIZE_SHIFT;
+-
+- fcrth = (rx_pba_size - hw->fc.high_water) << 10;
+- fcrtl = (rx_pba_size - hw->fc.low_water) << 10;
++ fcrth = hw->fc.high_water[packetbuf_num] << 10;
++ fcrtl = hw->fc.low_water << 10;
+
+ if (hw->fc.current_mode & ixgbe_fc_tx_pause) {
+ fcrth |= IXGBE_FCRTH_FCEN;
+@@ -2292,7 +2288,9 @@ static s32 ixgbe_setup_fc(struct ixgbe_h
+ * Validate the water mark configuration. Zero water marks are invalid
+ * because it causes the controller to just blast out fc packets.
+ */
+- if (!hw->fc.low_water || !hw->fc.high_water || !hw->fc.pause_time) {
++ if (!hw->fc.low_water ||
++ !hw->fc.high_water[packetbuf_num] ||
++ !hw->fc.pause_time) {
+ hw_dbg(hw, "Invalid water mark configuration\n");
+ ret_val = IXGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+--- a/drivers/net/ixgbe/ixgbe_dcb.h
++++ b/drivers/net/ixgbe/ixgbe_dcb.h
+@@ -35,7 +35,6 @@
+
+ #define IXGBE_MAX_PACKET_BUFFERS 8
+ #define MAX_USER_PRIORITY 8
+-#define MAX_TRAFFIC_CLASS 8
+ #define MAX_BW_GROUP 8
+ #define BW_PERCENT 100
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_82598.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82598.c
+@@ -191,7 +191,7 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ */
+ s32 ixgbe_dcb_config_pfc_82598(struct ixgbe_hw *hw, u8 pfc_en)
+ {
+- u32 reg, rx_pba_size;
++ u32 reg;
+ u8 i;
+
+ if (pfc_en) {
+@@ -222,9 +222,8 @@ s32 ixgbe_dcb_config_pfc_82598(struct ix
+ */
+ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+ int enabled = pfc_en & (1 << i);
+- rx_pba_size = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
+- rx_pba_size >>= IXGBE_RXPBSIZE_SHIFT;
+- reg = (rx_pba_size - hw->fc.low_water) << 10;
++
++ reg = hw->fc.low_water << 10;
+
+ if (enabled == pfc_enabled_tx ||
+ enabled == pfc_enabled_full)
+@@ -232,7 +231,7 @@ s32 ixgbe_dcb_config_pfc_82598(struct ix
+
+ IXGBE_WRITE_REG(hw, IXGBE_FCRTL(i), reg);
+
+- reg = (rx_pba_size - hw->fc.high_water) << 10;
++ reg = hw->fc.high_water[i] << 10;
+ if (enabled == pfc_enabled_tx ||
+ enabled == pfc_enabled_full)
+ reg |= IXGBE_FCRTH_FCEN;
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
+@@ -210,21 +210,19 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ */
+ s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en)
+ {
+- u32 i, reg, rx_pba_size;
++ u32 i, reg;
+
+ /* Configure PFC Tx thresholds per TC */
+ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+ int enabled = pfc_en & (1 << i);
+- rx_pba_size = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
+- rx_pba_size >>= IXGBE_RXPBSIZE_SHIFT;
+
+- reg = (rx_pba_size - hw->fc.low_water) << 10;
++ reg = hw->fc.low_water << 10;
+
+ if (enabled)
+ reg |= IXGBE_FCRTL_XONE;
+ IXGBE_WRITE_REG(hw, IXGBE_FCRTL_82599(i), reg);
+
+- reg = (rx_pba_size - hw->fc.high_water) << 10;
++ reg = hw->fc.high_water[i] << 10;
+ if (enabled)
+ reg |= IXGBE_FCRTH_FCEN;
+ IXGBE_WRITE_REG(hw, IXGBE_FCRTH_82599(i), reg);
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3794,9 +3794,128 @@ static void ixgbe_configure_dcb(struct i
+ IXGBE_WRITE_REG(hw, IXGBE_RQTC, reg);
+ }
+ }
++#endif
++
++/* Additional bittime to account for IXGBE framing */
++#define IXGBE_ETH_FRAMING 20
++
++/*
++ * ixgbe_hpbthresh - calculate high water mark for flow control
++ *
++ * @adapter: board private structure to calculate for
++ * @pb - packet buffer to calculate
++ */
++static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ struct net_device *dev = adapter->netdev;
++ int link, tc, kb, marker;
++ u32 dv_id, rx_pba;
++
++ /* Calculate max LAN frame size */
++ tc = link = dev->mtu + ETH_HLEN + ETH_FCS_LEN + IXGBE_ETH_FRAMING;
++
++#ifdef IXGBE_FCOE
++ /* FCoE traffic class uses FCOE jumbo frames */
++ if (dev->features & NETIF_F_FCOE_MTU) {
++ int fcoe_pb = 0;
+
++#ifdef CONFIG_IXGBE_DCB
++ fcoe_pb = netdev_get_prio_tc_map(dev, adapter->fcoe.up);
++
++#endif
++ if (fcoe_pb == pb && tc < IXGBE_FCOE_JUMBO_FRAME_SIZE)
++ tc = IXGBE_FCOE_JUMBO_FRAME_SIZE;
++ }
+ #endif
+
++ /* Calculate delay value for device */
++ switch (hw->mac.type) {
++ case ixgbe_mac_X540:
++ dv_id = IXGBE_DV_X540(link, tc);
++ break;
++ default:
++ dv_id = IXGBE_DV(link, tc);
++ break;
++ }
++
++ /* Loopback switch introduces additional latency */
++ if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
++ dv_id += IXGBE_B2BT(tc);
++
++ /* Delay value is calculated in bit times convert to KB */
++ kb = IXGBE_BT2KB(dv_id);
++ rx_pba = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(pb)) >> 10;
++
++ marker = rx_pba - kb;
++
++ /* It is possible that the packet buffer is not large enough
++ * to provide required headroom. In this case throw an error
++ * to the user and do the best we can.
++ */
++ if (marker < 0) {
++ e_warn(drv, "Packet Buffer(%i) can not provide enough"
++ "headroom to support flow control."
++ "Decrease MTU or number of traffic classes\n", pb);
++ marker = tc + 1;
++ }
++
++ return marker;
++}
++
++/*
++ * ixgbe_lpbthresh - calculate low water mark for flow control
++ *
++ * @adapter: board private structure to calculate for
++ * @pb - packet buffer to calculate
++ */
++static int ixgbe_lpbthresh(struct ixgbe_adapter *adapter)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ struct net_device *dev = adapter->netdev;
++ int tc;
++ u32 dv_id;
++
++ /* Calculate max LAN frame size */
++ tc = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
++
++ /* Calculate delay value for device */
++ switch (hw->mac.type) {
++ case ixgbe_mac_X540:
++ dv_id = IXGBE_LOW_DV_X540(tc);
++ break;
++ default:
++ dv_id = IXGBE_LOW_DV(tc);
++ break;
++ }
++
++ /* Delay value is calculated in bit times convert to KB */
++ return IXGBE_BT2KB(dv_id);
++}
++
++/*
++ * ixgbe_pbthresh_setup - calculate and setup high low water marks
++ */
++static void ixgbe_pbthresh_setup(struct ixgbe_adapter *adapter)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ int num_tc = netdev_get_num_tc(adapter->netdev);
++ int i;
++
++ if (!num_tc)
++ num_tc = 1;
++
++ hw->fc.low_water = ixgbe_lpbthresh(adapter);
++
++ for (i = 0; i < num_tc; i++) {
++ hw->fc.high_water[i] = ixgbe_hpbthresh(adapter, i);
++
++ /* Low water marks must not be larger than high water marks */
++ if (hw->fc.low_water > hw->fc.high_water[i])
++ hw->fc.low_water = 0;
++ }
++}
++
+ static void ixgbe_configure_pb(struct ixgbe_adapter *adapter)
+ {
+ struct ixgbe_hw *hw = &adapter->hw;
+@@ -3810,6 +3929,7 @@ static void ixgbe_configure_pb(struct ix
+ hdrm = 0;
+
+ hw->mac.ops.set_rxpba(hw, tc, hdrm, PBA_STRATEGY_EQUAL);
++ ixgbe_pbthresh_setup(adapter);
+ }
+
+ static void ixgbe_fdir_filter_restore(struct ixgbe_adapter *adapter)
+@@ -5199,13 +5319,11 @@ static int __devinit ixgbe_sw_init(struc
+ {
+ struct ixgbe_hw *hw = &adapter->hw;
+ struct pci_dev *pdev = adapter->pdev;
+- struct net_device *dev = adapter->netdev;
+ unsigned int rss;
+ #ifdef CONFIG_IXGBE_DCB
+ int j;
+ struct tc_configuration *tc;
+ #endif
+- int max_frame = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
+
+ /* PCI config space info */
+
+@@ -5281,8 +5399,7 @@ static int __devinit ixgbe_sw_init(struc
+ #ifdef CONFIG_DCB
+ adapter->last_lfc_mode = hw->fc.current_mode;
+ #endif
+- hw->fc.high_water = FC_HIGH_WATER(max_frame);
+- hw->fc.low_water = FC_LOW_WATER(max_frame);
++ ixgbe_pbthresh_setup(adapter);
+ hw->fc.pause_time = IXGBE_DEFAULT_FCPAUSE;
+ hw->fc.send_xon = true;
+ hw->fc.disable_fc_autoneg = false;
+@@ -5549,9 +5666,6 @@ static int ixgbe_change_mtu(struct net_d
+ /* must set new MTU before calling down or up */
+ netdev->mtu = new_mtu;
+
+- hw->fc.high_water = FC_HIGH_WATER(max_frame);
+- hw->fc.low_water = FC_LOW_WATER(max_frame);
+-
+ if (netif_running(netdev))
+ ixgbe_reinit_locked(adapter);
+
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -400,6 +400,7 @@
+ #define IXGBE_WUPL_LENGTH_MASK 0xFFFF
+
+ /* DCB registers */
++#define MAX_TRAFFIC_CLASS 8
+ #define IXGBE_RMCS 0x03D00
+ #define IXGBE_DPMCS 0x07F40
+ #define IXGBE_PDPMCS 0x0CD00
+@@ -2273,13 +2274,60 @@ typedef u32 ixgbe_physical_layer;
+ #define IXGBE_PHYSICAL_LAYER_10GBASE_XAUI 0x1000
+ #define IXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA 0x2000
+
+-/* Flow Control Macros */
+-#define PAUSE_RTT 8
+-#define PAUSE_MTU(MTU) ((MTU + 1024 - 1) / 1024)
+-
+-#define FC_HIGH_WATER(MTU) ((((PAUSE_RTT + PAUSE_MTU(MTU)) * 144) + 99) / 100 +\
+- PAUSE_MTU(MTU))
+-#define FC_LOW_WATER(MTU) (2 * (2 * PAUSE_MTU(MTU) + PAUSE_RTT))
++/* Flow Control Data Sheet defined values
++ * Calculation and defines taken from 802.1bb Annex O
++ */
++
++/* BitTimes (BT) conversion */
++#define IXGBE_BT2KB(BT) ((BT + 1023) / (8 * 1024))
++#define IXGBE_B2BT(BT) (BT * 8)
++
++/* Calculate Delay to respond to PFC */
++#define IXGBE_PFC_D 672
++
++/* Calculate Cable Delay */
++#define IXGBE_CABLE_DC 5556 /* Delay Copper */
++#define IXGBE_CABLE_DO 5000 /* Delay Optical */
++
++/* Calculate Interface Delay X540 */
++#define IXGBE_PHY_DC 25600 /* Delay 10G BASET */
++#define IXGBE_MAC_DC 8192 /* Delay Copper XAUI interface */
++#define IXGBE_XAUI_DC (2 * 2048) /* Delay Copper Phy */
++
++#define IXGBE_ID_X540 (IXGBE_MAC_DC + IXGBE_XAUI_DC + IXGBE_PHY_DC)
++
++/* Calculate Interface Delay 82598, 82599 */
++#define IXGBE_PHY_D 12800
++#define IXGBE_MAC_D 4096
++#define IXGBE_XAUI_D (2 * 1024)
++
++#define IXGBE_ID (IXGBE_MAC_D + IXGBE_XAUI_D + IXGBE_PHY_D)
++
++/* Calculate Delay incurred from higher layer */
++#define IXGBE_HD 6144
++
++/* Calculate PCI Bus delay for low thresholds */
++#define IXGBE_PCI_DELAY 10000
++
++/* Calculate X540 delay value in bit times */
++#define IXGBE_FILL_RATE (36 / 25)
++
++#define IXGBE_DV_X540(LINK, TC) (IXGBE_FILL_RATE * \
++ (IXGBE_B2BT(LINK) + IXGBE_PFC_D + \
++ (2 * IXGBE_CABLE_DC) + \
++ (2 * IXGBE_ID_X540) + \
++ IXGBE_HD + IXGBE_B2BT(TC)))
++
++/* Calculate 82599, 82598 delay value in bit times */
++#define IXGBE_DV(LINK, TC) (IXGBE_FILL_RATE * \
++ (IXGBE_B2BT(LINK) + IXGBE_PFC_D + \
++ (2 * IXGBE_CABLE_DC) + (2 * IXGBE_ID) + \
++ IXGBE_HD + IXGBE_B2BT(TC)))
++
++/* Calculate low threshold delay values */
++#define IXGBE_LOW_DV_X540(TC) (2 * IXGBE_B2BT(TC) + \
++ (IXGBE_FILL_RATE * IXGBE_PCI_DELAY))
++#define IXGBE_LOW_DV(TC) (2 * IXGBE_LOW_DV_X540(TC))
+
+ /* Software ATR hash keys */
+ #define IXGBE_ATR_BUCKET_HASH_KEY 0x3DAD14E2
+@@ -2498,7 +2546,7 @@ struct ixgbe_bus_info {
+
+ /* Flow control parameters */
+ struct ixgbe_fc_info {
+- u32 high_water; /* Flow Control High-water */
++ u32 high_water[MAX_TRAFFIC_CLASS]; /* Flow Control High-water */
+ u32 low_water; /* Flow Control Low-water */
+ u16 pause_time; /* Flow Control Pause timer */
+ bool send_xon; /* Flow control send XON */
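
For a feel of the numbers, a standalone sketch that reuses the delay-value
macros added above to compute 82599 high/low water marks for a 1500 byte MTU.
The 512 KB packet buffer size is an assumed example (default single-TC
RXPBSIZE), not something read from hardware, and the FCoE jumbo adjustment is
left out.

#include <stdio.h>

/* macros copied from the ixgbe_type.h hunk above */
#define IXGBE_BT2KB(BT)		((BT + 1023) / (8 * 1024))
#define IXGBE_B2BT(BT)		(BT * 8)
#define IXGBE_PFC_D		672
#define IXGBE_CABLE_DC		5556
#define IXGBE_PHY_D		12800
#define IXGBE_MAC_D		4096
#define IXGBE_XAUI_D		(2 * 1024)
#define IXGBE_ID		(IXGBE_MAC_D + IXGBE_XAUI_D + IXGBE_PHY_D)
#define IXGBE_HD		6144
#define IXGBE_PCI_DELAY		10000
#define IXGBE_FILL_RATE		(36 / 25)

#define IXGBE_DV(LINK, TC)	(IXGBE_FILL_RATE * \
				 (IXGBE_B2BT(LINK) + IXGBE_PFC_D + \
				  (2 * IXGBE_CABLE_DC) + (2 * IXGBE_ID) + \
				  IXGBE_HD + IXGBE_B2BT(TC)))
#define IXGBE_LOW_DV_X540(TC)	(2 * IXGBE_B2BT(TC) + \
				 (IXGBE_FILL_RATE * IXGBE_PCI_DELAY))
#define IXGBE_LOW_DV(TC)	(2 * IXGBE_LOW_DV_X540(TC))

#define IXGBE_ETH_FRAMING	20	/* additional bittime, as in ixgbe_main.c */

int main(void)
{
	int mtu = 1500, rx_pba_kb = 512;	/* assumed example values */
	/* max LAN frame sizes as built in ixgbe_hpbthresh()/ixgbe_lpbthresh();
	 * 14 is ETH_HLEN, 4 is ETH_FCS_LEN; tc equals link here since the
	 * FCoE jumbo adjustment is skipped */
	int link = mtu + 14 + 4 + IXGBE_ETH_FRAMING;
	int tc_frame = mtu + 14 + 4;

	int high = rx_pba_kb - IXGBE_BT2KB(IXGBE_DV(link, link));
	int low = IXGBE_BT2KB(IXGBE_LOW_DV(tc_frame));

	printf("high water: %d KB, low water: %d KB\n", high, low);
	return 0;
}
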
diff --git a/patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch b/patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch
new file mode 100644
index 0000000000..254f87e177
--- /dev/null
+++ b/patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch
@@ -0,0 +1,42 @@
+From 1fc5f0386461dcb3fcbda7a8bcac87f7a822bc56 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Thu, 2 Jun 2011 04:28:39 +0000
+Subject: [PATCH 20/32] ixgbe: Make certain to initialize the fdir_perfect_lock in all cases
+Git-commit: 1fc5f0386461dcb3fcbda7a8bcac87f7a822bc56
+Patch-mainline: v3.1-rc1~316^2~183^2
+References: bnc#714789
+
+This fix makes it so that the fdir_perfect_lock is initialized in all
+cases. This is necessary as the fdir_filter_exit routine will always
+attempt to take the lock before inspecting the filter table.
+
+Reported-by: Ben Greear <greearb@candelatech.com>
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -5350,8 +5350,6 @@ static int __devinit ixgbe_sw_init(struc
+ adapter->flags2 |= IXGBE_FLAG2_RSC_ENABLED;
+ if (hw->device_id == IXGBE_DEV_ID_82599_T3_LOM)
+ adapter->flags2 |= IXGBE_FLAG2_TEMP_SENSOR_CAPABLE;
+- /* n-tuple support exists, always init our spinlock */
+- spin_lock_init(&adapter->fdir_perfect_lock);
+ /* Flow Director hash filters enabled */
+ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+ adapter->atr_sample_rate = 20;
+@@ -5373,6 +5371,9 @@ static int __devinit ixgbe_sw_init(struc
+ break;
+ }
+
++ /* n-tuple support exists, always init our spinlock */
++ spin_lock_init(&adapter->fdir_perfect_lock);
++
+ #ifdef CONFIG_IXGBE_DCB
+ /* Configure DCB traffic classes */
+ for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
diff --git a/patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch b/patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch
new file mode 100644
index 0000000000..9496388736
--- /dev/null
+++ b/patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch
@@ -0,0 +1,109 @@
+From 9372453e87560f1602815bf08b9e2af0f5136a61 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Sat, 18 Jun 2011 07:39:52 +0000
+Subject: [PATCH 21/32] ixgbe: complete FCoE initialization from setapp() routine
+Git-commit: 9372453e87560f1602815bf08b9e2af0f5136a61
+Patch-mainline: v3.1-rc1~316^2~327^2~2
+References: bnc#714789
+
+Commit,
+
+commit c8ca76ebc6e50752c5311b92bb9aef7edb324577
+Author: John Fastabend <john.r.fastabend@intel.com>
+Date: Sat Mar 12 03:50:53 2011 +0000
+
+ ixgbe: DCB, further cleanups to app configuration
+
+Removed the getapp() routines from ixgbe because they are no
+longer needed. It also allowed the set hardware routines to
+use both IEEE 802.1Qaz app types and CEE app types. This
+added code to do bit shifting in the IEEE case.
+
+This patch reverts the checks and handles the IEEE case
+from the setapp entry point. I prefer this because it
+keeps the two paths from having to be aware of the DCB
+mode. This resolves a bug where I missed setting the
+selector bit in the IEEE spec value and left it in the
+CEE value. Now that they are separate routines these types
+of errors should not occur.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 38 +++++++++++++++++++++++++-------------
+ 1 file changed, 25 insertions(+), 13 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -292,24 +292,20 @@ static void ixgbe_dcbnl_get_pfc_cfg(stru
+ static u8 ixgbe_dcbnl_set_all(struct net_device *netdev)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
++ int ret;
++#ifdef IXGBE_FCOE
+ struct dcb_app app = {
+ .selector = DCB_APP_IDTYPE_ETHTYPE,
+ .protocol = ETH_P_FCOE,
+ };
+ u8 up = dcb_getapp(netdev, &app);
+- int ret;
++#endif
+
+ ret = ixgbe_copy_dcb_cfg(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
+ MAX_TRAFFIC_CLASS);
+ if (ret)
+ return DCB_NO_HW_CHG;
+
+- /* In IEEE mode app data must be parsed into DCBX format for
+- * hardware routines.
+- */
+- if (adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE)
+- up = (1 << up);
+-
+ #ifdef IXGBE_FCOE
+ if (up && (up != (1 << adapter->fcoe.up)))
+ adapter->dcb_set_bitmap |= BIT_APP_UPCHG;
+@@ -640,18 +636,34 @@ static int ixgbe_dcbnl_ieee_setapp(struc
+ struct dcb_app *app)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
++ int err = -EINVAL;
+
+ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+- return -EINVAL;
++ return err;
+
+- dcb_setapp(dev, app);
++ err = dcb_ieee_setapp(dev, app);
+
+ #ifdef IXGBE_FCOE
+- if (app->selector == 1 && app->protocol == ETH_P_FCOE &&
+- adapter->fcoe.tc == app->priority)
+- ixgbe_dcbnl_set_all(dev);
++ if (!err && app->selector == IEEE_8021QAZ_APP_SEL_ETHERTYPE &&
++ app->protocol == ETH_P_FCOE) {
++ u8 app_mask = dcb_ieee_getapp_mask(dev, app);
++
++ if (app_mask & (1 << adapter->fcoe.up))
++ return err;
++
++ adapter->fcoe.up = app->priority;
++
++ if (netif_running(dev))
++ dev->netdev_ops->ndo_stop(dev);
++
++ ixgbe_clear_interrupt_scheme(adapter);
++ ixgbe_init_interrupt_scheme(adapter);
++
++ if (netif_running(dev))
++ dev->netdev_ops->ndo_open(dev);
++ }
+ #endif
+- return 0;
++ return err;
+ }
+
+ static u8 ixgbe_dcbnl_getdcbx(struct net_device *dev)
diff --git a/patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch b/patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch
new file mode 100644
index 0000000000..cdbd998f04
--- /dev/null
+++ b/patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch
@@ -0,0 +1,239 @@
+From 4c09f3a0674119504af4e5805b327213055c412f Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Thu, 4 Aug 2011 05:47:07 +0000
+Subject: [PATCH 22/32] ixgbe: DCB, do not call set_state() from IEEE mode
+Git-commit: 4c09f3a0674119504af4e5805b327213055c412f
+Patch-mainline: v3.2-rc1~129^2~221^2~5
+References: bnc#714789
+
+The DCB CEE command set_state() will complete successfully
+but is misleading because it enables IEEE mode. After
+this patch the command is rejected.
+
+IEEE PFC/ETS is now managed from the ieee paths instead
+of through the CEE primitives.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb.c | 36 +++++++++++++++++++++++++-
+ drivers/net/ixgbe/ixgbe_dcb.h | 3 +-
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 54 ++++++++++++++-------------------------
+ drivers/net/ixgbe/ixgbe_main.c | 18 ++++++++-----
+ 4 files changed, 69 insertions(+), 42 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb.c
++++ b/drivers/net/ixgbe/ixgbe_dcb.c
+@@ -40,7 +40,8 @@
+ * hardware. The IEEE 802.1Qaz specification do not use bandwidth
+ * groups so this is much simplified from the CEE case.
+ */
+-s32 ixgbe_ieee_credits(__u8 *bw, __u16 *refill, __u16 *max, int max_frame)
++static s32 ixgbe_ieee_credits(__u8 *bw, __u16 *refill,
++ __u16 *max, int max_frame)
+ {
+ int min_percent = 100;
+ int min_credit, multiplier;
+@@ -291,6 +292,39 @@ s32 ixgbe_dcb_hw_pfc_config(struct ixgbe
+ return ret;
+ }
+
++s32 ixgbe_dcb_hw_ets(struct ixgbe_hw *hw, struct ieee_ets *ets, int max_frame)
++{
++ __u16 refill[IEEE_8021QAZ_MAX_TCS], max[IEEE_8021QAZ_MAX_TCS];
++ __u8 prio_type[IEEE_8021QAZ_MAX_TCS];
++ int i;
++
++ /* naively give each TC a bwg to map onto CEE hardware */
++ __u8 bwg_id[IEEE_8021QAZ_MAX_TCS] = {0, 1, 2, 3, 4, 5, 6, 7};
++
++ /* Map TSA onto CEE prio type */
++ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
++ switch (ets->tc_tsa[i]) {
++ case IEEE_8021QAZ_TSA_STRICT:
++ prio_type[i] = 2;
++ break;
++ case IEEE_8021QAZ_TSA_ETS:
++ prio_type[i] = 0;
++ break;
++ default:
++ /* Hardware only supports priority strict or
++ * ETS transmission selection algorithms if
++ * we receive some other value from dcbnl
++ * throw an error
++ */
++ return -EINVAL;
++ }
++ }
++
++ ixgbe_ieee_credits(ets->tc_tx_bw, refill, max, max_frame);
++ return ixgbe_dcb_hw_ets_config(hw, refill, max,
++ bwg_id, prio_type, ets->prio_tc);
++}
++
+ s32 ixgbe_dcb_hw_ets_config(struct ixgbe_hw *hw,
+ u16 *refill, u16 *max, u8 *bwg_id,
+ u8 *prio_type, u8 *prio_tc)
+--- a/drivers/net/ixgbe/ixgbe_dcb.h
++++ b/drivers/net/ixgbe/ixgbe_dcb.h
+@@ -29,6 +29,7 @@
+ #ifndef _DCB_CONFIG_H_
+ #define _DCB_CONFIG_H_
+
++#include <linux/dcbnl.h>
+ #include "ixgbe_type.h"
+
+ /* DCB data structures */
+@@ -146,11 +147,11 @@ void ixgbe_dcb_unpack_bwgid(struct ixgbe
+ void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *, int, u8 *);
+
+ /* DCB credits calculation */
+-s32 ixgbe_ieee_credits(__u8 *bw, __u16 *refill, __u16 *max, int max_frame);
+ s32 ixgbe_dcb_calculate_tc_credits(struct ixgbe_hw *,
+ struct ixgbe_dcb_config *, int, u8);
+
+ /* DCB hw initialization */
++s32 ixgbe_dcb_hw_ets(struct ixgbe_hw *hw, struct ieee_ets *ets, int max);
+ s32 ixgbe_dcb_hw_ets_config(struct ixgbe_hw *hw, u16 *refill, u16 *max,
+ u8 *bwg_id, u8 *prio_type, u8 *tc_prio);
+ s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en);
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -114,6 +114,10 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ u8 err = 0;
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
++ /* Fail command if not in CEE mode */
++ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
++ return 1;
++
+ /* verify there is something to do, if not then exit */
+ if (!!state != !(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ return err;
+@@ -301,6 +305,10 @@ static u8 ixgbe_dcbnl_set_all(struct net
+ u8 up = dcb_getapp(netdev, &app);
+ #endif
+
++ /* Fail command if not in CEE mode */
++ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
++ return 1;
++
+ ret = ixgbe_copy_dcb_cfg(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
+ MAX_TRAFFIC_CLASS);
+ if (ret)
+@@ -537,13 +545,9 @@ static int ixgbe_dcbnl_ieee_setets(struc
+ struct ieee_ets *ets)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+- __u16 refill[IEEE_8021QAZ_MAX_TCS], max[IEEE_8021QAZ_MAX_TCS];
+- __u8 prio_type[IEEE_8021QAZ_MAX_TCS];
+ int max_frame = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
+- int i, err;
+- __u64 *p = (__u64 *) ets->prio_tc;
+- /* naively give each TC a bwg to map onto CEE hardware */
+- __u8 bwg_id[IEEE_8021QAZ_MAX_TCS] = {0, 1, 2, 3, 4, 5, 6, 7};
++ int i;
++ __u8 max_tc = 0;
+
+ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ return -EINVAL;
+@@ -557,34 +561,18 @@ static int ixgbe_dcbnl_ieee_setets(struc
+
+ memcpy(adapter->ixgbe_ieee_ets, ets, sizeof(*adapter->ixgbe_ieee_ets));
+
+- /* Map TSA onto CEE prio type */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+- switch (ets->tc_tsa[i]) {
+- case IEEE_8021QAZ_TSA_STRICT:
+- prio_type[i] = 2;
+- break;
+- case IEEE_8021QAZ_TSA_ETS:
+- prio_type[i] = 0;
+- break;
+- default:
+- /* Hardware only supports priority strict or
+- * ETS transmission selection algorithms if
+- * we receive some other value from dcbnl
+- * throw an error
+- */
+- return -EINVAL;
+- }
++ if (ets->prio_tc[i] > max_tc)
++ max_tc = ets->prio_tc[i];
+ }
+
+- if (*p)
+- ixgbe_dcbnl_set_state(dev, 1);
+- else
+- ixgbe_dcbnl_set_state(dev, 0);
++ if (max_tc)
++ max_tc++;
+
+- ixgbe_ieee_credits(ets->tc_tx_bw, refill, max, max_frame);
+- err = ixgbe_dcb_hw_ets_config(&adapter->hw, refill, max,
+- bwg_id, prio_type, ets->prio_tc);
+- return err;
++ if (max_tc != netdev_get_num_tc(dev))
++ ixgbe_setup_tc(dev, max_tc);
++
++ return ixgbe_dcb_hw_ets(&adapter->hw, ets, max_frame);
+ }
+
+ static int ixgbe_dcbnl_ieee_getpfc(struct net_device *dev,
+@@ -615,7 +603,6 @@ static int ixgbe_dcbnl_ieee_setpfc(struc
+ struct ieee_pfc *pfc)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+- int err;
+
+ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ return -EINVAL;
+@@ -628,8 +615,7 @@ static int ixgbe_dcbnl_ieee_setpfc(struc
+ }
+
+ memcpy(adapter->ixgbe_ieee_pfc, pfc, sizeof(*adapter->ixgbe_ieee_pfc));
+- err = ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en);
+- return err;
++ return ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en);
+ }
+
+ static int ixgbe_dcbnl_ieee_setapp(struct net_device *dev,
+@@ -705,7 +691,7 @@ static u8 ixgbe_dcbnl_setdcbx(struct net
+ */
+ ixgbe_dcbnl_ieee_setets(dev, &ets);
+ ixgbe_dcbnl_ieee_setpfc(dev, &pfc);
+- ixgbe_dcbnl_set_state(dev, 0);
++ ixgbe_setup_tc(dev, 0);
+ }
+
+ return 0;
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3769,12 +3769,18 @@ static void ixgbe_configure_dcb(struct i
+ } else {
+ struct net_device *dev = adapter->netdev;
+
+- if (adapter->ixgbe_ieee_ets)
+- dev->dcbnl_ops->ieee_setets(dev,
+- adapter->ixgbe_ieee_ets);
+- if (adapter->ixgbe_ieee_pfc)
+- dev->dcbnl_ops->ieee_setpfc(dev,
+- adapter->ixgbe_ieee_pfc);
++ if (adapter->ixgbe_ieee_ets) {
++ struct ieee_ets *ets = adapter->ixgbe_ieee_ets;
++ int max_frame = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
++
++ ixgbe_dcb_hw_ets(&adapter->hw, ets, max_frame);
++ }
++
++ if (adapter->ixgbe_ieee_pfc) {
++ struct ieee_pfc *pfc = adapter->ixgbe_ieee_pfc;
++
++ ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en);
++ }
+ }
+
+ /* Enable RSS Hash per TC */
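
A small standalone sketch (example prio_tc map assumed) of how the new
ieee_setets path above derives the traffic class count handed to
ixgbe_setup_tc() from the IEEE priority-to-TC map:

#include <stdio.h>

#define IEEE_8021QAZ_MAX_TCS	8

int main(void)
{
	/* example: priorities 0-3 -> TC0, 4-5 -> TC1, 6 -> TC2, 7 -> TC3 */
	unsigned char prio_tc[IEEE_8021QAZ_MAX_TCS] = { 0, 0, 0, 0, 1, 1, 2, 3 };
	unsigned char max_tc = 0;
	int i;

	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
		if (prio_tc[i] > max_tc)
			max_tc = prio_tc[i];
	if (max_tc)
		max_tc++;	/* highest TC index becomes a TC count */

	printf("ixgbe_setup_tc() would be asked for %d traffic classes\n", max_tc);
	return 0;
}
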
diff --git a/patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch b/patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch
new file mode 100644
index 0000000000..205b569f07
--- /dev/null
+++ b/patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch
@@ -0,0 +1,447 @@
+From 08c8833b29cfa4343ff132eebc5648b234eb3f85 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Sat, 11 Jun 2011 01:45:03 +0000
+Subject: [PATCH 23/32] ixgbe: add structure for containing RX/TX rings to q_vector
+Git-commit: 08c8833b29cfa4343ff132eebc5648b234eb3f85
+Patch-mainline: v3.1-rc1~316^2~4^2~5
+References: bnc#714789
+
+This patch adds support for a ring container structure to be used within
+the q_vector. The basic idea is to provide a means of separating the RX
+and TX rings while maintaining a common structure for their containment.
+The advantage to this is that later we should be able to pass this
+structure to the update_itr functions without needing to pass individual
+rings.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 16 ++--
+ drivers/net/ixgbe/ixgbe_ethtool.c | 6 -
+ drivers/net/ixgbe/ixgbe_main.c | 150 +++++++++++++++++++-------------------
+ 3 files changed, 88 insertions(+), 84 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -270,6 +270,15 @@ struct ixgbe_ring_feature {
+ int mask;
+ } ____cacheline_internodealigned_in_smp;
+
++struct ixgbe_ring_container {
++#if MAX_RX_QUEUES > MAX_TX_QUEUES
++ DECLARE_BITMAP(idx, MAX_RX_QUEUES);
++#else
++ DECLARE_BITMAP(idx, MAX_TX_QUEUES);
++#endif
++ u8 count; /* total number of rings in vector */
++ u8 itr; /* current ITR setting for ring */
++};
+
+ #define MAX_RX_PACKET_BUFFERS ((adapter->flags & IXGBE_FLAG_DCB_ENABLED) \
+ ? 8 : 1)
+@@ -287,12 +296,7 @@ struct ixgbe_q_vector {
+ int cpu; /* CPU for DCA */
+ #endif
+ struct napi_struct napi;
+- DECLARE_BITMAP(rxr_idx, MAX_RX_QUEUES); /* Rx ring indices */
+- DECLARE_BITMAP(txr_idx, MAX_TX_QUEUES); /* Tx ring indices */
+- u8 rxr_count; /* Rx ring count assigned to this vector */
+- u8 txr_count; /* Tx ring count assigned to this vector */
+- u8 tx_itr;
+- u8 rx_itr;
++ struct ixgbe_ring_container rx, tx;
+ u32 eitr;
+ cpumask_var_t affinity_mask;
+ char name[IFNAMSIZ + 9];
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2087,7 +2087,7 @@ static int ixgbe_get_coalesce(struct net
+ }
+
+ /* if in mixed tx/rx queues per vector mode, report only rx settings */
+- if (adapter->q_vector[0]->txr_count && adapter->q_vector[0]->rxr_count)
++ if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
+ return 0;
+
+ /* only valid if in constant ITR mode */
+@@ -2152,7 +2152,7 @@ static int ixgbe_set_coalesce(struct net
+ bool need_reset = false;
+
+ /* don't accept tx specific changes if we've got mixed RxTx vectors */
+- if (adapter->q_vector[0]->txr_count && adapter->q_vector[0]->rxr_count
++ if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count
+ && ec->tx_coalesce_usecs)
+ return -EINVAL;
+
+@@ -2226,7 +2226,7 @@ static int ixgbe_set_coalesce(struct net
+ int num_vectors = adapter->num_msix_vectors - NON_Q_VECTORS;
+ for (i = 0; i < num_vectors; i++) {
+ q_vector = adapter->q_vector[i];
+- if (q_vector->txr_count && !q_vector->rxr_count)
++ if (q_vector->tx.count && !q_vector->rx.count)
+ /* tx only */
+ q_vector->eitr = adapter->tx_eitr_param;
+ else
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -1028,17 +1028,17 @@ static void ixgbe_update_dca(struct ixgb
+ if (q_vector->cpu == cpu)
+ goto out_no_update;
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->txr_count; i++) {
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
++ for (i = 0; i < q_vector->tx.count; i++) {
+ ixgbe_update_tx_dca(adapter, adapter->tx_ring[r_idx], cpu);
+- r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues,
++ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
++ for (i = 0; i < q_vector->rx.count; i++) {
+ ixgbe_update_rx_dca(adapter, adapter->rx_ring[r_idx], cpu);
+- r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
++ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+
+@@ -1588,31 +1588,31 @@ static void ixgbe_configure_msix(struct
+ for (v_idx = 0; v_idx < q_vectors; v_idx++) {
+ q_vector = adapter->q_vector[v_idx];
+ /* XXX for_each_set_bit(...) */
+- r_idx = find_first_bit(q_vector->rxr_idx,
++ r_idx = find_first_bit(q_vector->rx.idx,
+ adapter->num_rx_queues);
+
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ for (i = 0; i < q_vector->rx.count; i++) {
+ u8 reg_idx = adapter->rx_ring[r_idx]->reg_idx;
+ ixgbe_set_ivar(adapter, 0, reg_idx, v_idx);
+- r_idx = find_next_bit(q_vector->rxr_idx,
++ r_idx = find_next_bit(q_vector->rx.idx,
+ adapter->num_rx_queues,
+ r_idx + 1);
+ }
+- r_idx = find_first_bit(q_vector->txr_idx,
++ r_idx = find_first_bit(q_vector->tx.idx,
+ adapter->num_tx_queues);
+
+- for (i = 0; i < q_vector->txr_count; i++) {
++ for (i = 0; i < q_vector->tx.count; i++) {
+ u8 reg_idx = adapter->tx_ring[r_idx]->reg_idx;
+ ixgbe_set_ivar(adapter, 1, reg_idx, v_idx);
+- r_idx = find_next_bit(q_vector->txr_idx,
++ r_idx = find_next_bit(q_vector->tx.idx,
+ adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+- if (q_vector->txr_count && !q_vector->rxr_count)
++ if (q_vector->tx.count && !q_vector->rx.count)
+ /* tx only */
+ q_vector->eitr = adapter->tx_eitr_param;
+- else if (q_vector->rxr_count)
++ else if (q_vector->rx.count)
+ /* rx or mixed */
+ q_vector->eitr = adapter->rx_eitr_param;
+
+@@ -1776,37 +1776,37 @@ static void ixgbe_set_itr_msix(struct ix
+ u32 new_itr;
+ u8 current_itr, ret_itr;
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->txr_count; i++) {
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
++ for (i = 0; i < q_vector->tx.count; i++) {
+ struct ixgbe_ring *tx_ring = adapter->tx_ring[r_idx];
+ ret_itr = ixgbe_update_itr(adapter, q_vector->eitr,
+- q_vector->tx_itr,
++ q_vector->tx.itr,
+ tx_ring->total_packets,
+ tx_ring->total_bytes);
+ /* if the result for this queue would decrease interrupt
+ * rate for this vector then use that result */
+- q_vector->tx_itr = ((q_vector->tx_itr > ret_itr) ?
+- q_vector->tx_itr - 1 : ret_itr);
+- r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues,
++ q_vector->tx.itr = ((q_vector->tx.itr > ret_itr) ?
++ q_vector->tx.itr - 1 : ret_itr);
++ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
++ for (i = 0; i < q_vector->rx.count; i++) {
+ struct ixgbe_ring *rx_ring = adapter->rx_ring[r_idx];
+ ret_itr = ixgbe_update_itr(adapter, q_vector->eitr,
+- q_vector->rx_itr,
++ q_vector->rx.itr,
+ rx_ring->total_packets,
+ rx_ring->total_bytes);
+ /* if the result for this queue would decrease interrupt
+ * rate for this vector then use that result */
+- q_vector->rx_itr = ((q_vector->rx_itr > ret_itr) ?
+- q_vector->rx_itr - 1 : ret_itr);
+- r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
++ q_vector->rx.itr = ((q_vector->rx.itr > ret_itr) ?
++ q_vector->rx.itr - 1 : ret_itr);
++ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+
+- current_itr = max(q_vector->rx_itr, q_vector->tx_itr);
++ current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
+
+ switch (current_itr) {
+ /* counts and packets in update_itr are dependent on these numbers */
+@@ -2067,15 +2067,15 @@ static irqreturn_t ixgbe_msix_clean_tx(i
+ struct ixgbe_ring *tx_ring;
+ int i, r_idx;
+
+- if (!q_vector->txr_count)
++ if (!q_vector->tx.count)
+ return IRQ_HANDLED;
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->txr_count; i++) {
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
++ for (i = 0; i < q_vector->tx.count; i++) {
+ tx_ring = adapter->tx_ring[r_idx];
+ tx_ring->total_bytes = 0;
+ tx_ring->total_packets = 0;
+- r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues,
++ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+@@ -2103,16 +2103,16 @@ static irqreturn_t ixgbe_msix_clean_rx(i
+ ixgbe_update_dca(q_vector);
+ #endif
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
++ for (i = 0; i < q_vector->rx.count; i++) {
+ rx_ring = adapter->rx_ring[r_idx];
+ rx_ring->total_bytes = 0;
+ rx_ring->total_packets = 0;
+- r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
++ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+
+- if (!q_vector->rxr_count)
++ if (!q_vector->rx.count)
+ return IRQ_HANDLED;
+
+ /* EIAM disabled interrupts (on this vector) for us */
+@@ -2129,24 +2129,24 @@ static irqreturn_t ixgbe_msix_clean_many
+ int r_idx;
+ int i;
+
+- if (!q_vector->txr_count && !q_vector->rxr_count)
++ if (!q_vector->tx.count && !q_vector->rx.count)
+ return IRQ_HANDLED;
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->txr_count; i++) {
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
++ for (i = 0; i < q_vector->tx.count; i++) {
+ ring = adapter->tx_ring[r_idx];
+ ring->total_bytes = 0;
+ ring->total_packets = 0;
+- r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues,
++ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
++ for (i = 0; i < q_vector->rx.count; i++) {
+ ring = adapter->rx_ring[r_idx];
+ ring->total_bytes = 0;
+ ring->total_packets = 0;
+- r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
++ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+
+@@ -2178,7 +2178,7 @@ static int ixgbe_clean_rxonly(struct nap
+ ixgbe_update_dca(q_vector);
+ #endif
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
+ rx_ring = adapter->rx_ring[r_idx];
+
+ ixgbe_clean_rx_irq(q_vector, rx_ring, &work_done, budget);
+@@ -2219,27 +2219,27 @@ static int ixgbe_clean_rxtx_many(struct
+ ixgbe_update_dca(q_vector);
+ #endif
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->txr_count; i++) {
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
++ for (i = 0; i < q_vector->tx.count; i++) {
+ ring = adapter->tx_ring[r_idx];
+ tx_clean_complete &= ixgbe_clean_tx_irq(q_vector, ring);
+- r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues,
++ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+
+ /* attempt to distribute budget to each queue fairly, but don't allow
+ * the budget to go below 1 because we'll exit polling */
+- budget /= (q_vector->rxr_count ?: 1);
++ budget /= (q_vector->rx.count ?: 1);
+ budget = max(budget, 1);
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rxr_count; i++) {
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
++ for (i = 0; i < q_vector->rx.count; i++) {
+ ring = adapter->rx_ring[r_idx];
+ ixgbe_clean_rx_irq(q_vector, ring, &work_done, budget);
+- r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
++ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+
+- r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
++ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
+ ring = adapter->rx_ring[r_idx];
+ /* If all Rx work done, exit the polling mode */
+ if (work_done < budget) {
+@@ -2277,7 +2277,7 @@ static int ixgbe_clean_txonly(struct nap
+ ixgbe_update_dca(q_vector);
+ #endif
+
+- r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
++ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
+ tx_ring = adapter->tx_ring[r_idx];
+
+ if (!ixgbe_clean_tx_irq(q_vector, tx_ring))
+@@ -2302,8 +2302,8 @@ static inline void map_vector_to_rxq(str
+ struct ixgbe_q_vector *q_vector = a->q_vector[v_idx];
+ struct ixgbe_ring *rx_ring = a->rx_ring[r_idx];
+
+- set_bit(r_idx, q_vector->rxr_idx);
+- q_vector->rxr_count++;
++ set_bit(r_idx, q_vector->rx.idx);
++ q_vector->rx.count++;
+ rx_ring->q_vector = q_vector;
+ }
+
+@@ -2313,8 +2313,8 @@ static inline void map_vector_to_txq(str
+ struct ixgbe_q_vector *q_vector = a->q_vector[v_idx];
+ struct ixgbe_ring *tx_ring = a->tx_ring[t_idx];
+
+- set_bit(t_idx, q_vector->txr_idx);
+- q_vector->txr_count++;
++ set_bit(t_idx, q_vector->tx.idx);
++ q_vector->tx.count++;
+ tx_ring->q_vector = q_vector;
+ }
+
+@@ -2405,10 +2405,10 @@ static int ixgbe_request_msix_irqs(struc
+ if (err)
+ return err;
+
+-#define SET_HANDLER(_v) (((_v)->rxr_count && (_v)->txr_count) \
++#define SET_HANDLER(_v) (((_v)->rx.count && (_v)->tx.count) \
+ ? &ixgbe_msix_clean_many : \
+- (_v)->rxr_count ? &ixgbe_msix_clean_rx : \
+- (_v)->txr_count ? &ixgbe_msix_clean_tx : \
++ (_v)->rx.count ? &ixgbe_msix_clean_rx : \
++ (_v)->tx.count ? &ixgbe_msix_clean_tx : \
+ NULL)
+ for (vector = 0; vector < q_vectors; vector++) {
+ struct ixgbe_q_vector *q_vector = adapter->q_vector[vector];
+@@ -2472,16 +2472,16 @@ static void ixgbe_set_itr(struct ixgbe_a
+ u32 new_itr = q_vector->eitr;
+ u8 current_itr;
+
+- q_vector->tx_itr = ixgbe_update_itr(adapter, new_itr,
+- q_vector->tx_itr,
++ q_vector->tx.itr = ixgbe_update_itr(adapter, new_itr,
++ q_vector->tx.itr,
+ tx_ring->total_packets,
+ tx_ring->total_bytes);
+- q_vector->rx_itr = ixgbe_update_itr(adapter, new_itr,
+- q_vector->rx_itr,
++ q_vector->rx.itr = ixgbe_update_itr(adapter, new_itr,
++ q_vector->rx.itr,
+ rx_ring->total_packets,
+ rx_ring->total_bytes);
+
+- current_itr = max(q_vector->rx_itr, q_vector->tx_itr);
++ current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
+
+ switch (current_itr) {
+ /* counts and packets in update_itr are dependent on these numbers */
+@@ -2632,10 +2632,10 @@ static inline void ixgbe_reset_q_vectors
+
+ for (i = 0; i < q_vectors; i++) {
+ struct ixgbe_q_vector *q_vector = adapter->q_vector[i];
+- bitmap_zero(q_vector->rxr_idx, MAX_RX_QUEUES);
+- bitmap_zero(q_vector->txr_idx, MAX_TX_QUEUES);
+- q_vector->rxr_count = 0;
+- q_vector->txr_count = 0;
++ bitmap_zero(q_vector->rx.idx, MAX_RX_QUEUES);
++ bitmap_zero(q_vector->tx.idx, MAX_TX_QUEUES);
++ q_vector->rx.count = 0;
++ q_vector->tx.count = 0;
+ }
+ }
+
+@@ -2687,8 +2687,8 @@ static void ixgbe_free_irq(struct ixgbe_
+ i--;
+ for (; i >= 0; i--) {
+ /* free only the irqs that were actually requested */
+- if (!adapter->q_vector[i]->rxr_count &&
+- !adapter->q_vector[i]->txr_count)
++ if (!adapter->q_vector[i]->rx.count &&
++ !adapter->q_vector[i]->tx.count)
+ continue;
+
+ free_irq(adapter->msix_entries[i].vector,
+@@ -3698,10 +3698,10 @@ static void ixgbe_napi_enable_all(struct
+ q_vector = adapter->q_vector[q_idx];
+ napi = &q_vector->napi;
+ if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) {
+- if (!q_vector->rxr_count || !q_vector->txr_count) {
+- if (q_vector->txr_count == 1)
++ if (!q_vector->rx.count || !q_vector->tx.count) {
++ if (q_vector->tx.count == 1)
+ napi->poll = &ixgbe_clean_txonly;
+- else if (q_vector->rxr_count == 1)
++ else if (q_vector->rx.count == 1)
+ napi->poll = &ixgbe_clean_rxonly;
+ }
+ }
+@@ -5168,7 +5168,7 @@ static int ixgbe_alloc_q_vectors(struct
+ if (!q_vector)
+ goto err_out;
+ q_vector->adapter = adapter;
+- if (q_vector->txr_count && !q_vector->rxr_count)
++ if (q_vector->tx.count && !q_vector->rx.count)
+ q_vector->eitr = adapter->tx_eitr_param;
+ else
+ q_vector->eitr = adapter->rx_eitr_param;
+@@ -6177,7 +6177,7 @@ static void ixgbe_check_hang_subtask(str
+ /* get one bit for every active tx/rx interrupt vector */
+ for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) {
+ struct ixgbe_q_vector *qv = adapter->q_vector[i];
+- if (qv->rxr_count || qv->txr_count)
++ if (qv->rx.count || qv->tx.count)
+ eics |= ((u64)1 << i);
+ }
+ }
diff --git a/patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch b/patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch
new file mode 100644
index 0000000000..610223e322
--- /dev/null
+++ b/patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch
@@ -0,0 +1,479 @@
+From bd19805803a954415ec36a559fd3b8a0a3647d7c Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Sat, 11 Jun 2011 01:45:08 +0000
+Subject: [PATCH 24/32] ixgbe: Move interrupt related values out of ring and into q_vector
+Git-commit: bd19805803a954415ec36a559fd3b8a0a3647d7c
+Patch-mainline: v3.1-rc1~316^2~4^2~4
+References: bnc#714789
+
+This change moves work_limit, total_packets, and total_bytes into the ring
+container struct of the q_vector. The advantage of this is that it should
+reduce the amount of memory used when multiple rings are assigned to a
+single q_vector. In addition it should help to reduce the total workload
+for calculating itr, since total_packets and total_bytes now reflect the
+total work done by the interrupt rather than per ring.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 24 ++---
+ drivers/net/ixgbe/ixgbe_ethtool.c | 6 -
+ drivers/net/ixgbe/ixgbe_main.c | 170 ++++++++++----------------------------
+ 3 files changed, 62 insertions(+), 138 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -207,12 +207,10 @@ struct ixgbe_ring {
+ struct ixgbe_rx_buffer *rx_buffer_info;
+ };
+ unsigned long state;
+- u8 atr_sample_rate;
+- u8 atr_count;
++ u8 __iomem *tail;
++
+ u16 count; /* amount of descriptors */
+ u16 rx_buf_len;
+- u16 next_to_use;
+- u16 next_to_clean;
+
+ u8 queue_index; /* needed for multiqueue queue management */
+ u8 reg_idx; /* holds the special value that gets
+@@ -220,15 +218,13 @@ struct ixgbe_ring {
+ * associated with this ring, which is
+ * different for DCB and RSS modes
+ */
+- u8 dcb_tc;
+-
+- u16 work_limit; /* max work per interrupt */
+-
+- u8 __iomem *tail;
++ u8 atr_sample_rate;
++ u8 atr_count;
+
+- unsigned int total_bytes;
+- unsigned int total_packets;
++ u16 next_to_use;
++ u16 next_to_clean;
+
++ u8 dcb_tc;
+ struct ixgbe_queue_stats stats;
+ struct u64_stats_sync syncp;
+ union {
+@@ -276,6 +272,9 @@ struct ixgbe_ring_container {
+ #else
+ DECLARE_BITMAP(idx, MAX_TX_QUEUES);
+ #endif
++ unsigned int total_bytes; /* total bytes processed this int */
++ unsigned int total_packets; /* total packets processed this int */
++ u16 work_limit; /* total work allowed per interrupt */
+ u8 count; /* total number of rings in vector */
+ u8 itr; /* current ITR setting for ring */
+ };
+@@ -406,6 +405,9 @@ struct ixgbe_adapter {
+ u16 eitr_low;
+ u16 eitr_high;
+
++ /* Work limits */
++ u16 tx_work_limit;
++
+ /* TX */
+ struct ixgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
+ int num_tx_queues;
+--- a/drivers/net/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ixgbe/ixgbe_ethtool.c
+@@ -2068,7 +2068,7 @@ static int ixgbe_get_coalesce(struct net
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
+- ec->tx_max_coalesced_frames_irq = adapter->tx_ring[0]->work_limit;
++ ec->tx_max_coalesced_frames_irq = adapter->tx_work_limit;
+
+ /* only valid if in constant ITR mode */
+ switch (adapter->rx_itr_setting) {
+@@ -2157,7 +2157,7 @@ static int ixgbe_set_coalesce(struct net
+ return -EINVAL;
+
+ if (ec->tx_max_coalesced_frames_irq)
+- adapter->tx_ring[0]->work_limit = ec->tx_max_coalesced_frames_irq;
++ adapter->tx_work_limit = ec->tx_max_coalesced_frames_irq;
+
+ if (ec->rx_coalesce_usecs > 1) {
+ /* check the limits */
+@@ -2232,12 +2232,14 @@ static int ixgbe_set_coalesce(struct net
+ else
+ /* rx only or mixed */
+ q_vector->eitr = adapter->rx_eitr_param;
++ q_vector->tx.work_limit = adapter->tx_work_limit;
+ ixgbe_write_eitr(q_vector);
+ }
+ /* Legacy Interrupt Mode */
+ } else {
+ q_vector = adapter->q_vector[0];
+ q_vector->eitr = adapter->rx_eitr_param;
++ q_vector->tx.work_limit = adapter->tx_work_limit;
+ ixgbe_write_eitr(q_vector);
+ }
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -874,7 +874,7 @@ static bool ixgbe_clean_tx_irq(struct ix
+ eop_desc = IXGBE_TX_DESC_ADV(tx_ring, eop);
+
+ while ((eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)) &&
+- (count < tx_ring->work_limit)) {
++ (count < q_vector->tx.work_limit)) {
+ bool cleaned = false;
+ rmb(); /* read buffer_info after eop_desc */
+ for ( ; !cleaned; count++) {
+@@ -903,11 +903,11 @@ static bool ixgbe_clean_tx_irq(struct ix
+ }
+
+ tx_ring->next_to_clean = i;
+- tx_ring->total_bytes += total_bytes;
+- tx_ring->total_packets += total_packets;
+- u64_stats_update_begin(&tx_ring->syncp);
+- tx_ring->stats.packets += total_packets;
+ tx_ring->stats.bytes += total_bytes;
++ tx_ring->stats.packets += total_packets;
++ u64_stats_update_begin(&tx_ring->syncp);
++ q_vector->tx.total_bytes += total_bytes;
++ q_vector->tx.total_packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
+
+ if (check_for_tx_hang(tx_ring) && ixgbe_check_tx_hang(tx_ring)) {
+@@ -955,7 +955,7 @@ static bool ixgbe_clean_tx_irq(struct ix
+ }
+ }
+
+- return count < tx_ring->work_limit;
++ return count < q_vector->tx.work_limit;
+ }
+
+ #ifdef CONFIG_IXGBE_DCA
+@@ -1557,12 +1557,12 @@ next_desc:
+ }
+ #endif /* IXGBE_FCOE */
+
+- rx_ring->total_packets += total_rx_packets;
+- rx_ring->total_bytes += total_rx_bytes;
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_rx_packets;
+ rx_ring->stats.bytes += total_rx_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
++ q_vector->rx.total_packets += total_rx_packets;
++ q_vector->rx.total_bytes += total_rx_bytes;
+ }
+
+ static int ixgbe_clean_rxonly(struct napi_struct *, int);
+@@ -1668,11 +1668,8 @@ enum latency_range {
+
+ /**
+ * ixgbe_update_itr - update the dynamic ITR value based on statistics
+- * @adapter: pointer to adapter
+- * @eitr: eitr setting (ints per sec) to give last timeslice
+- * @itr_setting: current throttle rate in ints/second
+- * @packets: the number of packets during this measurement interval
+- * @bytes: the number of bytes during this measurement interval
++ * @q_vector: structure containing interrupt and ring information
++ * @ring_container: structure containing ring performance data
+ *
+ * Stores a new ITR value based on packets and byte
+ * counts during the last interrupt. The advantage of per interrupt
+@@ -1684,17 +1681,18 @@ enum latency_range {
+ * this functionality is controlled by the InterruptThrottleRate module
+ * parameter (see ixgbe_param.c)
+ **/
+-static u8 ixgbe_update_itr(struct ixgbe_adapter *adapter,
+- u32 eitr, u8 itr_setting,
+- int packets, int bytes)
++static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
++ struct ixgbe_ring_container *ring_container)
+ {
+- unsigned int retval = itr_setting;
+- u32 timepassed_us;
+ u64 bytes_perint;
++ struct ixgbe_adapter *adapter = q_vector->adapter;
++ int bytes = ring_container->total_bytes;
++ int packets = ring_container->total_packets;
++ u32 timepassed_us;
++ u8 itr_setting = ring_container->itr;
+
+ if (packets == 0)
+- goto update_itr_done;
+-
++ return;
+
+ /* simple throttlerate management
+ * 0-20MB/s lowest (100000 ints/s)
+@@ -1702,28 +1700,32 @@ static u8 ixgbe_update_itr(struct ixgbe_
+ * 100-1249MB/s bulk (8000 ints/s)
+ */
+ /* what was last interrupt timeslice? */
+- timepassed_us = 1000000/eitr;
++ timepassed_us = 1000000/q_vector->eitr;
+ bytes_perint = bytes / timepassed_us; /* bytes/usec */
+
+ switch (itr_setting) {
+ case lowest_latency:
+ if (bytes_perint > adapter->eitr_low)
+- retval = low_latency;
++ itr_setting = low_latency;
+ break;
+ case low_latency:
+ if (bytes_perint > adapter->eitr_high)
+- retval = bulk_latency;
++ itr_setting = bulk_latency;
+ else if (bytes_perint <= adapter->eitr_low)
+- retval = lowest_latency;
++ itr_setting = lowest_latency;
+ break;
+ case bulk_latency:
+ if (bytes_perint <= adapter->eitr_high)
+- retval = low_latency;
++ itr_setting = low_latency;
+ break;
+ }
+
+-update_itr_done:
+- return retval;
++ /* clear work counters since we have the values we need */
++ ring_container->total_bytes = 0;
++ ring_container->total_packets = 0;
++
++ /* write updated itr to ring container */
++ ring_container->itr = itr_setting;
+ }
+
+ /**
+@@ -1769,42 +1771,13 @@ void ixgbe_write_eitr(struct ixgbe_q_vec
+ IXGBE_WRITE_REG(hw, IXGBE_EITR(v_idx), itr_reg);
+ }
+
+-static void ixgbe_set_itr_msix(struct ixgbe_q_vector *q_vector)
++static void ixgbe_set_itr(struct ixgbe_q_vector *q_vector)
+ {
+- struct ixgbe_adapter *adapter = q_vector->adapter;
+- int i, r_idx;
+- u32 new_itr;
+- u8 current_itr, ret_itr;
+-
+- r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
+- for (i = 0; i < q_vector->tx.count; i++) {
+- struct ixgbe_ring *tx_ring = adapter->tx_ring[r_idx];
+- ret_itr = ixgbe_update_itr(adapter, q_vector->eitr,
+- q_vector->tx.itr,
+- tx_ring->total_packets,
+- tx_ring->total_bytes);
+- /* if the result for this queue would decrease interrupt
+- * rate for this vector then use that result */
+- q_vector->tx.itr = ((q_vector->tx.itr > ret_itr) ?
+- q_vector->tx.itr - 1 : ret_itr);
+- r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+- r_idx + 1);
+- }
++ u32 new_itr = q_vector->eitr;
++ u8 current_itr;
+
+- r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
+- for (i = 0; i < q_vector->rx.count; i++) {
+- struct ixgbe_ring *rx_ring = adapter->rx_ring[r_idx];
+- ret_itr = ixgbe_update_itr(adapter, q_vector->eitr,
+- q_vector->rx.itr,
+- rx_ring->total_packets,
+- rx_ring->total_bytes);
+- /* if the result for this queue would decrease interrupt
+- * rate for this vector then use that result */
+- q_vector->rx.itr = ((q_vector->rx.itr > ret_itr) ?
+- q_vector->rx.itr - 1 : ret_itr);
+- r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+- r_idx + 1);
+- }
++ ixgbe_update_itr(q_vector, &q_vector->tx);
++ ixgbe_update_itr(q_vector, &q_vector->rx);
+
+ current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
+
+@@ -1817,16 +1790,17 @@ static void ixgbe_set_itr_msix(struct ix
+ new_itr = 20000; /* aka hwitr = ~200 */
+ break;
+ case bulk_latency:
+- default:
+ new_itr = 8000;
+ break;
++ default:
++ break;
+ }
+
+ if (new_itr != q_vector->eitr) {
+ /* do an exponential smoothing */
+ new_itr = ((q_vector->eitr * 9) + new_itr)/10;
+
+- /* save the algorithm value here, not the smoothed one */
++ /* save the algorithm value here */
+ q_vector->eitr = new_itr;
+
+ ixgbe_write_eitr(q_vector);
+@@ -2073,8 +2047,6 @@ static irqreturn_t ixgbe_msix_clean_tx(i
+ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
+ for (i = 0; i < q_vector->tx.count; i++) {
+ tx_ring = adapter->tx_ring[r_idx];
+- tx_ring->total_bytes = 0;
+- tx_ring->total_packets = 0;
+ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+@@ -2106,8 +2078,6 @@ static irqreturn_t ixgbe_msix_clean_rx(i
+ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
+ for (i = 0; i < q_vector->rx.count; i++) {
+ rx_ring = adapter->rx_ring[r_idx];
+- rx_ring->total_bytes = 0;
+- rx_ring->total_packets = 0;
+ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+@@ -2135,8 +2105,6 @@ static irqreturn_t ixgbe_msix_clean_many
+ r_idx = find_first_bit(q_vector->tx.idx, adapter->num_tx_queues);
+ for (i = 0; i < q_vector->tx.count; i++) {
+ ring = adapter->tx_ring[r_idx];
+- ring->total_bytes = 0;
+- ring->total_packets = 0;
+ r_idx = find_next_bit(q_vector->tx.idx, adapter->num_tx_queues,
+ r_idx + 1);
+ }
+@@ -2144,8 +2112,6 @@ static irqreturn_t ixgbe_msix_clean_many
+ r_idx = find_first_bit(q_vector->rx.idx, adapter->num_rx_queues);
+ for (i = 0; i < q_vector->rx.count; i++) {
+ ring = adapter->rx_ring[r_idx];
+- ring->total_bytes = 0;
+- ring->total_packets = 0;
+ r_idx = find_next_bit(q_vector->rx.idx, adapter->num_rx_queues,
+ r_idx + 1);
+ }
+@@ -2187,7 +2153,7 @@ static int ixgbe_clean_rxonly(struct nap
+ if (work_done < budget) {
+ napi_complete(napi);
+ if (adapter->rx_itr_setting & 1)
+- ixgbe_set_itr_msix(q_vector);
++ ixgbe_set_itr(q_vector);
+ if (!test_bit(__IXGBE_DOWN, &adapter->state))
+ ixgbe_irq_enable_queues(adapter,
+ ((u64)1 << q_vector->v_idx));
+@@ -2245,7 +2211,7 @@ static int ixgbe_clean_rxtx_many(struct
+ if (work_done < budget) {
+ napi_complete(napi);
+ if (adapter->rx_itr_setting & 1)
+- ixgbe_set_itr_msix(q_vector);
++ ixgbe_set_itr(q_vector);
+ if (!test_bit(__IXGBE_DOWN, &adapter->state))
+ ixgbe_irq_enable_queues(adapter,
+ ((u64)1 << q_vector->v_idx));
+@@ -2287,7 +2253,7 @@ static int ixgbe_clean_txonly(struct nap
+ if (work_done < budget) {
+ napi_complete(napi);
+ if (adapter->tx_itr_setting & 1)
+- ixgbe_set_itr_msix(q_vector);
++ ixgbe_set_itr(q_vector);
+ if (!test_bit(__IXGBE_DOWN, &adapter->state))
+ ixgbe_irq_enable_queues(adapter,
+ ((u64)1 << q_vector->v_idx));
+@@ -2316,6 +2282,7 @@ static inline void map_vector_to_txq(str
+ set_bit(t_idx, q_vector->tx.idx);
+ q_vector->tx.count++;
+ tx_ring->q_vector = q_vector;
++ q_vector->tx.work_limit = a->tx_work_limit;
+ }
+
+ /**
+@@ -2464,51 +2431,6 @@ free_queue_irqs:
+ return err;
+ }
+
+-static void ixgbe_set_itr(struct ixgbe_adapter *adapter)
+-{
+- struct ixgbe_q_vector *q_vector = adapter->q_vector[0];
+- struct ixgbe_ring *rx_ring = adapter->rx_ring[0];
+- struct ixgbe_ring *tx_ring = adapter->tx_ring[0];
+- u32 new_itr = q_vector->eitr;
+- u8 current_itr;
+-
+- q_vector->tx.itr = ixgbe_update_itr(adapter, new_itr,
+- q_vector->tx.itr,
+- tx_ring->total_packets,
+- tx_ring->total_bytes);
+- q_vector->rx.itr = ixgbe_update_itr(adapter, new_itr,
+- q_vector->rx.itr,
+- rx_ring->total_packets,
+- rx_ring->total_bytes);
+-
+- current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
+-
+- switch (current_itr) {
+- /* counts and packets in update_itr are dependent on these numbers */
+- case lowest_latency:
+- new_itr = 100000;
+- break;
+- case low_latency:
+- new_itr = 20000; /* aka hwitr = ~200 */
+- break;
+- case bulk_latency:
+- new_itr = 8000;
+- break;
+- default:
+- break;
+- }
+-
+- if (new_itr != q_vector->eitr) {
+- /* do an exponential smoothing */
+- new_itr = ((q_vector->eitr * 9) + new_itr)/10;
+-
+- /* save the algorithm value here */
+- q_vector->eitr = new_itr;
+-
+- ixgbe_write_eitr(q_vector);
+- }
+-}
+-
+ /**
+ * ixgbe_irq_enable - Enable default interrupt generation settings
+ * @adapter: board private structure
+@@ -2607,10 +2529,6 @@ static irqreturn_t ixgbe_intr(int irq, v
+ ixgbe_check_fan_failure(adapter, eicr);
+
+ if (napi_schedule_prep(&(q_vector->napi))) {
+- adapter->tx_ring[0]->total_packets = 0;
+- adapter->tx_ring[0]->total_bytes = 0;
+- adapter->rx_ring[0]->total_packets = 0;
+- adapter->rx_ring[0]->total_bytes = 0;
+ /* would disable interrupts here but EIAM disabled it */
+ __napi_schedule(&(q_vector->napi));
+ }
+@@ -4502,7 +4420,7 @@ static int ixgbe_poll(struct napi_struct
+ if (work_done < budget) {
+ napi_complete(napi);
+ if (adapter->rx_itr_setting & 1)
+- ixgbe_set_itr(adapter);
++ ixgbe_set_itr(q_vector);
+ if (!test_bit(__IXGBE_DOWN, &adapter->state))
+ ixgbe_irq_enable_queues(adapter, IXGBE_EIMS_RTX_QUEUE);
+ }
+@@ -5425,6 +5343,9 @@ static int __devinit ixgbe_sw_init(struc
+ adapter->tx_ring_count = IXGBE_DEFAULT_TXD;
+ adapter->rx_ring_count = IXGBE_DEFAULT_RXD;
+
++ /* set default work limits */
++ adapter->tx_work_limit = adapter->tx_ring_count;
++
+ /* initialize eeprom parameters */
+ if (ixgbe_init_eeprom_params_generic(hw)) {
+ e_dev_err("EEPROM initialization failed\n");
+@@ -5471,7 +5392,6 @@ int ixgbe_setup_tx_resources(struct ixgb
+
+ tx_ring->next_to_use = 0;
+ tx_ring->next_to_clean = 0;
+- tx_ring->work_limit = tx_ring->count;
+ return 0;
+
+ err:
diff --git a/patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch b/patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch
new file mode 100644
index 0000000000..0a08b785eb
--- /dev/null
+++ b/patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch
@@ -0,0 +1,75 @@
+From 4de2a0224ae3c437e8a090b6ec8d304a7edff049 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 27 Sep 2011 03:52:01 +0000
+Subject: [PATCH 25/32] ixgbe: DCB X540 devices support max traffic class of 4
+Git-commit: 4de2a0224ae3c437e8a090b6ec8d304a7edff049
+Patch-mainline: v3.2-rc1~129^2~134
+References: bnc#714789
+
+X540 devices can only support up to 4 traffic classes while
+still guaranteeing a "lossless" traffic class on some platforms.
+This patch has the X540 devices initialize a maximum
+traffic class value of 4 at probe time.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 19 ++++++++++++++++++-
+ drivers/net/ixgbe/ixgbe_type.h | 1 +
+ 2 files changed, 19 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -5299,6 +5299,17 @@ static int __devinit ixgbe_sw_init(struc
+ spin_lock_init(&adapter->fdir_perfect_lock);
+
+ #ifdef CONFIG_IXGBE_DCB
++ switch (hw->mac.type) {
++ case ixgbe_mac_X540:
++ adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
++ adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
++ break;
++ default:
++ adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
++ adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
++ break;
++ }
++
+ /* Configure DCB traffic classes */
+ for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
+ tc = &adapter->dcb_cfg.tc_config[j];
+@@ -5308,6 +5319,12 @@ static int __devinit ixgbe_sw_init(struc
+ tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
+ tc->dcb_pfc = pfc_disabled;
+ }
++
++ /* Initialize default user to priority mapping, UPx->TC0 */
++ tc = &adapter->dcb_cfg.tc_config[0];
++ tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
++ tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
++
+ adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
+ adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+@@ -7312,7 +7329,7 @@ int ixgbe_setup_tc(struct net_device *de
+ }
+
+ /* Hardware supports up to 8 traffic classes */
+- if (tc > MAX_TRAFFIC_CLASS ||
++ if (tc > adapter->dcb_cfg.num_tcs.pg_tcs ||
+ (hw->mac.type == ixgbe_mac_82598EB && tc < MAX_TRAFFIC_CLASS))
+ return -EINVAL;
+
+--- a/drivers/net/ixgbe/ixgbe_type.h
++++ b/drivers/net/ixgbe/ixgbe_type.h
+@@ -401,6 +401,7 @@
+
+ /* DCB registers */
+ #define MAX_TRAFFIC_CLASS 8
++#define X540_TRAFFIC_CLASS 4
+ #define IXGBE_RMCS 0x03D00
+ #define IXGBE_DPMCS 0x07F40
+ #define IXGBE_PDPMCS 0x0CD00
diff --git a/patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch b/patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch
new file mode 100644
index 0000000000..e8bf81a65b
--- /dev/null
+++ b/patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch
@@ -0,0 +1,71 @@
+From 63544e9c0055316d0397cb671f2ff99d85c77293 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Fri, 27 May 2011 05:31:42 +0000
+Subject: [PATCH 26/32] ixgbe: Move all values that deal with count, next_to_use, next_to_clean to u16
+Git-commit: 63544e9c0055316d0397cb671f2ff99d85c77293
+Patch-mainline: v3.1-rc1~316^2~327^2~6
+References: bnc#714789
+
+This change updates all values dealing with count, next_to_use, and
+next_to_clean so that they stay u16 values. The advantage of this is that
+no type re-casting occurs as the values propagate through the stack.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 13 +++++++------
+ 1 file changed, 7 insertions(+), 6 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -6664,11 +6664,12 @@ static int ixgbe_tx_map(struct ixgbe_ada
+ struct ixgbe_tx_buffer *tx_buffer_info;
+ unsigned int len;
+ unsigned int total = skb->len;
+- unsigned int offset = 0, size, count = 0, i;
++ unsigned int offset = 0, size, count = 0;
+ unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+ unsigned int f;
+ unsigned int bytecount = skb->len;
+ u16 gso_segs = 1;
++ u16 i;
+
+ i = tx_ring->next_to_use;
+
+@@ -6934,7 +6935,7 @@ static void ixgbe_atr(struct ixgbe_ring
+ input, common, ring->queue_index);
+ }
+
+-static int __ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, int size)
++static int __ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
+ {
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ /* Herbert's original patch had:
+@@ -6953,7 +6954,7 @@ static int __ixgbe_maybe_stop_tx(struct
+ return 0;
+ }
+
+-static int ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, int size)
++static int ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
+ {
+ if (likely(IXGBE_DESC_UNUSED(tx_ring) >= size))
+ return 0;
+@@ -6991,13 +6992,13 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ struct ixgbe_adapter *adapter,
+ struct ixgbe_ring *tx_ring)
+ {
+- unsigned int first;
+ unsigned int tx_flags = 0;
+- u8 hdr_len = 0;
+ int tso;
+- int count = 0;
++ u16 count = 0;
++ u16 first;
+ unsigned int f;
+ __be16 protocol;
++ u8 hdr_len = 0;
+
+ protocol = vlan_get_protocol(skb);
+
diff --git a/patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch b/patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch
new file mode 100644
index 0000000000..2505b5ec3c
--- /dev/null
+++ b/patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch
@@ -0,0 +1,124 @@
+From a535c30e9e98d201089503a0ffa0093cba16e796 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Fri, 27 May 2011 05:31:52 +0000
+Subject: [PATCH 27/32] ixgbe: Update method used for determining descriptor count for an skb
+Git-commit: a535c30e9e98d201089503a0ffa0093cba16e796
+Patch-mainline: v3.1-rc1~316^2~327^2~4
+References: bnc#714789
+
+This patch updates the current method used for determining whether we have
+enough space to transmit a given skb. The current method is quite wasteful,
+as it has us go through each page and determine how it is going to be broken
+up. That only needs to be done if pages are larger than our maximum data
+per TXD. As such, I have wrapped that computation in a page size check.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 7 +++++
+ drivers/net/ixgbe/ixgbe_main.c | 55 +++++++++++++++++++----------------------
+ 2 files changed, 33 insertions(+), 29 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -131,6 +131,13 @@ struct vf_macvlans {
+ u8 vf_macvlan[ETH_ALEN];
+ };
+
++#define IXGBE_MAX_TXD_PWR 14
++#define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR)
++
++/* Tx Descriptors needed, worst case */
++#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD)
++#define DESC_NEEDED ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
++
+ /* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer */
+ struct ixgbe_tx_buffer {
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -832,15 +832,6 @@ static inline bool ixgbe_check_tx_hang(s
+ return ret;
+ }
+
+-#define IXGBE_MAX_TXD_PWR 14
+-#define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR)
+-
+-/* Tx Descriptors needed, worst case */
+-#define TXD_USE_COUNT(S) (((S) >> IXGBE_MAX_TXD_PWR) + \
+- (((S) & (IXGBE_MAX_DATA_PER_TXD - 1)) ? 1 : 0))
+-#define DESC_NEEDED (TXD_USE_COUNT(IXGBE_MAX_DATA_PER_TXD) /* skb->data */ + \
+- MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE) + 1) /* for context */
+-
+ /**
+ * ixgbe_tx_timeout_reset - initiate reset due to Tx timeout
+ * @adapter: driver private struct
+@@ -6992,14 +6983,34 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ struct ixgbe_adapter *adapter,
+ struct ixgbe_ring *tx_ring)
+ {
+- unsigned int tx_flags = 0;
+ int tso;
+- u16 count = 0;
++ u32 tx_flags = 0;
++#if PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD
++ unsigned short f;
++#endif
+ u16 first;
+- unsigned int f;
++ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+ __be16 protocol;
+ u8 hdr_len = 0;
+
++ /*
++ * need: 1 descriptor per page * PAGE_SIZE/IXGBE_MAX_DATA_PER_TXD,
++ * + 1 desc for skb_head_len/IXGBE_MAX_DATA_PER_TXD,
++ * + 2 desc gap to keep tail from touching head,
++ * + 1 desc for context descriptor,
++ * otherwise try next time
++ */
++#if PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD
++ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
++ count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);
++#else
++ count += skb_shinfo(skb)->nr_frags;
++#endif
++ if (ixgbe_maybe_stop_tx(tx_ring, count + 3)) {
++ tx_ring->tx_stats.tx_busy++;
++ return NETDEV_TX_BUSY;
++ }
++
+ protocol = vlan_get_protocol(skb);
+
+ if (vlan_tx_tag_present(skb)) {
+@@ -7023,25 +7034,11 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED &&
+ (protocol == htons(ETH_P_FCOE)))
+ tx_flags |= IXGBE_TX_FLAGS_FCOE;
+-#endif
+-
+- /* four things can cause us to need a context descriptor */
+- if (skb_is_gso(skb) ||
+- (skb->ip_summed == CHECKSUM_PARTIAL) ||
+- (tx_flags & IXGBE_TX_FLAGS_VLAN) ||
+- (tx_flags & IXGBE_TX_FLAGS_FCOE))
+- count++;
+-
+- count += TXD_USE_COUNT(skb_headlen(skb));
+- for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+- count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);
+-
+- if (ixgbe_maybe_stop_tx(tx_ring, count)) {
+- tx_ring->tx_stats.tx_busy++;
+- return NETDEV_TX_BUSY;
+- }
+
++#endif
++ /* record the location of the first descriptor for this packet */
+ first = tx_ring->next_to_use;
++
+ if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
+ #ifdef IXGBE_FCOE
+ /* setup tx offload for FCoE */
diff --git a/patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch b/patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch
new file mode 100644
index 0000000000..16caeb1c69
--- /dev/null
+++ b/patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch
@@ -0,0 +1,547 @@
+From 897ab15606ce896b6a574a263beb51cbfb43f041 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Fri, 27 May 2011 05:31:47 +0000
+Subject: [PATCH 28/32] ixgbe: Add one function that handles most of context descriptor setup
+Git-commit: 897ab15606ce896b6a574a263beb51cbfb43f041
+Patch-mainline: v3.1-rc1~316^2~327^2~5
+References: bnc#714789
+
+There is a significant amount of functionality shared between the checksum
+and TSO offload configuration with regard to how they set up
+the context descriptors. Since so much of the functionality is shared, it
+makes sense to move the shared functionality into a single function and
+just call that function from the two context-descriptor-specific routines.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 4
+ drivers/net/ixgbe/ixgbe_fcoe.c | 85 ++++-------
+ drivers/net/ixgbe/ixgbe_main.c | 295 +++++++++++++++++------------------------
+ 3 files changed, 162 insertions(+), 222 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -590,10 +590,10 @@ extern void ixgbe_clear_rscctl(struct ix
+ struct ixgbe_ring *ring);
+ extern void ixgbe_set_rx_mode(struct net_device *netdev);
+ extern int ixgbe_setup_tc(struct net_device *dev, u8 tc);
++extern void ixgbe_tx_ctxtdesc(struct ixgbe_ring *, u32, u32, u32, u32);
+ #ifdef IXGBE_FCOE
+ extern void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter);
+-extern int ixgbe_fso(struct ixgbe_adapter *adapter,
+- struct ixgbe_ring *tx_ring, struct sk_buff *skb,
++extern int ixgbe_fso(struct ixgbe_ring *tx_ring, struct sk_buff *skb,
+ u32 tx_flags, u8 *hdr_len);
+ extern void ixgbe_cleanup_fcoe(struct ixgbe_adapter *adapter);
+ extern int ixgbe_fcoe_ddp(struct ixgbe_adapter *adapter,
+--- a/drivers/net/ixgbe/ixgbe_fcoe.c
++++ b/drivers/net/ixgbe/ixgbe_fcoe.c
+@@ -474,24 +474,18 @@ ddp_out:
+ *
+ * Returns : 0 indicates no FSO, > 0 for FSO, < 0 for error
+ */
+-int ixgbe_fso(struct ixgbe_adapter *adapter,
+- struct ixgbe_ring *tx_ring, struct sk_buff *skb,
++int ixgbe_fso(struct ixgbe_ring *tx_ring, struct sk_buff *skb,
+ u32 tx_flags, u8 *hdr_len)
+ {
+- u8 sof, eof;
++ struct fc_frame_header *fh;
+ u32 vlan_macip_lens;
+- u32 fcoe_sof_eof;
+- u32 type_tucmd;
++ u32 fcoe_sof_eof = 0;
+ u32 mss_l4len_idx;
+- int mss = 0;
+- unsigned int i;
+- struct ixgbe_tx_buffer *tx_buffer_info;
+- struct ixgbe_adv_tx_context_desc *context_desc;
+- struct fc_frame_header *fh;
++ u8 sof, eof;
+
+ if (skb_is_gso(skb) && (skb_shinfo(skb)->gso_type != SKB_GSO_FCOE)) {
+- e_err(drv, "Wrong gso type %d:expecting SKB_GSO_FCOE\n",
+- skb_shinfo(skb)->gso_type);
++ dev_err(tx_ring->dev, "Wrong gso type %d:expecting SKB_GSO_FCOE\n",
++ skb_shinfo(skb)->gso_type);
+ return -EINVAL;
+ }
+
+@@ -501,23 +495,22 @@ int ixgbe_fso(struct ixgbe_adapter *adap
+ sizeof(struct fcoe_hdr));
+
+ /* sets up SOF and ORIS */
+- fcoe_sof_eof = 0;
+ sof = ((struct fcoe_hdr *)skb_network_header(skb))->fcoe_sof;
+ switch (sof) {
+ case FC_SOF_I2:
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_ORIS;
++ fcoe_sof_eof = IXGBE_ADVTXD_FCOEF_ORIS;
+ break;
+ case FC_SOF_I3:
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_SOF;
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_ORIS;
++ fcoe_sof_eof = IXGBE_ADVTXD_FCOEF_SOF |
++ IXGBE_ADVTXD_FCOEF_ORIS;
+ break;
+ case FC_SOF_N2:
+ break;
+ case FC_SOF_N3:
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_SOF;
++ fcoe_sof_eof = IXGBE_ADVTXD_FCOEF_SOF;
+ break;
+ default:
+- e_warn(drv, "unknown sof = 0x%x\n", sof);
++ dev_warn(tx_ring->dev, "unknown sof = 0x%x\n", sof);
+ return -EINVAL;
+ }
+
+@@ -530,12 +523,11 @@ int ixgbe_fso(struct ixgbe_adapter *adap
+ break;
+ case FC_EOF_T:
+ /* lso needs ORIE */
+- if (skb_is_gso(skb)) {
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_EOF_N;
+- fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_ORIE;
+- } else {
++ if (skb_is_gso(skb))
++ fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_EOF_N |
++ IXGBE_ADVTXD_FCOEF_ORIE;
++ else
+ fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_EOF_T;
+- }
+ break;
+ case FC_EOF_NI:
+ fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_EOF_NI;
+@@ -544,7 +536,7 @@ int ixgbe_fso(struct ixgbe_adapter *adap
+ fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_EOF_A;
+ break;
+ default:
+- e_warn(drv, "unknown eof = 0x%x\n", eof);
++ dev_warn(tx_ring->dev, "unknown eof = 0x%x\n", eof);
+ return -EINVAL;
+ }
+
+@@ -553,43 +545,28 @@ int ixgbe_fso(struct ixgbe_adapter *adap
+ if (fh->fh_f_ctl[2] & FC_FC_REL_OFF)
+ fcoe_sof_eof |= IXGBE_ADVTXD_FCOEF_PARINC;
+
+- /* hdr_len includes fc_hdr if FCoE lso is enabled */
++ /* include trailer in headlen as it is replicated per frame */
+ *hdr_len = sizeof(struct fcoe_crc_eof);
++
++ /* hdr_len includes fc_hdr if FCoE LSO is enabled */
+ if (skb_is_gso(skb))
+ *hdr_len += (skb_transport_offset(skb) +
+ sizeof(struct fc_frame_header));
+- /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
+- vlan_macip_lens = (skb_transport_offset(skb) +
+- sizeof(struct fc_frame_header));
+- vlan_macip_lens |= ((skb_transport_offset(skb) - 4)
+- << IXGBE_ADVTXD_MACLEN_SHIFT);
+- vlan_macip_lens |= (tx_flags & IXGBE_TX_FLAGS_VLAN_MASK);
+-
+- /* type_tycmd and mss: set TUCMD.FCoE to enable offload */
+- type_tucmd = IXGBE_TXD_CMD_DEXT | IXGBE_ADVTXD_DTYP_CTXT |
+- IXGBE_ADVTXT_TUCMD_FCOE;
+- if (skb_is_gso(skb))
+- mss = skb_shinfo(skb)->gso_size;
++
+ /* mss_l4len_id: use 1 for FSO as TSO, no need for L4LEN */
+- mss_l4len_idx = (mss << IXGBE_ADVTXD_MSS_SHIFT) |
+- (1 << IXGBE_ADVTXD_IDX_SHIFT);
++ mss_l4len_idx = skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
++ mss_l4len_idx |= 1 << IXGBE_ADVTXD_IDX_SHIFT;
++
++ /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
++ vlan_macip_lens = skb_transport_offset(skb) +
++ sizeof(struct fc_frame_header);
++ vlan_macip_lens |= (skb_transport_offset(skb) - 4)
++ << IXGBE_ADVTXD_MACLEN_SHIFT;
++ vlan_macip_lens |= tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
+
+ /* write context desc */
+- i = tx_ring->next_to_use;
+- context_desc = IXGBE_TX_CTXTDESC_ADV(tx_ring, i);
+- context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
+- context_desc->seqnum_seed = cpu_to_le32(fcoe_sof_eof);
+- context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd);
+- context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
+-
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- tx_buffer_info->time_stamp = jiffies;
+- tx_buffer_info->next_to_watch = i;
+-
+- i++;
+- if (i == tx_ring->count)
+- i = 0;
+- tx_ring->next_to_use = i;
++ ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, fcoe_sof_eof,
++ IXGBE_ADVTXT_TUCMD_FCOE, mss_l4len_idx);
+
+ return skb_is_gso(skb);
+ }
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -34,6 +34,7 @@
+ #include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/tcp.h>
++#include <linux/sctp.h>
+ #include <linux/pkt_sched.h>
+ #include <linux/ipv6.h>
+ #include <linux/slab.h>
+@@ -6471,179 +6472,145 @@ static void ixgbe_service_task(struct wo
+ ixgbe_service_event_complete(adapter);
+ }
+
+-static int ixgbe_tso(struct ixgbe_adapter *adapter,
+- struct ixgbe_ring *tx_ring, struct sk_buff *skb,
+- u32 tx_flags, u8 *hdr_len, __be16 protocol)
++void ixgbe_tx_ctxtdesc(struct ixgbe_ring *tx_ring, u32 vlan_macip_lens,
++ u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx)
+ {
+ struct ixgbe_adv_tx_context_desc *context_desc;
+- unsigned int i;
+- int err;
+- struct ixgbe_tx_buffer *tx_buffer_info;
+- u32 vlan_macip_lens = 0, type_tucmd_mlhl;
+- u32 mss_l4len_idx, l4len;
++ u16 i = tx_ring->next_to_use;
+
+- if (skb_is_gso(skb)) {
+- if (skb_header_cloned(skb)) {
+- err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+- if (err)
+- return err;
+- }
+- l4len = tcp_hdrlen(skb);
+- *hdr_len += l4len;
+-
+- if (protocol == htons(ETH_P_IP)) {
+- struct iphdr *iph = ip_hdr(skb);
+- iph->tot_len = 0;
+- iph->check = 0;
+- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
+- iph->daddr, 0,
+- IPPROTO_TCP,
+- 0);
+- } else if (skb_is_gso_v6(skb)) {
+- ipv6_hdr(skb)->payload_len = 0;
+- tcp_hdr(skb)->check =
+- ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+- &ipv6_hdr(skb)->daddr,
+- 0, IPPROTO_TCP, 0);
+- }
++ context_desc = IXGBE_TX_CTXTDESC_ADV(tx_ring, i);
+
+- i = tx_ring->next_to_use;
++ i++;
++ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
+
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- context_desc = IXGBE_TX_CTXTDESC_ADV(tx_ring, i);
+-
+- /* VLAN MACLEN IPLEN */
+- if (tx_flags & IXGBE_TX_FLAGS_VLAN)
+- vlan_macip_lens |=
+- (tx_flags & IXGBE_TX_FLAGS_VLAN_MASK);
+- vlan_macip_lens |= ((skb_network_offset(skb)) <<
+- IXGBE_ADVTXD_MACLEN_SHIFT);
+- *hdr_len += skb_network_offset(skb);
+- vlan_macip_lens |=
+- (skb_transport_header(skb) - skb_network_header(skb));
+- *hdr_len +=
+- (skb_transport_header(skb) - skb_network_header(skb));
+- context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
+- context_desc->seqnum_seed = 0;
+-
+- /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
+- type_tucmd_mlhl = (IXGBE_TXD_CMD_DEXT |
+- IXGBE_ADVTXD_DTYP_CTXT);
++ /* set bits to identify this as an advanced context descriptor */
++ type_tucmd |= IXGBE_TXD_CMD_DEXT | IXGBE_ADVTXD_DTYP_CTXT;
+
+- if (protocol == htons(ETH_P_IP))
+- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_IPV4;
+- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
+- context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd_mlhl);
+-
+- /* MSS L4LEN IDX */
+- mss_l4len_idx =
+- (skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT);
+- mss_l4len_idx |= (l4len << IXGBE_ADVTXD_L4LEN_SHIFT);
+- /* use index 1 for TSO */
+- mss_l4len_idx |= (1 << IXGBE_ADVTXD_IDX_SHIFT);
+- context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
++ context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
++ context_desc->seqnum_seed = cpu_to_le32(fcoe_sof_eof);
++ context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd);
++ context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
++}
+
+- tx_buffer_info->time_stamp = jiffies;
+- tx_buffer_info->next_to_watch = i;
++static int ixgbe_tso(struct ixgbe_ring *tx_ring, struct sk_buff *skb,
++ u32 tx_flags, __be16 protocol, u8 *hdr_len)
++{
++ int err;
++ u32 vlan_macip_lens, type_tucmd;
++ u32 mss_l4len_idx, l4len;
+
+- i++;
+- if (i == tx_ring->count)
+- i = 0;
+- tx_ring->next_to_use = i;
++ if (!skb_is_gso(skb))
++ return 0;
+
+- return true;
++ if (skb_header_cloned(skb)) {
++ err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
++ if (err)
++ return err;
+ }
+- return false;
++
++ /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
++ type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
++
++ if (protocol == __constant_htons(ETH_P_IP)) {
++ struct iphdr *iph = ip_hdr(skb);
++ iph->tot_len = 0;
++ iph->check = 0;
++ tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
++ iph->daddr, 0,
++ IPPROTO_TCP,
++ 0);
++ type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++ } else if (skb_is_gso_v6(skb)) {
++ ipv6_hdr(skb)->payload_len = 0;
++ tcp_hdr(skb)->check =
++ ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
++ &ipv6_hdr(skb)->daddr,
++ 0, IPPROTO_TCP, 0);
++ }
++
++ l4len = tcp_hdrlen(skb);
++ *hdr_len = skb_transport_offset(skb) + l4len;
++
++ /* mss_l4len_id: use 1 as index for TSO */
++ mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT;
++ mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
++ mss_l4len_idx |= 1 << IXGBE_ADVTXD_IDX_SHIFT;
++
++ /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
++ vlan_macip_lens = skb_network_header_len(skb);
++ vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
++ vlan_macip_lens |= tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
++
++ ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0, type_tucmd,
++ mss_l4len_idx);
++
++ return 1;
+ }
+
+-static u32 ixgbe_psum(struct ixgbe_adapter *adapter, struct sk_buff *skb,
+- __be16 protocol)
++static bool ixgbe_tx_csum(struct ixgbe_ring *tx_ring,
++ struct sk_buff *skb, u32 tx_flags,
++ __be16 protocol)
+ {
+- u32 rtn = 0;
+-
+- switch (protocol) {
+- case cpu_to_be16(ETH_P_IP):
+- rtn |= IXGBE_ADVTXD_TUCMD_IPV4;
+- switch (ip_hdr(skb)->protocol) {
+- case IPPROTO_TCP:
+- rtn |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
++ u32 vlan_macip_lens = 0;
++ u32 mss_l4len_idx = 0;
++ u32 type_tucmd = 0;
++
++ if (skb->ip_summed != CHECKSUM_PARTIAL) {
++ if (!(tx_flags & IXGBE_TX_FLAGS_VLAN))
++ return false;
++ } else {
++ u8 l4_hdr = 0;
++ switch (protocol) {
++ case __constant_htons(ETH_P_IP):
++ vlan_macip_lens |= skb_network_header_len(skb);
++ type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++ l4_hdr = ip_hdr(skb)->protocol;
+ break;
+- case IPPROTO_SCTP:
+- rtn |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
++ case __constant_htons(ETH_P_IPV6):
++ vlan_macip_lens |= skb_network_header_len(skb);
++ l4_hdr = ipv6_hdr(skb)->nexthdr;
++ break;
++ default:
++ if (unlikely(net_ratelimit())) {
++ dev_warn(tx_ring->dev,
++ "partial checksum but proto=%x!\n",
++ skb->protocol);
++ }
+ break;
+ }
+- break;
+- case cpu_to_be16(ETH_P_IPV6):
+- /* XXX what about other V6 headers?? */
+- switch (ipv6_hdr(skb)->nexthdr) {
++
++ switch (l4_hdr) {
+ case IPPROTO_TCP:
+- rtn |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
++ type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
++ mss_l4len_idx = tcp_hdrlen(skb) <<
++ IXGBE_ADVTXD_L4LEN_SHIFT;
+ break;
+ case IPPROTO_SCTP:
+- rtn |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
++ type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
++ mss_l4len_idx = sizeof(struct sctphdr) <<
++ IXGBE_ADVTXD_L4LEN_SHIFT;
++ break;
++ case IPPROTO_UDP:
++ mss_l4len_idx = sizeof(struct udphdr) <<
++ IXGBE_ADVTXD_L4LEN_SHIFT;
++ break;
++ default:
++ if (unlikely(net_ratelimit())) {
++ dev_warn(tx_ring->dev,
++ "partial checksum but l4 proto=%x!\n",
++ skb->protocol);
++ }
+ break;
+ }
+- break;
+- default:
+- if (unlikely(net_ratelimit()))
+- e_warn(probe, "partial checksum but proto=%x!\n",
+- protocol);
+- break;
+ }
+
+- return rtn;
+-}
++ vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
++ vlan_macip_lens |= tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
+
+-static bool ixgbe_tx_csum(struct ixgbe_adapter *adapter,
+- struct ixgbe_ring *tx_ring,
+- struct sk_buff *skb, u32 tx_flags,
+- __be16 protocol)
+-{
+- struct ixgbe_adv_tx_context_desc *context_desc;
+- unsigned int i;
+- struct ixgbe_tx_buffer *tx_buffer_info;
+- u32 vlan_macip_lens = 0, type_tucmd_mlhl = 0;
+-
+- if (skb->ip_summed == CHECKSUM_PARTIAL ||
+- (tx_flags & IXGBE_TX_FLAGS_VLAN)) {
+- i = tx_ring->next_to_use;
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- context_desc = IXGBE_TX_CTXTDESC_ADV(tx_ring, i);
++ ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0,
++ type_tucmd, mss_l4len_idx);
+
+- if (tx_flags & IXGBE_TX_FLAGS_VLAN)
+- vlan_macip_lens |=
+- (tx_flags & IXGBE_TX_FLAGS_VLAN_MASK);
+- vlan_macip_lens |= (skb_network_offset(skb) <<
+- IXGBE_ADVTXD_MACLEN_SHIFT);
+- if (skb->ip_summed == CHECKSUM_PARTIAL)
+- vlan_macip_lens |= (skb_transport_header(skb) -
+- skb_network_header(skb));
+-
+- context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
+- context_desc->seqnum_seed = 0;
+-
+- type_tucmd_mlhl |= (IXGBE_TXD_CMD_DEXT |
+- IXGBE_ADVTXD_DTYP_CTXT);
+-
+- if (skb->ip_summed == CHECKSUM_PARTIAL)
+- type_tucmd_mlhl |= ixgbe_psum(adapter, skb, protocol);
+-
+- context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd_mlhl);
+- /* use index zero for tx checksum offload */
+- context_desc->mss_l4len_idx = 0;
+-
+- tx_buffer_info->time_stamp = jiffies;
+- tx_buffer_info->next_to_watch = i;
+-
+- i++;
+- if (i == tx_ring->count)
+- i = 0;
+- tx_ring->next_to_use = i;
+-
+- return true;
+- }
+-
+- return false;
++ return (skb->ip_summed == CHECKSUM_PARTIAL);
+ }
+
+ static int ixgbe_tx_map(struct ixgbe_adapter *adapter,
+@@ -7042,29 +7009,21 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
+ #ifdef IXGBE_FCOE
+ /* setup tx offload for FCoE */
+- tso = ixgbe_fso(adapter, tx_ring, skb, tx_flags, &hdr_len);
+- if (tso < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
+- if (tso)
++ tso = ixgbe_fso(tx_ring, skb, tx_flags, &hdr_len);
++ if (tso < 0)
++ goto out_drop;
++ else if (tso)
+ tx_flags |= IXGBE_TX_FLAGS_FSO;
+ #endif /* IXGBE_FCOE */
+ } else {
+ if (protocol == htons(ETH_P_IP))
+ tx_flags |= IXGBE_TX_FLAGS_IPV4;
+- tso = ixgbe_tso(adapter, tx_ring, skb, tx_flags, &hdr_len,
+- protocol);
+- if (tso < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
+-
+- if (tso)
++ tso = ixgbe_tso(tx_ring, skb, tx_flags, protocol, &hdr_len);
++ if (tso < 0)
++ goto out_drop;
++ else if (tso)
+ tx_flags |= IXGBE_TX_FLAGS_TSO;
+- else if (ixgbe_tx_csum(adapter, tx_ring, skb, tx_flags,
+- protocol) &&
+- (skb->ip_summed == CHECKSUM_PARTIAL))
++ else if (ixgbe_tx_csum(tx_ring, skb, tx_flags, protocol))
+ tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ }
+
+@@ -7077,12 +7036,16 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ } else {
+- dev_kfree_skb_any(skb);
+ tx_ring->tx_buffer_info[first].time_stamp = 0;
+ tx_ring->next_to_use = first;
++ goto out_drop;
+ }
+
+ return NETDEV_TX_OK;
++
++out_drop:
++ dev_kfree_skb_any(skb);
++ return NETDEV_TX_OK;
+ }
+
+ static netdev_tx_t ixgbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
diff --git a/patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch b/patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch
new file mode 100644
index 0000000000..0915f65844
--- /dev/null
+++ b/patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch
@@ -0,0 +1,643 @@
+From d3d0023979c87ee00f61946deb08b6a1ebd0455d Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Fri, 15 Jul 2011 02:31:25 +0000
+Subject: [PATCH 29/32] ixgbe: Refactor transmit map and cleanup routines
+Git-commit: d3d0023979c87ee00f61946deb08b6a1ebd0455d
+Patch-mainline: v3.2-rc1~129^2~425^2~2
+References: bnc#714789
+
+This patch implements a partial refactor of the TX map/queue and cleanup
+routines. It merges the map and queue functionality and as a result
+improves the transmit performance by avoiding unnecessary reads from memory.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 13 -
+ drivers/net/ixgbe/ixgbe_main.c | 450 +++++++++++++++++++++--------------------
+ 2 files changed, 247 insertions(+), 216 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -96,6 +96,7 @@
+ #define IXGBE_TX_FLAGS_IPV4 (u32)(1 << 3)
+ #define IXGBE_TX_FLAGS_FCOE (u32)(1 << 4)
+ #define IXGBE_TX_FLAGS_FSO (u32)(1 << 5)
++#define IXGBE_TX_FLAGS_MAPPED_AS_PAGE (u32)(1 << 6)
+ #define IXGBE_TX_FLAGS_VLAN_MASK 0xffff0000
+ #define IXGBE_TX_FLAGS_VLAN_PRIO_MASK 0x0000e000
+ #define IXGBE_TX_FLAGS_VLAN_SHIFT 16
+@@ -141,14 +142,14 @@ struct vf_macvlans {
+ /* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer */
+ struct ixgbe_tx_buffer {
+- struct sk_buff *skb;
+- dma_addr_t dma;
++ union ixgbe_adv_tx_desc *next_to_watch;
+ unsigned long time_stamp;
+- u16 length;
+- u16 next_to_watch;
+- unsigned int bytecount;
++ dma_addr_t dma;
++ u32 length;
++ u32 tx_flags;
++ struct sk_buff *skb;
++ u32 bytecount;
+ u16 gso_segs;
+- u8 mapped_as_page;
+ };
+
+ struct ixgbe_rx_buffer {
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -388,7 +388,7 @@ static void ixgbe_dump(struct ixgbe_adap
+ tx_ring = adapter->tx_ring[n];
+ tx_buffer_info =
+ &tx_ring->tx_buffer_info[tx_ring->next_to_clean];
+- pr_info(" %5d %5X %5X %016llX %04X %3X %016llX\n",
++ pr_info(" %5d %5X %5X %016llX %04X %p %016llX\n",
+ n, tx_ring->next_to_use, tx_ring->next_to_clean,
+ (u64)tx_buffer_info->dma,
+ tx_buffer_info->length,
+@@ -427,7 +427,7 @@ static void ixgbe_dump(struct ixgbe_adap
+ tx_buffer_info = &tx_ring->tx_buffer_info[i];
+ u0 = (struct my_u0 *)tx_desc;
+ pr_info("T [0x%03X] %016llX %016llX %016llX"
+- " %04X %3X %016llX %p", i,
++ " %04X %p %016llX %p", i,
+ le64_to_cpu(u0->a),
+ le64_to_cpu(u0->b),
+ (u64)tx_buffer_info->dma,
+@@ -646,27 +646,31 @@ static inline void ixgbe_irq_rearm_queue
+ }
+ }
+
+-void ixgbe_unmap_and_free_tx_resource(struct ixgbe_ring *tx_ring,
+- struct ixgbe_tx_buffer *tx_buffer_info)
++static inline void ixgbe_unmap_tx_resource(struct ixgbe_ring *ring,
++ struct ixgbe_tx_buffer *tx_buffer)
+ {
+- if (tx_buffer_info->dma) {
+- if (tx_buffer_info->mapped_as_page)
+- dma_unmap_page(tx_ring->dev,
+- tx_buffer_info->dma,
+- tx_buffer_info->length,
++ if (tx_buffer->dma) {
++ if (tx_buffer->tx_flags & IXGBE_TX_FLAGS_MAPPED_AS_PAGE)
++ dma_unmap_page(ring->dev,
++ tx_buffer->dma,
++ tx_buffer->length,
+ DMA_TO_DEVICE);
+ else
+- dma_unmap_single(tx_ring->dev,
+- tx_buffer_info->dma,
+- tx_buffer_info->length,
++ dma_unmap_single(ring->dev,
++ tx_buffer->dma,
++ tx_buffer->length,
+ DMA_TO_DEVICE);
+- tx_buffer_info->dma = 0;
+ }
+- if (tx_buffer_info->skb) {
++ tx_buffer->dma = 0;
++}
++
++void ixgbe_unmap_and_free_tx_resource(struct ixgbe_ring *tx_ring,
++ struct ixgbe_tx_buffer *tx_buffer_info)
++{
++ ixgbe_unmap_tx_resource(tx_ring, tx_buffer_info);
++ if (tx_buffer_info->skb)
+ dev_kfree_skb_any(tx_buffer_info->skb);
+- tx_buffer_info->skb = NULL;
+- }
+- tx_buffer_info->time_stamp = 0;
++ tx_buffer_info->skb = NULL;
+ /* tx_buffer_info must be completely set up in the transmit path */
+ }
+
+@@ -856,56 +860,72 @@ static bool ixgbe_clean_tx_irq(struct ix
+ struct ixgbe_ring *tx_ring)
+ {
+ struct ixgbe_adapter *adapter = q_vector->adapter;
+- union ixgbe_adv_tx_desc *tx_desc, *eop_desc;
+- struct ixgbe_tx_buffer *tx_buffer_info;
++ struct ixgbe_tx_buffer *tx_buffer;
++ union ixgbe_adv_tx_desc *tx_desc;
+ unsigned int total_bytes = 0, total_packets = 0;
+- u16 i, eop, count = 0;
++ u16 i = tx_ring->next_to_clean;
++ u16 count;
+
+- i = tx_ring->next_to_clean;
+- eop = tx_ring->tx_buffer_info[i].next_to_watch;
+- eop_desc = IXGBE_TX_DESC_ADV(tx_ring, eop);
+-
+- while ((eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)) &&
+- (count < q_vector->tx.work_limit)) {
+- bool cleaned = false;
+- rmb(); /* read buffer_info after eop_desc */
+- for ( ; !cleaned; count++) {
+- tx_desc = IXGBE_TX_DESC_ADV(tx_ring, i);
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
++ tx_buffer = &tx_ring->tx_buffer_info[i];
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, i);
++
++ for (count = 0; count < q_vector->tx.work_limit; count++) {
++ union ixgbe_adv_tx_desc *eop_desc = tx_buffer->next_to_watch;
++
++ /* if next_to_watch is not set then there is no work pending */
++ if (!eop_desc)
++ break;
++
++ /* if DD is not set pending work has not been completed */
++ if (!(eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
++ break;
++
++ /* count the packet as being completed */
++ tx_ring->tx_stats.completed++;
+
++ /* clear next_to_watch to prevent false hangs */
++ tx_buffer->next_to_watch = NULL;
++
++ /* prevent any other reads prior to eop_desc being verified */
++ rmb();
++
++ do {
++ ixgbe_unmap_tx_resource(tx_ring, tx_buffer);
+ tx_desc->wb.status = 0;
+- cleaned = (i == eop);
++ if (likely(tx_desc == eop_desc)) {
++ eop_desc = NULL;
++ dev_kfree_skb_any(tx_buffer->skb);
++ tx_buffer->skb = NULL;
+
++ total_bytes += tx_buffer->bytecount;
++ total_packets += tx_buffer->gso_segs;
++ }
++
++ tx_buffer++;
++ tx_desc++;
+ i++;
+- if (i == tx_ring->count)
++ if (unlikely(i == tx_ring->count)) {
+ i = 0;
+
+- if (cleaned && tx_buffer_info->skb) {
+- total_bytes += tx_buffer_info->bytecount;
+- total_packets += tx_buffer_info->gso_segs;
++ tx_buffer = tx_ring->tx_buffer_info;
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, 0);
+ }
+
+- ixgbe_unmap_and_free_tx_resource(tx_ring,
+- tx_buffer_info);
+- }
+-
+- tx_ring->tx_stats.completed++;
+- eop = tx_ring->tx_buffer_info[i].next_to_watch;
+- eop_desc = IXGBE_TX_DESC_ADV(tx_ring, eop);
++ } while (eop_desc);
+ }
+
+ tx_ring->next_to_clean = i;
++ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
+- u64_stats_update_begin(&tx_ring->syncp);
++ u64_stats_update_end(&tx_ring->syncp);
+ q_vector->tx.total_bytes += total_bytes;
+ q_vector->tx.total_packets += total_packets;
+- u64_stats_update_end(&tx_ring->syncp);
+
+ if (check_for_tx_hang(tx_ring) && ixgbe_check_tx_hang(tx_ring)) {
+ /* schedule immediate reset if we believe we hung */
+ struct ixgbe_hw *hw = &adapter->hw;
+- tx_desc = IXGBE_TX_DESC_ADV(tx_ring, eop);
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, i);
+ e_err(drv, "Detected Tx Unit Hang\n"
+ " Tx Queue <%d>\n"
+ " TDH, TDT <%x>, <%x>\n"
+@@ -917,8 +937,8 @@ static bool ixgbe_clean_tx_irq(struct ix
+ tx_ring->queue_index,
+ IXGBE_READ_REG(hw, IXGBE_TDH(tx_ring->reg_idx)),
+ IXGBE_READ_REG(hw, IXGBE_TDT(tx_ring->reg_idx)),
+- tx_ring->next_to_use, eop,
+- tx_ring->tx_buffer_info[eop].time_stamp, jiffies);
++ tx_ring->next_to_use, i,
++ tx_ring->tx_buffer_info[i].time_stamp, jiffies);
+
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+@@ -6613,90 +6633,161 @@ static bool ixgbe_tx_csum(struct ixgbe_r
+ return (skb->ip_summed == CHECKSUM_PARTIAL);
+ }
+
+-static int ixgbe_tx_map(struct ixgbe_adapter *adapter,
+- struct ixgbe_ring *tx_ring,
+- struct sk_buff *skb, u32 tx_flags,
+- unsigned int first, const u8 hdr_len)
++static __le32 ixgbe_tx_cmd_type(u32 tx_flags)
+ {
+- struct device *dev = tx_ring->dev;
+- struct ixgbe_tx_buffer *tx_buffer_info;
+- unsigned int len;
+- unsigned int total = skb->len;
+- unsigned int offset = 0, size, count = 0;
+- unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+- unsigned int f;
+- unsigned int bytecount = skb->len;
+- u16 gso_segs = 1;
+- u16 i;
++ /* set type for advanced descriptor with frame checksum insertion */
++ __le32 cmd_type = cpu_to_le32(IXGBE_ADVTXD_DTYP_DATA |
++ IXGBE_ADVTXD_DCMD_IFCS |
++ IXGBE_ADVTXD_DCMD_DEXT);
++
++ /* set HW vlan bit if vlan is present */
++ if (tx_flags & IXGBE_TX_FLAGS_VLAN)
++ cmd_type |= cpu_to_le32(IXGBE_ADVTXD_DCMD_VLE);
++
++ /* set segmentation enable bits for TSO/FSO */
++#ifdef IXGBE_FCOE
++ if ((tx_flags & IXGBE_TX_FLAGS_TSO) || (tx_flags & IXGBE_TX_FLAGS_FSO))
++#else
++ if (tx_flags & IXGBE_TX_FLAGS_TSO)
++#endif
++ cmd_type |= cpu_to_le32(IXGBE_ADVTXD_DCMD_TSE);
++
++ return cmd_type;
++}
++
++static __le32 ixgbe_tx_olinfo_status(u32 tx_flags, unsigned int paylen)
++{
++ __le32 olinfo_status =
++ cpu_to_le32(paylen << IXGBE_ADVTXD_PAYLEN_SHIFT);
++
++ if (tx_flags & IXGBE_TX_FLAGS_TSO) {
++ olinfo_status |= cpu_to_le32(IXGBE_ADVTXD_POPTS_TXSM |
++ (1 << IXGBE_ADVTXD_IDX_SHIFT));
++ /* enble IPv4 checksum for TSO */
++ if (tx_flags & IXGBE_TX_FLAGS_IPV4)
++ olinfo_status |= cpu_to_le32(IXGBE_ADVTXD_POPTS_IXSM);
++ }
+
+- i = tx_ring->next_to_use;
++ /* enable L4 checksum for TSO and TX checksum offload */
++ if (tx_flags & IXGBE_TX_FLAGS_CSUM)
++ olinfo_status |= cpu_to_le32(IXGBE_ADVTXD_POPTS_TXSM);
+
++#ifdef IXGBE_FCOE
++ /* use index 1 context for FCOE/FSO */
+ if (tx_flags & IXGBE_TX_FLAGS_FCOE)
+- /* excluding fcoe_crc_eof for FCoE */
+- total -= sizeof(struct fcoe_crc_eof);
++ olinfo_status |= cpu_to_le32(IXGBE_ADVTXD_CC |
++ (1 << IXGBE_ADVTXD_IDX_SHIFT));
+
+- len = min(skb_headlen(skb), total);
+- while (len) {
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- size = min(len, (uint)IXGBE_MAX_DATA_PER_TXD);
++#endif
++ return olinfo_status;
++}
+
+- tx_buffer_info->length = size;
+- tx_buffer_info->mapped_as_page = false;
+- tx_buffer_info->dma = dma_map_single(dev,
+- skb->data + offset,
+- size, DMA_TO_DEVICE);
+- if (dma_mapping_error(dev, tx_buffer_info->dma))
+- goto dma_error;
+- tx_buffer_info->time_stamp = jiffies;
+- tx_buffer_info->next_to_watch = i;
++#define IXGBE_TXD_CMD (IXGBE_TXD_CMD_EOP | \
++ IXGBE_TXD_CMD_RS)
+
+- len -= size;
+- total -= size;
+- offset += size;
+- count++;
++static void ixgbe_tx_map(struct ixgbe_ring *tx_ring,
++ struct sk_buff *skb,
++ struct ixgbe_tx_buffer *first,
++ u32 tx_flags,
++ const u8 hdr_len)
++{
++ struct device *dev = tx_ring->dev;
++ struct ixgbe_tx_buffer *tx_buffer_info;
++ union ixgbe_adv_tx_desc *tx_desc;
++ dma_addr_t dma;
++ __le32 cmd_type, olinfo_status;
++ struct skb_frag_struct *frag;
++ unsigned int f = 0;
++ unsigned int data_len = skb->data_len;
++ unsigned int size = skb_headlen(skb);
++ u32 offset = 0;
++ u32 paylen = skb->len - hdr_len;
++ u16 i = tx_ring->next_to_use;
++ u16 gso_segs;
+
+- if (len) {
+- i++;
+- if (i == tx_ring->count)
+- i = 0;
++#ifdef IXGBE_FCOE
++ if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
++ if (data_len >= sizeof(struct fcoe_crc_eof)) {
++ data_len -= sizeof(struct fcoe_crc_eof);
++ } else {
++ size -= sizeof(struct fcoe_crc_eof) - data_len;
++ data_len = 0;
+ }
+ }
+
+- for (f = 0; f < nr_frags; f++) {
+- struct skb_frag_struct *frag;
++#endif
++ dma = dma_map_single(dev, skb->data, size, DMA_TO_DEVICE);
++ if (dma_mapping_error(dev, dma))
++ goto dma_error;
++
++ cmd_type = ixgbe_tx_cmd_type(tx_flags);
++ olinfo_status = ixgbe_tx_olinfo_status(tx_flags, paylen);
++
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, i);
++
++ for (;;) {
++ while (size > IXGBE_MAX_DATA_PER_TXD) {
++ tx_desc->read.buffer_addr = cpu_to_le64(dma + offset);
++ tx_desc->read.cmd_type_len =
++ cmd_type | cpu_to_le32(IXGBE_MAX_DATA_PER_TXD);
++ tx_desc->read.olinfo_status = olinfo_status;
+
+- frag = &skb_shinfo(skb)->frags[f];
+- len = min((unsigned int)frag->size, total);
+- offset = frag->page_offset;
++ offset += IXGBE_MAX_DATA_PER_TXD;
++ size -= IXGBE_MAX_DATA_PER_TXD;
+
+- while (len) {
++ tx_desc++;
+ i++;
+- if (i == tx_ring->count)
++ if (i == tx_ring->count) {
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, 0);
+ i = 0;
++ }
++ }
+
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- size = min(len, (uint)IXGBE_MAX_DATA_PER_TXD);
++ tx_buffer_info = &tx_ring->tx_buffer_info[i];
++ tx_buffer_info->length = offset + size;
++ tx_buffer_info->tx_flags = tx_flags;
++ tx_buffer_info->dma = dma;
++
++ tx_desc->read.buffer_addr = cpu_to_le64(dma + offset);
++ tx_desc->read.cmd_type_len = cmd_type | cpu_to_le32(size);
++ tx_desc->read.olinfo_status = olinfo_status;
+
+- tx_buffer_info->length = size;
+- tx_buffer_info->dma = dma_map_page(dev,
+- frag->page,
+- offset, size,
+- DMA_TO_DEVICE);
+- tx_buffer_info->mapped_as_page = true;
+- if (dma_mapping_error(dev, tx_buffer_info->dma))
+- goto dma_error;
+- tx_buffer_info->time_stamp = jiffies;
+- tx_buffer_info->next_to_watch = i;
+-
+- len -= size;
+- total -= size;
+- offset += size;
+- count++;
+- }
+- if (total == 0)
++ if (!data_len)
+ break;
++
++ frag = &skb_shinfo(skb)->frags[f];
++#ifdef IXGBE_FCOE
++ size = min_t(unsigned int, data_len, frag->size);
++#else
++ size = frag->size;
++#endif
++ data_len -= size;
++ f++;
++
++ offset = 0;
++ tx_flags |= IXGBE_TX_FLAGS_MAPPED_AS_PAGE;
++
++ dma = dma_map_page(dev, frag->page, frag->page_offset,
++ size, DMA_TO_DEVICE);
++ if (dma_mapping_error(dev, dma))
++ goto dma_error;
++
++ tx_desc++;
++ i++;
++ if (i == tx_ring->count) {
++ tx_desc = IXGBE_TX_DESC_ADV(tx_ring, 0);
++ i = 0;
++ }
+ }
+
++ tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD);
++
++ i++;
++ if (i == tx_ring->count)
++ i = 0;
++
++ tx_ring->next_to_use = i;
++
+ if (tx_flags & IXGBE_TX_FLAGS_TSO)
+ gso_segs = skb_shinfo(skb)->gso_segs;
+ #ifdef IXGBE_FCOE
+@@ -6705,93 +6796,16 @@ static int ixgbe_tx_map(struct ixgbe_ada
+ gso_segs = DIV_ROUND_UP(skb->len - hdr_len,
+ skb_shinfo(skb)->gso_size);
+ #endif /* IXGBE_FCOE */
+- bytecount += (gso_segs - 1) * hdr_len;
++ else
++ gso_segs = 1;
+
+ /* multiply data chunks by size of headers */
+- tx_ring->tx_buffer_info[i].bytecount = bytecount;
+- tx_ring->tx_buffer_info[i].gso_segs = gso_segs;
+- tx_ring->tx_buffer_info[i].skb = skb;
+- tx_ring->tx_buffer_info[first].next_to_watch = i;
+-
+- return count;
+-
+-dma_error:
+- e_dev_err("TX DMA map failed\n");
+-
+- /* clear timestamp and dma mappings for failed tx_buffer_info map */
+- tx_buffer_info->dma = 0;
+- tx_buffer_info->time_stamp = 0;
+- tx_buffer_info->next_to_watch = 0;
+- if (count)
+- count--;
+-
+- /* clear timestamp and dma mappings for remaining portion of packet */
+- while (count--) {
+- if (i == 0)
+- i += tx_ring->count;
+- i--;
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- ixgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer_info);
+- }
+-
+- return 0;
+-}
+-
+-static void ixgbe_tx_queue(struct ixgbe_ring *tx_ring,
+- int tx_flags, int count, u32 paylen, u8 hdr_len)
+-{
+- union ixgbe_adv_tx_desc *tx_desc = NULL;
+- struct ixgbe_tx_buffer *tx_buffer_info;
+- u32 olinfo_status = 0, cmd_type_len = 0;
+- unsigned int i;
+- u32 txd_cmd = IXGBE_TXD_CMD_EOP | IXGBE_TXD_CMD_RS | IXGBE_TXD_CMD_IFCS;
+-
+- cmd_type_len |= IXGBE_ADVTXD_DTYP_DATA;
+-
+- cmd_type_len |= IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
+-
+- if (tx_flags & IXGBE_TX_FLAGS_VLAN)
+- cmd_type_len |= IXGBE_ADVTXD_DCMD_VLE;
+-
+- if (tx_flags & IXGBE_TX_FLAGS_TSO) {
+- cmd_type_len |= IXGBE_ADVTXD_DCMD_TSE;
+-
+- olinfo_status |= IXGBE_TXD_POPTS_TXSM <<
+- IXGBE_ADVTXD_POPTS_SHIFT;
+-
+- /* use index 1 context for tso */
+- olinfo_status |= (1 << IXGBE_ADVTXD_IDX_SHIFT);
+- if (tx_flags & IXGBE_TX_FLAGS_IPV4)
+- olinfo_status |= IXGBE_TXD_POPTS_IXSM <<
+- IXGBE_ADVTXD_POPTS_SHIFT;
+-
+- } else if (tx_flags & IXGBE_TX_FLAGS_CSUM)
+- olinfo_status |= IXGBE_TXD_POPTS_TXSM <<
+- IXGBE_ADVTXD_POPTS_SHIFT;
+-
+- if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
+- olinfo_status |= IXGBE_ADVTXD_CC;
+- olinfo_status |= (1 << IXGBE_ADVTXD_IDX_SHIFT);
+- if (tx_flags & IXGBE_TX_FLAGS_FSO)
+- cmd_type_len |= IXGBE_ADVTXD_DCMD_TSE;
+- }
++ tx_buffer_info->bytecount = paylen + (gso_segs * hdr_len);
++ tx_buffer_info->gso_segs = gso_segs;
++ tx_buffer_info->skb = skb;
+
+- olinfo_status |= ((paylen - hdr_len) << IXGBE_ADVTXD_PAYLEN_SHIFT);
+-
+- i = tx_ring->next_to_use;
+- while (count--) {
+- tx_buffer_info = &tx_ring->tx_buffer_info[i];
+- tx_desc = IXGBE_TX_DESC_ADV(tx_ring, i);
+- tx_desc->read.buffer_addr = cpu_to_le64(tx_buffer_info->dma);
+- tx_desc->read.cmd_type_len =
+- cpu_to_le32(cmd_type_len | tx_buffer_info->length);
+- tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+- i++;
+- if (i == tx_ring->count)
+- i = 0;
+- }
+-
+- tx_desc->read.cmd_type_len |= cpu_to_le32(txd_cmd);
++ /* set the timestamp */
++ first->time_stamp = jiffies;
+
+ /*
+ * Force memory writes to complete before letting h/w
+@@ -6801,8 +6815,30 @@ static void ixgbe_tx_queue(struct ixgbe_
+ */
+ wmb();
+
+- tx_ring->next_to_use = i;
++ /* set next_to_watch value indicating a packet is present */
++ first->next_to_watch = tx_desc;
++
++ /* notify HW of packet */
+ writel(i, tx_ring->tail);
++
++ return;
++dma_error:
++ dev_err(dev, "TX DMA map failed\n");
++
++ /* clear dma mappings for failed tx_buffer_info map */
++ for (;;) {
++ tx_buffer_info = &tx_ring->tx_buffer_info[i];
++ ixgbe_unmap_tx_resource(tx_ring, tx_buffer_info);
++ if (tx_buffer_info == first)
++ break;
++ if (i == 0)
++ i = tx_ring->count;
++ i--;
++ }
++
++ dev_kfree_skb_any(skb);
++
++ tx_ring->next_to_use = i;
+ }
+
+ static void ixgbe_atr(struct ixgbe_ring *ring, struct sk_buff *skb,
+@@ -6950,12 +6986,12 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ struct ixgbe_adapter *adapter,
+ struct ixgbe_ring *tx_ring)
+ {
++ struct ixgbe_tx_buffer *first;
+ int tso;
+- u32 tx_flags = 0;
++ u32 tx_flags = 0;
+ #if PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD
+ unsigned short f;
+ #endif
+- u16 first;
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+ __be16 protocol;
+ u8 hdr_len = 0;
+@@ -7004,7 +7040,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+
+ #endif
+ /* record the location of the first descriptor for this packet */
+- first = tx_ring->next_to_use;
++ first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+
+ if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
+ #ifdef IXGBE_FCOE
+@@ -7025,22 +7061,16 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ tx_flags |= IXGBE_TX_FLAGS_TSO;
+ else if (ixgbe_tx_csum(tx_ring, skb, tx_flags, protocol))
+ tx_flags |= IXGBE_TX_FLAGS_CSUM;
+- }
+
+- count = ixgbe_tx_map(adapter, tx_ring, skb, tx_flags, first, hdr_len);
+- if (count) {
+ /* add the ATR filter if ATR is on */
+ if (test_bit(__IXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
+ ixgbe_atr(tx_ring, skb, tx_flags, protocol);
+- ixgbe_tx_queue(tx_ring, tx_flags, count, skb->len, hdr_len);
+- ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
+-
+- } else {
+- tx_ring->tx_buffer_info[first].time_stamp = 0;
+- tx_ring->next_to_use = first;
+- goto out_drop;
+ }
+
++ ixgbe_tx_map(tx_ring, skb, first, tx_flags, hdr_len);
++
++ ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
++
+ return NETDEV_TX_OK;
+
+ out_drop:
diff --git a/patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch b/patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch
new file mode 100644
index 0000000000..423636c51b
--- /dev/null
+++ b/patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch
@@ -0,0 +1,217 @@
+From 66f32a8b97f11ad73d2e7b8c192c55febb20b425 Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 29 Jun 2011 05:43:22 +0000
+Subject: [PATCH 30/32] ixgbe: Cleanup FCOE and VLAN handling in xmit_frame_ring
+Git-commit: 66f32a8b97f11ad73d2e7b8c192c55febb20b425
+Patch-mainline: v3.2-rc1~129^2~425^2
+References: bnc#714789
+
+This change is meant to further cleanup the transmit path by streamlining
+some of the VLAN and FCOE/DCB tasks in the transmit path. In addition it
+adds code to support software VLANs in the event that they are used in
+conjunction with DCB and/or FCOE.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe.h | 16 +++--
+ drivers/net/ixgbe/ixgbe_main.c | 110 ++++++++++++++++++++++++-----------------
+ 2 files changed, 74 insertions(+), 52 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe.h
++++ b/drivers/net/ixgbe/ixgbe.h
+@@ -91,14 +91,16 @@
+ #define IXGBE_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+
+ #define IXGBE_TX_FLAGS_CSUM (u32)(1)
+-#define IXGBE_TX_FLAGS_VLAN (u32)(1 << 1)
+-#define IXGBE_TX_FLAGS_TSO (u32)(1 << 2)
+-#define IXGBE_TX_FLAGS_IPV4 (u32)(1 << 3)
+-#define IXGBE_TX_FLAGS_FCOE (u32)(1 << 4)
+-#define IXGBE_TX_FLAGS_FSO (u32)(1 << 5)
+-#define IXGBE_TX_FLAGS_MAPPED_AS_PAGE (u32)(1 << 6)
++#define IXGBE_TX_FLAGS_HW_VLAN (u32)(1 << 1)
++#define IXGBE_TX_FLAGS_SW_VLAN (u32)(1 << 2)
++#define IXGBE_TX_FLAGS_TSO (u32)(1 << 3)
++#define IXGBE_TX_FLAGS_IPV4 (u32)(1 << 4)
++#define IXGBE_TX_FLAGS_FCOE (u32)(1 << 5)
++#define IXGBE_TX_FLAGS_FSO (u32)(1 << 6)
++#define IXGBE_TX_FLAGS_MAPPED_AS_PAGE (u32)(1 << 7)
+ #define IXGBE_TX_FLAGS_VLAN_MASK 0xffff0000
+-#define IXGBE_TX_FLAGS_VLAN_PRIO_MASK 0x0000e000
++#define IXGBE_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
++#define IXGBE_TX_FLAGS_VLAN_PRIO_SHIFT 29
+ #define IXGBE_TX_FLAGS_VLAN_SHIFT 16
+
+ #define IXGBE_MAX_RSC_INT_RATE 162760
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -6576,7 +6576,7 @@ static bool ixgbe_tx_csum(struct ixgbe_r
+ u32 type_tucmd = 0;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL) {
+- if (!(tx_flags & IXGBE_TX_FLAGS_VLAN))
++ if (!(tx_flags & IXGBE_TX_FLAGS_HW_VLAN))
+ return false;
+ } else {
+ u8 l4_hdr = 0;
+@@ -6641,7 +6641,7 @@ static __le32 ixgbe_tx_cmd_type(u32 tx_f
+ IXGBE_ADVTXD_DCMD_DEXT);
+
+ /* set HW vlan bit if vlan is present */
+- if (tx_flags & IXGBE_TX_FLAGS_VLAN)
++ if (tx_flags & IXGBE_TX_FLAGS_HW_VLAN)
+ cmd_type |= cpu_to_le32(IXGBE_ADVTXD_DCMD_VLE);
+
+ /* set segmentation enable bits for TSO/FSO */
+@@ -6877,8 +6877,8 @@ static void ixgbe_atr(struct ixgbe_ring
+
+ th = tcp_hdr(skb);
+
+- /* skip this packet since the socket is closing */
+- if (th->fin)
++ /* skip this packet since it is invalid or the socket is closing */
++ if (!th || th->fin)
+ return;
+
+ /* sample on all syn packets or once every atr sample count */
+@@ -6903,7 +6903,7 @@ static void ixgbe_atr(struct ixgbe_ring
+ * since src port and flex bytes occupy the same word XOR them together
+ * and write the value to source port portion of compressed dword
+ */
+- if (vlan_id)
++ if (tx_flags & (IXGBE_TX_FLAGS_SW_VLAN | IXGBE_TX_FLAGS_HW_VLAN))
+ common.port.src ^= th->dest ^ __constant_htons(ETH_P_8021Q);
+ else
+ common.port.src ^= th->dest ^ protocol;
+@@ -6993,7 +6993,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ unsigned short f;
+ #endif
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+- __be16 protocol;
++ __be16 protocol = skb->protocol;
+ u8 hdr_len = 0;
+
+ /*
+@@ -7014,59 +7014,79 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ return NETDEV_TX_BUSY;
+ }
+
+- protocol = vlan_get_protocol(skb);
+-
++ /* if we have a HW VLAN tag being added default to the HW one */
+ if (vlan_tx_tag_present(skb)) {
+- tx_flags |= vlan_tx_tag_get(skb);
+- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
+- tx_flags &= ~IXGBE_TX_FLAGS_VLAN_PRIO_MASK;
+- tx_flags |= tx_ring->dcb_tc << 13;
++ tx_flags |= vlan_tx_tag_get(skb) << IXGBE_TX_FLAGS_VLAN_SHIFT;
++ tx_flags |= IXGBE_TX_FLAGS_HW_VLAN;
++ /* else if it is a SW VLAN check the next protocol and store the tag */
++ } else if (protocol == __constant_htons(ETH_P_8021Q)) {
++ struct vlan_hdr *vhdr, _vhdr;
++ vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr);
++ if (!vhdr)
++ goto out_drop;
++
++ protocol = vhdr->h_vlan_encapsulated_proto;
++ tx_flags |= ntohs(vhdr->h_vlan_TCI) << IXGBE_TX_FLAGS_VLAN_SHIFT;
++ tx_flags |= IXGBE_TX_FLAGS_SW_VLAN;
++ }
++
++ if ((adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
++ skb->priority != TC_PRIO_CONTROL) {
++ tx_flags &= ~IXGBE_TX_FLAGS_VLAN_PRIO_MASK;
++ tx_flags |= tx_ring->dcb_tc <<
++ IXGBE_TX_FLAGS_VLAN_PRIO_SHIFT;
++ if (tx_flags & IXGBE_TX_FLAGS_SW_VLAN) {
++ struct vlan_ethhdr *vhdr;
++ if (skb_header_cloned(skb) &&
++ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
++ goto out_drop;
++ vhdr = (struct vlan_ethhdr *)skb->data;
++ vhdr->h_vlan_TCI = htons(tx_flags >>
++ IXGBE_TX_FLAGS_VLAN_SHIFT);
++ } else {
++ tx_flags |= IXGBE_TX_FLAGS_HW_VLAN;
+ }
+- tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT;
+- tx_flags |= IXGBE_TX_FLAGS_VLAN;
+- } else if (adapter->flags & IXGBE_FLAG_DCB_ENABLED &&
+- skb->priority != TC_PRIO_CONTROL) {
+- tx_flags |= tx_ring->dcb_tc << 13;
+- tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT;
+- tx_flags |= IXGBE_TX_FLAGS_VLAN;
+ }
+
+-#ifdef IXGBE_FCOE
+- /* for FCoE with DCB, we force the priority to what
+- * was specified by the switch */
+- if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED &&
+- (protocol == htons(ETH_P_FCOE)))
+- tx_flags |= IXGBE_TX_FLAGS_FCOE;
+-
+-#endif
+ /* record the location of the first descriptor for this packet */
+ first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+
+- if (tx_flags & IXGBE_TX_FLAGS_FCOE) {
+ #ifdef IXGBE_FCOE
+- /* setup tx offload for FCoE */
++ /* setup tx offload for FCoE */
++ if ((protocol == __constant_htons(ETH_P_FCOE)) &&
++ (adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) {
+ tso = ixgbe_fso(tx_ring, skb, tx_flags, &hdr_len);
+ if (tso < 0)
+ goto out_drop;
+ else if (tso)
+- tx_flags |= IXGBE_TX_FLAGS_FSO;
+-#endif /* IXGBE_FCOE */
+- } else {
+- if (protocol == htons(ETH_P_IP))
+- tx_flags |= IXGBE_TX_FLAGS_IPV4;
+- tso = ixgbe_tso(tx_ring, skb, tx_flags, protocol, &hdr_len);
+- if (tso < 0)
+- goto out_drop;
+- else if (tso)
+- tx_flags |= IXGBE_TX_FLAGS_TSO;
+- else if (ixgbe_tx_csum(tx_ring, skb, tx_flags, protocol))
+- tx_flags |= IXGBE_TX_FLAGS_CSUM;
+-
+- /* add the ATR filter if ATR is on */
+- if (test_bit(__IXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
+- ixgbe_atr(tx_ring, skb, tx_flags, protocol);
++ tx_flags |= IXGBE_TX_FLAGS_FSO |
++ IXGBE_TX_FLAGS_FCOE;
++ else
++ tx_flags |= IXGBE_TX_FLAGS_FCOE;
++
++ goto xmit_fcoe;
+ }
+
++#endif /* IXGBE_FCOE */
++ /* setup IPv4/IPv6 offloads */
++ if (protocol == __constant_htons(ETH_P_IP))
++ tx_flags |= IXGBE_TX_FLAGS_IPV4;
++
++ tso = ixgbe_tso(tx_ring, skb, tx_flags, protocol, &hdr_len);
++ if (tso < 0)
++ goto out_drop;
++ else if (tso)
++ tx_flags |= IXGBE_TX_FLAGS_TSO;
++ else if (ixgbe_tx_csum(tx_ring, skb, tx_flags, protocol))
++ tx_flags |= IXGBE_TX_FLAGS_CSUM;
++
++ /* add the ATR filter if ATR is on */
++ if (test_bit(__IXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
++ ixgbe_atr(tx_ring, skb, tx_flags, protocol);
++
++#ifdef IXGBE_FCOE
++xmit_fcoe:
++#endif /* IXGBE_FCOE */
+ ixgbe_tx_map(tx_ring, skb, first, tx_flags, hdr_len);
+
+ ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
diff --git a/patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch b/patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch
new file mode 100644
index 0000000000..903615cacc
--- /dev/null
+++ b/patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch
@@ -0,0 +1,38 @@
+From 09dca476e3201baac2dcbddc857b83aa25cbdf2e Mon Sep 17 00:00:00 2001
+From: Alexander Duyck <alexander.h.duyck@intel.com>
+Date: Wed, 20 Jul 2011 00:09:10 +0000
+Subject: [PATCH 31/32] ixgbe: Always tag VLAN tagged packets
+Git-commit: 09dca476e3201baac2dcbddc857b83aa25cbdf2e
+Patch-mainline: v3.2-rc1~129^2~363
+References: bnc#714789
+
+This change is meant to fix the patch:
+ ixgbe: Cleanup FCOE and VLAN handling in xmit_frame_ring
+And can be rolled into it if needed.
+
+What this fixes is that VLAN tagged packets were not being tagged if they
+were prio 7 which matches up with TC_PRIO_CONTROL. In order to fix it I am
+just setting things up so that we always tag VLAN tagged packets.
+
+Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_main.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -7031,7 +7031,8 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ }
+
+ if ((adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
+- skb->priority != TC_PRIO_CONTROL) {
++ ((tx_flags & (IXGBE_TX_FLAGS_HW_VLAN | IXGBE_TX_FLAGS_SW_VLAN)) ||
++ (skb->priority != TC_PRIO_CONTROL))) {
+ tx_flags &= ~IXGBE_TX_FLAGS_VLAN_PRIO_MASK;
+ tx_flags |= tx_ring->dcb_tc <<
+ IXGBE_TX_FLAGS_VLAN_PRIO_SHIFT;
diff --git a/patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch b/patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch
new file mode 100644
index 0000000000..eaa78594ec
--- /dev/null
+++ b/patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch
@@ -0,0 +1,417 @@
+From 32701dc2e616ca64e3d24b41c78671c4528671c1 Mon Sep 17 00:00:00 2001
+From: John Fastabend <john.r.fastabend@intel.com>
+Date: Tue, 27 Sep 2011 03:51:56 +0000
+Subject: [PATCH 32/32] ixgbe: fixup hard dependencies on supporting 8 traffic classes
+Git-commit: 32701dc2e616ca64e3d24b41c78671c4528671c1
+Patch-mainline: v3.2-rc1~129^2~135
+References: bnc#714789
+
+This patch correctly configures DCB when less than 8 traffic classes
+are available in hardware.
+
+Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
+Tested-by: Ross Brattain <ross.b.brattain@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Acked-by: Michal Marek <mmarek@suse.cz>
+
+---
+ drivers/net/ixgbe/ixgbe_dcb.c | 20 +++++++++---
+ drivers/net/ixgbe/ixgbe_dcb.h | 3 +
+ drivers/net/ixgbe/ixgbe_dcb_82599.c | 38 +++++++++++++++++-----
+ drivers/net/ixgbe/ixgbe_dcb_82599.h | 2 -
+ drivers/net/ixgbe/ixgbe_dcb_nl.c | 60 +++++++++++++++++++++++++++---------
+ drivers/net/ixgbe/ixgbe_main.c | 12 ++++---
+ 6 files changed, 101 insertions(+), 34 deletions(-)
+
+--- a/drivers/net/ixgbe/ixgbe_dcb.c
++++ b/drivers/net/ixgbe/ixgbe_dcb.c
+@@ -231,6 +231,18 @@ void ixgbe_dcb_unpack_prio(struct ixgbe_
+ }
+ }
+
++void ixgbe_dcb_unpack_map(struct ixgbe_dcb_config *cfg, int direction, u8 *map)
++{
++ int i, up;
++ unsigned long bitmap;
++
++ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
++ bitmap = cfg->tc_config[i].path[direction].up_to_tc_bitmap;
++ for_each_set_bit(up, &bitmap, MAX_USER_PRIORITY)
++ map[up] = i;
++ }
++}
++
+ /**
+ * ixgbe_dcb_hw_config - Config and enable DCB
+ * @hw: pointer to hardware structure
+@@ -245,10 +257,9 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw
+ u8 pfc_en;
+ u8 ptype[MAX_TRAFFIC_CLASS];
+ u8 bwgid[MAX_TRAFFIC_CLASS];
++ u8 prio_tc[MAX_TRAFFIC_CLASS];
+ u16 refill[MAX_TRAFFIC_CLASS];
+ u16 max[MAX_TRAFFIC_CLASS];
+- /* CEE does not define a priority to tc mapping so map 1:1 */
+- u8 prio_tc[MAX_TRAFFIC_CLASS] = {0, 1, 2, 3, 4, 5, 6, 7};
+
+ /* Unpack CEE standard containers */
+ ixgbe_dcb_unpack_pfc(dcb_config, &pfc_en);
+@@ -256,6 +267,7 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw
+ ixgbe_dcb_unpack_max(dcb_config, max);
+ ixgbe_dcb_unpack_bwgid(dcb_config, DCB_TX_CONFIG, bwgid);
+ ixgbe_dcb_unpack_prio(dcb_config, DCB_TX_CONFIG, ptype);
++ ixgbe_dcb_unpack_map(dcb_config, DCB_TX_CONFIG, prio_tc);
+
+ switch (hw->mac.type) {
+ case ixgbe_mac_82598EB:
+@@ -274,7 +286,7 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw
+ }
+
+ /* Helper routines to abstract HW specifics from DCB netlink ops */
+-s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en)
++s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
+ {
+ int ret = -EINVAL;
+
+@@ -284,7 +296,7 @@ s32 ixgbe_dcb_hw_pfc_config(struct ixgbe
+ break;
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X540:
+- ret = ixgbe_dcb_config_pfc_82599(hw, pfc_en);
++ ret = ixgbe_dcb_config_pfc_82599(hw, pfc_en, prio_tc);
+ break;
+ default:
+ break;
+--- a/drivers/net/ixgbe/ixgbe_dcb.h
++++ b/drivers/net/ixgbe/ixgbe_dcb.h
+@@ -145,6 +145,7 @@ void ixgbe_dcb_unpack_refill(struct ixgb
+ void ixgbe_dcb_unpack_max(struct ixgbe_dcb_config *, u16 *);
+ void ixgbe_dcb_unpack_bwgid(struct ixgbe_dcb_config *, int, u8 *);
+ void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *, int, u8 *);
++void ixgbe_dcb_unpack_map(struct ixgbe_dcb_config *, int, u8 *);
+
+ /* DCB credits calculation */
+ s32 ixgbe_dcb_calculate_tc_credits(struct ixgbe_hw *,
+@@ -154,7 +155,7 @@ s32 ixgbe_dcb_calculate_tc_credits(struc
+ s32 ixgbe_dcb_hw_ets(struct ixgbe_hw *hw, struct ieee_ets *ets, int max);
+ s32 ixgbe_dcb_hw_ets_config(struct ixgbe_hw *hw, u16 *refill, u16 *max,
+ u8 *bwg_id, u8 *prio_type, u8 *tc_prio);
+-s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en);
++s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en, u8 *tc_prio);
+ s32 ixgbe_dcb_hw_config(struct ixgbe_hw *, struct ixgbe_dcb_config *);
+
+ /* DCB definitions for credit calculation */
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
+@@ -59,9 +59,9 @@ s32 ixgbe_dcb_config_rx_arbiter_82599(st
+ reg = IXGBE_RTRPCS_RRM | IXGBE_RTRPCS_RAC | IXGBE_RTRPCS_ARBDIS;
+ IXGBE_WRITE_REG(hw, IXGBE_RTRPCS, reg);
+
+- /* Map all traffic classes to their UP, 1 to 1 */
++ /* Map all traffic classes to their UP */
+ reg = 0;
+- for (i = 0; i < MAX_TRAFFIC_CLASS; i++)
++ for (i = 0; i < MAX_USER_PRIORITY; i++)
+ reg |= (prio_tc[i] << (i * IXGBE_RTRUP2TC_UP_SHIFT));
+ IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, reg);
+
+@@ -169,9 +169,9 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ IXGBE_RTTPCS_ARBDIS;
+ IXGBE_WRITE_REG(hw, IXGBE_RTTPCS, reg);
+
+- /* Map all traffic classes to their UP, 1 to 1 */
++ /* Map all traffic classes to their UP */
+ reg = 0;
+- for (i = 0; i < MAX_TRAFFIC_CLASS; i++)
++ for (i = 0; i < MAX_USER_PRIORITY; i++)
+ reg |= (prio_tc[i] << (i * IXGBE_RTTUP2TC_UP_SHIFT));
+ IXGBE_WRITE_REG(hw, IXGBE_RTTUP2TC, reg);
+
+@@ -205,16 +205,36 @@ s32 ixgbe_dcb_config_tx_data_arbiter_825
+ * ixgbe_dcb_config_pfc_82599 - Configure priority flow control
+ * @hw: pointer to hardware structure
+ * @pfc_en: enabled pfc bitmask
++ * @prio_tc: priority to tc assignments indexed by priority
+ *
+ * Configure Priority Flow Control (PFC) for each traffic class.
+ */
+-s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en)
++s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
+ {
+- u32 i, reg;
++ u32 i, j, reg;
++ u8 max_tc = 0;
++
++ for (i = 0; i < MAX_USER_PRIORITY; i++)
++ if (prio_tc[i] > max_tc)
++ max_tc = prio_tc[i];
+
+ /* Configure PFC Tx thresholds per TC */
+ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+- int enabled = pfc_en & (1 << i);
++ int enabled = 0;
++
++ if (i > max_tc) {
++ reg = 0;
++ IXGBE_WRITE_REG(hw, IXGBE_FCRTL_82599(i), reg);
++ IXGBE_WRITE_REG(hw, IXGBE_FCRTH_82599(i), reg);
++ continue;
++ }
++
++ for (j = 0; j < MAX_USER_PRIORITY; j++) {
++ if ((prio_tc[j] == i) && (pfc_en & (1 << j))) {
++ enabled = 1;
++ break;
++ }
++ }
+
+ reg = hw->fc.low_water << 10;
+
+@@ -251,7 +271,7 @@ s32 ixgbe_dcb_config_pfc_82599(struct ix
+ reg |= IXGBE_MFLCN_RPFCE | IXGBE_MFLCN_DPF;
+
+ if (hw->mac.type == ixgbe_mac_X540) {
+- reg &= ~IXGBE_MFLCN_RPFCE_MASK;
++ reg &= ~(IXGBE_MFLCN_RPFCE_MASK | 0x10);
+ reg |= pfc_en << IXGBE_MFLCN_RPFCE_SHIFT;
+ }
+
+@@ -338,7 +358,7 @@ s32 ixgbe_dcb_hw_config_82599(struct ixg
+ bwg_id, prio_type);
+ ixgbe_dcb_config_tx_data_arbiter_82599(hw, refill, max,
+ bwg_id, prio_type, prio_tc);
+- ixgbe_dcb_config_pfc_82599(hw, pfc_en);
++ ixgbe_dcb_config_pfc_82599(hw, pfc_en, prio_tc);
+ ixgbe_dcb_config_tc_stats_82599(hw);
+
+ return 0;
+--- a/drivers/net/ixgbe/ixgbe_dcb_82599.h
++++ b/drivers/net/ixgbe/ixgbe_dcb_82599.h
+@@ -93,7 +93,7 @@
+ /* DCB hardware-specific driver APIs */
+
+ /* DCB PFC functions */
+-s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en);
++s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc);
+
+ /* DCB hw initialization */
+ s32 ixgbe_dcb_config_rx_arbiter_82599(struct ixgbe_hw *hw,
+--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
++++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
+@@ -123,7 +123,7 @@ static u8 ixgbe_dcbnl_set_state(struct n
+ return err;
+
+ if (state > 0)
+- err = ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
++ err = ixgbe_setup_tc(netdev, adapter->dcb_cfg.num_tcs.pg_tcs);
+ else
+ err = ixgbe_setup_tc(netdev, 0);
+
+@@ -158,6 +158,10 @@ static void ixgbe_dcbnl_set_pg_tc_cfg_tx
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
++ /* Abort a bad configuration */
++ if (ffs(up_map) > adapter->dcb_cfg.num_tcs.pg_tcs)
++ return;
++
+ if (prio != DCB_ATTR_VALUE_UNDEFINED)
+ adapter->temp_dcb_cfg.tc_config[tc].path[0].prio_type = prio;
+ if (bwg_id != DCB_ATTR_VALUE_UNDEFINED)
+@@ -178,6 +182,10 @@ static void ixgbe_dcbnl_set_pg_tc_cfg_tx
+ (adapter->temp_dcb_cfg.tc_config[tc].path[0].up_to_tc_bitmap !=
+ adapter->dcb_cfg.tc_config[tc].path[0].up_to_tc_bitmap))
+ adapter->dcb_set_bitmap |= BIT_PG_TX;
++
++ if (adapter->temp_dcb_cfg.tc_config[tc].path[0].up_to_tc_bitmap !=
++ adapter->dcb_cfg.tc_config[tc].path[0].up_to_tc_bitmap)
++ adapter->dcb_set_bitmap |= BIT_PFC;
+ }
+
+ static void ixgbe_dcbnl_set_pg_bwg_cfg_tx(struct net_device *netdev, int bwg_id,
+@@ -198,6 +206,10 @@ static void ixgbe_dcbnl_set_pg_tc_cfg_rx
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
++ /* Abort bad configurations */
++ if (ffs(up_map) > adapter->dcb_cfg.num_tcs.pg_tcs)
++ return;
++
+ if (prio != DCB_ATTR_VALUE_UNDEFINED)
+ adapter->temp_dcb_cfg.tc_config[tc].path[1].prio_type = prio;
+ if (bwg_id != DCB_ATTR_VALUE_UNDEFINED)
+@@ -218,6 +230,10 @@ static void ixgbe_dcbnl_set_pg_tc_cfg_rx
+ (adapter->temp_dcb_cfg.tc_config[tc].path[1].up_to_tc_bitmap !=
+ adapter->dcb_cfg.tc_config[tc].path[1].up_to_tc_bitmap))
+ adapter->dcb_set_bitmap |= BIT_PG_RX;
++
++ if (adapter->temp_dcb_cfg.tc_config[tc].path[1].up_to_tc_bitmap !=
++ adapter->dcb_cfg.tc_config[tc].path[1].up_to_tc_bitmap)
++ adapter->dcb_set_bitmap |= BIT_PFC;
+ }
+
+ static void ixgbe_dcbnl_set_pg_bwg_cfg_rx(struct net_device *netdev, int bwg_id,
+@@ -296,7 +312,7 @@ static void ixgbe_dcbnl_get_pfc_cfg(stru
+ static u8 ixgbe_dcbnl_set_all(struct net_device *netdev)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+- int ret;
++ int ret, i;
+ #ifdef IXGBE_FCOE
+ struct dcb_app app = {
+ .selector = DCB_APP_IDTYPE_ETHTYPE,
+@@ -370,18 +386,11 @@ static u8 ixgbe_dcbnl_set_all(struct net
+ }
+ #endif
+
+- if (adapter->dcb_set_bitmap & BIT_PFC) {
+- u8 pfc_en;
+- ixgbe_dcb_unpack_pfc(&adapter->dcb_cfg, &pfc_en);
+- ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc_en);
+- ret = DCB_HW_CHG;
+- }
+-
+ if (adapter->dcb_set_bitmap & (BIT_PG_TX|BIT_PG_RX)) {
+ u16 refill[MAX_TRAFFIC_CLASS], max[MAX_TRAFFIC_CLASS];
+ u8 bwg_id[MAX_TRAFFIC_CLASS], prio_type[MAX_TRAFFIC_CLASS];
+ /* Priority to TC mapping in CEE case default to 1:1 */
+- u8 prio_tc[MAX_TRAFFIC_CLASS] = {0, 1, 2, 3, 4, 5, 6, 7};
++ u8 prio_tc[MAX_USER_PRIORITY];
+ int max_frame = adapter->netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+
+ #ifdef CONFIG_FCOE
+@@ -401,9 +410,25 @@ static u8 ixgbe_dcbnl_set_all(struct net
+ DCB_TX_CONFIG, bwg_id);
+ ixgbe_dcb_unpack_prio(&adapter->dcb_cfg,
+ DCB_TX_CONFIG, prio_type);
++ ixgbe_dcb_unpack_map(&adapter->dcb_cfg,
++ DCB_TX_CONFIG, prio_tc);
+
+ ixgbe_dcb_hw_ets_config(&adapter->hw, refill, max,
+ bwg_id, prio_type, prio_tc);
++
++ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
++ netdev_set_prio_tc_map(netdev, i, prio_tc[i]);
++ }
++
++ if (adapter->dcb_set_bitmap & BIT_PFC) {
++ u8 pfc_en;
++ u8 prio_tc[MAX_USER_PRIORITY];
++
++ ixgbe_dcb_unpack_map(&adapter->dcb_cfg,
++ DCB_TX_CONFIG, prio_tc);
++ ixgbe_dcb_unpack_pfc(&adapter->dcb_cfg, &pfc_en);
++ ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc_en, prio_tc);
++ ret = DCB_HW_CHG;
+ }
+
+ if (adapter->dcb_cfg.pfc_mode_enable)
+@@ -460,10 +485,10 @@ static u8 ixgbe_dcbnl_getnumtcs(struct n
+ if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
+ switch (tcid) {
+ case DCB_NUMTCS_ATTR_PG:
+- *num = MAX_TRAFFIC_CLASS;
++ *num = adapter->dcb_cfg.num_tcs.pg_tcs;
+ break;
+ case DCB_NUMTCS_ATTR_PFC:
+- *num = MAX_TRAFFIC_CLASS;
++ *num = adapter->dcb_cfg.num_tcs.pfc_tcs;
+ break;
+ default:
+ rval = -EINVAL;
+@@ -532,7 +557,7 @@ static int ixgbe_dcbnl_ieee_getets(struc
+ if (!my_ets)
+ return -EINVAL;
+
+- ets->ets_cap = MAX_TRAFFIC_CLASS;
++ ets->ets_cap = adapter->dcb_cfg.num_tcs.pg_tcs;
+ ets->cbs = my_ets->cbs;
+ memcpy(ets->tc_tx_bw, my_ets->tc_tx_bw, sizeof(ets->tc_tx_bw));
+ memcpy(ets->tc_rx_bw, my_ets->tc_rx_bw, sizeof(ets->tc_rx_bw));
+@@ -569,6 +594,9 @@ static int ixgbe_dcbnl_ieee_setets(struc
+ if (max_tc)
+ max_tc++;
+
++ if (max_tc > adapter->dcb_cfg.num_tcs.pg_tcs)
++ return -EINVAL;
++
+ if (max_tc != netdev_get_num_tc(dev))
+ ixgbe_setup_tc(dev, max_tc);
+
+@@ -586,7 +614,7 @@ static int ixgbe_dcbnl_ieee_getpfc(struc
+ if (!my_pfc)
+ return -EINVAL;
+
+- pfc->pfc_cap = MAX_TRAFFIC_CLASS;
++ pfc->pfc_cap = adapter->dcb_cfg.num_tcs.pfc_tcs;
+ pfc->pfc_en = my_pfc->pfc_en;
+ pfc->mbc = my_pfc->mbc;
+ pfc->delay = my_pfc->delay;
+@@ -603,6 +631,7 @@ static int ixgbe_dcbnl_ieee_setpfc(struc
+ struct ieee_pfc *pfc)
+ {
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
++ u8 *prio_tc;
+
+ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ return -EINVAL;
+@@ -614,8 +643,9 @@ static int ixgbe_dcbnl_ieee_setpfc(struc
+ return -ENOMEM;
+ }
+
++ prio_tc = adapter->ixgbe_ieee_ets->prio_tc;
+ memcpy(adapter->ixgbe_ieee_pfc, pfc, sizeof(*adapter->ixgbe_ieee_pfc));
+- return ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en);
++ return ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en, prio_tc);
+ }
+
+ static int ixgbe_dcbnl_ieee_setapp(struct net_device *dev,
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -3708,8 +3708,10 @@ static void ixgbe_configure_dcb(struct i
+
+ if (adapter->ixgbe_ieee_pfc) {
+ struct ieee_pfc *pfc = adapter->ixgbe_ieee_pfc;
++ u8 *prio_tc = adapter->ixgbe_ieee_ets->prio_tc;
+
+- ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en);
++ ixgbe_dcb_hw_pfc_config(&adapter->hw, pfc->pfc_en,
++ prio_tc);
+ }
+ }
+
+@@ -4564,7 +4566,6 @@ static inline bool ixgbe_set_dcb_queues(
+ q = min((int)num_online_cpus(), per_tc_q);
+
+ for (i = 0; i < tcs; i++) {
+- netdev_set_prio_tc_map(dev, i, i);
+ netdev_set_tc_queue(dev, i, q, offset);
+ offset += q;
+ }
+@@ -5327,8 +5328,10 @@ static int __devinit ixgbe_sw_init(struc
+ tc = &adapter->dcb_cfg.tc_config[j];
+ tc->path[DCB_TX_CONFIG].bwg_id = 0;
+ tc->path[DCB_TX_CONFIG].bwg_percent = 12 + (j & 1);
++ tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 1 << j;
+ tc->path[DCB_RX_CONFIG].bwg_id = 0;
+ tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
++ tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 1 << j;
+ tc->dcb_pfc = pfc_disabled;
+ }
+
+@@ -7030,12 +7033,13 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct
+ tx_flags |= IXGBE_TX_FLAGS_SW_VLAN;
+ }
+
++ /* DCB maps skb priorities 0-7 onto 3 bit PCP of VLAN tag. */
+ if ((adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
+ ((tx_flags & (IXGBE_TX_FLAGS_HW_VLAN | IXGBE_TX_FLAGS_SW_VLAN)) ||
+ (skb->priority != TC_PRIO_CONTROL))) {
+ tx_flags &= ~IXGBE_TX_FLAGS_VLAN_PRIO_MASK;
+- tx_flags |= tx_ring->dcb_tc <<
+- IXGBE_TX_FLAGS_VLAN_PRIO_SHIFT;
++ tx_flags |= (skb->priority & 0x7) <<
++ IXGBE_TX_FLAGS_VLAN_PRIO_SHIFT;
+ if (tx_flags & IXGBE_TX_FLAGS_SW_VLAN) {
+ struct vlan_ethhdr *vhdr;
+ if (skb_header_cloned(skb) &&
diff --git a/series.conf b/series.conf
index 493fbeb865..452b58e04e 100644
--- a/series.conf
+++ b/series.conf
@@ -1395,6 +1395,37 @@
patches.drivers/tg3-fix-VLAN-tagging-regression
patches.drivers/ixgbe-add-WOL-support-for-X540
+ patches.drivers/0003-ixgbe-consolidate-MRQC-and-MTQC-handling.patch
+ patches.drivers/0004-ixgbe-dcbnl-reduce-duplicated-code-and-indentation.patch
+ patches.drivers/0005-ixgbe-configure-minimal-packet-buffers-to-support-TC.patch
+ patches.drivers/0006-ixgbe-consolidate-packet-buffer-allocation.patch
+ patches.drivers/0007-ixgbe-add-basic-support-for-setting-and-getting-nfc-.patch
+ patches.drivers/0008-ixgbe-DCB-use-existing-TX-and-RX-queues.patch
+ patches.drivers/0009-ixgbe-add-support-for-displaying-ntuple-filters-via-.patch
+ patches.drivers/0010-ixgbe-fix-flags-relating-to-perfect-filters-to-suppo.patch
+ patches.drivers/0011-ixgbe-remove-ntuple-filtering.patch
+ patches.drivers/0012-ixgbe-update-perfect-filter-framework-to-support-ret.patch
+ patches.drivers/0013-ixgbe-add-support-for-nfc-addition-and-removal-of-fi.patch
+ patches.drivers/0014-ixgbe-remove-redundant-configuration-of-tx_sample_ra.patch
+ patches.drivers/0015-ixgbe-PFC-not-cleared-on-X540-devices.patch
+ patches.drivers/0016-ixgbe-DCB-and-perfect-filters-can-coexist.patch
+ patches.drivers/0017-ixgbe-consolidate-setup-for-multiple-traffic-classes.patch
+ patches.drivers/0018-v2-ixgbe-Update-packet-buffer-reservation-to-correct.patch
+ patches.drivers/0019-ixgbe-update-P-FC-thresholds-to-account-for-X540-and.patch
+ patches.drivers/0020-ixgbe-Make-certain-to-initialize-the-fdir_perfect_lo.patch
+ patches.drivers/0021-ixgbe-complete-FCoE-initialization-from-setapp-routi.patch
+ patches.drivers/0022-ixgbe-DCB-do-not-call-set_state-from-IEEE-mode.patch
+ patches.drivers/0023-ixgbe-add-structure-for-containing-RX-TX-rings-to-q_.patch
+ patches.drivers/0024-ixgbe-Move-interrupt-related-values-out-of-ring-and-.patch
+ patches.drivers/0025-ixgbe-DCB-X540-devices-support-max-traffic-class-of-.patch
+ patches.drivers/0026-ixgbe-Move-all-values-that-deal-with-count-next_to_u.patch
+ patches.drivers/0027-ixgbe-Update-method-used-for-determining-descriptor-.patch
+ patches.drivers/0028-ixgbe-Add-one-function-that-handles-most-of-context-.patch
+ patches.drivers/0029-ixgbe-Refactor-transmit-map-and-cleanup-routines.patch
+ patches.drivers/0030-ixgbe-Cleanup-FCOE-and-VLAN-handling-in-xmit_frame_r.patch
+ patches.drivers/0031-ixgbe-Always-tag-VLAN-tagged-packets.patch
+ patches.drivers/0032-ixgbe-fixup-hard-dependencies-on-supporting-8-traffi.patch
+
patches.drivers/igb-Fix-for-Alt-MAC-Address-feature-on-82580-and-la.patch