* 'slab/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
SLAB: Fix lockdep annotation breakage
00-INDEX
- This file
-as-iosched.txt
- - Anticipatory IO scheduler
barrier.txt
- I/O Barriers
biodoc.txt
+++ /dev/null
-Anticipatory IO scheduler
--------------------------
-Nick Piggin <piggin@cyberone.com.au> 13 Sep 2003
-
-Attention! Database servers, especially those using "TCQ" disks should
-investigate performance with the 'deadline' IO scheduler. Any system with high
-disk performance requirements should do so, in fact.
-
-If you see unusual performance characteristics of your disk systems, or you
-see big performance regressions versus the deadline scheduler, please email
-me. Database users don't bother unless you're willing to test a lot of patches
-from me ;) it's a known issue.
-
-Also, users with hardware RAID controllers, doing striping, may find
-highly variable performance results with using the as-iosched. The
-as-iosched anticipatory implementation is based on the notion that a disk
-device has only one physical seeking head. A striped RAID controller
-actually has a head for each physical device in the logical RAID device.
-
-However, setting the antic_expire parameter to zero (see tunable parameters below) produces
-very similar behavior to the deadline IO scheduler.
-
-Selecting IO schedulers
------------------------
-Refer to Documentation/block/switching-sched.txt for information on
-selecting an io scheduler on a per-device basis.
-
-Anticipatory IO scheduler Policies
-----------------------------------
-The as-iosched implementation implements several layers of policies
-to determine when an IO request is dispatched to the disk controller.
-Here are the policies outlined, in order of application.
-
-1. one-way Elevator algorithm.
-
-The elevator algorithm is similar to that used in deadline scheduler, with
-the addition that it allows limited backward movement of the elevator
-(i.e. seeks backwards). A seek backwards can occur when choosing between
-two IO requests where one is behind the elevator's current position, and
-the other is in front of the elevator's position. If the seek distance to
-the request in back of the elevator is less than half the seek distance to
-the request in front of the elevator, then the request in back can be chosen.
-Backward seeks are also limited to a maximum of MAXBACK (1024*1024) sectors.
-This favors forward movement of the elevator, while allowing opportunistic
-"short" backward seeks.
-
-2. FIFO expiration times for reads and for writes.
-
-This is again very similar to the deadline IO scheduler. The expiration
-times for requests on these lists is tunable using the parameters read_expire
-and write_expire discussed below. When a read or a write expires in this way,
-the IO scheduler will interrupt its current elevator sweep or read anticipation
-to service the expired request.
-
-3. Read and write request batching
-
-A batch is a collection of read requests or a collection of write
-requests. The as scheduler alternates dispatching read and write batches
-to the driver. In the case of a read batch, the scheduler submits read
-requests to the driver as long as there are read requests to submit, and
-the read batch time limit has not been exceeded (read_batch_expire).
-The read batch time limit begins counting down only when there are
-competing write requests pending.
-
-In the case of a write batch, the scheduler submits write requests to
-the driver as long as there are write requests available, and the
-write batch time limit has not been exceeded (write_batch_expire).
-However, the length of write batches will be gradually shortened
-when read batches frequently exceed their time limit.
-
-When changing between batch types, the scheduler waits for all requests
-from the previous batch to complete before scheduling requests for the
-next batch.
-
-The read and write fifo expiration times described in policy 2 above
-are checked only when scheduling IO of a batch for the corresponding
-(read/write) type. So for example, the read FIFO timeout values are
-tested only during read batches. Likewise, the write FIFO timeout
-values are tested only during write batches. For this reason,
-it is generally not recommended for the read batch time
-to be longer than the write expiration time, nor for the write batch
-time to exceed the read expiration time (see tunable parameters below).
-
-When the IO scheduler changes from a read to a write batch,
-it begins the elevator from the request that is on the head of the
-write expiration FIFO. Likewise, when changing from a write batch to
-a read batch, the scheduler begins the elevator from the first entry
-on the read expiration FIFO.
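[Editor's note: a toy model of the alternation described in this section.
Queue depths and per-batch budgets are invented, and the budgets stand in
for the read_batch_expire/write_batch_expire timers.]

	#include <stdio.h>

	enum dir { READ_BATCH, WRITE_BATCH };

	int main(void)
	{
		int reads = 5, writes = 3;	/* pending requests */
		enum dir cur = READ_BATCH;

		while (reads || writes) {
			/* counters stand in for the batch time limits */
			int budget = (cur == READ_BATCH) ? 4 : 2;
			int *pending = (cur == READ_BATCH) ? &reads : &writes;

			while (*pending && budget--) {
				--*pending;
				printf("dispatch %s, %d left\n",
				       cur == READ_BATCH ? "read" : "write",
				       *pending);
			}
			/* in-flight requests complete here, then we switch */
			cur = (cur == READ_BATCH) ? WRITE_BATCH : READ_BATCH;
		}
		return 0;
	}
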
-
-4. Read anticipation.
-
-Read anticipation occurs only when scheduling a read batch.
-This implementation of read anticipation allows only one read request
-to be dispatched to the disk controller at a time. In
-contrast, many write requests may be dispatched to the disk controller
-at a time during a write batch. It is this characteristic that can make
-the anticipatory scheduler perform anomalously with controllers supporting
-TCQ, or with hardware striped RAID devices. Setting the antic_expire
-queue parameter (see below) to zero disables this behavior, and the
-anticipatory scheduler behaves essentially like the deadline scheduler.
-
-When read anticipation is enabled (antic_expire is not zero), reads
-are dispatched to the disk controller one at a time.
-At the end of each read request, the IO scheduler examines its next
-candidate read request from its sorted read list. If that next request
-is from the same process as the request that just completed,
-or if the next request in the queue is "very close" to the
-just completed request, it is dispatched immediately. Otherwise,
-statistics (average think time, average seek distance) on the process
-that submitted the just completed request are examined. If it seems
-likely that that process will submit another request soon, and that
-request is likely to be near the just completed request, then the IO
-scheduler will stop dispatching more read requests for up to (antic_expire)
-milliseconds, hoping that process will submit a new request near the one
-that just completed. If such a request is made, then it is dispatched
-immediately. If the antic_expire wait time expires, then the IO scheduler
-will dispatch the next read request from the sorted read queue.
-
-To decide whether an anticipatory wait is worthwhile, the scheduler
-maintains statistics for each process that can be used to compute
-mean "think time" (the time between read requests), and mean seek
-distance for that process. One observation is that these statistics
-are associated with each process, but those statistics are not associated
-with a specific IO device. So for example, if a process is doing IO
-on several file systems on separate devices, the statistics will be
-a combination of IO behavior from all those devices.
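[Editor's note: the anticipation decision just described, condensed into a
standalone C sketch. The struct fields, thresholds and helper name are
illustrative, not the kernel's.]

	#include <stdio.h>

	struct proc_stats {
		unsigned int mean_think_ms;	/* mean time between reads */
		unsigned long mean_seek;	/* mean seek distance, sectors */
	};

	/* Hold the queue idle only if the process's next read is likely
	 * to arrive within antic_expire and land near the last one. */
	static int should_anticipate(const struct proc_stats *ps,
				     unsigned int antic_expire_ms,
				     unsigned long close_thr)
	{
		return ps->mean_think_ms <= antic_expire_ms &&
		       ps->mean_seek <= close_thr;
	}

	int main(void)
	{
		struct proc_stats seq = { .mean_think_ms = 2, .mean_seek = 64 };
		struct proc_stats rnd = { .mean_think_ms = 40, .mean_seek = 900000 };

		printf("sequential reader: %s\n",
		       should_anticipate(&seq, 6, 8192) ? "wait" : "dispatch next");
		printf("random I/O:        %s\n",
		       should_anticipate(&rnd, 6, 8192) ? "wait" : "dispatch next");
		return 0;
	}
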
-
-
-Tuning the anticipatory IO scheduler
-------------------------------------
-When using 'as', the anticipatory IO scheduler, there are 5 parameters under
-/sys/block/*/queue/iosched/. All are units of milliseconds.
-
-The parameters are:
-* read_expire
- Controls how long until a read request becomes "expired". It also controls the
- interval between which expired requests are served, so set to 50, a request
- might take anywhere < 100ms to be serviced _if_ it is the next on the
- expired list. Obviously request expiration strategies won't make the disk
- go faster. The result basically equates to the timeslice a single reader
- gets in the presence of other IO. 100*((seek time / read_expire) + 1) is
- very roughly the % streaming read efficiency your disk should get with
- multiple readers.
-
-* read_batch_expire
- Controls how much time a batch of reads is given before pending writes are
- served. A higher value is more efficient. This might be set below read_expire
- if writes are to be given higher priority than reads, but reads are to be
- as efficient as possible when there are no writes. Generally though, it
- should be some multiple of read_expire.
-
-* write_expire, and
-* write_batch_expire are equivalent to the above, for writes.
-
-* antic_expire
- Controls the maximum amount of time we can anticipate a good read (one
- with a short seek distance from the most recently completed request) before
- giving up. Many other factors may cause anticipation to be stopped early,
- or some processes will not be "anticipated" at all. Should be a bit higher
- for big seek time devices though not a linear correspondence - most
- processes have only a few ms thinktime.
-
-In addition to the tunables above there is a read-only file named est_time
-which, when read, will show:
-
- - The probability of a task exiting without a cooperating task
- submitting an anticipated IO.
-
- - The current mean think time.
-
- - The seek distance used to determine if an incoming IO is better.
-
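[Editor's note: a concrete (hypothetical) session, assuming a disk named sda
with 'as' selected as its scheduler. The tunables above are ordinary sysfs
files, and per the antic_expire description writing zero turns anticipation
off, making 'as' behave much like deadline. The value printed by cat is
illustrative.]

 # cat /sys/block/sda/queue/iosched/antic_expire
 6
 # echo 0 > /sys/block/sda/queue/iosched/antic_expire
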
__u8 pad;
} nmi;
__u32 sipi_vector;
- __u32 flags; /* must be zero */
+ __u32 flags;
};
4.30 KVM_SET_VCPU_EVENTS
See KVM_GET_VCPU_EVENTS for the data structure.
+Fields that may be modified asynchronously by running VCPUs can be excluded
+from the update. These fields are nmi.pending and sipi_vector. Keep the
+corresponding bits in the flags field cleared to suppress overwriting the
+current in-kernel state. The bits are:
+
+KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
+KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
+
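[Editor's note: a minimal userspace sketch of these semantics. Error handling
is elided and vcpu_fd is assumed to be an already-open vCPU file descriptor.]

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int update_vcpu_events(int vcpu_fd)
	{
		struct kvm_vcpu_events events;

		if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
			return -1;

		/* ...adjust the fields that should be transferred... */

		/* flags = 0: neither KVM_VCPUEVENT_VALID_NMI_PENDING nor
		 * KVM_VCPUEVENT_VALID_SIPI_VECTOR is set, so the kernel
		 * keeps its current nmi.pending and sipi_vector. */
		events.flags = 0;
		return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
	}
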
5. The kvm_run structure
It takes an integer value that can be changed by writing to this
file, such as
- # cat 5 > /proc/asound/card0/pcm0p/xrun_debug
+ # echo 5 > /proc/asound/card0/pcm0p/xrun_debug
The value consists of the following bit flags:
bit 0 = Enable XRUN/jiffies debug messages
----------------
To use the vga arbiter char device, an API was implemented inside the
-libpciaccess library. One fieldd was added to struct pci_device (each device
+libpciaccess library. One field was added to struct pci_device (each device
on the system):
/* the type of resource decoded by the device */
#define _vmm_raw_spin_lock(x) do {}while(0)
#define _vmm_raw_spin_unlock(x) do {}while(0)
#else
+typedef struct {
+ volatile unsigned int lock;
+} vmm_spinlock_t;
#define _vmm_raw_spin_lock(x) \
do { \
__u32 *ia64_spinlock_ptr = (__u32 *) (x); \
#define _vmm_raw_spin_unlock(x) \
do { barrier(); \
- ((spinlock_t *)x)->raw_lock.lock = 0; } \
+ ((vmm_spinlock_t *)x)->lock = 0; } \
while (0)
#endif
-void vmm_spin_lock(spinlock_t *lock);
-void vmm_spin_unlock(spinlock_t *lock);
+void vmm_spin_lock(vmm_spinlock_t *lock);
+void vmm_spin_unlock(vmm_spinlock_t *lock);
enum {
I_TLB = 1,
D_TLB = 2
return ;
}
-void vmm_spin_lock(spinlock_t *lock)
+void vmm_spin_lock(vmm_spinlock_t *lock)
{
_vmm_raw_spin_lock(lock);
}
-void vmm_spin_unlock(spinlock_t *lock)
+void vmm_spin_unlock(vmm_spinlock_t *lock)
{
_vmm_raw_spin_unlock(lock);
}
{
u64 i, dirty_pages = 1;
u64 base_gfn = (pte&_PAGE_PPN_MASK) >> PAGE_SHIFT;
- spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
+ vmm_spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
void *dirty_bitmap = (void *)KVM_MEM_DIRTY_LOG_BASE;
dirty_pages <<= ps <= PAGE_SHIFT ? 0 : ps - PAGE_SHIFT;
list_for_each_entry(dev, &bus->devices, bus_list) {
struct dev_archdata *sd = &dev->dev.archdata;
+ /* Cardbus can call us to add new devices to a bus, so ignore
+ * those that are already fully discovered
+ */
+ if (dev->is_added)
+ continue;
+
/* Setup OF node pointer in archdata */
sd->of_node = pci_device_to_OF_node(dev);
}
EXPORT_SYMBOL(pcibios_fixup_bus);
+void __devinit pci_fixup_cardbus(struct pci_bus *bus)
+{
+ /* Now fixup devices on that bus */
+ pcibios_setup_bus_devices(bus);
+}
+
+
static int skip_isa_ioresource_align(struct pci_dev *dev)
{
if ((ppc_pci_flags & PPC_PCI_CAN_SKIP_ISA_ALIGN) &&
{
u64 rb = 0, rs = 0;
+ /*
+ * According to Book3 2.01 mtsrin is implemented as:
+ *
+ * The SLB entry specified by (RB)32:35 is loaded from register
+ * RS, as follows.
+ *
+ * SLBE Bit Source SLB Field
+ *
+ * 0:31 0x0000_0000 ESID-0:31
+ * 32:35 (RB)32:35 ESID-32:35
+ * 36 0b1 V
+ * 37:61 0x00_0000|| 0b0 VSID-0:24
+ * 62:88 (RS)37:63 VSID-25:51
+ * 89:91 (RS)33:35 Ks Kp N
+ * 92 (RS)36 L ((RS)36 must be 0b0)
+ * 93 0b0 C
+ */
+
+ dprintk("KVM MMU: mtsrin(0x%x, 0x%lx)\n", srnum, value);
+
/* ESID = srnum */
rb |= (srnum & 0xf) << 28;
/* Set the valid bit */
/* VSID = VSID */
rs |= (value & 0xfffffff) << 12;
/* flags = flags */
- rs |= ((value >> 27) & 0xf) << 9;
+ rs |= ((value >> 28) & 0x7) << 9;
kvmppc_mmu_book3s_64_slbmte(vcpu, rs, rb);
}
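[Editor's note: a quick worked check of the corrected flag extraction above,
with an invented register value. In the 32-bit segment register, Ks, Kp and N
occupy bits 30:28 counting from the LSB, and per the table the shift by 9
lands them at bits 11:9 of RS.]

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t value = 0x70000000;	/* Ks = Kp = N = 1, VSID = 0 */
		uint64_t rs = 0;

		rs |= (uint64_t)(value & 0xfffffff) << 12;	/* VSID */
		rs |= ((value >> 28) & 0x7) << 9;		/* Ks Kp N */

		printf("rs = 0x%llx\n", (unsigned long long)rs);  /* 0xe00 */
		return 0;
	}
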
__u8 reserved[31];
};
+/* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
+#define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
+#define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
+
/* for KVM_GET/SET_VCPU_EVENTS */
struct kvm_vcpu_events {
struct {
return UV_GLOBAL_GRU_MMR_BASE | offset | (pnode << uv_hub_info->m_val);
}
+static inline void uv_write_global_mmr8(int pnode, unsigned long offset,
+ unsigned char val)
+{
+ writeb(val, uv_global_mmr64_address(pnode, offset));
+}
+
+static inline unsigned char uv_read_global_mmr8(int pnode,
+ unsigned long offset)
+{
+ return readb(uv_global_mmr64_address(pnode, offset));
+}
+
/*
* Access hub local MMRs. Faster than using global space but only local MMRs
* are accessible.
}
}
+static inline unsigned long uv_scir_offset(int apicid)
+{
+ return SCIR_LOCAL_MMR_BASE | (apicid & 0x3f);
+}
+
static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
{
if (uv_cpu_hub_info(cpu)->scir.state != value) {
+ uv_write_global_mmr8(uv_cpu_to_pnode(cpu),
+ uv_cpu_hub_info(cpu)->scir.offset, value);
uv_cpu_hub_info(cpu)->scir.state = value;
- uv_write_local_mmr8(uv_cpu_hub_info(cpu)->scir.offset, value);
}
}
uv_rtc_init();
for_each_present_cpu(cpu) {
+ int apicid = per_cpu(x86_cpu_to_apicid, cpu);
+
nid = cpu_to_node(cpu);
- pnode = uv_apicid_to_pnode(per_cpu(x86_cpu_to_apicid, cpu));
+ pnode = uv_apicid_to_pnode(apicid);
blade = boot_pnode_to_blade(pnode);
lcpu = uv_blade_info[blade].nr_possible_cpus;
uv_blade_info[blade].nr_possible_cpus++;
uv_cpu_hub_info(cpu)->gnode_extra = gnode_extra;
uv_cpu_hub_info(cpu)->global_mmr_base = mmr_base;
uv_cpu_hub_info(cpu)->coherency_domain_number = sn_coherency_id;
- uv_cpu_hub_info(cpu)->scir.offset = SCIR_LOCAL_MMR_BASE + lcpu;
+ uv_cpu_hub_info(cpu)->scir.offset = uv_scir_offset(apicid);
uv_node_to_blade[nid] = blade;
uv_cpu_to_blade[cpu] = blade;
max_pnode = max(pnode, max_pnode);
- printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, "
- "lcpu %d, blade %d\n",
- cpu, per_cpu(x86_cpu_to_apicid, cpu), pnode, nid,
- lcpu, blade);
+ printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, lcpu %d, blade %d\n",
+ cpu, apicid, pnode, nid, lcpu, blade);
}
/* Add blade/pnode info for nodes without cpus */
hrtimer_cancel(&apic->lapic_timer.timer);
update_divide_count(apic);
start_apic_timer(apic);
+ apic->irr_pending = true;
}
void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu)
static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
{
struct kvm_shadow_walk_iterator iterator;
- pt_element_t gpte;
- gpa_t pte_gpa = -1;
int level;
u64 *sptep;
int need_flush = 0;
if (level == PT_PAGE_TABLE_LEVEL ||
((level == PT_DIRECTORY_LEVEL && is_large_pte(*sptep))) ||
((level == PT_PDPE_LEVEL && is_large_pte(*sptep)))) {
- struct kvm_mmu_page *sp = page_header(__pa(sptep));
-
- pte_gpa = (sp->gfn << PAGE_SHIFT);
- pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
if (is_shadow_present_pte(*sptep)) {
rmap_remove(vcpu->kvm, sptep);
if (need_flush)
kvm_flush_remote_tlbs(vcpu->kvm);
spin_unlock(&vcpu->kvm->mmu_lock);
-
- if (pte_gpa == -1)
- return;
- if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
- sizeof(pt_element_t)))
- return;
- if (is_present_gpte(gpte) && (gpte & PT_ACCESSED_MASK)) {
- if (mmu_topup_memory_caches(vcpu))
- return;
- kvm_mmu_pte_write(vcpu, pte_gpa, (const u8 *)&gpte,
- sizeof(pt_element_t), 0);
- }
}
static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
events->sipi_vector = vcpu->arch.sipi_vector;
- events->flags = 0;
+ events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR);
vcpu_put(vcpu);
}
static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
- if (events->flags)
+ if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR))
return -EINVAL;
vcpu_load(vcpu);
kvm_pic_clear_isr_ack(vcpu->kvm);
vcpu->arch.nmi_injected = events->nmi.injected;
- vcpu->arch.nmi_pending = events->nmi.pending;
+ if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
+ vcpu->arch.nmi_pending = events->nmi.pending;
kvm_x86_ops->set_nmi_mask(vcpu, events->nmi.masked);
- vcpu->arch.sipi_vector = events->sipi_vector;
+ if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
+ vcpu->arch.sipi_vector = events->sipi_vector;
vcpu_put(vcpu);
}
}
-void __init update_res(struct pci_root_info *info, size_t start,
+void __devinit update_res(struct pci_root_info *info, size_t start,
size_t end, unsigned long flags, int merge)
{
int i;
* our current implementations need. If we ever need
* more, the interface will need revisiting.
*/
- page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ page = alloc_page(gfp_mask | __GFP_ZERO);
if (!page)
goto out_free_bio;
if (bio_add_pc_page(q, bio, page, sector_size, 0) < sector_size)
/**
* blk_stack_limits - adjust queue_limits for stacked devices
- * @t: the stacking driver limits (top)
- * @b: the underlying queue limits (bottom)
+ * @t: the stacking driver limits (top device)
+ * @b: the underlying queue limits (bottom, component device)
* @offset: offset to beginning of data within component device
*
* Description:
- * Merges two queue_limit structs. Returns 0 if alignment didn't
- * change. Returns -1 if adding the bottom device caused
- * misalignment.
+ * This function is used by stacking drivers like MD and DM to ensure
+ * that all component devices have compatible block sizes and
+ * alignments. The stacking driver must provide a queue_limits
+ * struct (top) and then iteratively call the stacking function for
+ * all component (bottom) devices. The stacking function will
+ * attempt to combine the values and ensure proper alignment.
+ *
+ * Returns 0 if the top and bottom queue_limits are compatible. The
+ * top device's block sizes and alignment offsets may be adjusted to
+ * ensure alignment with the bottom device. If no compatible sizes
+ * and alignments exist, -1 is returned and the resulting top
+ * queue_limits will have the misaligned flag set to indicate that
+ * the alignment_offset is undefined.
*/
int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
sector_t offset)
{
- int ret;
-
- ret = 0;
+ sector_t alignment;
+ unsigned int top, bottom;
t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
+ alignment = queue_limit_alignment_offset(b, offset);
+
+ /* Bottom device has different alignment. Check that it is
+ * compatible with the current top alignment.
+ */
+ if (t->alignment_offset != alignment) {
+
+ top = max(t->physical_block_size, t->io_min)
+ + t->alignment_offset;
+ bottom = max(b->physical_block_size, b->io_min) + alignment;
+
+ /* Verify that top and bottom intervals line up */
+ if (max(top, bottom) & (min(top, bottom) - 1))
+ t->misaligned = 1;
+ }
+
t->logical_block_size = max(t->logical_block_size,
b->logical_block_size);
b->physical_block_size);
t->io_min = max(t->io_min, b->io_min);
+ t->io_opt = lcm(t->io_opt, b->io_opt);
+
t->no_cluster |= b->no_cluster;
t->discard_zeroes_data &= b->discard_zeroes_data;
- /* Bottom device offset aligned? */
- if (offset &&
- (offset & (b->physical_block_size - 1)) != b->alignment_offset) {
+ /* Physical block size a multiple of the logical block size? */
+ if (t->physical_block_size & (t->logical_block_size - 1)) {
+ t->physical_block_size = t->logical_block_size;
t->misaligned = 1;
- ret = -1;
}
- /*
- * Temporarily disable discard granularity. It's currently buggy
- * since we default to 0 for discard_granularity, hence this
- * "failure" will always trigger for non-zero offsets.
- */
-#if 0
- if (offset &&
- (offset & (b->discard_granularity - 1)) != b->discard_alignment) {
- t->discard_misaligned = 1;
- ret = -1;
+ /* Minimum I/O a multiple of the physical block size? */
+ if (t->io_min & (t->physical_block_size - 1)) {
+ t->io_min = t->physical_block_size;
+ t->misaligned = 1;
}
-#endif
-
- /* If top has no alignment offset, inherit from bottom */
- if (!t->alignment_offset)
- t->alignment_offset =
- b->alignment_offset & (b->physical_block_size - 1);
- if (!t->discard_alignment)
- t->discard_alignment =
- b->discard_alignment & (b->discard_granularity - 1);
-
- /* Top device aligned on logical block boundary? */
- if (t->alignment_offset & (t->logical_block_size - 1)) {
+ /* Optimal I/O a multiple of the physical block size? */
+ if (t->io_opt & (t->physical_block_size - 1)) {
+ t->io_opt = 0;
t->misaligned = 1;
- ret = -1;
}
- /* Find lcm() of optimal I/O size and granularity */
- t->io_opt = lcm(t->io_opt, b->io_opt);
- t->discard_granularity = lcm(t->discard_granularity,
- b->discard_granularity);
+ /* Find lowest common alignment_offset */
+ t->alignment_offset = lcm(t->alignment_offset, alignment)
+ & (max(t->physical_block_size, t->io_min) - 1);
- /* Verify that optimal I/O size is a multiple of io_min */
- if (t->io_min && t->io_opt % t->io_min)
- ret = -1;
+ /* Verify that new alignment_offset is on a logical block boundary */
+ if (t->alignment_offset & (t->logical_block_size - 1))
+ t->misaligned = 1;
+
+ /* Discard alignment and granularity */
+ if (b->discard_granularity) {
+ unsigned int granularity = b->discard_granularity;
+ offset &= granularity - 1;
+
+ alignment = (granularity + b->discard_alignment - offset)
+ & (granularity - 1);
+
+ if (t->discard_granularity != 0 &&
+ t->discard_alignment != alignment) {
+ top = t->discard_granularity + t->discard_alignment;
+ bottom = b->discard_granularity + alignment;
+
+ /* Verify that top and bottom intervals line up */
+ if (max(top, bottom) & (min(top, bottom) - 1))
+ t->discard_misaligned = 1;
+ }
+
+ t->max_discard_sectors = min_not_zero(t->max_discard_sectors,
+ b->max_discard_sectors);
+ t->discard_granularity = max(t->discard_granularity,
+ b->discard_granularity);
+ t->discard_alignment = lcm(t->discard_alignment, alignment) &
+ (t->discard_granularity - 1);
+ }
- return ret;
+ return t->misaligned ? -1 : 0;
}
EXPORT_SYMBOL(blk_stack_limits);
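[Editor's note on the interval test used twice above: top and bottom are each
a granularity plus its alignment offset, and in the common case where the
smaller interval carries no offset and is therefore a power of two,
max(top, bottom) & (min(top, bottom) - 1) equals max modulo min, so a
non-zero result flags intervals that cannot be tiled together. A standalone
illustration with invented sizes:]

	#include <stdio.h>

	/* hi & (lo - 1) == hi % lo when lo is a power of two; non-zero
	 * means the larger interval is not a whole multiple of the
	 * smaller one. */
	static int intervals_line_up(unsigned int a, unsigned int b)
	{
		unsigned int hi = a > b ? a : b;
		unsigned int lo = a > b ? b : a;

		return (hi & (lo - 1)) == 0;
	}

	int main(void)
	{
		/* 4K top interval over a 512-byte bottom: tiles cleanly */
		printf("4096 vs 512:  %s\n",
		       intervals_line_up(4096, 512) ? "ok" : "misaligned");
		/* 4K top vs a 4096-byte bottom with 3584 bytes of offset */
		printf("4096 vs 7680: %s\n",
		       intervals_line_up(4096, 7680) ? "ok" : "misaligned");
		return 0;
	}
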
/* Root service tree for cfq_groups */
struct cfq_rb_root grp_service_tree;
struct cfq_group root_group;
- /* Number of active cfq groups on group service tree */
- int nr_groups;
/*
* The priority currently being served
static struct cfq_rb_root *service_tree_for(struct cfq_group *cfqg,
enum wl_prio_t prio,
- enum wl_type_t type,
- struct cfq_data *cfqd)
+ enum wl_type_t type)
{
if (!cfqg)
return NULL;
__cfq_group_service_tree_add(st, cfqg);
cfqg->on_st = true;
- cfqd->nr_groups++;
st->total_weight += cfqg->weight;
}
cfq_log_cfqg(cfqd, cfqg, "del_from_rr group");
cfqg->on_st = false;
- cfqd->nr_groups--;
st->total_weight -= cfqg->weight;
if (!RB_EMPTY_NODE(&cfqg->rb_node))
cfq_rb_erase(&cfqg->rb_node, st);
#endif
service_tree = service_tree_for(cfqq->cfqg, cfqq_prio(cfqq),
- cfqq_type(cfqq), cfqd);
+ cfqq_type(cfqq));
if (cfq_class_idle(cfqq)) {
rb_key = CFQ_IDLE_DELAY;
parent = rb_last(&service_tree->rb);
struct cfq_io_context *cic;
struct cfq_queue *cfqq;
- /* Deny merge if bio and rq don't belong to same cfq group */
- if ((RQ_CFQQ(rq))->cfqg != cfq_get_cfqg(cfqd, 0))
- return false;
/*
* Disallow merge of a sync bio into an async request.
*/
{
struct cfq_rb_root *service_tree =
service_tree_for(cfqd->serving_group, cfqd->serving_prio,
- cfqd->serving_type, cfqd);
+ cfqd->serving_type);
if (!cfqd->rq_queued)
return NULL;
#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
- struct request *rq)
+ struct request *rq, bool for_preempt)
{
sector_t sdist = cfqq->seek_mean;
if (!sample_valid(cfqq->seek_samples))
sdist = CFQQ_SEEK_THR;
+ /* if seek_mean is large, using it as the closeness criterion is meaningless */
+ if (sdist > CFQQ_SEEK_THR && !for_preempt)
+ sdist = CFQQ_SEEK_THR;
+
return cfq_dist_from_last(cfqd, rq) <= sdist;
}
* will contain the closest sector.
*/
__cfqq = rb_entry(parent, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
if (blk_rq_pos(__cfqq->next_rq) < sector)
return NULL;
__cfqq = rb_entry(node, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
return NULL;
}
static enum wl_type_t cfq_choose_wl(struct cfq_data *cfqd,
- struct cfq_group *cfqg, enum wl_prio_t prio,
- bool prio_changed)
+ struct cfq_group *cfqg, enum wl_prio_t prio)
{
struct cfq_queue *queue;
int i;
unsigned long lowest_key = 0;
enum wl_type_t cur_best = SYNC_NOIDLE_WORKLOAD;
- if (prio_changed) {
- /*
- * When priorities switched, we prefer starting
- * from SYNC_NOIDLE (first choice), or just SYNC
- * over ASYNC
- */
- if (service_tree_for(cfqg, prio, cur_best, cfqd)->count)
- return cur_best;
- cur_best = SYNC_WORKLOAD;
- if (service_tree_for(cfqg, prio, cur_best, cfqd)->count)
- return cur_best;
-
- return ASYNC_WORKLOAD;
- }
-
- for (i = 0; i < 3; ++i) {
- /* otherwise, select the one with lowest rb_key */
- queue = cfq_rb_first(service_tree_for(cfqg, prio, i, cfqd));
+ for (i = 0; i <= SYNC_WORKLOAD; ++i) {
+ /* select the one with lowest rb_key */
+ queue = cfq_rb_first(service_tree_for(cfqg, prio, i));
if (queue &&
(!key_valid || time_before(queue->rb_key, lowest_key))) {
lowest_key = queue->rb_key;
static void choose_service_tree(struct cfq_data *cfqd, struct cfq_group *cfqg)
{
- enum wl_prio_t previous_prio = cfqd->serving_prio;
- bool prio_changed;
unsigned slice;
unsigned count;
struct cfq_rb_root *st;
* (SYNC, SYNC_NOIDLE, ASYNC), and to compute a workload
* expiration time
*/
- prio_changed = (cfqd->serving_prio != previous_prio);
- st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type,
- cfqd);
+ st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type);
count = st->count;
/*
- * If priority didn't change, check workload expiration,
- * and that we still have other queues ready
+ * check workload expiration, and that we still have other queues ready
*/
- if (!prio_changed && count &&
- !time_after(jiffies, cfqd->workload_expires))
+ if (count && !time_after(jiffies, cfqd->workload_expires))
return;
/* otherwise select new workload type */
cfqd->serving_type =
- cfq_choose_wl(cfqd, cfqg, cfqd->serving_prio, prio_changed);
- st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type,
- cfqd);
+ cfq_choose_wl(cfqd, cfqg, cfqd->serving_prio);
+ st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type);
count = st->count;
/*
* if this request is as-good as one we would expect from the
* current cfqq, let it preempt
*/
- if (cfq_rq_close(cfqd, cfqq, rq))
+ if (cfq_rq_close(cfqd, cfqq, rq, true))
return true;
return false;
static struct DAC960_privdata DAC960_LP_privdata = {
.HardwareType = DAC960_LP_Controller,
- .FirmwareType = DAC960_LP_Controller,
+ .FirmwareType = DAC960_V2_Controller,
.InterruptHandler = DAC960_LP_InterruptHandler,
.MemoryWindowSize = DAC960_LP_RegisterWindowSize,
};
part_stat_unlock();
}
-/*
- * Ensure we don't create aliases in VI caches
- */
-static inline void
-killalias(struct bio *bio)
-{
- struct bio_vec *bv;
- int i;
-
- if (bio_data_dir(bio) == READ)
- __bio_for_each_segment(bv, bio, i, 0) {
- flush_dcache_page(bv->bv_page);
- }
-}
-
void
aoecmd_ata_rsp(struct sk_buff *skb)
{
if (buf->flags & BUFFL_FAIL)
bio_endio(buf->bio, -EIO);
else {
- killalias(buf->bio);
+ bio_flush_dcache_pages(buf->bio);
bio_endio(buf->bio, 0);
}
mempool_free(buf, d->bufpool);
/* drbd_proc.c */
extern struct proc_dir_entry *drbd_proc;
-extern struct file_operations drbd_proc_fops;
+extern const struct file_operations drbd_proc_fops;
extern const char *drbd_conn_str(enum drbd_conns s);
extern const char *drbd_role_str(enum drbd_role s);
*/
#include <linux/module.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <asm/uaccess.h>
#include <asm/types.h>
DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5);
-static struct block_device_operations drbd_ops = {
+static const struct block_device_operations drbd_ops = {
.owner = THIS_MODULE,
.open = drbd_open,
.release = drbd_release,
{
long refresh;
- if (--rsp->count < 0) {
+ if (!rsp->count--) {
get_random_bytes(&refresh, sizeof(refresh));
rsp->state += refresh;
rsp->count = FAULT_RANDOM_REFRESH;
struct proc_dir_entry *drbd_proc;
-struct file_operations drbd_proc_fops = {
+const struct file_operations drbd_proc_fops = {
.owner = THIS_MODULE,
.open = drbd_proc_open,
.read = seq_read,
#include <asm/uaccess.h>
#include <net/sock.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <linux/fs.h>
#include <linux/file.h>
*/
#include <linux/module.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
#include <linux/mm_inline.h>
#include <linux/slab.h>
#include <linux/random.h>
-#include <linux/mm.h>
#include <linux/string.h>
#include <linux/scatterlist.h>
err = -EINVAL;
goto probe_err_2;
}
- host->dev_base = ioremap(rsc->start , rsc->end + 1);
+ host->dev_base = ioremap(rsc->start, resource_size(rsc));
if (!host->dev_base) {
printk(KERN_ERR "%s:%d ioremap fail\n",
__func__, __LINE__);
goto out;
}
}
-out_unlock:
- mutex_unlock(&rng_mutex);
out:
return ret ? : err;
+out_unlock:
+ mutex_unlock(&rng_mutex);
+ goto out;
}
if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
return;
if (!mddev->raid_disks && list_empty(&mddev->disks) &&
- !mddev->hold_active) {
+ mddev->ctime == 0 && !mddev->hold_active) {
+ /* Array is not configured at all, and not held active,
+ * so destroy it */
list_del(&mddev->all_mddevs);
if (mddev->gendisk) {
/* we did a probe so need to clean up.
mddev->barriers_work = 1;
mddev->ok_start_degraded = start_dirty_degraded;
- if (start_readonly)
+ if (start_readonly && mddev->ro == 0)
mddev->ro = 2; /* read-only, but switch on first write */
err = mddev->pers->run(mddev);
set_capacity(disk, mddev->array_sectors);
- /* If there is a partially-recovered drive we need to
- * start recovery here. If we leave it to md_check_recovery,
- * it will remove the drives and not do the right thing
- */
- if (mddev->degraded && !mddev->sync_thread) {
- int spares = 0;
- list_for_each_entry(rdev, &mddev->disks, same_set)
- if (rdev->raid_disk >= 0 &&
- !test_bit(In_sync, &rdev->flags) &&
- !test_bit(Faulty, &rdev->flags))
- /* complete an interrupted recovery */
- spares++;
- if (spares && mddev->pers->sync_request) {
- mddev->recovery = 0;
- set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
- mddev->sync_thread = md_register_thread(md_do_sync,
- mddev,
- "resync");
- if (!mddev->sync_thread) {
- printk(KERN_ERR "%s: could not start resync"
- " thread...\n",
- mdname(mddev));
- /* leave the spares where they are, it shouldn't hurt */
- mddev->recovery = 0;
- }
- }
- }
md_wakeup_thread(mddev->thread);
md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
mddev->minor_version = info->minor_version;
mddev->patch_version = info->patch_version;
mddev->persistent = !info->not_persistent;
+ /* ensure mddev_put doesn't delete this now that there
+ * is some minimal configuration.
+ */
+ mddev->ctime = get_seconds();
return 0;
}
mddev->major_version = MD_MAJOR_VERSION;
mddev->curr_resync = 2;
try_again:
- if (kthread_should_stop()) {
+ if (kthread_should_stop())
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
goto skip;
- }
for_each_mddev(mddev2, tmp) {
if (mddev2 == mddev)
continue;
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/init.h>
memcpy_toio(lp->base, init_words + 5, sizeof(init_words) - 10);
/* Fill in the station address. */
- memcpy_toio(lp->base+SA_OFFSET, dev->dev_addr,
- sizeof(dev->dev_addr));
+ memcpy_toio(lp->base+SA_OFFSET, dev->dev_addr, ETH_ALEN);
/* The Tx-block list is written as needed. We just set up the values. */
lp->tx_cmd_link = IDLELOOP + 4;
config GELIC_WIRELESS
bool "PS3 Wireless support"
+ depends on WLAN
depends on GELIC_NET
select WIRELESS_EXT
help
config GELIC_WIRELESS_OLD_PSK_INTERFACE
bool "PS3 Wireless private PSK interface (OBSOLETE)"
depends on GELIC_WIRELESS
+ select WEXT_PRIV
help
This option retains the obsolete private interface to pass
the PSK from user space programs to the driver. The PSK
u32 tx_fc; /* Tx flow control */
int link_speed;
u8 port_type;
+ u8 transceiver;
};
extern const struct ethtool_ops be_ethtool_ops;
return status;
}
+int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ u8 loopback_type, u8 enable)
+{
+ struct be_mcc_wrb *wrb;
+ struct be_cmd_req_set_lmode *req;
+ int status;
+
+ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
+
+ req = embedded_payload(wrb);
+
+ be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0,
+ OPCODE_LOWLEVEL_SET_LOOPBACK_MODE);
+
+ be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_LOWLEVEL,
+ OPCODE_LOWLEVEL_SET_LOOPBACK_MODE,
+ sizeof(*req));
+
+ req->src_port = port_num;
+ req->dest_port = port_num;
+ req->loopback_type = loopback_type;
+ req->loopback_state = enable;
+
+ status = be_mcc_notify_wait(adapter);
+err:
+ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+}
+
int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
u32 loopback_type, u32 pkt_size, u32 num_pkts, u64 pattern)
{
be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_LOWLEVEL,
OPCODE_LOWLEVEL_LOOPBACK_TEST, sizeof(*req));
+ req->hdr.timeout = 4;
req->pattern = cpu_to_le64(pattern);
req->src_port = cpu_to_le32(port_num);
#define OPCODE_LOWLEVEL_HOST_DDR_DMA 17
#define OPCODE_LOWLEVEL_LOOPBACK_TEST 18
+#define OPCODE_LOWLEVEL_SET_LOOPBACK_MODE 19
struct be_cmd_req_hdr {
u8 opcode; /* dword 0 */
u32 ticks_compl;
};
+struct be_cmd_req_set_lmode {
+ struct be_cmd_req_hdr hdr;
+ u8 src_port;
+ u8 dest_port;
+ u8 loopback_type;
+ u8 loopback_state;
+};
+
+struct be_cmd_resp_set_lmode {
+ struct be_cmd_resp_hdr resp_hdr;
+ u8 rsvd0[4];
+};
+
/********************** DDR DMA test *********************/
struct be_cmd_req_ddrdma_test {
struct be_cmd_req_hdr hdr;
u32 num_pkts, u64 pattern);
extern int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern,
u32 byte_cnt, struct be_dma_mem *cmd);
+extern int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ u8 loopback_type, u8 enable);
#define BE_MAC_LOOPBACK 0x0
#define BE_PHY_LOOPBACK 0x1
#define BE_ONE_PORT_EXT_LOOPBACK 0x2
+#define BE_NO_LOOPBACK 0xff
static void
be_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
status = be_cmd_read_port_type(adapter, adapter->port_num,
&connector);
- switch (connector) {
- case 7:
- ecmd->port = PORT_FIBRE;
- break;
- default:
- ecmd->port = PORT_TP;
- break;
+ if (!status) {
+ switch (connector) {
+ case 7:
+ ecmd->port = PORT_FIBRE;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
+ case 0:
+ ecmd->port = PORT_TP;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
+ default:
+ ecmd->port = PORT_TP;
+ ecmd->transceiver = XCVR_INTERNAL;
+ break;
+ }
+ } else {
+ ecmd->port = PORT_AUI;
+ ecmd->transceiver = XCVR_INTERNAL;
}
/* Save for future use */
adapter->link_speed = ecmd->speed;
adapter->port_type = ecmd->port;
+ adapter->transceiver = ecmd->transceiver;
} else {
ecmd->speed = adapter->link_speed;
ecmd->port = adapter->port_type;
+ ecmd->transceiver = adapter->transceiver;
}
ecmd->duplex = DUPLEX_FULL;
ecmd->autoneg = AUTONEG_DISABLE;
- ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_TP);
ecmd->phy_address = adapter->port_num;
- ecmd->transceiver = XCVR_INTERNAL;
+ switch (ecmd->port) {
+ case PORT_FIBRE:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE);
+ break;
+ case PORT_TP:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_TP);
+ break;
+ case PORT_AUI:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_AUI);
+ break;
+ }
return 0;
}
return ret;
}
+static u64 be_loopback_test(struct be_adapter *adapter, u8 loopback_type,
+ u64 *status)
+{
+ be_cmd_set_loopback(adapter, adapter->port_num,
+ loopback_type, 1);
+ *status = be_cmd_loopback_test(adapter, adapter->port_num,
+ loopback_type, 1500,
+ 2, 0xabc);
+ be_cmd_set_loopback(adapter, adapter->port_num,
+ BE_NO_LOOPBACK, 1);
+ return *status;
+}
+
static void
be_self_test(struct net_device *netdev, struct ethtool_test *test, u64 *data)
{
memset(data, 0, sizeof(u64) * ETHTOOL_TESTS_NUM);
if (test->flags & ETH_TEST_FL_OFFLINE) {
- data[0] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_MAC_LOOPBACK, 1500,
- 2, 0xabc);
- if (data[0] != 0)
+ if (be_loopback_test(adapter, BE_MAC_LOOPBACK,
+ &data[0]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
-
- data[1] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_PHY_LOOPBACK, 1500,
- 2, 0xabc);
- if (data[1] != 0)
+ }
+ if (be_loopback_test(adapter, BE_PHY_LOOPBACK,
+ &data[1]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
-
- data[2] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_ONE_PORT_EXT_LOOPBACK,
- 1500, 2, 0xabc);
- if (data[2] != 0)
+ }
+ if (be_loopback_test(adapter, BE_ONE_PORT_EXT_LOOPBACK,
+ &data[2]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
+ }
data[3] = be_test_ddr_dma(adapter);
if (data[3] != 0)
if (bp->cnic_eth_dev.drv_state & CNIC_DRV_STATE_REGD) {
bnx2x_set_iscsi_eth_mac_addr(bp, 1);
bp->cnic_flags |= BNX2X_CNIC_FLAG_MAC_SET;
+ bnx2x_init_sb(bp, bp->cnic_sb, bp->cnic_sb_mapping,
+ CNIC_SB_ID(bp));
}
mutex_unlock(&bp->cnic_mutex);
#endif
// check if any partner replies
if (best->is_individual) {
pr_warning("%s: Warning: No 802.3ad response from the link partner for any adapters in the bond\n",
- best->slave->dev->master->name);
+ best->slave ? best->slave->dev->master->name : "NULL");
}
best->is_active = 1;
static void gfar_clear_exact_match(struct net_device *dev);
static void gfar_set_mac_for_addr(struct net_device *dev, int num, u8 *addr);
static int gfar_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-u16 gfar_select_queue(struct net_device *dev, struct sk_buff *skb);
MODULE_AUTHOR("Freescale Semiconductor, Inc");
MODULE_DESCRIPTION("Gianfar Ethernet Driver");
.ndo_set_multicast_list = gfar_set_multi,
.ndo_tx_timeout = gfar_timeout,
.ndo_do_ioctl = gfar_ioctl,
- .ndo_select_queue = gfar_select_queue,
.ndo_get_stats = gfar_get_stats,
.ndo_vlan_rx_register = gfar_vlan_rx_register,
.ndo_set_mac_address = eth_mac_addr,
return priv->vlgrp || priv->rx_csum_enable;
}
-u16 gfar_select_queue(struct net_device *dev, struct sk_buff *skb)
-{
- return skb_get_queue_mapping(skb);
-}
static void free_tx_pointers(struct gfar_private *priv)
{
int i = 0;
fcb = (struct rxfcb *)skb->data;
/* Remove the FCB from the skb */
- skb_set_queue_mapping(skb, fcb->rq);
/* Remove the padded bytes, if there are any */
- if (amount_pull)
+ if (amount_pull) {
+ skb_record_rx_queue(skb, fcb->rq);
skb_pull(skb, amount_pull);
+ }
if (priv->rx_csum_enable)
gfar_rx_checksum(skb, fcb);
/* Remove the FCS from the packet length */
skb_put(skb, pkt_len);
rx_queue->stats.rx_bytes += pkt_len;
-
+ skb_record_rx_queue(skb, rx_queue->qindex);
gfar_process_frame(dev, skb, amount_pull);
} else {
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/bitops.h>
/* copy out MAC address */
- for (z = 0; z < sizeof(dev->dev_addr); z++)
+ for (z = 0; z < ETH_ALEN; z++)
dev->dev_addr[z] = inb(dev->base_addr + MACADDRPROM + z);
/* print config */
hw_dbg("Configuring Autoneg:PCS_LCTL=0x%08X\n", reg);
} else {
/* Set PCS register for forced link */
- reg |= E1000_PCS_LCTL_FSD | /* Force Speed */
- E1000_PCS_LCTL_FORCE_LINK | /* Force Link */
- E1000_PCS_LCTL_FLV_LINK_UP; /* Force link value up */
+ reg |= E1000_PCS_LCTL_FSD; /* Force Speed */
hw_dbg("Configuring Forced Link:PCS_LCTL=0x%08X\n", reg);
}
phy_data |= I82580_CFG_ENABLE_DOWNSHIFT;
ret_val = phy->ops.write_reg(hw, I82580_CFG_REG, phy_data);
- if (ret_val)
- goto out;
-
- /* Set number of link attempts before downshift */
- ret_val = phy->ops.read_reg(hw, I82580_CTRL_REG, &phy_data);
- if (ret_val)
- goto out;
- phy_data &= ~I82580_CTRL_DOWNSHIFT_MASK;
- ret_val = phy->ops.write_reg(hw, I82580_CTRL_REG, phy_data);
out:
return ret_val;
/* dual port cards only support WoL on port A from now on
* unless it was enabled in the eeprom for port B
* so exclude FUNC_1 ports from having WoL enabled */
- if (rd32(E1000_STATUS) & E1000_STATUS_FUNC_1 &&
+ if ((rd32(E1000_STATUS) & E1000_STATUS_FUNC_MASK) &&
!adapter->eeprom_wol) {
wol->supported = 0;
break;
hwm = min(((pba << 10) * 9 / 10),
((pba << 10) - 2 * adapter->max_frame_size));
- if (mac->type < e1000_82576) {
- fc->high_water = hwm & 0xFFF8; /* 8-byte granularity */
- fc->low_water = fc->high_water - 8;
- } else {
- fc->high_water = hwm & 0xFFF0; /* 16-byte granularity */
- fc->low_water = fc->high_water - 16;
- }
+ fc->high_water = hwm & 0xFFF0; /* 16-byte granularity */
+ fc->low_water = fc->high_water - 16;
fc->pause_time = 0xFFFF;
fc->send_xon = 1;
fc->current_mode = fc->requested_mode;
err = hw->mac.ops.reset_hw(hw);
if (err) {
dev_info(&pdev->dev,
- "PF still in reset state, assigning new address\n");
+ "PF still in reset state, assigning new address."
+ " Is the PF interface up?\n");
random_ether_addr(hw->mac.addr);
} else {
err = hw->mac.ops.read_mac_addr(hw);
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
+ /*
+ * pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
err = pci_enable_device_mem(pdev);
if (err) {
#include <linux/crc32.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/moduleparam.h>
/* if the ethernet address is not valid, force to 00:00:00:00:00:00 */
if (!is_valid_ether_addr(dev->perm_addr))
- memset(dev->dev_addr, 0, sizeof(dev->dev_addr));
+ memset(dev->dev_addr, 0, ETH_ALEN);
if (pcnet32_debug & NETIF_MSG_PROBE) {
printk(" %pM", dev->dev_addr);
EFX_LOG(efx, "create port\n");
+ if (phy_flash_cfg)
+ efx->phy_mode = PHY_MODE_SPECIAL;
+
/* Connect up MAC/PHY operations table */
rc = efx->type->probe_port(efx);
if (rc)
goto err;
- if (phy_flash_cfg)
- efx->phy_mode = PHY_MODE_SPECIAL;
-
/* Sanity check MAC address */
if (is_valid_ether_addr(efx->mac_address)) {
memcpy(efx->net_dev->dev_addr, efx->mac_address, ETH_ALEN);
static void falcon_remove_port(struct efx_nic *efx)
{
+ efx->phy_op->remove(efx);
efx_nic_free_buffer(efx, &efx->stats_buffer);
}
efx_writeo(efx, ®, FR_AB_XM_MGT_INT_MASK);
}
-/* Get status of XAUI link */
-static bool falcon_xaui_link_ok(struct efx_nic *efx)
+static bool falcon_xgxs_link_ok(struct efx_nic *efx)
{
efx_oword_t reg;
bool align_done, link_ok = false;
int sync_status;
- if (LOOPBACK_INTERNAL(efx))
- return true;
-
/* Read link status */
efx_reado(efx, ®, FR_AB_XX_CORE_STAT);
EFX_SET_OWORD_FIELD(reg, FRF_AB_XX_DISPERR, FFE_AB_XX_STAT_ALL_LANES);
efx_writeo(efx, ®, FR_AB_XX_CORE_STAT);
- /* If the link is up, then check the phy side of the xaui link */
- if (efx->link_state.up && link_ok)
- if (efx->mdio.mmds & (1 << MDIO_MMD_PHYXS))
- link_ok = efx_mdio_phyxgxs_lane_sync(efx);
-
return link_ok;
}
+static bool falcon_xmac_link_ok(struct efx_nic *efx)
+{
+ /*
+ * Check MAC's XGXS link status except when using XGMII loopback
+ * which bypasses the XGXS block.
+ * If possible, check PHY's XGXS link status except when using
+ * MAC loopback.
+ */
+ return (efx->loopback_mode == LOOPBACK_XGMII ||
+ falcon_xgxs_link_ok(efx)) &&
+ (!(efx->mdio.mmds & (1 << MDIO_MMD_PHYXS)) ||
+ LOOPBACK_INTERNAL(efx) ||
+ efx_mdio_phyxgxs_lane_sync(efx));
+}
+
void falcon_reconfigure_xmac_core(struct efx_nic *efx)
{
unsigned int max_frame_len;
/* Try to bring up the Falcon side of the Falcon-Phy XAUI link */
-static bool falcon_check_xaui_link_up(struct efx_nic *efx, int tries)
+static bool falcon_xmac_link_ok_retry(struct efx_nic *efx, int tries)
{
- bool mac_up = falcon_xaui_link_ok(efx);
+ bool mac_up = falcon_xmac_link_ok(efx);
if (LOOPBACK_MASK(efx) & LOOPBACKS_EXTERNAL(efx) & LOOPBACKS_WS ||
efx_phy_mode_disabled(efx->phy_mode))
falcon_reset_xaui(efx);
udelay(200);
- mac_up = falcon_xaui_link_ok(efx);
+ mac_up = falcon_xmac_link_ok(efx);
--tries;
}
static bool falcon_xmac_check_fault(struct efx_nic *efx)
{
- return !falcon_check_xaui_link_up(efx, 5);
+ return !falcon_xmac_link_ok_retry(efx, 5);
}
static int falcon_reconfigure_xmac(struct efx_nic *efx)
falcon_reconfigure_mac_wrapper(efx);
- efx->xmac_poll_required = !falcon_check_xaui_link_up(efx, 5);
+ efx->xmac_poll_required = !falcon_xmac_link_ok_retry(efx, 5);
falcon_mask_status_intr(efx, true);
return 0;
return;
falcon_mask_status_intr(efx, false);
- efx->xmac_poll_required = !falcon_check_xaui_link_up(efx, 1);
+ efx->xmac_poll_required = !falcon_xmac_link_ok_retry(efx, 1);
falcon_mask_status_intr(efx, true);
}
static int efx_mcdi_phy_probe(struct efx_nic *efx)
{
- struct efx_mcdi_phy_cfg *phy_cfg;
+ struct efx_mcdi_phy_cfg *phy_data;
+ u8 outbuf[MC_CMD_GET_LINK_OUT_LEN];
+ u32 caps;
int rc;
- /* TODO: Move phy_data initialisation to
- * phy_op->probe/remove, rather than init/fini */
- phy_cfg = kzalloc(sizeof(*phy_cfg), GFP_KERNEL);
- if (phy_cfg == NULL) {
- rc = -ENOMEM;
- goto fail_alloc;
- }
- rc = efx_mcdi_get_phy_cfg(efx, phy_cfg);
+ /* Initialise and populate phy_data */
+ phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
+ if (phy_data == NULL)
+ return -ENOMEM;
+
+ rc = efx_mcdi_get_phy_cfg(efx, phy_data);
if (rc != 0)
goto fail;
- efx->phy_type = phy_cfg->type;
+ /* Read initial link advertisement */
+ BUILD_BUG_ON(MC_CMD_GET_LINK_IN_LEN != 0);
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_LINK, NULL, 0,
+ outbuf, sizeof(outbuf), NULL);
+ if (rc)
+ goto fail;
+
+ /* Fill out nic state */
+ efx->phy_data = phy_data;
+ efx->phy_type = phy_data->type;
- efx->mdio_bus = phy_cfg->channel;
- efx->mdio.prtad = phy_cfg->port;
- efx->mdio.mmds = phy_cfg->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22);
+ efx->mdio_bus = phy_data->channel;
+ efx->mdio.prtad = phy_data->port;
+ efx->mdio.mmds = phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22);
efx->mdio.mode_support = 0;
- if (phy_cfg->mmd_mask & (1 << MC_CMD_MMD_CLAUSE22))
+ if (phy_data->mmd_mask & (1 << MC_CMD_MMD_CLAUSE22))
efx->mdio.mode_support |= MDIO_SUPPORTS_C22;
- if (phy_cfg->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22))
+ if (phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22))
efx->mdio.mode_support |= MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
+ caps = MCDI_DWORD(outbuf, GET_LINK_OUT_CAP);
+ if (caps & (1 << MC_CMD_PHY_CAP_AN_LBN))
+ efx->link_advertising =
+ mcdi_to_ethtool_cap(phy_data->media, caps);
+ else
+ phy_data->forced_cap = caps;
+
/* Assert that we can map efx -> mcdi loopback modes */
BUILD_BUG_ON(LOOPBACK_NONE != MC_CMD_LOOPBACK_NONE);
BUILD_BUG_ON(LOOPBACK_DATA != MC_CMD_LOOPBACK_DATA);
* but by convention we don't */
efx->loopback_modes &= ~(1 << LOOPBACK_NONE);
- kfree(phy_cfg);
-
- return 0;
-
-fail:
- kfree(phy_cfg);
-fail_alloc:
- return rc;
-}
-
-static int efx_mcdi_phy_init(struct efx_nic *efx)
-{
- struct efx_mcdi_phy_cfg *phy_data;
- u8 outbuf[MC_CMD_GET_LINK_OUT_LEN];
- u32 caps;
- int rc;
-
- phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
- if (phy_data == NULL)
- return -ENOMEM;
-
- rc = efx_mcdi_get_phy_cfg(efx, phy_data);
- if (rc != 0)
- goto fail;
-
- efx->phy_data = phy_data;
-
- BUILD_BUG_ON(MC_CMD_GET_LINK_IN_LEN != 0);
- rc = efx_mcdi_rpc(efx, MC_CMD_GET_LINK, NULL, 0,
- outbuf, sizeof(outbuf), NULL);
- if (rc)
- goto fail;
-
- caps = MCDI_DWORD(outbuf, GET_LINK_OUT_CAP);
- if (caps & (1 << MC_CMD_PHY_CAP_AN_LBN))
- efx->link_advertising =
- mcdi_to_ethtool_cap(phy_data->media, caps);
- else
- phy_data->forced_cap = caps;
-
return 0;
fail:
return !efx_link_state_equal(&efx->link_state, &old_state);
}
-static void efx_mcdi_phy_fini(struct efx_nic *efx)
+static void efx_mcdi_phy_remove(struct efx_nic *efx)
{
struct efx_mcdi_phy_data *phy_data = efx->phy_data;
struct efx_phy_operations efx_mcdi_phy_ops = {
.probe = efx_mcdi_phy_probe,
- .init = efx_mcdi_phy_init,
+ .init = efx_port_dummy_op_int,
.reconfigure = efx_mcdi_phy_reconfigure,
.poll = efx_mcdi_phy_poll,
- .fini = efx_mcdi_phy_fini,
+ .fini = efx_port_dummy_op_void,
+ .remove = efx_mcdi_phy_remove,
.get_settings = efx_mcdi_phy_get_settings,
.set_settings = efx_mcdi_phy_set_settings,
.run_tests = NULL,
int (*probe) (struct efx_nic *efx);
int (*init) (struct efx_nic *efx);
void (*fini) (struct efx_nic *efx);
+ void (*remove) (struct efx_nic *efx);
int (*reconfigure) (struct efx_nic *efx);
bool (*poll) (struct efx_nic *efx);
void (*get_settings) (struct efx_nic *efx,
EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_SOFT_EVT_EN, 1);
/* Prefetch threshold 2 => fetch when descriptor cache half empty */
EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_PREF_THRESHOLD, 2);
+ /* Disable hardware watchdog which can misfire */
+ EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_PREF_WD_TMR, 0x3fffff);
/* Squash TX of packets of 16 bytes or less */
if (efx_nic_rev(efx) >= EFX_REV_FALCON_B0)
EFX_SET_OWORD_FIELD(temp, FRF_BZ_TX_FLUSH_MIN_LEN_EN, 1);
#define PCS_FW_HEARTBEAT_REG 0xd7ee
#define PCS_FW_HEARTB_LBN 0
#define PCS_FW_HEARTB_WIDTH 8
+#define PCS_FW_PRODUCT_CODE_1 0xd7f0
+#define PCS_FW_VERSION_1 0xd7f3
+#define PCS_FW_BUILD_1 0xd7f6
#define PCS_UC8051_STATUS_REG 0xd7fd
#define PCS_UC_STATUS_LBN 0
#define PCS_UC_STATUS_WIDTH 8
struct qt202x_phy_data {
enum efx_phy_mode phy_mode;
+ bool bug17190_in_bad_state;
+ unsigned long bug17190_timer;
+ u32 firmware_ver;
};
#define QT2022C2_MAX_RESET_TIME 500
#define QT2022C2_RESET_WAIT 10
-static int qt2025c_wait_reset(struct efx_nic *efx)
+#define QT2025C_MAX_HEARTB_TIME (5 * HZ)
+#define QT2025C_HEARTB_WAIT 100
+#define QT2025C_MAX_FWSTART_TIME (25 * HZ / 10)
+#define QT2025C_FWSTART_WAIT 100
+
+#define BUG17190_INTERVAL (2 * HZ)
+
+static int qt2025c_wait_heartbeat(struct efx_nic *efx)
{
- unsigned long timeout = jiffies + 10 * HZ;
+ unsigned long timeout = jiffies + QT2025C_MAX_HEARTB_TIME;
int reg, old_counter = 0;
/* Wait for firmware heartbeat to start */
old_counter = counter;
else if (counter != old_counter)
break;
- if (time_after(jiffies, timeout))
+ if (time_after(jiffies, timeout)) {
+ /* Some cables have EEPROMs that conflict with the
+ * PHY's on-board EEPROM so it cannot load firmware */
+ EFX_ERR(efx, "If an SFP+ direct attach cable is"
+ " connected, please check that it complies"
+ " with the SFP+ specification\n");
return -ETIMEDOUT;
- msleep(10);
+ }
+ msleep(QT2025C_HEARTB_WAIT);
}
+ return 0;
+}
+
+static int qt2025c_wait_fw_status_good(struct efx_nic *efx)
+{
+ unsigned long timeout = jiffies + QT2025C_MAX_FWSTART_TIME;
+ int reg;
+
/* Wait for firmware status to look good */
for (;;) {
reg = efx_mdio_read(efx, MDIO_MMD_PCS, PCS_UC8051_STATUS_REG);
break;
if (time_after(jiffies, timeout))
return -ETIMEDOUT;
+ msleep(QT2025C_FWSTART_WAIT);
+ }
+
+ return 0;
+}
+
+static void qt2025c_restart_firmware(struct efx_nic *efx)
+{
+ /* Restart microcontroller execution of firmware from RAM */
+ efx_mdio_write(efx, 3, 0xe854, 0x00c0);
+ efx_mdio_write(efx, 3, 0xe854, 0x0040);
+ msleep(50);
+}
+
+static int qt2025c_wait_reset(struct efx_nic *efx)
+{
+ int rc;
+
+ rc = qt2025c_wait_heartbeat(efx);
+ if (rc != 0)
+ return rc;
+
+ rc = qt2025c_wait_fw_status_good(efx);
+ if (rc == -ETIMEDOUT) {
+ /* Bug 17689: occasionally heartbeat starts but firmware status
+ * code never progresses beyond 0x00. Try again, once, after
+ * restarting execution of the firmware image. */
+ EFX_LOG(efx, "bashing QT2025C microcontroller\n");
+ qt2025c_restart_firmware(efx);
+ rc = qt2025c_wait_heartbeat(efx);
+ if (rc != 0)
+ return rc;
+ rc = qt2025c_wait_fw_status_good(efx);
+ }
+
+ return rc;
+}
+
+static void qt2025c_firmware_id(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+ u8 firmware_id[9];
+ size_t i;
+
+ for (i = 0; i < sizeof(firmware_id); i++)
+ firmware_id[i] = efx_mdio_read(efx, MDIO_MMD_PCS,
+ PCS_FW_PRODUCT_CODE_1 + i);
+ EFX_INFO(efx, "QT2025C firmware %xr%d v%d.%d.%d.%d [20%02d-%02d-%02d]\n",
+ (firmware_id[0] << 8) | firmware_id[1], firmware_id[2],
+ firmware_id[3] >> 4, firmware_id[3] & 0xf,
+ firmware_id[4], firmware_id[5],
+ firmware_id[6], firmware_id[7], firmware_id[8]);
+ phy_data->firmware_ver = ((firmware_id[3] & 0xf0) << 20) |
+ ((firmware_id[3] & 0x0f) << 16) |
+ (firmware_id[4] << 8) | firmware_id[5];
+}
+
+static void qt2025c_bug17190_workaround(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+
+ /* The PHY can get stuck in a state where it reports PHY_XS and PMA/PMD
+ * layers up, but PCS down (no block_lock). If we notice this state
+ * persisting for a couple of seconds, we switch PMA/PMD loopback
+ * briefly on and then off again, which is normally sufficient to
+ * recover it.
+ */
+ if (efx->link_state.up ||
+ !efx_mdio_links_ok(efx, MDIO_DEVS_PMAPMD | MDIO_DEVS_PHYXS)) {
+ phy_data->bug17190_in_bad_state = false;
+ return;
+ }
+
+ if (!phy_data->bug17190_in_bad_state) {
+ phy_data->bug17190_in_bad_state = true;
+ phy_data->bug17190_timer = jiffies + BUG17190_INTERVAL;
+ return;
+ }
+
+ if (time_after_eq(jiffies, phy_data->bug17190_timer)) {
+ EFX_LOG(efx, "bashing QT2025C PMA/PMD\n");
+ efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_CTRL1,
+ MDIO_PMA_CTRL1_LOOPBACK, true);
msleep(100);
+ efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_CTRL1,
+ MDIO_PMA_CTRL1_LOOPBACK, false);
+ phy_data->bug17190_timer = jiffies + BUG17190_INTERVAL;
+ }
+}
+
+static int qt2025c_select_phy_mode(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+ struct falcon_board *board = falcon_board(efx);
+ int reg, rc, i;
+ uint16_t phy_op_mode;
+
+ /* Only 2.0.1.0+ PHY firmware supports the more optimal SFP+
+ * Self-Configure mode. Don't attempt any switching if we encounter
+ * older firmware. */
+ if (phy_data->firmware_ver < 0x02000100)
+ return 0;
+
+ /* In general we will get optimal behaviour in "SFP+ Self-Configure"
+ * mode; however, that powers down most of the PHY when no module is
+ * present, so we must use a different mode (any fixed mode will do)
+ * to be sure that loopbacks will work. */
+ phy_op_mode = (efx->loopback_mode == LOOPBACK_NONE) ? 0x0038 : 0x0020;
+
+ /* Only change mode if really necessary */
+ reg = efx_mdio_read(efx, 1, 0xc319);
+ if ((reg & 0x0038) == phy_op_mode)
+ return 0;
+ EFX_LOG(efx, "Switching PHY to mode 0x%04x\n", phy_op_mode);
+
+ /* This sequence replicates the register writes configured in the boot
+ * EEPROM (including the differences between board revisions), except
+ * that the operating mode is changed, and the PHY is prevented from
+ * unnecessarily reloading the main firmware image again. */
+ efx_mdio_write(efx, 1, 0xc300, 0x0000);
+ /* (Note: this portion of the boot EEPROM sequence, which bit-bashes 9
+ * STOPs onto the firmware/module I2C bus to reset it, varies across
+ * board revisions, as the bus is connected to different GPIO/LED
+ * outputs on the PHY.) */
+ if (board->major == 0 && board->minor < 2) {
+ efx_mdio_write(efx, 1, 0xc303, 0x4498);
+ for (i = 0; i < 9; i++) {
+ efx_mdio_write(efx, 1, 0xc303, 0x4488);
+ efx_mdio_write(efx, 1, 0xc303, 0x4480);
+ efx_mdio_write(efx, 1, 0xc303, 0x4490);
+ efx_mdio_write(efx, 1, 0xc303, 0x4498);
+ }
+ } else {
+ efx_mdio_write(efx, 1, 0xc303, 0x0920);
+ efx_mdio_write(efx, 1, 0xd008, 0x0004);
+ for (i = 0; i < 9; i++) {
+ efx_mdio_write(efx, 1, 0xc303, 0x0900);
+ efx_mdio_write(efx, 1, 0xd008, 0x0005);
+ efx_mdio_write(efx, 1, 0xc303, 0x0920);
+ efx_mdio_write(efx, 1, 0xd008, 0x0004);
+ }
+ efx_mdio_write(efx, 1, 0xc303, 0x4900);
+ }
+ efx_mdio_write(efx, 1, 0xc303, 0x4900);
+ efx_mdio_write(efx, 1, 0xc302, 0x0004);
+ efx_mdio_write(efx, 1, 0xc316, 0x0013);
+ efx_mdio_write(efx, 1, 0xc318, 0x0054);
+ efx_mdio_write(efx, 1, 0xc319, phy_op_mode);
+ efx_mdio_write(efx, 1, 0xc31a, 0x0098);
+ efx_mdio_write(efx, 3, 0x0026, 0x0e00);
+ efx_mdio_write(efx, 3, 0x0027, 0x0013);
+ efx_mdio_write(efx, 3, 0x0028, 0xa528);
+ efx_mdio_write(efx, 1, 0xd006, 0x000a);
+ efx_mdio_write(efx, 1, 0xd007, 0x0009);
+ efx_mdio_write(efx, 1, 0xd008, 0x0004);
+ /* This additional write is not present in the boot EEPROM. It
+ * prevents the PHY's internal boot ROM doing another pointless (and
+ * slow) reload of the firmware image (the microcontroller's code
+ * memory is not affected by the microcontroller reset). */
+ efx_mdio_write(efx, 1, 0xc317, 0x00ff);
+ efx_mdio_write(efx, 1, 0xc300, 0x0002);
+ msleep(20);
+
+ /* Restart microcontroller execution of firmware from RAM */
+ qt2025c_restart_firmware(efx);
+
+ /* Wait for the microcontroller to be ready again */
+ rc = qt2025c_wait_reset(efx);
+ if (rc < 0) {
+ EFX_ERR(efx, "PHY microcontroller reset during mode switch "
+ "timed out\n");
+ return rc;
+ }
+
+ return 0;
+}
static int qt202x_phy_probe(struct efx_nic *efx)
{
+ struct qt202x_phy_data *phy_data;
+
+ phy_data = kzalloc(sizeof(struct qt202x_phy_data), GFP_KERNEL);
+ if (!phy_data)
+ return -ENOMEM;
+ efx->phy_data = phy_data;
+ phy_data->phy_mode = efx->phy_mode;
+ phy_data->bug17190_in_bad_state = false;
+ phy_data->bug17190_timer = 0;
+
efx->mdio.mmds = QT202X_REQUIRED_DEVS;
efx->mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
efx->loopback_modes = QT202X_LOOPBACKS | FALCON_XMAC_LOOPBACKS;
static int qt202x_phy_init(struct efx_nic *efx)
{
- struct qt202x_phy_data *phy_data;
u32 devid;
int rc;
return rc;
}
- phy_data = kzalloc(sizeof(struct qt202x_phy_data), GFP_KERNEL);
- if (!phy_data)
- return -ENOMEM;
- efx->phy_data = phy_data;
-
devid = efx_mdio_read_id(efx, MDIO_MMD_PHYXS);
EFX_INFO(efx, "PHY ID reg %x (OUI %06x model %02x revision %x)\n",
devid, efx_mdio_id_oui(devid), efx_mdio_id_model(devid),
efx_mdio_id_rev(devid));
- phy_data->phy_mode = efx->phy_mode;
+ if (efx->phy_type == PHY_TYPE_QT2025C)
+ qt2025c_firmware_id(efx);
+
return 0;
}
efx->link_state.fd = true;
efx->link_state.fc = efx->wanted_fc;
+ if (efx->phy_type == PHY_TYPE_QT2025C)
+ qt2025c_bug17190_workaround(efx);
+
return efx->link_state.up != was_up;
}
struct qt202x_phy_data *phy_data = efx->phy_data;
if (efx->phy_type == PHY_TYPE_QT2025C) {
+ int rc = qt2025c_select_phy_mode(efx);
+ if (rc)
+ return rc;
+
/* There are several different register bits which can
* disable TX (and save power) on direct-attach cables
* or optical transceivers, varying somewhat between
mdio45_ethtool_gset(&efx->mdio, ecmd);
}
-static void qt202x_phy_fini(struct efx_nic *efx)
+static void qt202x_phy_remove(struct efx_nic *efx)
{
/* Free the context block */
kfree(efx->phy_data);
.init = qt202x_phy_init,
.reconfigure = qt202x_phy_reconfigure,
.poll = qt202x_phy_poll,
- .fini = qt202x_phy_fini,
+ .fini = efx_port_dummy_op_void,
+ .remove = qt202x_phy_remove,
.get_settings = qt202x_phy_get_settings,
.set_settings = efx_mdio_set_settings,
};
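
[Editor's note] The .fini/.remove split in the ops table above runs through the whole series: probe/remove own software state (the phy_data allocation), init/fini own hardware state, and a PHY with no hardware teardown plugs the shared efx_port_dummy_op_void into .fini. A rough sketch of the contract (struct left opaque; not the driver's exact definitions):

struct efx_nic;					/* opaque here */

struct efx_phy_operations_sketch {
	int  (*probe)(struct efx_nic *efx);	/* allocate phy_data */
	int  (*init)(struct efx_nic *efx);	/* bring hardware up */
	void (*fini)(struct efx_nic *efx);	/* quiesce hardware only */
	void (*remove)(struct efx_nic *efx);	/* free phy_data */
};

/* cf. efx_port_dummy_op_void: shared no-op for PHYs with no hw fini */
static inline void phy_dummy_op_void(struct efx_nic *efx) { }

This is why siena_remove_port() gains the efx->phy_op->remove(efx) call just below: the private data is freed exactly once, at remove time, instead of at every fini.
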
void siena_remove_port(struct efx_nic *efx)
{
+ efx->phy_op->remove(efx);
efx_nic_free_buffer(efx, &efx->stats_buffer);
}
int rc;
rtnl_lock();
- efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_PMA_10GBT_TXPWR,
- MDIO_PMA_10GBT_TXPWR_SHORT,
- count != 0 && *buf != '0');
- rc = efx_reconfigure_port(efx);
+ if (efx->state != STATE_RUNNING) {
+ rc = -EBUSY;
+ } else {
+ efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_PMA_10GBT_TXPWR,
+ MDIO_PMA_10GBT_TXPWR_SHORT,
+ count != 0 && *buf != '0');
+ rc = efx_reconfigure_port(efx);
+ }
rtnl_unlock();
return rc < 0 ? rc : (ssize_t)count;
return 0;
}
-static int sfx7101_phy_probe(struct efx_nic *efx)
+static int tenxpress_phy_probe(struct efx_nic *efx)
{
- efx->mdio.mmds = TENXPRESS_REQUIRED_DEVS;
- efx->mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
- efx->loopback_modes = SFX7101_LOOPBACKS | FALCON_XMAC_LOOPBACKS;
- return 0;
-}
+ struct tenxpress_phy_data *phy_data;
+ int rc;
+
+ /* Allocate phy private storage */
+ phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
+ if (!phy_data)
+ return -ENOMEM;
+ efx->phy_data = phy_data;
+ phy_data->phy_mode = efx->phy_mode;
+
+ /* Create any special files */
+ if (efx->phy_type == PHY_TYPE_SFT9001B) {
+ rc = device_create_file(&efx->pci_dev->dev,
+ &dev_attr_phy_short_reach);
+ if (rc)
+ goto fail;
+ }
+
+ if (efx->phy_type == PHY_TYPE_SFX7101) {
+ efx->mdio.mmds = TENXPRESS_REQUIRED_DEVS;
+ efx->mdio.mode_support = MDIO_SUPPORTS_C45;
+
+ efx->loopback_modes = SFX7101_LOOPBACKS | FALCON_XMAC_LOOPBACKS;
+
+ efx->link_advertising = (ADVERTISED_TP | ADVERTISED_Autoneg |
+ ADVERTISED_10000baseT_Full);
+ } else {
+ efx->mdio.mmds = TENXPRESS_REQUIRED_DEVS;
+ efx->mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
+
+ efx->loopback_modes = (SFT9001_LOOPBACKS |
+ FALCON_XMAC_LOOPBACKS |
+ FALCON_GMAC_LOOPBACKS);
+
+ efx->link_advertising = (ADVERTISED_TP | ADVERTISED_Autoneg |
+ ADVERTISED_10000baseT_Full |
+ ADVERTISED_1000baseT_Full |
+ ADVERTISED_100baseT_Full);
+ }
-static int sft9001_phy_probe(struct efx_nic *efx)
-{
- efx->mdio.mmds = TENXPRESS_REQUIRED_DEVS;
- efx->mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
- efx->loopback_modes = (SFT9001_LOOPBACKS | FALCON_XMAC_LOOPBACKS |
- FALCON_GMAC_LOOPBACKS);
return 0;
+
+fail:
+ kfree(efx->phy_data);
+ efx->phy_data = NULL;
+ return rc;
}
static int tenxpress_phy_init(struct efx_nic *efx)
{
- struct tenxpress_phy_data *phy_data;
- int rc = 0;
+ int rc;
falcon_board(efx)->type->init_phy(efx);
- phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
- if (!phy_data)
- return -ENOMEM;
- efx->phy_data = phy_data;
- phy_data->phy_mode = efx->phy_mode;
-
if (!(efx->phy_mode & PHY_MODE_SPECIAL)) {
if (efx->phy_type == PHY_TYPE_SFT9001A) {
int reg;
rc = efx_mdio_wait_reset_mmds(efx, TENXPRESS_REQUIRED_DEVS);
if (rc < 0)
- goto fail;
+ return rc;
rc = efx_mdio_check_mmds(efx, TENXPRESS_REQUIRED_DEVS, 0);
if (rc < 0)
- goto fail;
+ return rc;
}
rc = tenxpress_init(efx);
if (rc < 0)
- goto fail;
+ return rc;
- /* Initialise advertising flags */
- efx->link_advertising = (ADVERTISED_TP | ADVERTISED_Autoneg |
- ADVERTISED_10000baseT_Full);
- if (efx->phy_type != PHY_TYPE_SFX7101)
- efx->link_advertising |= (ADVERTISED_1000baseT_Full |
- ADVERTISED_100baseT_Full);
+ /* Reinitialise flow control settings */
efx_link_set_wanted_fc(efx, efx->wanted_fc);
efx_mdio_an_reconfigure(efx);
- if (efx->phy_type == PHY_TYPE_SFT9001B) {
- rc = device_create_file(&efx->pci_dev->dev,
- &dev_attr_phy_short_reach);
- if (rc)
- goto fail;
- }
-
schedule_timeout_uninterruptible(HZ / 5); /* 200ms */
/* Let XGXS and SerDes out of reset */
falcon_reset_xaui(efx);
return 0;
-
- fail:
- kfree(efx->phy_data);
- efx->phy_data = NULL;
- return rc;
}
/* Perform a "special software reset" on the PHY. The caller is
return !efx_link_state_equal(&efx->link_state, &old_state);
}
-static void tenxpress_phy_fini(struct efx_nic *efx)
+static void sfx7101_phy_fini(struct efx_nic *efx)
{
int reg;
+ /* Power down the LNPGA */
+ reg = (1 << PMA_PMD_LNPGA_POWERDOWN_LBN);
+ efx_mdio_write(efx, MDIO_MMD_PMAPMD, PMA_PMD_XCONTROL_REG, reg);
+
+ /* Waiting here ensures that the board fini, which can turn
+ * off the power to the PHY, won't get run until the LNPGA
+ * powerdown has been given long enough to complete. */
+ schedule_timeout_uninterruptible(LNPGA_PDOWN_WAIT); /* 200 ms */
+}
+
+static void tenxpress_phy_remove(struct efx_nic *efx)
+{
if (efx->phy_type == PHY_TYPE_SFT9001B)
device_remove_file(&efx->pci_dev->dev,
&dev_attr_phy_short_reach);
- if (efx->phy_type == PHY_TYPE_SFX7101) {
- /* Power down the LNPGA */
- reg = (1 << PMA_PMD_LNPGA_POWERDOWN_LBN);
- efx_mdio_write(efx, MDIO_MMD_PMAPMD, PMA_PMD_XCONTROL_REG, reg);
-
- /* Waiting here ensures that the board fini, which can turn
- * off the power to the PHY, won't get run until the LNPGA
- * powerdown has been given long enough to complete. */
- schedule_timeout_uninterruptible(LNPGA_PDOWN_WAIT); /* 200 ms */
- }
-
kfree(efx->phy_data);
efx->phy_data = NULL;
}
}
struct efx_phy_operations falcon_sfx7101_phy_ops = {
- .probe = sfx7101_phy_probe,
+ .probe = tenxpress_phy_probe,
.init = tenxpress_phy_init,
.reconfigure = tenxpress_phy_reconfigure,
.poll = tenxpress_phy_poll,
- .fini = tenxpress_phy_fini,
+ .fini = sfx7101_phy_fini,
+ .remove = tenxpress_phy_remove,
.get_settings = tenxpress_get_settings,
.set_settings = tenxpress_set_settings,
.set_npage_adv = sfx7101_set_npage_adv,
};
struct efx_phy_operations falcon_sft9001_phy_ops = {
- .probe = sft9001_phy_probe,
+ .probe = tenxpress_phy_probe,
.init = tenxpress_phy_init,
.reconfigure = tenxpress_phy_reconfigure,
.poll = tenxpress_phy_poll,
- .fini = tenxpress_phy_fini,
+ .fini = efx_port_dummy_op_void,
+ .remove = tenxpress_phy_remove,
.get_settings = tenxpress_get_settings,
.set_settings = tenxpress_set_settings,
.set_npage_adv = sft9001_set_npage_adv,
EFX_TXQ_MASK];
efx_tsoh_free(tx_queue, buffer);
EFX_BUG_ON_PARANOID(buffer->skb);
- buffer->len = 0;
- buffer->continuation = true;
if (buffer->unmap_len) {
unmap_addr = (buffer->dma_addr + buffer->len -
buffer->unmap_len);
PCI_DMA_TODEVICE);
buffer->unmap_len = 0;
}
+ buffer->len = 0;
+ buffer->continuation = true;
}
}
if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
wake_up_interruptible_sync(sk->sk_sleep);
- tun = container_of(sk, struct tun_sock, sk)->tun;
+ tun = tun_sk(sk)->tun;
kill_fasync(&tun->fasync, SIGIO, POLL_OUT);
}
static void tun_sock_destruct(struct sock *sk)
{
- free_netdev(container_of(sk, struct tun_sock, sk)->tun->dev);
+ free_netdev(tun_sk(sk)->tun->dev);
}
static struct proto tun_proto = {
sk->sk_write_space = tun_sock_write_space;
sk->sk_sndbuf = INT_MAX;
- container_of(sk, struct tun_sock, sk)->tun = tun;
+ tun_sk(sk)->tun = tun;
security_tun_dev_post_create(sk);
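
[Editor's note] The repeated container_of() expressions are replaced with a tun_sk() accessor; the pattern is worth a sketch because it is easy to get the member argument wrong when the expression is open-coded at several call sites. Assuming the driver's layout (struct sock embedded first in struct tun_sock):

#include <linux/kernel.h>	/* container_of() */
#include <net/sock.h>

struct tun_struct;		/* driver-private state, opaque here */

struct tun_sock {
	struct sock sk;		/* first member, so sk_alloc() can return it */
	struct tun_struct *tun;
};

/* One accessor instead of container_of() at every call site. */
static inline struct tun_sock *tun_sk(struct sock *sk)
{
	return container_of(sk, struct tun_sock, sk);
}
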
static void ugeth_quiesce(struct ucc_geth_private *ugeth)
{
- /* Wait for and prevent any further xmits. */
+ /* Prevent any further xmits, plus detach the device. */
+ netif_device_detach(ugeth->ndev);
+
+ /* Wait for any current xmits to finish. */
netif_tx_disable(ugeth->ndev);
/* Disable the interrupt to avoid NAPI rescheduling. */
{
napi_enable(&ugeth->napi);
enable_irq(ugeth->ug_info->uf_info.irq);
- netif_tx_wake_all_queues(ugeth->ndev);
+ netif_device_attach(ugeth->ndev);
}
/* Called every time the controller might need to be made
ugeth->oldspeed = phydev->speed;
}
- /*
- * To change the MAC configuration we need to disable the
- * controller. To do so, we have to either grab ugeth->lock,
- * which is a bad idea since 'graceful stop' commands might
- * take quite a while, or we can quiesce driver's activity.
- */
- ugeth_quiesce(ugeth);
- ugeth_disable(ugeth, COMM_DIR_RX_AND_TX);
-
- out_be32(&ug_regs->maccfg2, tempval);
- out_be32(&uf_regs->upsmr, upsmr);
-
- ugeth_enable(ugeth, COMM_DIR_RX_AND_TX);
- ugeth_activate(ugeth);
-
if (!ugeth->oldlink) {
new_state = 1;
ugeth->oldlink = 1;
}
+
+ if (new_state) {
+ /*
+ * To change the MAC configuration we need to disable
+ * the controller. To do so, we have to either grab
+ * ugeth->lock, which is a bad idea since 'graceful
+ * stop' commands might take quite a while, or we can
+ * quiesce driver's activity.
+ */
+ ugeth_quiesce(ugeth);
+ ugeth_disable(ugeth, COMM_DIR_RX_AND_TX);
+
+ out_be32(&ug_regs->maccfg2, tempval);
+ out_be32(&uf_regs->upsmr, upsmr);
+
+ ugeth_enable(ugeth, COMM_DIR_RX_AND_TX);
+ ugeth_activate(ugeth);
+ }
} else if (ugeth->oldlink) {
new_state = 1;
ugeth->oldlink = 0;
/* Handle the transmitted buffer and release */
/* the BD to be used with the current frame */
- if ((bd == ugeth->txBd[txQ]) && (netif_queue_stopped(dev) == 0))
+ if (bd == ugeth->txBd[txQ]) /* queue empty? */
break;
dev->stats.tx_packets++;
#include <linux/ethtool.h>
#include <linux/crc32.h>
#include <linux/bitops.h>
+#include <linux/workqueue.h>
#include <asm/processor.h> /* Processor type for cache alignment. */
#include <asm/io.h>
#include <asm/irq.h>
struct net_device *dev;
struct napi_struct napi;
spinlock_t lock;
+ struct work_struct reset_task;
/* Frequently used values: keep some adjacent for cache effect. */
u32 quirks;
static int mdio_read(struct net_device *dev, int phy_id, int location);
static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
static int rhine_open(struct net_device *dev);
+static void rhine_reset_task(struct work_struct *work);
static void rhine_tx_timeout(struct net_device *dev);
static netdev_tx_t rhine_start_tx(struct sk_buff *skb,
struct net_device *dev);
dev->irq = pdev->irq;
spin_lock_init(&rp->lock);
+ INIT_WORK(&rp->reset_task, rhine_reset_task);
+
rp->mii_if.dev = dev;
rp->mii_if.mdio_read = mdio_read;
rp->mii_if.mdio_write = mdio_write;
return 0;
}
-static void rhine_tx_timeout(struct net_device *dev)
+static void rhine_reset_task(struct work_struct *work)
{
- struct rhine_private *rp = netdev_priv(dev);
- void __iomem *ioaddr = rp->base;
-
- printk(KERN_WARNING "%s: Transmit timed out, status %4.4x, PHY status "
- "%4.4x, resetting...\n",
- dev->name, ioread16(ioaddr + IntrStatus),
- mdio_read(dev, rp->mii_if.phy_id, MII_BMSR));
+ struct rhine_private *rp = container_of(work, struct rhine_private,
+ reset_task);
+ struct net_device *dev = rp->dev;
/* protect against concurrent rx interrupts */
disable_irq(rp->pdev->irq);
napi_disable(&rp->napi);
- spin_lock(&rp->lock);
+ spin_lock_bh(&rp->lock);
/* clear all descriptors */
free_tbufs(dev);
rhine_chip_reset(dev);
init_registers(dev);
- spin_unlock(&rp->lock);
+ spin_unlock_bh(&rp->lock);
enable_irq(rp->pdev->irq);
dev->trans_start = jiffies;
netif_wake_queue(dev);
}
+static void rhine_tx_timeout(struct net_device *dev)
+{
+ struct rhine_private *rp = netdev_priv(dev);
+ void __iomem *ioaddr = rp->base;
+
+ printk(KERN_WARNING "%s: Transmit timed out, status %4.4x, PHY status "
+ "%4.4x, resetting...\n",
+ dev->name, ioread16(ioaddr + IntrStatus),
+ mdio_read(dev, rp->mii_if.phy_id, MII_BMSR));
+
+ schedule_work(&rp->reset_task);
+}
+
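[Editor's note] The via-rhine change moves the actual reset out of the ndo_tx_timeout hook, which runs in atomic context, into a work item where sleeping is legal and where rhine_close() below can cancel_work_sync() it. A stripped-down sketch of the deferral, with hypothetical names:

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/workqueue.h>

struct nic_priv {
	struct net_device *dev;
	struct work_struct reset_task;	/* INIT_WORK()ed at probe time */
};

static void nic_reset_task(struct work_struct *work)
{
	struct nic_priv *np = container_of(work, struct nic_priv,
					   reset_task);

	/* Process context: sleeping, *_bh locks that pair with NAPI,
	 * and long chip resets are all fine here. */
	netif_wake_queue(np->dev);
}

static void nic_tx_timeout(struct net_device *dev)
{
	struct nic_priv *np = netdev_priv(dev);

	schedule_work(&np->reset_task);	/* atomic context: just kick it */
}
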
static netdev_tx_t rhine_start_tx(struct sk_buff *skb,
struct net_device *dev)
{
struct rhine_private *rp = netdev_priv(dev);
void __iomem *ioaddr = rp->base;
- spin_lock_irq(&rp->lock);
-
- netif_stop_queue(dev);
napi_disable(&rp->napi);
+ cancel_work_sync(&rp->reset_task);
+ netif_stop_queue(dev);
+
+ spin_lock_irq(&rp->lock);
if (debug > 1)
printk(KERN_DEBUG "%s: Shutting down ethercard, "
goto _exit0;
}
- if (!pci_set_dma_mask(pdev, 0xffffffffffffffffULL)) {
+ if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
vxge_debug_ll_config(VXGE_TRACE,
"%s : using 64bit DMA", __func__);
high_dma = 1;
if (pci_set_consistent_dma_mask(pdev,
- 0xffffffffffffffffULL)) {
+ DMA_BIT_MASK(64))) {
vxge_debug_init(VXGE_ERR,
"%s : unable to obtain 64bit DMA for "
"consistent allocations", __func__);
ret = -ENOMEM;
goto _exit1;
}
- } else if (!pci_set_dma_mask(pdev, 0xffffffffUL)) {
+ } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
vxge_debug_ll_config(VXGE_TRACE,
"%s : using 32bit DMA", __func__);
} else {
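
[Editor's note] The vxge hunk swaps open-coded 0xffffffffffffffffULL/0xffffffffUL masks for DMA_BIT_MASK(). The usual probe-time fallback, of which this hunk is one instance, looks roughly like this sketch:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Prefer 64-bit DMA, fall back to 32-bit; the streaming mask and
 * the consistent (coherent) mask are set separately. */
static int example_set_dma_masks(struct pci_dev *pdev)
{
	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
		if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
			return -ENOMEM;
		return 0;
	}
	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
		return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
	return -ENOMEM;
}
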
rxs->noise = sc->ah->ah_noise_floor;
rxs->signal = rxs->noise + rs.rs_rssi;
- /* An rssi of 35 indicates you should be able use
- * 54 Mbps reliably. A more elaborate scheme can be used
- * here but it requires a map of SNR/throughput for each
- * possible mode used */
- rxs->qual = rs.rs_rssi * 100 / 35;
-
- /* rssi can be more than 35 though, anything above that
- * should be considered at 100% */
- if (rxs->qual > 100)
- rxs->qual = 100;
-
rxs->antenna = rs.rs_antenna;
rxs->rate_idx = ath5k_hw_to_driver_rix(sc, rs.rs_rate);
rxs->flag |= ath5k_rx_decrypted(sc, ds, skb, &rs);
*/
ath5k_stop_locked(sc);
+ /* Set PHY calibration interval */
+ ah->ah_cal_intval = ath5k_calinterval;
+
/*
* The basic interface to setting the hardware in a good
* state is ``reset''. On return the hardware is known to
/* Set ack to be sent at low bit-rates */
ath5k_hw_set_ack_bitrate_high(ah, false);
-
- /* Set PHY calibration inteval */
- ah->ah_cal_intval = ath5k_calinterval;
-
ret = 0;
done:
mmiowb();
wait = wait_time;
while (ath9k_hw_numtxpending(ah, q)) {
if ((--wait) == 0) {
- ath_print(common, ATH_DBG_QUEUE,
+ ath_print(common, ATH_DBG_FATAL,
"Failed to stop TX DMA in 100 "
"msec after killing last frame\n");
break;
#define ATH9K_TXERR_XTXOP 0x08
#define ATH9K_TXERR_TIMER_EXPIRED 0x10
#define ATH9K_TX_ACKED 0x20
+#define ATH9K_TXERR_MASK \
+ (ATH9K_TXERR_XRETRY | ATH9K_TXERR_FILT | ATH9K_TXERR_FIFO | \
+ ATH9K_TXERR_XTXOP | ATH9K_TXERR_TIMER_EXPIRED)
#define ATH9K_TX_BA 0x01
#define ATH9K_TX_PWRMGMT 0x02
struct ieee80211_hw *hw = sc->hw;
int r;
+ /* Stop ANI */
+ del_timer_sync(&common->ani.timer);
+
ath9k_hw_set_interrupts(ah, 0);
ath_drain_all_txq(sc, retry_tx);
ath_stoprecv(sc);
}
}
+ /* Start ANI */
+ ath_start_ani(common);
+
return r;
}
return; /* another wiphy still in use */
}
+ /* Ensure HW is awake when we try to shut it down. */
+ ath9k_ps_wakeup(sc);
+
if (ah->btcoex_hw.enabled) {
ath9k_hw_btcoex_disable(ah);
if (ah->btcoex_hw.scheme == ATH_BTCOEX_CFG_3WIRE)
/* disable HAL and put h/w to sleep */
ath9k_hw_disable(ah);
ath9k_hw_configpcipowersave(ah, 1, 1);
+ ath9k_ps_restore(sc);
+
+ /* Finally, put the chip in FULL SLEEP mode */
ath9k_setpower(sc, ATH9K_PM_FULL_SLEEP);
sc->sc_flags |= SC_OP_INVALID;
if ((sc->sc_ah->opmode == NL80211_IFTYPE_AP) ||
(sc->sc_ah->opmode == NL80211_IFTYPE_ADHOC) ||
(sc->sc_ah->opmode == NL80211_IFTYPE_MESH_POINT)) {
+ ath9k_ps_wakeup(sc);
ath9k_hw_stoptxdma(sc->sc_ah, sc->beacon.beaconq);
ath_beacon_return(sc, avp);
+ ath9k_ps_restore(sc);
}
sc->sc_flags &= ~SC_OP_BEACONS;
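
[Editor's note] The ath9k hunks above and below all follow one rule: every path that touches chip registers is bracketed by ath9k_ps_wakeup()/ath9k_ps_restore(), which nest via a use count so the chip only drops back into network sleep when the last user is done. A sketch of that refcount idea (hypothetical helpers, not ath9k's exact code):

#include <linux/spinlock.h>
#include <linux/types.h>

struct ps_ctx {
	spinlock_t lock;	/* spin_lock_init() at setup time */
	int wake_count;
	bool asleep;
};

static void ps_wakeup(struct ps_ctx *ps)
{
	unsigned long flags;

	spin_lock_irqsave(&ps->lock, flags);
	if (ps->wake_count++ == 0 && ps->asleep)
		ps->asleep = false;	/* chip would be woken here */
	spin_unlock_irqrestore(&ps->lock, flags);
}

static void ps_restore(struct ps_ctx *ps)
{
	unsigned long flags;

	spin_lock_irqsave(&ps->lock, flags);
	if (--ps->wake_count == 0)	/* must pair with ps_wakeup() */
		ps->asleep = true;	/* sleep allowed again */
	spin_unlock_irqrestore(&ps->lock, flags);
}
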
case IEEE80211_AMPDU_RX_STOP:
break;
case IEEE80211_AMPDU_TX_START:
+ ath9k_ps_wakeup(sc);
ath_tx_aggr_start(sc, sta, tid, ssn);
ieee80211_start_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ ath9k_ps_restore(sc);
break;
case IEEE80211_AMPDU_TX_STOP:
+ ath9k_ps_wakeup(sc);
ath_tx_aggr_stop(sc, sta, tid);
ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ ath9k_ps_restore(sc);
break;
case IEEE80211_AMPDU_TX_OPERATIONAL:
+ ath9k_ps_wakeup(sc);
ath_tx_aggr_resume(sc, sta, tid);
+ ath9k_ps_restore(sc);
break;
default:
ath_print(ath9k_hw_common(sc->sc_ah), ATH_DBG_FATAL,
pci_write_config_byte(pdev, ATH_PCIE_CAP_LINK_CTRL, aspm);
}
-const static struct ath_bus_ops ath_pci_bus_ops = {
+static const struct ath_bus_ops ath_pci_bus_ops = {
.read_cachesize = ath_pci_read_cachesize,
.cleanup = ath_pci_cleanup,
.eeprom_read = ath_pci_eeprom_read,
if (npend) {
int r;
- ath_print(common, ATH_DBG_XMIT,
+ ath_print(common, ATH_DBG_FATAL,
"Unable to stop TxDMA. Reset HAL!\n");
spin_lock_bh(&sc->sc_resetlock);
- r = ath9k_hw_reset(ah, sc->sc_ah->curchan, true);
+ r = ath9k_hw_reset(ah, sc->sc_ah->curchan, false);
if (r)
ath_print(common, ATH_DBG_FATAL,
"Unable to reset hardware; reset status %d\n",
* For HT capable stations, we save tidno for later use.
* We also override seqno set by upper layer with the one
* in tx aggregation state.
- *
- * If fragmentation is on, the sequence number is
- * not overridden, since it has been
- * incremented by the fragmentation routine.
- *
- * FIXME: check if the fragmentation threshold exceeds
- * IEEE80211 max.
*/
tid = ATH_AN_2_TID(an, bf->bf_tidno);
- hdr->seq_ctrl = cpu_to_le16(tid->seq_next <<
- IEEE80211_SEQ_SEQ_SHIFT);
+ hdr->seq_ctrl = cpu_to_le16(tid->seq_next << IEEE80211_SEQ_SEQ_SHIFT);
bf->bf_seqno = tid->seq_next;
INCR(tid->seq_next, IEEE80211_SEQ_MAX);
}
bf->bf_keyix = ATH9K_TXKEYIX_INVALID;
}
- if (ieee80211_is_data_qos(fc) && (sc->sc_flags & SC_OP_TXAGGR))
+ if (ieee80211_is_data_qos(fc) && bf_isht(bf) &&
+ (sc->sc_flags & SC_OP_TXAGGR))
assign_aggr_tid_seqno(skb, bf);
bf->bf_mpdu = skb;
struct ath_wiphy *aphy = hw->priv;
struct ath_softc *sc = aphy->sc;
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
- int hdrlen, padsize;
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
+ int padpos, padsize;
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ath_tx_control txctl;
* BSSes.
*/
if (info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ) {
- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
if (info->flags & IEEE80211_TX_CTL_FIRST_FRAGMENT)
sc->tx.seq_no += 0x10;
hdr->seq_ctrl &= cpu_to_le16(IEEE80211_SCTL_FRAG);
}
/* Add the padding after the header if this is not already done */
- hdrlen = ieee80211_get_hdrlen_from_skb(skb);
- if (hdrlen & 3) {
- padsize = hdrlen % 4;
+ padpos = ath9k_cmn_padpos(hdr->frame_control);
+ padsize = padpos & 3;
+ if (padsize && skb->len > padpos) {
if (skb_headroom(skb) < padsize) {
ath_print(common, ATH_DBG_XMIT,
"TX CABQ padding failed\n");
return;
}
skb_push(skb, padsize);
- memmove(skb->data, skb->data + padsize, hdrlen);
+ memmove(skb->data, skb->data + padsize, padpos);
}
txctl.txq = sc->beacon.cabq;
struct ieee80211_hw *hw = sc->hw;
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
- int hdrlen, padsize;
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ int padpos, padsize;
ath_print(common, ATH_DBG_XMIT, "TX complete: skb: %p\n", skb);
tx_info->flags |= IEEE80211_TX_STAT_ACK;
}
- hdrlen = ieee80211_get_hdrlen_from_skb(skb);
- padsize = hdrlen & 3;
- if (padsize && hdrlen >= 24) {
+ padpos = ath9k_cmn_padpos(hdr->frame_control);
+ padsize = padpos & 3;
+ if (padsize && skb->len > padpos + padsize) {
/*
* Remove MAC header padding before giving the frame back to
* mac80211.
*/
- memmove(skb->data + padsize, skb->data, hdrlen);
+ memmove(skb->data + padsize, skb->data, padpos);
skb_pull(skb, padsize);
}
&txq->axq_q, lastbf->list.prev);
txq->axq_depth--;
- txok = !(ds->ds_txstat.ts_status & ATH9K_TXERR_FILT);
+ txok = !(ds->ds_txstat.ts_status & ATH9K_TXERR_MASK);
txq->axq_tx_inprogress = false;
spin_unlock_bh(&txq->axq_lock);
}
}
-/* Check if a DMA region fits the device constraints.
- * Returns true, if the region is OK for usage with this device. */
-static inline bool b43_dma_address_ok(struct b43_dmaring *ring,
- dma_addr_t addr, size_t size)
-{
- switch (ring->type) {
- case B43_DMA_30BIT:
- if ((u64)addr + size > (1ULL << 30))
- return 0;
- break;
- case B43_DMA_32BIT:
- if ((u64)addr + size > (1ULL << 32))
- return 0;
- break;
- case B43_DMA_64BIT:
- /* Currently we can't have addresses beyond
- * 64bit in the kernel. */
- break;
- }
- return 1;
-}
-
-#define is_4k_aligned(addr) (((u64)(addr) & 0x0FFFull) == 0)
-#define is_8k_aligned(addr) (((u64)(addr) & 0x1FFFull) == 0)
-
-static void b43_unmap_and_free_ringmem(struct b43_dmaring *ring, void *base,
- dma_addr_t dmaaddr, size_t size)
-{
- ssb_dma_unmap_single(ring->dev->dev, dmaaddr, size, DMA_TO_DEVICE);
- free_pages((unsigned long)base, get_order(size));
-}
-
-static void * __b43_get_and_map_ringmem(struct b43_dmaring *ring,
- dma_addr_t *dmaaddr, size_t size,
- gfp_t gfp_flags)
-{
- void *base;
-
- base = (void *)__get_free_pages(gfp_flags, get_order(size));
- if (!base)
- return NULL;
- memset(base, 0, size);
- *dmaaddr = ssb_dma_map_single(ring->dev->dev, base, size,
- DMA_TO_DEVICE);
- if (ssb_dma_mapping_error(ring->dev->dev, *dmaaddr)) {
- free_pages((unsigned long)base, get_order(size));
- return NULL;
- }
-
- return base;
-}
-
-static void * b43_get_and_map_ringmem(struct b43_dmaring *ring,
- dma_addr_t *dmaaddr, size_t size)
-{
- void *base;
-
- base = __b43_get_and_map_ringmem(ring, dmaaddr, size,
- GFP_KERNEL);
- if (!base) {
- b43err(ring->dev->wl, "Failed to allocate or map pages "
- "for DMA ringmemory\n");
- return NULL;
- }
- if (!b43_dma_address_ok(ring, *dmaaddr, size)) {
- /* The memory does not fit our device constraints.
- * Retry with GFP_DMA set to get lower memory. */
- b43_unmap_and_free_ringmem(ring, base, *dmaaddr, size);
- base = __b43_get_and_map_ringmem(ring, dmaaddr, size,
- GFP_KERNEL | GFP_DMA);
- if (!base) {
- b43err(ring->dev->wl, "Failed to allocate or map pages "
- "in the GFP_DMA region for DMA ringmemory\n");
- return NULL;
- }
- if (!b43_dma_address_ok(ring, *dmaaddr, size)) {
- b43_unmap_and_free_ringmem(ring, base, *dmaaddr, size);
- b43err(ring->dev->wl, "Failed to allocate DMA "
- "ringmemory that fits device constraints\n");
- return NULL;
- }
- }
- /* We expect the memory to be 4k aligned, at least. */
- if (B43_WARN_ON(!is_4k_aligned(*dmaaddr))) {
- b43_unmap_and_free_ringmem(ring, base, *dmaaddr, size);
- return NULL;
- }
-
- return base;
-}
-
static int alloc_ringmemory(struct b43_dmaring *ring)
{
- unsigned int required;
- void *base;
- dma_addr_t dmaaddr;
-
- /* There are several requirements to the descriptor ring memory:
- * - The memory region needs to fit the address constraints for the
- * device (same as for frame buffers).
- * - For 30/32bit DMA devices, the descriptor ring must be 4k aligned.
- * - For 64bit DMA devices, the descriptor ring must be 8k aligned.
+ gfp_t flags = GFP_KERNEL;
+
+ /* The specs call for 4K buffers for 30- and 32-bit DMA with 4K
+ * alignment and 8K buffers for 64-bit DMA with 8K alignment. Testing
+ * has shown that 4K is sufficient for the latter as long as the buffer
+ * does not cross an 8K boundary.
+ *
+ * For unknown reasons - possibly a hardware error - the BCM4311 rev
+ * 02, which uses 64-bit DMA, needs the ring buffer in very low memory,
+ * which accounts for the GFP_DMA flag below.
+ *
+ * The flags here must match the flags in free_ringmemory below!
*/
-
if (ring->type == B43_DMA_64BIT)
- required = ring->nr_slots * sizeof(struct b43_dmadesc64);
- else
- required = ring->nr_slots * sizeof(struct b43_dmadesc32);
- if (B43_WARN_ON(required > 0x1000))
+ flags |= GFP_DMA;
+ ring->descbase = ssb_dma_alloc_consistent(ring->dev->dev,
+ B43_DMA_RINGMEMSIZE,
+ &(ring->dmabase), flags);
+ if (!ring->descbase) {
+ b43err(ring->dev->wl, "DMA ringmemory allocation failed\n");
return -ENOMEM;
-
- ring->alloc_descsize = 0x1000;
- base = b43_get_and_map_ringmem(ring, &dmaaddr, ring->alloc_descsize);
- if (!base)
- return -ENOMEM;
- ring->alloc_descbase = base;
- ring->alloc_dmabase = dmaaddr;
-
- if ((ring->type != B43_DMA_64BIT) || is_8k_aligned(dmaaddr)) {
- /* We're on <=32bit DMA, or we already got 8k aligned memory.
- * That's all we need, so we're fine. */
- ring->descbase = base;
- ring->dmabase = dmaaddr;
- return 0;
- }
- b43_unmap_and_free_ringmem(ring, base, dmaaddr, ring->alloc_descsize);
-
- /* Ok, we failed at the 8k alignment requirement.
- * Try to force-align the memory region now. */
- ring->alloc_descsize = 0x2000;
- base = b43_get_and_map_ringmem(ring, &dmaaddr, ring->alloc_descsize);
- if (!base)
- return -ENOMEM;
- ring->alloc_descbase = base;
- ring->alloc_dmabase = dmaaddr;
-
- if (is_8k_aligned(dmaaddr)) {
- /* We're already 8k aligned. That Ok, too. */
- ring->descbase = base;
- ring->dmabase = dmaaddr;
- return 0;
}
- /* Force-align it to 8k */
- ring->descbase = (void *)((u8 *)base + 0x1000);
- ring->dmabase = dmaaddr + 0x1000;
- B43_WARN_ON(!is_8k_aligned(ring->dmabase));
+ memset(ring->descbase, 0, B43_DMA_RINGMEMSIZE);
return 0;
}
static void free_ringmemory(struct b43_dmaring *ring)
{
- b43_unmap_and_free_ringmem(ring, ring->alloc_descbase,
- ring->alloc_dmabase, ring->alloc_descsize);
+ gfp_t flags = GFP_KERNEL;
+
+ if (ring->type == B43_DMA_64BIT)
+ flags |= GFP_DMA;
+
+ ssb_dma_free_consistent(ring->dev->dev, B43_DMA_RINGMEMSIZE,
+ ring->descbase, ring->dmabase, flags);
}
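
[Editor's note] The comment in alloc_ringmemory() carries the key constraint: the BCM4311 rev 02 needs its 64-bit DMA ring in very low memory, hence GFP_DMA, and alloc and free must agree on the flags. Outside the ssb wrappers, the zone-forcing part looks like this sketch (generic DMA API, not b43's exact calls):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Coherent ring allocation, optionally forced into the low DMA zone
 * for hardware that cannot reach higher addresses. b43 goes through
 * ssb_dma_alloc_consistent(), whose free variant also takes the gfp
 * flags, hence the "flags must match" warning above. */
static void *ring_alloc(struct device *dev, size_t size,
			dma_addr_t *dma, bool need_low_mem)
{
	gfp_t flags = GFP_KERNEL;

	if (need_low_mem)
		flags |= GFP_DMA;	/* allocate from the low DMA zone */
	return dma_alloc_coherent(dev, size, dma, flags);
}
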
/* Reset the RX DMA channel */
if (unlikely(ssb_dma_mapping_error(ring->dev->dev, addr)))
return 1;
- if (!b43_dma_address_ok(ring, addr, buffersize)) {
- /* We can't support this address. Unmap it again. */
- unmap_descbuffer(ring, addr, buffersize, dma_to_device);
- return 1;
+ switch (ring->type) {
+ case B43_DMA_30BIT:
+ if ((u64)addr + buffersize > (1ULL << 30))
+ goto address_error;
+ break;
+ case B43_DMA_32BIT:
+ if ((u64)addr + buffersize > (1ULL << 32))
+ goto address_error;
+ break;
+ case B43_DMA_64BIT:
+ /* Currently we can't have addresses beyond
+ * 64bit in the kernel. */
+ break;
}
/* The address is OK. */
return 0;
+
+address_error:
+ /* We can't support this address. Unmap it again. */
+ unmap_descbuffer(ring, addr, buffersize, dma_to_device);
+
+ return 1;
}
static bool b43_rx_buffer_is_poisoned(struct b43_dmaring *ring, struct sk_buff *skb)
meta->dmaaddr = dmaaddr;
ring->ops->fill_descriptor(ring, desc, dmaaddr,
ring->rx_buffersize, 0, 0, 0);
- ssb_dma_sync_single_for_device(ring->dev->dev,
- ring->alloc_dmabase,
- ring->alloc_descsize, DMA_TO_DEVICE);
return 0;
}
}
/* Now transfer the whole frame. */
wmb();
- ssb_dma_sync_single_for_device(ring->dev->dev,
- ring->alloc_dmabase,
- ring->alloc_descsize, DMA_TO_DEVICE);
ops->poke_tx(ring, next_slot(ring, slot));
return 0;
} __attribute__ ((__packed__));
/* Misc DMA constants */
+#define B43_DMA_RINGMEMSIZE PAGE_SIZE
#define B43_DMA0_RX_FRAMEOFFSET 30
/* DMA engine tuning knobs */
/* The QOS priority assigned to this ring. Only used for TX rings.
* This is the mac80211 "queue" value. */
u8 queue_prio;
- /* Pointers and size of the originally allocated and mapped memory
- * region for the descriptor ring. */
- void *alloc_descbase;
- dma_addr_t alloc_dmabase;
- unsigned int alloc_descsize;
- /* Pointer to our wireless device. */
struct b43_wldev *dev;
#ifdef CONFIG_B43_DEBUG
/* Maximum number of used slots. */
snr = rx_stats_sig_avg / rx_stats_noise_diff;
rx_status.noise = rx_status.signal -
iwl3945_calc_db_from_ratio(snr);
- rx_status.qual = iwl3945_calc_sig_qual(rx_status.signal,
- rx_status.noise);
-
- /* If noise info not available, calculate signal quality indicator (%)
- * using just the dBm signal level. */
} else {
rx_status.noise = priv->last_rx_noise;
- rx_status.qual = iwl3945_calc_sig_qual(rx_status.signal, 0);
}
- IWL_DEBUG_STATS(priv, "Rssi %d noise %d qual %d sig_avg %d noise_diff %d\n",
- rx_status.signal, rx_status.noise, rx_status.qual,
+ IWL_DEBUG_STATS(priv, "Rssi %d noise %d sig_avg %d noise_diff %d\n",
+ rx_status.signal, rx_status.noise,
rx_stats_sig_avg, rx_stats_noise_diff);
header = (struct ieee80211_hdr *)IWL_RX_DATA(pkt);
rc = -EIO;
}
- priv->alloc_rxb_page--;
- free_pages(cmd.reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd.reply_page);
return rc;
}
.use_isr_legacy = true,
.ht_greenfield_support = false,
.led_compensation = 64,
+ .broken_powersave = true,
};
static struct iwl_cfg iwl3945_abg_cfg = {
.use_isr_legacy = true,
.ht_greenfield_support = false,
.led_compensation = 64,
+ .broken_powersave = true,
};
struct pci_device_id iwl3945_hw_card_ids[] = {
*
*****************************************************************************/
extern int iwl3945_calc_db_from_ratio(int sig_ratio);
-extern int iwl3945_calc_sig_qual(int rssi_dbm, int noise_dbm);
extern void iwl3945_rx_replenish(void *data);
extern void iwl3945_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
extern unsigned int iwl3945_fill_beacon_frame(struct iwl_priv *priv,
iwl4965_interpolate_chan(priv, channel, &ch_eeprom_info);
/* calculate tx gain adjustment based on power supply voltage */
- voltage = priv->calib_info->voltage;
+ voltage = le16_to_cpu(priv->calib_info->voltage);
init_voltage = (s32)le32_to_cpu(priv->card_alive_init.voltage);
voltage_compensation =
iwl4965_get_voltage_compensation(voltage, init_voltage);
static inline s32 iwl_temp_calib_to_offset(struct iwl_priv *priv)
{
- u16 *temp_calib = (u16 *)iwl_eeprom_query_addr(priv,
- EEPROM_5000_TEMPERATURE);
- /* offset = temperature - voltage / coef */
- s32 offset = (s32)(temp_calib[0] - temp_calib[1] / IWL_5150_VOLTAGE_TO_TEMPERATURE_COEFF);
- return offset;
+ u16 temperature, voltage;
+ __le16 *temp_calib =
+ (__le16 *)iwl_eeprom_query_addr(priv, EEPROM_5000_TEMPERATURE);
+
+ temperature = le16_to_cpu(temp_calib[0]);
+ voltage = le16_to_cpu(temp_calib[1]);
+
+ /* offset = temp - volt / coeff */
+ return (s32)(temperature - voltage / IWL_5150_VOLTAGE_TO_TEMPERATURE_COEFF);
}
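
[Editor's note] The endianness fixes in the hunks around here all have the same shape: the EEPROM/OTP image is stored in the card's little-endian byte order, so the data must be typed __le16 (letting sparse flag unconverted uses) and run through le16_to_cpu() before any arithmetic. A self-contained sketch (hypothetical struct, same conversion):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Calibration words as they sit in the little-endian EEPROM image. */
struct temp_calib_sketch {
	__le16 temperature;
	__le16 voltage;
};

static s32 calib_offset(const struct temp_calib_sketch *c, int coeff)
{
	u16 temperature = le16_to_cpu(c->temperature);
	u16 voltage = le16_to_cpu(c->voltage);

	/* offset = temp - volt / coeff, now in CPU byte order */
	return (s32)(temperature - voltage / coeff);
}

On little-endian CPUs the conversions compile away; the old code only worked by accident there and broke on big-endian machines.
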
/* Fixed (non-configurable) rx data from phy */
static int iwl5000_set_Xtal_calib(struct iwl_priv *priv)
{
struct iwl_calib_xtal_freq_cmd cmd;
- u16 *xtal_calib = (u16 *)iwl_eeprom_query_addr(priv, EEPROM_5000_XTAL);
+ __le16 *xtal_calib =
+ (__le16 *)iwl_eeprom_query_addr(priv, EEPROM_5000_XTAL);
cmd.hdr.op_code = IWL_PHY_CALIBRATE_CRYSTAL_FRQ_CMD;
cmd.hdr.first_group = 0;
cmd.hdr.groups_num = 1;
cmd.hdr.data_valid = 1;
- cmd.cap_pin1 = (u8)xtal_calib[0];
- cmd.cap_pin2 = (u8)xtal_calib[1];
+ cmd.cap_pin1 = le16_to_cpu(xtal_calib[0]);
+ cmd.cap_pin2 = le16_to_cpu(xtal_calib[1]);
return iwl_calib_set(&priv->calib_results[IWL_CALIB_XTAL],
(u8 *)&cmd, sizeof(cmd));
}
};
/* mbps, mcs */
-const static struct iwl_rate_mcs_info iwl_rate_mcs[IWL_RATE_COUNT] = {
+static const struct iwl_rate_mcs_info iwl_rate_mcs[IWL_RATE_COUNT] = {
{ "1", "BPSK DSSS"},
{ "2", "QPSK DSSS"},
{"5.5", "BPSK CCK"},
}
#ifdef CONFIG_IWLWIFI_DEBUG
- if (!(iwl_get_debug_level(priv) & IWL_DL_FW_ERRORS))
+ if (!(iwl_get_debug_level(priv) & IWL_DL_FW_ERRORS) && !full_log)
size = (size > DEFAULT_DUMP_EVENT_LOG_ENTRIES)
? DEFAULT_DUMP_EVENT_LOG_ENTRIES : size;
#else
priv->ibss_beacon = NULL;
- spin_lock_init(&priv->lock);
spin_lock_init(&priv->sta_lock);
spin_lock_init(&priv->hcmd_lock);
(unsigned long long) pci_resource_len(pdev, 0));
IWL_DEBUG_INFO(priv, "pci_resource_base = %p\n", priv->hw_base);
- /* this spin lock will be used in apm_ops.init and EEPROM access
+ /* these spin locks will be used in apm_ops.init and EEPROM access
* we should init now
*/
spin_lock_init(&priv->reg_lock);
+ spin_lock_init(&priv->lock);
iwl_hw_detect(priv);
IWL_INFO(priv, "Detected Intel Wireless WiFi Link %s REV=0x%X\n",
priv->cfg->name, priv->hw_rev);
* The MAC (uCode processor, etc.) does not need to be powered up for accessing
* the CSR registers.
*
- * NOTE: Newer devices using one-time-programmable (OTP) memory
- * require device to be awake in order to read this memory
+ * NOTE: Device does need to be awake in order to read this memory
* via CSR_EEPROM and CSR_OTP registers
*/
#define CSR_BASE (0x000)
/*
* EEPROM and OTP (one-time-programmable) memory reads
*
- * NOTE: For (newer) devices using OTP, device must be awake, initialized via
- * apm_ops.init() in order to read. Older devices (3945/4965/5000)
- * use EEPROM and do not require this.
+ * NOTE: Device must be awake, initialized via apm_ops.init(),
+ * in order to read.
*/
#define CSR_EEPROM_REG (CSR_BASE+0x02c)
#define CSR_EEPROM_GP (CSR_BASE+0x030)
u32 last_beacon_time;
u64 last_tsf;
- /* eeprom */
+ /* eeprom -- this is in the card's little endian byte order */
u8 *eeprom;
int nvm_device_type;
struct iwl_eeprom_calib_info *calib_info;
return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 1 : 0;
}
+static inline void __iwl_free_pages(struct iwl_priv *priv, struct page *page)
+{
+ __free_pages(page, priv->hw_params.rx_page_order);
+ priv->alloc_rxb_page--;
+}
+
+static inline void iwl_free_pages(struct iwl_priv *priv, unsigned long page)
+{
+ free_pages(page, priv->hw_params.rx_page_order);
+ priv->alloc_rxb_page--;
+}
#endif /* __iwl_dev_h__ */
return ret;
}
-static int iwl_read_otp_word(struct iwl_priv *priv, u16 addr, u16 *eeprom_data)
+static int iwl_read_otp_word(struct iwl_priv *priv, u16 addr, __le16 *eeprom_data)
{
int ret = 0;
u32 r;
CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK);
IWL_ERR(priv, "Correctable OTP ECC error, continue read\n");
}
- *eeprom_data = le16_to_cpu((__force __le16)(r >> 16));
+ *eeprom_data = cpu_to_le16(r >> 16);
return 0;
}
*/
static bool iwl_is_otp_empty(struct iwl_priv *priv)
{
- u16 next_link_addr = 0, link_value;
+ u16 next_link_addr = 0;
+ __le16 link_value;
bool is_empty = false;
/* locate the beginning of OTP link list */
static int iwl_find_otp_image(struct iwl_priv *priv,
u16 *validblockaddr)
{
- u16 next_link_addr = 0, link_value = 0, valid_addr;
+ u16 next_link_addr = 0, valid_addr;
+ __le16 link_value = 0;
int usedblocks = 0;
/* set addressing mode to absolute to traverse the link list */
* check for more block on the link list
*/
valid_addr = next_link_addr;
- next_link_addr = link_value * sizeof(u16);
+ next_link_addr = le16_to_cpu(link_value) * sizeof(u16);
IWL_DEBUG_INFO(priv, "OTP blocks %d addr 0x%x\n",
usedblocks, next_link_addr);
if (iwl_read_otp_word(priv, next_link_addr, &link_value))
*/
int iwl_eeprom_init(struct iwl_priv *priv)
{
- u16 *e;
+ __le16 *e;
u32 gp = iwl_read32(priv, CSR_EEPROM_GP);
int sz;
int ret;
ret = -ENOMEM;
goto alloc_err;
}
- e = (u16 *)priv->eeprom;
+ e = (__le16 *)priv->eeprom;
- if (priv->nvm_device_type == NVM_DEVICE_TYPE_OTP) {
- /* OTP reads require powered-up chip */
- priv->cfg->ops->lib->apm_ops.init(priv);
- }
+ priv->cfg->ops->lib->apm_ops.init(priv);
ret = priv->cfg->ops->lib->eeprom_ops.verify_signature(priv);
if (ret < 0) {
}
for (addr = validblockaddr; addr < validblockaddr + sz;
addr += sizeof(u16)) {
- u16 eeprom_data;
+ __le16 eeprom_data;
ret = iwl_read_otp_word(priv, addr, &eeprom_data);
if (ret)
e[cache_addr / 2] = eeprom_data;
cache_addr += sizeof(u16);
}
-
- /*
- * Now that OTP reads are complete, reset chip to save
- * power until we load uCode during "up".
- */
- priv->cfg->ops->lib->apm_ops.stop(priv);
-
} else {
/* eeprom is an array of 16bit values */
for (addr = 0; addr < sz; addr += sizeof(u16)) {
goto done;
}
r = _iwl_read_direct32(priv, CSR_EEPROM_REG);
- e[addr / 2] = le16_to_cpu((__force __le16)(r >> 16));
+ e[addr / 2] = cpu_to_le16(r >> 16);
}
}
ret = 0;
err:
if (ret)
iwl_eeprom_free(priv);
+ /* Reset chip to save power until we load uCode during "up". */
+ priv->cfg->ops->lib->apm_ops.stop(priv);
alloc_err:
return ret;
}
ch_info->ht40_eeprom = *eeprom_ch;
ch_info->ht40_max_power_avg = eeprom_ch->max_power_avg;
ch_info->ht40_flags = eeprom_ch->flags;
- ch_info->ht40_extension_channel &= ~clear_ht40_extension_channel;
+ if (eeprom_ch->flags & EEPROM_CHANNEL_VALID)
+ ch_info->ht40_extension_channel &= ~clear_ht40_extension_channel;
return 0;
}
*
*/
struct iwl_eeprom_enhanced_txpwr {
- u16 common;
+ __le16 common;
s8 chain_a_max;
s8 chain_b_max;
s8 chain_c_max;
struct iwl_eeprom_calib_info {
u8 saturation_power24; /* half-dBm (e.g. "34" = 17 dBm) */
u8 saturation_power52; /* half-dBm */
- s16 voltage; /* signed */
+ __le16 voltage; /* signed */
struct iwl_eeprom_calib_subband_info
band_info[EEPROM_TX_POWER_BANDS];
} __attribute__ ((packed));
}
fail:
if (cmd->reply_page) {
- free_pages(cmd->reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd->reply_page);
cmd->reply_page = 0;
}
out:
pci_unmap_page(priv->pci_dev, rxq->pool[i].page_dma,
PAGE_SIZE << priv->hw_params.rx_page_order,
PCI_DMA_FROMDEVICE);
- __free_pages(rxq->pool[i].page,
- priv->hw_params.rx_page_order);
+ __iwl_free_pages(priv, rxq->pool[i].page);
rxq->pool[i].page = NULL;
- priv->alloc_rxb_page--;
}
}
pci_unmap_page(priv->pci_dev, rxq->pool[i].page_dma,
PAGE_SIZE << priv->hw_params.rx_page_order,
PCI_DMA_FROMDEVICE);
- priv->alloc_rxb_page--;
- __free_pages(rxq->pool[i].page,
- priv->hw_params.rx_page_order);
+ __iwl_free_pages(priv, rxq->pool[i].page);
rxq->pool[i].page = NULL;
}
list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
}
EXPORT_SYMBOL(iwl_reply_statistics);
-#define PERFECT_RSSI (-20) /* dBm */
-#define WORST_RSSI (-95) /* dBm */
-#define RSSI_RANGE (PERFECT_RSSI - WORST_RSSI)
-
-/* Calculate an indication of rx signal quality (a percentage, not dBm!).
- * See http://www.ces.clemson.edu/linux/signal_quality.shtml for info
- * about formulas used below. */
-static int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
-{
- int sig_qual;
- int degradation = PERFECT_RSSI - rssi_dbm;
-
- /* If we get a noise measurement, use signal-to-noise ratio (SNR)
- * as indicator; formula is (signal dbm - noise dbm).
- * SNR at or above 40 is a great signal (100%).
- * Below that, scale to fit SNR of 0 - 40 dB within 0 - 100% indicator.
- * Weakest usable signal is usually 10 - 15 dB SNR. */
- if (noise_dbm) {
- if (rssi_dbm - noise_dbm >= 40)
- return 100;
- else if (rssi_dbm < noise_dbm)
- return 0;
- sig_qual = ((rssi_dbm - noise_dbm) * 5) / 2;
-
- /* Else use just the signal level.
- * This formula is a least squares fit of data points collected and
- * compared with a reference system that had a percentage (%) display
- * for signal quality. */
- } else
- sig_qual = (100 * (RSSI_RANGE * RSSI_RANGE) - degradation *
- (15 * RSSI_RANGE + 62 * degradation)) /
- (RSSI_RANGE * RSSI_RANGE);
-
- if (sig_qual > 100)
- sig_qual = 100;
- else if (sig_qual < 1)
- sig_qual = 0;
-
- return sig_qual;
-}
-
/* Calc max signal level (dBm) among 3 possible receivers */
static inline int iwl_calc_rssi(struct iwl_priv *priv,
struct iwl_rx_phy_res *rx_resp)
if (iwl_is_associated(priv) &&
!test_bit(STATUS_SCANNING, &priv->status)) {
rx_status.noise = priv->last_rx_noise;
- rx_status.qual = iwl_calc_sig_qual(rx_status.signal,
- rx_status.noise);
} else {
rx_status.noise = IWL_NOISE_MEAS_NOT_AVAILABLE;
- rx_status.qual = iwl_calc_sig_qual(rx_status.signal, 0);
}
/* Reset beacon noise level if not associated. */
iwl_dbg_report_frame(priv, phy_res, len, header, 1);
#endif
iwl_dbg_log_rx_data_frame(priv, len, header);
- IWL_DEBUG_STATS_LIMIT(priv, "Rssi %d, noise %d, qual %d, TSF %llu\n",
- rx_status.signal, rx_status.noise, rx_status.qual,
+ IWL_DEBUG_STATS_LIMIT(priv, "Rssi %d, noise %d, TSF %llu\n",
+ rx_status.signal, rx_status.noise,
(unsigned long long)rx_status.mactime);
/*
clear_bit(STATUS_SCAN_HW, &priv->status);
}
- priv->alloc_rxb_page--;
- free_pages(cmd.reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd.reply_page);
return ret;
}
break;
}
}
-
- priv->alloc_rxb_page--;
- free_pages(cmd.reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd.reply_page);
return ret;
}
break;
}
}
-
- priv->alloc_rxb_page--;
- free_pages(cmd.reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd.reply_page);
return ret;
}
int txq_id;
/* Tx queues */
- if (priv->txq)
+ if (priv->txq) {
for (txq_id = 0; txq_id < priv->hw_params.max_txq_num;
txq_id++)
if (txq_id == IWL_CMD_QUEUE_NUM)
iwl_cmd_queue_free(priv);
else
iwl_tx_queue_free(priv, txq_id);
+ }
iwl_free_dma_ptr(priv, &priv->kw);
iwl_free_dma_ptr(priv, &priv->scd_bc_tbls);
txq = &priv->txq[txq_id];
q = &txq->q;
+ if (iwl_queue_space(q) < q->high_mark)
+ goto drop;
+
spin_lock_irqsave(&priv->lock, flags);
idx = get_cmd_index(q, q->write_ptr, 0);
break;
}
- free_pages(cmd.reply_page, priv->hw_params.rx_page_order);
+ iwl_free_pages(priv, cmd.reply_page);
return rc;
}
pci_unmap_page(priv->pci_dev, rxq->pool[i].page_dma,
PAGE_SIZE << priv->hw_params.rx_page_order,
PCI_DMA_FROMDEVICE);
- priv->alloc_rxb_page--;
- __free_pages(rxq->pool[i].page,
- priv->hw_params.rx_page_order);
+ __iwl_free_pages(priv, rxq->pool[i].page);
rxq->pool[i].page = NULL;
}
list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
pci_unmap_page(priv->pci_dev, rxq->pool[i].page_dma,
PAGE_SIZE << priv->hw_params.rx_page_order,
PCI_DMA_FROMDEVICE);
- __free_pages(rxq->pool[i].page,
- priv->hw_params.rx_page_order);
+ __iwl_free_pages(priv, rxq->pool[i].page);
rxq->pool[i].page = NULL;
- priv->alloc_rxb_page--;
}
}
return (int)ratio2dB[sig_ratio];
}
-#define PERFECT_RSSI (-20) /* dBm */
-#define WORST_RSSI (-95) /* dBm */
-#define RSSI_RANGE (PERFECT_RSSI - WORST_RSSI)
-
-/* Calculate an indication of rx signal quality (a percentage, not dBm!).
- * See http://www.ces.clemson.edu/linux/signal_quality.shtml for info
- * about formulas used below. */
-int iwl3945_calc_sig_qual(int rssi_dbm, int noise_dbm)
-{
- int sig_qual;
- int degradation = PERFECT_RSSI - rssi_dbm;
-
- /* If we get a noise measurement, use signal-to-noise ratio (SNR)
- * as indicator; formula is (signal dbm - noise dbm).
- * SNR at or above 40 is a great signal (100%).
- * Below that, scale to fit SNR of 0 - 40 dB within 0 - 100% indicator.
- * Weakest usable signal is usually 10 - 15 dB SNR. */
- if (noise_dbm) {
- if (rssi_dbm - noise_dbm >= 40)
- return 100;
- else if (rssi_dbm < noise_dbm)
- return 0;
- sig_qual = ((rssi_dbm - noise_dbm) * 5) / 2;
-
- /* Else use just the signal level.
- * This formula is a least squares fit of data points collected and
- * compared with a reference system that had a percentage (%) display
- * for signal quality. */
- } else
- sig_qual = (100 * (RSSI_RANGE * RSSI_RANGE) - degradation *
- (15 * RSSI_RANGE + 62 * degradation)) /
- (RSSI_RANGE * RSSI_RANGE);
-
- if (sig_qual > 100)
- sig_qual = 100;
- else if (sig_qual < 1)
- sig_qual = 0;
-
- return sig_qual;
-}
-
/**
* iwl3945_rx_handle - Main entry function for receiving responses from uCode
*
}
#ifdef CONFIG_IWLWIFI_DEBUG
- if (!(iwl_get_debug_level(priv) & IWL_DL_FW_ERRORS))
+ if (!(iwl_get_debug_level(priv) & IWL_DL_FW_ERRORS) && !full_log)
size = (size > DEFAULT_IWL3945_DUMP_EVENT_LOG_ENTRIES)
? DEFAULT_IWL3945_DUMP_EVENT_LOG_ENTRIES : size;
#else
priv->retry_rate = 1;
priv->ibss_beacon = NULL;
- spin_lock_init(&priv->lock);
spin_lock_init(&priv->sta_lock);
spin_lock_init(&priv->hcmd_lock);
/* Tell mac80211 our characteristics */
hw->flags = IEEE80211_HW_SIGNAL_DBM |
IEEE80211_HW_NOISE_DBM |
- IEEE80211_HW_SPECTRUM_MGMT |
- IEEE80211_HW_SUPPORTS_PS |
- IEEE80211_HW_SUPPORTS_DYNAMIC_PS;
+ IEEE80211_HW_SPECTRUM_MGMT;
+
+ if (!priv->cfg->broken_powersave)
+ hw->flags |= IEEE80211_HW_SUPPORTS_PS |
+ IEEE80211_HW_SUPPORTS_DYNAMIC_PS;
hw->wiphy->interface_modes =
BIT(NL80211_IFTYPE_STATION) |
* PCI Tx retries from interfering with C3 CPU state */
pci_write_config_byte(pdev, 0x41, 0x00);
- /* this spin lock will be used in apm_ops.init and EEPROM access
+ /* these spin locks will be used in apm_ops.init and EEPROM access
* we should init now
*/
spin_lock_init(&priv->reg_lock);
+ spin_lock_init(&priv->lock);
/***********************
* 4. Read EEPROM
struct sk_buff_head rx_list;
struct list_head rx_tickets;
- struct list_head rx_packets[IWM_RX_ID_HASH];
+ struct list_head rx_packets[IWM_RX_ID_HASH + 1];
struct workqueue_struct *rx_wq;
struct work_struct rx_worker;
int iwm_down(struct iwm_priv *iwm);
/* TX API */
-u16 iwm_tid_to_queue(u16 tid);
+int iwm_tid_to_queue(u16 tid);
void iwm_tx_credit_inc(struct iwm_priv *iwm, int id, int total_freed_pages);
void iwm_tx_worker(struct work_struct *work);
int iwm_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
*/
static const u16 iwm_1d_to_queue[8] = { 1, 0, 0, 1, 2, 2, 3, 3 };
-u16 iwm_tid_to_queue(u16 tid)
+int iwm_tid_to_queue(u16 tid)
{
if (tid > IWM_UMAC_TID_NR - 2)
return -EINVAL;
if (!stop) {
struct iwm_tx_queue *txq;
- u16 queue = iwm_tid_to_queue(bit);
+ int queue = iwm_tid_to_queue(bit);
if (queue < 0)
continue;
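
[Editor's note] The iwm change from u16 to int is a classic signedness fix: -EINVAL returned through a u16 wraps to a large positive value, so the caller's queue < 0 check could never fire. A tiny userspace demonstration of the failure mode:

#include <stdio.h>

/* Error code returned through an unsigned type: wraps positive. */
static unsigned short buggy_tid_to_queue(unsigned short tid)
{
	if (tid > 6)
		return -22;	/* -EINVAL silently becomes 65514 */
	return tid / 2;
}

/* The fix: return int so the error stays negative. */
static int fixed_tid_to_queue(unsigned short tid)
{
	if (tid > 6)
		return -22;
	return tid / 2;
}

int main(void)
{
	printf("buggy: %u (>= 0, error lost)\n", buggy_tid_to_queue(7));
	printf("fixed: %d (< 0, error caught)\n", fixed_tid_to_queue(7));
	return 0;
}

Note the caller is changed too: storing the result back into a u16 would reintroduce the bug, which is why the hunk above also switches the local queue variable to int.
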
#include <linux/delay.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
+#include <linux/if_ether.h>
#include <linux/if_arp.h>
#include <linux/kthread.h>
#include <linux/kfifo.h>
mesh_dev->netdev_ops = &mesh_netdev_ops;
mesh_dev->ethtool_ops = &lbs_ethtool_ops;
- memcpy(mesh_dev->dev_addr, priv->dev->dev_addr,
- sizeof(priv->dev->dev_addr));
+ memcpy(mesh_dev->dev_addr, priv->dev->dev_addr, ETH_ALEN);
SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
chan_count = lbs_scan_create_channel_list(priv, chan_list);
netif_stop_queue(priv->dev);
- netif_carrier_off(priv->dev);
- if (priv->mesh_dev) {
+ if (priv->mesh_dev)
netif_stop_queue(priv->mesh_dev);
- netif_carrier_off(priv->mesh_dev);
- }
/* Prepare to continue an interrupted scan */
lbs_deb_scan("chan_count %d, scan_channel %d\n",
priv->scan_channel = 0;
out:
- if (priv->connect_status == LBS_CONNECTED) {
- netif_carrier_on(priv->dev);
- if (!priv->tx_pending_len)
- netif_wake_queue(priv->dev);
- }
- if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED)) {
- netif_carrier_on(priv->mesh_dev);
- if (!priv->tx_pending_len)
- netif_wake_queue(priv->mesh_dev);
- }
+ if (priv->connect_status == LBS_CONNECTED && !priv->tx_pending_len)
+ netif_wake_queue(priv->dev);
+
+ if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED) &&
+ !priv->tx_pending_len)
+ netif_wake_queue(priv->mesh_dev);
+
kfree(chan_list);
lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
if (priv->connect_status == LBS_CONNECTED) {
memcpy(extra, priv->curbssparams.ssid,
priv->curbssparams.ssid_len);
- extra[priv->curbssparams.ssid_len] = '\0';
} else {
memset(extra, 0, 32);
- extra[priv->curbssparams.ssid_len] = '\0';
}
/*
* If none, we may want to get the one that was set
stats.band = IEEE80211_BAND_2GHZ;
stats.signal = prxpd->snr;
stats.noise = prxpd->nf;
- stats.qual = prxpd->snr - prxpd->nf;
/* Marvell rate index has a hole at value 4 */
if (prxpd->rx_rate > 4)
--prxpd->rx_rate;
#define MAX_RID_LEN 1024
/* Helper routine to record keys
- * Do not call from interrupt context */
+ * It is called under orinoco_lock so it may not sleep */
static int orinoco_set_key(struct orinoco_private *priv, int index,
enum orinoco_alg alg, const u8 *key, int key_len,
const u8 *seq, int seq_len)
kzfree(priv->keys[index].seq);
if (key_len) {
- priv->keys[index].key = kzalloc(key_len, GFP_KERNEL);
+ priv->keys[index].key = kzalloc(key_len, GFP_ATOMIC);
if (!priv->keys[index].key)
goto nomem;
} else
priv->keys[index].key = NULL;
if (seq_len) {
- priv->keys[index].seq = kzalloc(seq_len, GFP_KERNEL);
+ priv->keys[index].seq = kzalloc(seq_len, GFP_ATOMIC);
if (!priv->keys[index].seq)
goto free_key;
} else
#define PAIRWISE_KEY_ENTRY(__idx) \
( PAIRWISE_KEY_TABLE_BASE + ((__idx) * sizeof(struct hw_key_entry)) )
#define MAC_IVEIV_ENTRY(__idx) \
- ( MAC_IVEIV_TABLE_BASE + ((__idx) & sizeof(struct mac_iveiv_entry)) )
+ ( MAC_IVEIV_TABLE_BASE + ((__idx) * sizeof(struct mac_iveiv_entry)) )
#define MAC_WCID_ATTR_ENTRY(__idx) \
( MAC_WCID_ATTRIBUTE_BASE + ((__idx) * sizeof(u32)) )
#define SHARED_KEY_ENTRY(__idx) \
#include <linux/module.h>
#include "rt2x00.h"
-#ifdef CONFIG_RT2800USB
+#if defined(CONFIG_RT2800USB) || defined(CONFIG_RT2800USB_MODULE)
#include "rt2x00usb.h"
#endif
#include "rt2800lib.h"
if (rt2x00_intf_is_usb(rt2x00dev)) {
rt2800_register_write(rt2x00dev, USB_DMA_CFG, 0x00000000);
-#ifdef CONFIG_RT2800USB
+#if defined(CONFIG_RT2800USB) || defined(CONFIG_RT2800USB_MODULE)
rt2x00usb_vendor_request_sw(rt2x00dev, USB_DEVICE_MODE, 0,
USB_MODE_RESET, REGISTER_TIMEOUT);
#endif
u16 eeprom;
/*
+ * Disable powersaving as default on PCI devices.
+ */
+ if (rt2x00_intf_is_pci(rt2x00dev))
+ rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT;
+
+ /*
* Initialize all hw fields.
*/
rt2x00dev->hw->flags =
IEEE80211_HT_CAP_SGI_20 |
IEEE80211_HT_CAP_SGI_40 |
IEEE80211_HT_CAP_TX_STBC |
- IEEE80211_HT_CAP_RX_STBC |
- IEEE80211_HT_CAP_PSMP_SUPPORT;
+ IEEE80211_HT_CAP_RX_STBC;
spec->ht.ampdu_factor = 3;
spec->ht.ampdu_density = 4;
spec->ht.mcs.tx_params =
rt2800_register_multiread(rt2x00dev, offset,
&iveiv_entry, sizeof(iveiv_entry));
- memcpy(&iveiv_entry.iv[0], iv16, sizeof(iv16));
- memcpy(&iveiv_entry.iv[4], iv32, sizeof(iv32));
+ memcpy(iv16, &iveiv_entry.iv[0], sizeof(*iv16));
+ memcpy(iv32, &iveiv_entry.iv[4], sizeof(*iv32));
}
static int rt2800_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
{ USB_DEVICE(0x1737, 0x0070), USB_DEVICE_DATA(&rt2800usb_ops) },
{ USB_DEVICE(0x1737, 0x0071), USB_DEVICE_DATA(&rt2800usb_ops) },
{ USB_DEVICE(0x1737, 0x0077), USB_DEVICE_DATA(&rt2800usb_ops) },
+ { USB_DEVICE(0x1737, 0x0079), USB_DEVICE_DATA(&rt2800usb_ops) },
/* Logitec */
{ USB_DEVICE(0x0789, 0x0162), USB_DEVICE_DATA(&rt2800usb_ops) },
{ USB_DEVICE(0x0789, 0x0163), USB_DEVICE_DATA(&rt2800usb_ops) },
unsigned int i;
/*
+ * Disable powersaving as default.
+ */
+ rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT;
+
+ /*
* Initialize all hw fields.
*/
rt2x00dev->hw->flags =
rx_status.antenna = (flags2 >> 15) & 1;
/* TODO: improve signal/rssi reporting */
- rx_status.qual = flags2 & 0xFF;
rx_status.signal = (flags2 >> 8) & 0x7F;
/* XXX: is this correct? */
rx_status.rate_idx = (flags >> 20) & 0xF;
}
}
- if (loop >= INIT_LOOP) {
+ if (loop > INIT_LOOP) {
wl1251_error("timeout waiting for the hardware to "
"complete initialization");
return -EIO;
return ret;
}
-static int wl1271_build_basic_rates(char *rates, u8 band)
+static int wl1271_build_basic_rates(u8 *rates, u8 band)
{
u8 index = 0;
return index;
}
-static int wl1271_build_extended_rates(char *rates, u8 band)
+static int wl1271_build_extended_rates(u8 *rates, u8 band)
{
u8 index = 0;
return r;
}
-static int ofdm_qual_db(u8 status_quality, u8 zd_rate, unsigned int size)
-{
- static const u16 constants[] = {
- 715, 655, 585, 540, 470, 410, 360, 315,
- 270, 235, 205, 175, 150, 125, 105, 85,
- 65, 50, 40, 25, 15
- };
-
- int i;
- u32 x;
-
- /* It seems that their quality parameter is somehow per signal
- * and is now transferred per bit.
- */
- switch (zd_rate) {
- case ZD_OFDM_RATE_6M:
- case ZD_OFDM_RATE_12M:
- case ZD_OFDM_RATE_24M:
- size *= 2;
- break;
- case ZD_OFDM_RATE_9M:
- case ZD_OFDM_RATE_18M:
- case ZD_OFDM_RATE_36M:
- case ZD_OFDM_RATE_54M:
- size *= 4;
- size /= 3;
- break;
- case ZD_OFDM_RATE_48M:
- size *= 3;
- size /= 2;
- break;
- default:
- return -EINVAL;
- }
-
- x = (10000 * status_quality)/size;
- for (i = 0; i < ARRAY_SIZE(constants); i++) {
- if (x > constants[i])
- break;
- }
-
- switch (zd_rate) {
- case ZD_OFDM_RATE_6M:
- case ZD_OFDM_RATE_9M:
- i += 3;
- break;
- case ZD_OFDM_RATE_12M:
- case ZD_OFDM_RATE_18M:
- i += 5;
- break;
- case ZD_OFDM_RATE_24M:
- case ZD_OFDM_RATE_36M:
- i += 9;
- break;
- case ZD_OFDM_RATE_48M:
- case ZD_OFDM_RATE_54M:
- i += 15;
- break;
- default:
- return -EINVAL;
- }
-
- return i;
-}
-
-static int ofdm_qual_percent(u8 status_quality, u8 zd_rate, unsigned int size)
-{
- int r;
-
- r = ofdm_qual_db(status_quality, zd_rate, size);
- ZD_ASSERT(r >= 0);
- if (r < 0)
- r = 0;
-
- r = (r * 100)/29;
- return r <= 100 ? r : 100;
-}
-
-static unsigned int log10times100(unsigned int x)
-{
- static const u8 log10[] = {
- 0,
- 0, 30, 47, 60, 69, 77, 84, 90, 95, 100,
- 104, 107, 111, 114, 117, 120, 123, 125, 127, 130,
- 132, 134, 136, 138, 139, 141, 143, 144, 146, 147,
- 149, 150, 151, 153, 154, 155, 156, 157, 159, 160,
- 161, 162, 163, 164, 165, 166, 167, 168, 169, 169,
- 170, 171, 172, 173, 174, 174, 175, 176, 177, 177,
- 178, 179, 179, 180, 181, 181, 182, 183, 183, 184,
- 185, 185, 186, 186, 187, 188, 188, 189, 189, 190,
- 190, 191, 191, 192, 192, 193, 193, 194, 194, 195,
- 195, 196, 196, 197, 197, 198, 198, 199, 199, 200,
- 200, 200, 201, 201, 202, 202, 202, 203, 203, 204,
- 204, 204, 205, 205, 206, 206, 206, 207, 207, 207,
- 208, 208, 208, 209, 209, 210, 210, 210, 211, 211,
- 211, 212, 212, 212, 213, 213, 213, 213, 214, 214,
- 214, 215, 215, 215, 216, 216, 216, 217, 217, 217,
- 217, 218, 218, 218, 219, 219, 219, 219, 220, 220,
- 220, 220, 221, 221, 221, 222, 222, 222, 222, 223,
- 223, 223, 223, 224, 224, 224, 224,
- };
-
- return x < ARRAY_SIZE(log10) ? log10[x] : 225;
-}
-
-enum {
- MAX_CCK_EVM_DB = 45,
-};
-
-static int cck_evm_db(u8 status_quality)
-{
- return (20 * log10times100(status_quality)) / 100;
-}
-
-static int cck_snr_db(u8 status_quality)
-{
- int r = MAX_CCK_EVM_DB - cck_evm_db(status_quality);
- ZD_ASSERT(r >= 0);
- return r;
-}
-
-static int cck_qual_percent(u8 status_quality)
-{
- int r;
-
- r = cck_snr_db(status_quality);
- r = (100*r)/17;
- return r <= 100 ? r : 100;
-}
-
static inline u8 zd_rate_from_ofdm_plcp_header(const void *rx_frame)
{
return ZD_OFDM | zd_ofdm_plcp_header_rate(rx_frame);
}
-u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
- const struct rx_status *status)
-{
- return (status->frame_status&ZD_RX_OFDM) ?
- ofdm_qual_percent(status->signal_quality_ofdm,
- zd_rate_from_ofdm_plcp_header(rx_frame),
- size) :
- cck_qual_percent(status->signal_quality_cck);
-}
-
/**
* zd_rx_rate - report zd-rate
* @rx_frame - received frame
struct rx_status;
-u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
- const struct rx_status *status);
-
u8 zd_rx_rate(const void *rx_frame, const struct rx_status *status);
struct zd_mc_hash {
stats.freq = zd_channels[_zd_chip_get_channel(&mac->chip) - 1].center_freq;
stats.band = IEEE80211_BAND_2GHZ;
stats.signal = status->signal_strength;
- stats.qual = zd_rx_qual_percent(buffer,
- length - sizeof(struct rx_status),
- status);
rate = zd_rx_rate(buffer, status);
#define PCI_DEVICE_ID_AMD_GOLAM_7450 0x7450
#define PCI_DEVICE_ID_AMD_POGO_7458 0x7458
-/* AMD PCIX bridge registers */
+/* AMD PCI-X bridge registers */
#define PCIX_MEM_BASE_LIMIT_OFFSET 0x1C
#define PCIX_MISCII_OFFSET 0x48
#define PCIX_MISC_BRIDGE_ERRORS_OFFSET 0x80
int segment; /* PCI domain */
u8 bus; /* PCI bus number */
u8 devfn; /* PCI devfn number */
- struct pci_dev *dev; /* it's NULL for PCIE-to-PCI bridge */
+ struct pci_dev *dev; /* it's NULL for PCIe-to-PCI bridge */
struct intel_iommu *iommu; /* IOMMU used by this device */
struct dmar_domain *domain; /* pointer to domain */
};
return ret;
parent = parent->bus->self;
}
- if (pci_is_pcie(tmp)) /* this is a PCIE-to-PCI bridge */
+ if (pci_is_pcie(tmp)) /* this is a PCIe-to-PCI bridge */
return domain_context_mapping_one(domain,
pci_domain_nr(tmp->subordinate),
tmp->subordinate->number, 0,
parent->devfn);
parent = parent->bus->self;
}
- if (pci_is_pcie(tmp)) /* this is a PCIE-to-PCI bridge */
+ if (pci_is_pcie(tmp)) /* this is a PCIe-to-PCI bridge */
iommu_detach_dev(iommu,
tmp->subordinate->number, 0);
else /* this is a legacy PCI bridge */
bridge = pci_find_upstream_pcie_bridge(dev);
if (bridge) {
- if (pci_is_pcie(bridge))/* this is a PCIE-to-PCI/PCIX bridge */
+ if (pci_is_pcie(bridge))/* this is a PCIe-to-PCI/PCI-X bridge */
set_irte_sid(irte, SVT_VERIFY_BUS, SQ_ALL_16,
(bridge->bus->number << 8) | dev->bus->number);
else /* this is a legacy PCI bridge */
static void acpi_pci_propagate_wakeup_enable(struct pci_bus *bus, bool enable)
{
while (bus->parent) {
- struct pci_dev *bridge = bus->self;
- int ret;
-
- ret = acpi_pm_device_sleep_wake(&bridge->dev, enable);
- if (!ret || pci_is_pcie(bridge))
+ if (!acpi_pm_device_sleep_wake(&bus->self->dev, enable))
return;
bus = bus->parent;
}
if (acpi_pci_can_wakeup(dev))
return acpi_pm_device_sleep_wake(&dev->dev, enable);
- if (!pci_is_pcie(dev))
- acpi_pci_propagate_wakeup_enable(dev->bus, enable);
-
+ acpi_pci_propagate_wakeup_enable(dev->bus, enable);
return 0;
}
/**
* pcibios_set_pcie_reset_state - set reset state for device dev
- * @dev: the PCI-E device reset
+ * @dev: the PCIe device to reset
* @state: Reset state to enter into
*
*
- * Sets the PCI-E reset state for the device. This is the default
+ * Sets the PCIe reset state for the device. This is the default
* implementation. Architecture implementations can override this.
*/
int __attribute__ ((weak)) pcibios_set_pcie_reset_state(struct pci_dev *dev,
/**
* pci_set_pcie_reset_state - set reset state for device dev
- * @dev: the PCI-E device reset
+ * @dev: the PCIe device to reset
* @state: Reset state to enter into
*
*
return 0;
}
+static int pci_dev_specific_reset(struct pci_dev *dev, int probe)
+{
+ struct pci_dev_reset_methods *i;
+
+ for (i = pci_dev_reset_methods; i->reset; i++) {
+ if ((i->vendor == dev->vendor ||
+ i->vendor == (u16)PCI_ANY_ID) &&
+ (i->device == dev->device ||
+ i->device == (u16)PCI_ANY_ID))
+ return i->reset(dev, probe);
+ }
+
+ return -ENOTTY;
+}
+
static int pci_dev_reset(struct pci_dev *dev, int probe)
{
int rc;
down(&dev->dev.sem);
}
+ rc = pci_dev_specific_reset(dev, probe);
+ if (rc != -ENOTTY)
+ goto done;
+
rc = pcie_flr(dev, probe);
if (rc != -ENOTTY)
goto done;
return 1;
}
+void __weak pci_fixup_cardbus(struct pci_bus *bus)
+{
+}
+EXPORT_SYMBOL(pci_fixup_cardbus);
+
static int __init pci_setup(char *str)
{
while (str) {
extern void pci_enable_acs(struct pci_dev *dev);
+struct pci_dev_reset_methods {
+ u16 vendor;
+ u16 device;
+ int (*reset)(struct pci_dev *dev, int probe);
+};
+
+extern struct pci_dev_reset_methods pci_dev_reset_methods[];
+
#endif /* DRIVERS_PCI_H */
#
config PCIEAER_INJECT
- tristate "PCIE AER error injector support"
+ tristate "PCIe AER error injector support"
depends on PCIEAER
default n
help
This enables PCI Express Root Port Advanced Error Reporting
(AER) software error injector.
- Debuging PCIE AER code is quite difficult because it is hard
+ Debugging PCIe AER code is quite difficult because it is hard
to trigger various real hardware errors. Software based
error injection can fake almost all kinds of errors with the
help of a user space helper tool aer-inject, which can be
/*
- * PCIE AER software error injection support.
+ * PCIe AER software error injection support.
*
- * Debuging PCIE AER code is quite difficult because it is hard to
+ * Debugging PCIe AER code is quite difficult because it is hard to
* trigger various real hardware errors. Software based error
* injection can fake almost all kinds of errors with the help of a
* user space helper tool aer-inject, which can be gotten from:
module_init(aer_inject_init);
module_exit(aer_inject_exit);
-MODULE_DESCRIPTION("PCIE AER software error injector");
+MODULE_DESCRIPTION("PCIe AER software error injector");
MODULE_LICENSE("GPL");
mutex_init(&rpc->rpc_mutex);
init_waitqueue_head(&rpc->wait_release);
- /* Use PCIE bus function to store rpc into PCIE device */
+ /* Use PCIe bus function to store rpc into PCIe device */
set_service_data(dev, rpc);
return rpc;
*
* @return: Zero on success. Nonzero otherwise.
*
- * Invoked when PCIE bus loads AER service driver. To avoid conflict with
+ * Invoked when PCIe bus loads AER service driver. To avoid conflict with
* the BIOS, AER support requires the BIOS to yield AER control to the OS native driver.
**/
int aer_osc_setup(struct pcie_device *pciedev)
* aer_enable_rootport - enable Root Port's interrupts when receiving messages
* @rpc: pointer to a Root Port data structure
*
- * Invoked when PCIE bus loads AER service driver.
+ * Invoked when PCIe bus loads AER service driver.
*/
void aer_enable_rootport(struct aer_rpc *rpc)
{
u32 reg32;
pos = pci_pcie_cap(pdev);
- /* Clear PCIE Capability's Device Status */
+ /* Clear PCIe Capability's Device Status */
pci_read_config_word(pdev, pos+PCI_EXP_DEVSTA, &reg16);
pci_write_config_word(pdev, pos+PCI_EXP_DEVSTA, reg16);
* disable_root_aer - disable Root Port's interrupts when receiving messages
* @rpc: pointer to a Root Port data structure
*
- * Invoked when PCIE bus unloads AER service driver.
+ * Invoked when PCIe bus unloads AER service driver.
*/
static void disable_root_aer(struct aer_rpc *rpc)
{
if (info->status == 0) {
AER_PR(info, dev,
- "PCIE Bus Error: severity=%s, type=Unaccessible, "
+ "PCIe Bus Error: severity=%s, type=Unaccessible, "
"id=%04x(Unregistered Agent ID)\n",
aer_error_severity_string[info->severity], id);
} else {
agent = AER_GET_AGENT(info->severity, info->status);
AER_PR(info, dev,
- "PCIE Bus Error: severity=%s, type=%s, id=%04x(%s)\n",
+ "PCIe Bus Error: severity=%s, type=%s, id=%04x(%s)\n",
aer_error_severity_string[info->severity],
aer_error_layer[layer], id, aer_agent_string[agent]);
/*
* File: drivers/pci/pcie/aspm.c
- * Enabling PCIE link L0s/L1 state and Clock Power Management
+ * Enabling PCIe link L0s/L1 state and Clock Power Management
*
* Copyright (C) 2007 Intel
* Copyright (C) Zhang Yanmin (yanmin.zhang@intel.com)
int pos;
u32 reg32;
/*
- * Some functions in a slot might not all be PCIE functions,
+ * Some functions in a slot might not all be PCIe functions,
* very strange. Disable ASPM for the whole slot
*/
list_for_each_entry(child, &pdev->subordinate->devices, bus_list) {
*/
#define DRIVER_VERSION "v1.0"
#define DRIVER_AUTHOR "tom.l.nguyen@intel.com"
-#define DRIVER_DESC "PCIE Port Bus Driver"
+#define DRIVER_DESC "PCIe Port Bus Driver"
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
if (!pci_cache_line_size) {
printk(KERN_DEBUG "PCI: CLS %u bytes, default %u\n",
cls << 2, pci_dfl_cache_line_size << 2);
- pci_cache_line_size = cls;
+ pci_cache_line_size = cls ? cls : pci_dfl_cache_line_size;
}
return 0;
}
fs_initcall_sync(pci_apply_final_quirks);
+
+/*
+ * The following are device-specific reset methods which can be used to
+ * reset a single function if other methods (e.g. FLR, PM D0->D3) are
+ * not available.
+ */
+static int reset_intel_generic_dev(struct pci_dev *dev, int probe)
+{
+ int pos;
+
+ /* only implement PCI_CLASS_SERIAL_USB at present */
+ if (dev->class == PCI_CLASS_SERIAL_USB) {
+ pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
+ if (!pos)
+ return -ENOTTY;
+
+ if (probe)
+ return 0;
+
+ pci_write_config_byte(dev, pos + 0x4, 1);
+ msleep(100);
+
+ return 0;
+ } else {
+ return -ENOTTY;
+ }
+}
+
+static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe)
+{
+ int pos;
+
+ pos = pci_find_capability(dev, PCI_CAP_ID_EXP);
+ if (!pos)
+ return -ENOTTY;
+
+ if (probe)
+ return 0;
+
+ pci_write_config_word(dev, pos + PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_BCR_FLR);
+ msleep(100);
+
+ return 0;
+}
+
+#define PCI_DEVICE_ID_INTEL_82599_SFP_VF 0x10ed
+
+struct pci_dev_reset_methods pci_dev_reset_methods[] = {
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82599_SFP_VF,
+ reset_intel_82599_sfp_virtfn },
+ { PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ reset_intel_generic_dev },
+ { 0 }
+};
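A hypothetical further entry would follow the same shape (the device ID 0x5678 and the callback name below are illustrative, not from this patch); note that a specific Intel match has to sit ahead of the PCI_ANY_ID catch-all, because pci_dev_specific_reset() returns the first entry that matches:

	static int reset_hypothetical_intel_dev(struct pci_dev *dev, int probe)
	{
		if (probe)
			return 0;	/* just report that a method exists */

		/* device-specific register pokes would go here */
		msleep(100);	/* settle time, mirroring the methods above */
		return 0;
	}

	/* in pci_dev_reset_methods[], before the PCI_ANY_ID entry: */
	{ PCI_VENDOR_ID_INTEL, 0x5678, reset_hypothetical_intel_dev },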
#else
void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) {}
#endif
DECLARE_RWSEM(pci_bus_sem);
/*
- * find the upstream PCIE-to-PCI bridge of a PCI device
+ * find the upstream PCIe-to-PCI bridge of a PCI device
* if the device is PCIe, return NULL
- * if the device isn't connected to a PCIE bridge (that is its parent is a
+ * if the device isn't connected to a PCIe bridge (that is its parent is a
* legacy PCI bridge and the bridge is directly connected to bus 0), return its
* parent
*/
tmp = pdev;
continue;
}
- /* PCI device should connect to a PCIE bridge */
+ /* PCI device should connect to a PCIe bridge */
if (pdev->pcie_type != PCI_EXP_TYPE_PCI_BRIDGE) {
/* Busted hardware? */
WARN_ON_ONCE(1);
unsigned int max, pass;
s->functions = pci_scan_slot(bus, PCI_DEVFN(0, 0));
-/* pcibios_fixup_bus(bus); */
+ pci_fixup_cardbus(bus);
max = bus->secondary;
for (pass = 0; pass < 2; pass++)
static int __init dell_wmi_init(void)
{
int err;
+ acpi_status status;
if (!wmi_has_guid(DELL_EVENT_GUID)) {
printk(KERN_WARNING "dell-wmi: No known WMI GUID found\n");
if (err)
return err;
- err = wmi_install_notify_handler(DELL_EVENT_GUID,
+ status = wmi_install_notify_handler(DELL_EVENT_GUID,
dell_wmi_notify, NULL);
- if (err) {
+ if (ACPI_FAILURE(status)) {
input_unregister_device(dell_wmi_input_dev);
printk(KERN_ERR
"dell-wmi: Unable to register notify handler - %d\n",
- err);
- return err;
+ status);
+ return -ENODEV;
}
return 0;
if (!guid || !handler)
return AE_BAD_PARAMETER;
- find_guid(guid, &block);
- if (!block)
+ if (!find_guid(guid, &block))
return AE_NOT_EXIST;
if (block->handler)
if (!guid)
return AE_BAD_PARAMETER;
- find_guid(guid, &block);
- if (!block)
+ if (!find_guid(guid, &block))
return AE_NOT_EXIST;
if (!block->handler)
/*
* Searching includes executable on directories, else just read.
*/
+ mask &= MAY_READ | MAY_WRITE | MAY_EXEC;
if (mask == MAY_READ || (S_ISDIR(inode->i_mode) && !(mask & MAY_WRITE)))
if (capable(CAP_DAC_READ_SEARCH))
return 0;
* blk_rq_err_bytes() : bytes left till the next error boundary
* blk_rq_sectors() : sectors left in the entire request
* blk_rq_cur_sectors() : sectors left in the current segment
- * blk_rq_err_sectors() : sectors left till the next error boundary
*/
static inline sector_t blk_rq_pos(const struct request *rq)
{
return blk_rq_cur_bytes(rq) >> 9;
}
-static inline unsigned int blk_rq_err_sectors(const struct request *rq)
-{
- return blk_rq_err_bytes(rq) >> 9;
-}
-
/*
* Request issue related functions.
*/
return q->limits.alignment_offset;
}
+static inline int queue_limit_alignment_offset(struct queue_limits *lim, sector_t offset)
+{
+ unsigned int granularity = max(lim->physical_block_size, lim->io_min);
+
+ offset &= granularity - 1;
+ return (granularity + lim->alignment_offset - offset) & (granularity - 1);
+}
+
static inline int queue_sector_alignment_offset(struct request_queue *q,
sector_t sector)
{
- return ((sector << 9) - q->limits.alignment_offset)
- & (q->limits.io_min - 1);
+ return queue_limit_alignment_offset(&q->limits, sector << 9);
}
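A quick numeric check of the new helper (a standalone user-space sketch; the limit values are made up). The point of switching to max(physical_block_size, io_min) shows up when io_min is smaller than the physical block size: the old io_min-based mask would report such a position as aligned.

	#include <stdio.h>

	/* mirrors the queue_limit_alignment_offset() arithmetic outside the kernel */
	static unsigned int align_off(unsigned int physical_block_size,
	                              unsigned int io_min,
	                              unsigned int alignment_offset,
	                              unsigned long long offset)
	{
		unsigned int granularity =
			physical_block_size > io_min ? physical_block_size : io_min;

		offset &= granularity - 1;
		return (granularity + alignment_offset - offset) & (granularity - 1);
	}

	int main(void)
	{
		/* 4096-byte physical blocks, 512-byte io_min, request at byte 3584:
		 * prints 512 (3584 is 512 bytes short of the 4096 boundary);
		 * the old formula, 3584 & (512 - 1), would have printed 0 */
		printf("%u\n", align_off(4096, 512, 0, 3584));
		return 0;
	}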
static inline int bdev_alignment_offset(struct block_device *bdev)
#define IEEE80211_HT_CAP_DELAY_BA 0x0400
#define IEEE80211_HT_CAP_MAX_AMSDU 0x0800
#define IEEE80211_HT_CAP_DSSSCCK40 0x1000
-#define IEEE80211_HT_CAP_PSMP_SUPPORT 0x2000
+#define IEEE80211_HT_CAP_RESERVED 0x2000
#define IEEE80211_HT_CAP_40MHZ_INTOLERANT 0x4000
#define IEEE80211_HT_CAP_LSIG_TXOP_PROT 0x8000
#define IN_DEV_FORWARD(in_dev) IN_DEV_CONF_GET((in_dev), FORWARDING)
#define IN_DEV_MFORWARD(in_dev) IN_DEV_ANDCONF((in_dev), MC_FORWARDING)
#define IN_DEV_RPFILTER(in_dev) IN_DEV_MAXCONF((in_dev), RP_FILTER)
+#define IN_DEV_SRC_VMARK(in_dev) IN_DEV_ORCONF((in_dev), SRC_VMARK)
#define IN_DEV_SOURCE_ROUTE(in_dev) IN_DEV_ANDCONF((in_dev), \
ACCEPT_SOURCE_ROUTE)
#define IN_DEV_ACCEPT_LOCAL(in_dev) IN_DEV_ORCONF((in_dev), ACCEPT_LOCAL)
}
/**
- * INIT_KFIFO - Initialize a kfifo declared by DECLARED_KFIFO
+ * INIT_KFIFO - Initialize a kfifo declared by DECLARE_KFIFO
* @name: name of the declared kfifo datatype
*/
#define INIT_KFIFO(name) \
resource_size_t);
void pcibios_update_irq(struct pci_dev *, int irq);
+/* Weak but can be overridden by arch */
+void pci_fixup_cardbus(struct pci_bus *);
+
/* Generic PCI functions used internally */
extern struct pci_bus *pci_find_bus(int domain, int busnr);
NET_IPV4_CONF_ARP_ACCEPT=21,
NET_IPV4_CONF_ARP_NOTIFY=22,
NET_IPV4_CONF_ACCEPT_LOCAL=23,
+ NET_IPV4_CONF_SRC_VMARK=24,
__NET_IPV4_CONF_MAX
};
* unspecified depending on the hardware capabilities flags
* @IEEE80211_HW_SIGNAL_*
* @noise: noise when receiving this frame, in dBm.
- * @qual: overall signal quality indication, in percent (0-100).
* @antenna: antenna used
* @rate_idx: index of data rate into band's supported rates or MCS index if
* HT rates are used (RX_FLAG_HT)
int freq;
int signal;
int noise;
- int __deprecated qual;
int antenna;
int rate_idx;
int flag;
local_bh_enable();
}
+/*
+ * The TX headroom reserved by mac80211 for its own tx_status functions.
+ * This is enough for the radiotap header.
+ */
+#define IEEE80211_TX_STATUS_HEADROOM 13
+
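The constant presumably exists so drivers can reserve this much extra headroom up front instead of hard-coding the radiotap header size; the BUILD_BUG_ON added further down in this diff keeps it in sync with sizeof(struct ieee80211_tx_status_rtap_hdr).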
/**
* ieee80211_tx_status - transmit status callback
*
#define __LIBSRP_H__
#include <linux/list.h>
+#include <linux/kfifo.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>
#include <scsi/srp.h>
}
EXPORT_SYMBOL(do_mmap_pgoff);
+SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags,
+ unsigned long, fd, unsigned long, pgoff)
+{
+ struct file *file = NULL;
+ unsigned long retval = -EBADF;
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ if (unlikely(flags & MAP_HUGETLB))
+ return -EINVAL;
+ file = fget(fd);
+ if (!file)
+ goto out;
+ } else if (flags & MAP_HUGETLB) {
+ struct user_struct *user = NULL;
+ /*
+ * VM_NORESERVE is used because the reservations will be
+ * taken when vm_ops->mmap() is called
+ * A dummy user value is used because we are not locking
+ * memory so no accounting is necessary
+ */
+ len = ALIGN(len, huge_page_size(&default_hstate));
+ file = hugetlb_file_setup(HUGETLB_ANON_FILE, len, VM_NORESERVE,
+ &user, HUGETLB_ANONHUGE_INODE);
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+ }
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+
+ down_write(&current->mm->mmap_sem);
+ retval = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+ up_write(&current->mm->mmap_sem);
+
+ if (file)
+ fput(file);
+out:
+ return retval;
+}
+
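Userspace reaches this path through plain mmap(); as a hedged illustration of the MAP_ANONYMOUS | MAP_HUGETLB branch above (sizes are arbitrary, the call only succeeds if hugepages have been reserved, e.g. via /proc/sys/vm/nr_hugepages, and older libcs may need MAP_HUGETLB from <linux/mman.h>):

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4UL << 20;	/* the kernel ALIGNs this to the hugepage size */
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		               MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");	/* e.g. ENOMEM if no hugepages reserved */
			return 1;
		}
		munmap(p, len);
		return 0;
	}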
/*
* Some shared mappings will want the pages marked read-only
* to track write events. If so, we'll downgrade vm_page_prot
}
EXPORT_SYMBOL(do_mmap_pgoff);
+SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags,
+ unsigned long, fd, unsigned long, pgoff)
+{
+ struct file *file = NULL;
+ unsigned long retval = -EBADF;
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ goto out;
+ }
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+
+ down_write(&current->mm->mmap_sem);
+ retval = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+ up_write(&current->mm->mmap_sem);
+
+ if (file)
+ fput(file);
+out:
+ return retval;
+}
+
/*
* split a vma into two pieces at address 'addr', a new vma is allocated either
* for the first part or the tail.
#include <linux/module.h>
#include <linux/err.h>
#include <linux/sched.h>
-#include <linux/hugetlb.h>
-#include <linux/syscalls.h>
-#include <linux/mman.h>
-#include <linux/file.h>
#include <asm/uaccess.h>
#define CREATE_TRACE_POINTS
}
EXPORT_SYMBOL_GPL(get_user_pages_fast);
-SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned long, len,
- unsigned long, prot, unsigned long, flags,
- unsigned long, fd, unsigned long, pgoff)
-{
- struct file * file = NULL;
- unsigned long retval = -EBADF;
-
- if (!(flags & MAP_ANONYMOUS)) {
- if (unlikely(flags & MAP_HUGETLB))
- return -EINVAL;
- file = fget(fd);
- if (!file)
- goto out;
- } else if (flags & MAP_HUGETLB) {
- struct user_struct *user = NULL;
- /*
- * VM_NORESERVE is used because the reservations will be
- * taken when vm_ops->mmap() is called
- * A dummy user value is used because we are not locking
- * memory so no accounting is necessary
- */
- len = ALIGN(len, huge_page_size(&default_hstate));
- file = hugetlb_file_setup(HUGETLB_ANON_FILE, len, VM_NORESERVE,
- &user, HUGETLB_ANONHUGE_INODE);
- if (IS_ERR(file))
- return PTR_ERR(file);
- }
-
- flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
-
- down_write(&current->mm->mmap_sem);
- retval = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
- up_write(&current->mm->mmap_sem);
-
- if (file)
- fput(file);
-out:
- return retval;
-}
-
/* Tracepoints definitions. */
EXPORT_TRACEPOINT_SYMBOL(kmalloc);
EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
__u64 count; /* Default No packets to send */
__u64 sofar; /* How many pkts we've sent so far */
__u64 tx_bytes; /* How many bytes we've transmitted */
- __u64 errors; /* Errors when trying to transmit,
- pkts will be re-sent */
+ __u64 errors; /* Errors when trying to transmit */
/* runtime counters relating to clone_skb */
pkt_dev->seq_num++;
pkt_dev->tx_bytes += pkt_dev->last_pkt_size;
break;
+ case NET_XMIT_DROP:
+ case NET_XMIT_CN:
+ case NET_XMIT_POLICED:
+ /* skb has been consumed */
+ pkt_dev->errors++;
+ break;
default: /* Drivers are not supposed to return other values! */
if (net_ratelimit())
pr_info("pktgen: %s xmit error: %d\n",
DEVINET_SYSCTL_RW_ENTRY(ACCEPT_SOURCE_ROUTE,
"accept_source_route"),
DEVINET_SYSCTL_RW_ENTRY(ACCEPT_LOCAL, "accept_local"),
+ DEVINET_SYSCTL_RW_ENTRY(SRC_VMARK, "src_valid_mark"),
DEVINET_SYSCTL_RW_ENTRY(PROXY_ARP, "proxy_arp"),
DEVINET_SYSCTL_RW_ENTRY(MEDIUM_ID, "medium_id"),
DEVINET_SYSCTL_RW_ENTRY(BOOTP_RELAY, "bootp_relay"),
no_addr = in_dev->ifa_list == NULL;
rpf = IN_DEV_RPFILTER(in_dev);
accept_local = IN_DEV_ACCEPT_LOCAL(in_dev);
+ if (mark && !IN_DEV_SRC_VMARK(in_dev))
+ fl.mark = 0;
}
rcu_read_unlock();
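Given the DEVINET_SYSCTL_RW_ENTRY(SRC_VMARK, "src_valid_mark") registration above, the knob should surface as /proc/sys/net/ipv4/conf/<interface>/src_valid_mark. Left at 0, the hunk above zeroes fl.mark, so packet marks set by netfilter cannot influence reverse-path source validation; set to 1, mark-based policy routing rules participate in that fib lookup.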
ht_cap->ht_supported = true;
- ht_cap->cap = le16_to_cpu(ht_cap_ie->cap_info) & sband->ht_cap.cap;
- ht_cap->cap &= ~IEEE80211_HT_CAP_SM_PS;
- ht_cap->cap |= sband->ht_cap.cap & IEEE80211_HT_CAP_SM_PS;
+ /*
+ * The bits listed in this expression should be
+ * the same for the peer and us, if the station
+ * advertises more than we do, we can't use those, thus
+ * we mask them out.
+ */
+ ht_cap->cap = le16_to_cpu(ht_cap_ie->cap_info) &
+ (sband->ht_cap.cap |
+ ~(IEEE80211_HT_CAP_LDPC_CODING |
+ IEEE80211_HT_CAP_SUP_WIDTH_20_40 |
+ IEEE80211_HT_CAP_GRN_FLD |
+ IEEE80211_HT_CAP_SGI_20 |
+ IEEE80211_HT_CAP_SGI_40 |
+ IEEE80211_HT_CAP_DSSSCCK40));
+ /*
+ * The STBC bits are asymmetric -- if we don't have
+ * TX then mask out the peer's RX and vice versa.
+ */
+ if (!(sband->ht_cap.cap & IEEE80211_HT_CAP_TX_STBC))
+ ht_cap->cap &= ~IEEE80211_HT_CAP_RX_STBC;
+ if (!(sband->ht_cap.cap & IEEE80211_HT_CAP_RX_STBC))
+ ht_cap->cap &= ~IEEE80211_HT_CAP_TX_STBC;
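Concretely: ht_cap here holds the peer's advertised capabilities being sanitized for local use, so a missing local IEEE80211_HT_CAP_TX_STBC clears the peer's RX_STBC bits (the peer could never receive STBC from us anyway), and a missing local RX_STBC likewise clears the peer's TX_STBC.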
ampdu_info = ht_cap_ie->ampdu_params_info;
ht_cap->ampdu_factor =
struct sta_info *ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata,
u8 *bssid, u8 *addr, u32 supp_rates)
{
+ struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
int band = local->hw.conf.channel->band;
return NULL;
}
+ if (ifibss->state == IEEE80211_IBSS_MLME_SEARCH)
+ return NULL;
+
if (compare_ether_addr(bssid, sdata->u.ibss.bssid))
return NULL;
* and we need some headroom for passing the frame to monitor
* interfaces, but never both at the same time.
*/
+ BUILD_BUG_ON(IEEE80211_TX_STATUS_HEADROOM !=
+ sizeof(struct ieee80211_tx_status_rtap_hdr));
local->tx_headroom = max_t(unsigned int, local->hw.extra_tx_headroom,
sizeof(struct ieee80211_tx_status_rtap_hdr));
sdata->u.mgd.flags &= ~(IEEE80211_STA_CONNECTION_POLL |
IEEE80211_STA_BEACON_POLL);
+ /*
+ * Always handle WMM once after association regardless
+ * of the first value the AP uses. Setting -1 here has
+ * that effect because the AP value is an unsigned
+ * 4-bit value.
+ */
+ sdata->u.mgd.wmm_last_param_set = -1;
+
ieee80211_led_assoc(local, 1);
sdata->vif.bss_conf.assoc = 1;
if (!local->ps_sdata)
return false;
+ /* No point if we're going to suspend */
+ if (local->quiescing)
+ return false;
+
return true;
}
/* restart hardware */
if (local->open_count) {
+ /*
+ * Upon resume hardware can sometimes be goofy due to
+ * various platform / driver / bus issues, so restarting
+ * the device may at times not work immediately. Propagate
+ * the error.
+ */
res = drv_start(local);
+ if (res) {
+ WARN(local->suspended, "Harware became unavailable "
+ "upon resume. This is could be a software issue"
+ "prior to suspend or a harware issue\n");
+ return res;
+ }
ieee80211_led_radio(local, true);
}
}
}
- WARN_ON(!bss);
+ /*
+ * We might be coming here because the driver reported
+ * a successful association at the same time as the
+ * user requested a deauth. In that case, we will have
+ * removed the BSS from the auth_bsses list due to the
+ * deauth request when the assoc response makes it. If
+ * the two code paths acquire the lock the other way
+ * around, that's just the standard situation of a
+ * deauth being requested while connected.
+ */
+ if (!bss)
+ goto out;
} else if (wdev->conn) {
cfg80211_sme_failed_assoc(wdev);
/*
struct cfg80211_registered_device *rdev;
struct wiphy *wiphy;
struct iw_scan_req *wreq = NULL;
- struct cfg80211_scan_request *creq;
+ struct cfg80211_scan_request *creq = NULL;
int i, err, n_channels = 0;
enum ieee80211_band band;
/* translate "Scan for SSID" request */
if (wreq) {
if (wrqu->data.flags & IW_SCAN_THIS_ESSID) {
- if (wreq->essid_len > IEEE80211_MAX_SSID_LEN)
- return -EINVAL;
+ if (wreq->essid_len > IEEE80211_MAX_SSID_LEN) {
+ err = -EINVAL;
+ goto out;
+ }
memcpy(creq->ssids[0].ssid, wreq->essid, wreq->essid_len);
creq->ssids[0].ssid_len = wreq->essid_len;
}
err = rdev->ops->scan(wiphy, dev, creq);
if (err) {
rdev->scan_req = NULL;
- kfree(creq);
+ /* creq will be freed below */
} else {
nl80211_send_scan_start(rdev, dev);
+ /* creq now owned by driver */
+ creq = NULL;
dev_hold(dev);
}
out:
+ kfree(creq);
cfg80211_unlock_rdev(rdev);
return err;
}
if (!dev)
goto free_dst;
- /* Copy neighbout for reachability confirmation */
+ /* Copy neighbour for reachability confirmation */
dst0->neighbour = neigh_clone(dst->neighbour);
xfrm_init_path((struct xfrm_dst *)dst0, dst, nfheader_len);
struct snd_pcm_hw_params *params)
{
int err;
+ struct aaci *aaci = substream->private_data;
aaci_pcm_hw_free(substream);
if (aacirun->pcm_open) {
static int aaci_pcm_playback_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params)
{
- struct aaci *aaci = substream->private_data;
struct aaci_runtime *aacirun = substream->runtime->private_data;
unsigned int channels = params_channels(params);
int ret;
static int aaci_pcm_capture_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params)
{
- struct aaci *aaci = substream->private_data;
struct aaci_runtime *aacirun = substream->runtime->private_data;
int ret;
err = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_RATE,
hw->rate_min, hw->rate_max);
- if (err < 0)
- return err;
+ if (err < 0)
+ return err;
err = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_BYTES,
hw->period_bytes_min, hw->period_bytes_max);
- if (err < 0)
- return err;
+ if (err < 0)
+ return err;
err = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIODS,
hw->periods_min, hw->periods_max);
return;
/* generate tone */
- snd_hda_codec_write_cache(codec, beep->nid, 0,
+ snd_hda_codec_write(codec, beep->nid, 0,
AC_VERB_SET_BEEP_CONTROL, beep->tone);
}
beep->dev = NULL;
cancel_work_sync(&beep->beep_work);
/* turn off beep for sure */
- snd_hda_codec_write_cache(beep->codec, beep->nid, 0,
+ snd_hda_codec_write(beep->codec, beep->nid, 0,
AC_VERB_SET_BEEP_CONTROL, 0);
}
beep->enabled = enable;
if (!enable) {
/* turn off beep */
- snd_hda_codec_write_cache(beep->codec, beep->nid, 0,
+ snd_hda_codec_write(beep->codec, beep->nid, 0,
AC_VERB_SET_BEEP_CONTROL, 0);
}
if (beep->mode == HDA_BEEP_MODE_SWREG) {
mutex_init(&beep->mutex);
if (beep->mode == HDA_BEEP_MODE_ON) {
- beep->enabled = 1;
- snd_hda_do_register(&beep->register_work);
+ int err = snd_hda_do_attach(beep);
+ if (err < 0) {
+ kfree(beep);
+ codec->beep = NULL;
+ return err;
+ }
}
return 0;
if (beep) {
cancel_work_sync(&beep->register_work);
cancel_delayed_work(&beep->unregister_work);
- if (beep->enabled)
+ if (beep->dev)
snd_hda_do_detach(beep);
codec->beep = NULL;
kfree(beep);
*/
u32 snd_hda_pin_sense(struct hda_codec *codec, hda_nid_t nid)
{
- u32 pincap = snd_hda_query_pin_caps(codec, nid);
-
- if (pincap & AC_PINCAP_TRIG_REQ) /* need trigger? */
- snd_hda_codec_read(codec, nid, 0, AC_VERB_SET_PIN_SENSE, 0);
+ u32 pincap;
+ if (!codec->no_trigger_sense) {
+ pincap = snd_hda_query_pin_caps(codec, nid);
+ if (pincap & AC_PINCAP_TRIG_REQ) /* need trigger? */
+ snd_hda_codec_read(codec, nid, 0, AC_VERB_SET_PIN_SENSE, 0);
+ }
return snd_hda_codec_read(codec, nid, 0,
AC_VERB_GET_PIN_SENSE, 0);
}
unsigned int pin_amp_workaround:1; /* pin out-amp takes index
* (e.g. Conexant codecs)
*/
+ unsigned int no_trigger_sense:1; /* don't trigger at pin-sensing */
#ifdef CONFIG_SND_HDA_POWER_SAVE
unsigned int power_on :1; /* current (global) power-state */
unsigned int power_transition :1; /* power-state in transition */
*/
unsigned char stream_tag; /* assigned stream */
unsigned char index; /* stream index */
+ int device; /* last device number assigned to */
unsigned int opened :1;
unsigned int running :1;
*/
/* assign a stream for the PCM */
-static inline struct azx_dev *azx_assign_device(struct azx *chip, int stream)
+static inline struct azx_dev *
+azx_assign_device(struct azx *chip, struct snd_pcm_substream *substream)
{
int dev, i, nums;
- if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ struct azx_dev *res = NULL;
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
dev = chip->playback_index_offset;
nums = chip->playback_streams;
} else {
}
for (i = 0; i < nums; i++, dev++)
if (!chip->azx_dev[dev].opened) {
- chip->azx_dev[dev].opened = 1;
- return &chip->azx_dev[dev];
+ res = &chip->azx_dev[dev];
+ if (res->device == substream->pcm->device)
+ break;
}
- return NULL;
+ if (res) {
+ res->opened = 1;
+ res->device = substream->pcm->device;
+ }
+ return res;
}
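As far as this hunk shows, the loop now remembers the last free stream as a fallback but breaks out early on a free stream whose recorded device number matches the PCM being opened, making the stream-to-device assignment sticky across open/close rather than always handing out the first free stream.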
/* release the assigned stream */
int err;
mutex_lock(&chip->open_mutex);
- azx_dev = azx_assign_device(chip, substream->stream);
+ azx_dev = azx_assign_device(chip, substream);
if (azx_dev == NULL) {
mutex_unlock(&chip->open_mutex);
return -EBUSY;
*/
spec->multiout.no_share_stream = 1;
+ codec->no_trigger_sense = 1;
+
return 0;
}
codec->patch_ops = ad198x_patch_ops;
+ codec->no_trigger_sense = 1;
+
return 0;
}
codec->patch_ops.unsol_event = ad1981_hp_unsol_event;
break;
}
+
+ codec->no_trigger_sense = 1;
+
return 0;
}
#endif
spec->vmaster_nid = 0x04;
+ codec->no_trigger_sense = 1;
+
return 0;
}
codec->patch_ops = ad198x_patch_ops;
+ codec->no_trigger_sense = 1;
+
return 0;
}
break;
}
+ codec->no_trigger_sense = 1;
+
return 0;
}
spec->mixers[2] = ad1882_6stack_mixers;
break;
}
+
+ codec->no_trigger_sense = 1;
+
return 0;
}
{
if (!nid)
return 0;
- /* NOTE: we can't use snd_hda_jack_detect() here because STAC/IDT
- * codecs behave wrongly when SET_PIN_SENSE is triggered, although
- * the pincap gives TRIG_REQ bit.
- */
- if (snd_hda_codec_read(codec, nid, 0, AC_VERB_GET_PIN_SENSE, 0) &
- AC_PINSENSE_PRESENCE)
- return 1;
- return 0;
+ return snd_hda_jack_detect(codec, nid);
}
static void stac92xx_line_out_detect(struct hda_codec *codec,
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
spec->num_pins = ARRAY_SIZE(stac9200_pin_nids);
spec->pin_nids = stac9200_pin_nids;
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
spec->num_pins = ARRAY_SIZE(stac925x_pin_nids);
spec->pin_nids = stac925x_pin_nids;
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
codec->slave_dig_outs = stac92hd73xx_slave_dig_outs;
spec->num_pins = ARRAY_SIZE(stac92hd73xx_pin_nids);
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
codec->slave_dig_outs = stac92hd83xxx_slave_dig_outs;
spec->digbeep_nid = 0x21;
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
codec->patch_ops = stac92xx_patch_ops;
spec->num_pins = STAC92HD71BXX_NUM_PINS;
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
spec->num_pins = ARRAY_SIZE(stac922x_pin_nids);
spec->pin_nids = stac922x_pin_nids;
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
codec->slave_dig_outs = stac927x_slave_dig_outs;
spec->num_pins = ARRAY_SIZE(stac927x_pin_nids);
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
spec->num_pins = ARRAY_SIZE(stac9205_pin_nids);
spec->pin_nids = stac9205_pin_nids;
spec = kzalloc(sizeof(*spec), GFP_KERNEL);
if (spec == NULL)
return -ENOMEM;
+ codec->no_trigger_sense = 1;
codec->spec = spec;
spec->num_pins = ARRAY_SIZE(stac9872_pin_nids);
spec->pin_nids = stac9872_pin_nids;
struct kvm_assigned_dev_kernel *match;
struct pci_dev *dev;
- down_read(&kvm->slots_lock);
mutex_lock(&kvm->lock);
+ down_read(&kvm->slots_lock);
match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
assigned_dev->assigned_dev_id);
}
out:
- mutex_unlock(&kvm->lock);
up_read(&kvm->slots_lock);
+ mutex_unlock(&kvm->lock);
return r;
out_list_del:
list_del(&match->list);
pci_dev_put(dev);
out_free:
kfree(match);
- mutex_unlock(&kvm->lock);
up_read(&kvm->slots_lock);
+ mutex_unlock(&kvm->lock);
return r;
}
/*
* Ordering of locks:
*
- * kvm->slots_lock --> kvm->lock --> kvm->irq_lock
+ * kvm->lock --> kvm->slots_lock --> kvm->irq_lock
*/
DEFINE_SPINLOCK(kvm_lock);
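Matching the reordered ioctl paths earlier in this diff, correct nesting under the new rule looks like this sketch (a fragment, not a complete function):

	mutex_lock(&kvm->lock);		/* outermost */
	down_read(&kvm->slots_lock);	/* nested inside kvm->lock */
	/* ... operate on assigned devices / memory slots ... */
	up_read(&kvm->slots_lock);	/* release in reverse order */
	mutex_unlock(&kvm->lock);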
out:
return kvm;
+#if defined(KVM_COALESCED_MMIO_PAGE_OFFSET) || \
+ (defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER))
out_err:
hardware_disable_all();
+#endif
out_err_nodisable:
kfree(kvm);
return ERR_PTR(r);