D: Original author of the Linux networking code
N: Anton Blanchard
-E: anton@linuxcare.com
-W: http://linuxcare.com.au/anton/
+E: anton@samba.org
+W: http://samba.org/~anton/
P: 1024/8462A731 4C 55 86 34 44 59 A7 99 2B 97 88 4A 88 9A 0D 97
D: sun4 port, Sparc hacker
S: 47 Robert Street
D: AUN network protocols
D: Co-architect of the parallel port sharing system
D: IPv6 netfilter
-S: FutureTV Labs Ltd
-S: Brunswick House, 61-69 Newmarket Rd, Cambridge CB5 8EG
-S: United Kingdom
N: Thomas Bogendörfer
E: tsbogend@alpha.franken.de
location). You also want to check out the PCMCIA-HOWTO, available
from http://www.linuxdoc.org/docs.html#howto .
-Hermes (AT&T/Lucent/Orinoco/3com) wireless support
+Hermes support (Orinoco/WavelanIEEE/PrismII/Symbol 802.11b cards)
CONFIG_PCMCIA_HERMES
A driver for "Hermes" chipset based PCMCIA wireless adaptors, such
- as the Lucent/Orinoco cards and the Cabletron RoamAbout. It should
- also be usable on Prism II based cards such as the Farallon Skyline
- and the 3Com AirConnect.
+   as the Lucent WavelanIEEE/Orinoco cards and their OEM variants
+   (Cabletron/Enterasys RoamAbout 802.11, ELSA Airlancer, Melco Buffalo
+   and others). It should also be usable on various Prism II based cards
+   such as the Linksys, D-Link and Farallon Skyline, as well as on Symbol
+   cards such as the 3Com AirConnect and Ericsson WLAN.
To use your PC-cards, you will need supporting software from David
Hinds' pcmcia-cs package (see the file Documentation/Changes for
location). You also want to check out the PCMCIA-HOWTO, available
from http://www.linuxdoc.org/docs.html#howto .
+   You will also very likely need the Wireless Tools in order to
+   configure your card and to make /etc/pcmcia/wireless.opts work:
+ http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Tools.html
+
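The wireless.opts file mentioned above is a shell-style case statement keyed on the card address; a minimal fragment might look like the following (all values here are placeholders, not recommendations):

```
# /etc/pcmcia/wireless.opts fragment -- example values only
case "$ADDRESS" in
*,*,*,*)
    INFO="Hermes card, default settings"
    ESSID="any"
    MODE="Managed"
    ;;
esac
```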
Aviator/Raytheon 2.4GHz wireless support
CONFIG_PCMCIA_RAYCS
Say Y here if you intend to attach an Aviator/Raytheon PCMCIA
say M here and read Documentation/modules.txt. This is recommended.
The module will be called 8139too.o.
+Use PIO instead of MMIO
+CONFIG_8139TOO_PIO
+ This instructs the driver to use programmed I/O ports (PIO) instead
+ of PCI shared memory (MMIO). This can possibly solve some problems in
+ case your mainboard has memory consistency issues. If unsure, say N.
+
+Support for automatic channel equalization (EXPERIMENTAL)
+CONFIG_8139TOO_TUNE_TWISTER
+  This implements a function which might come in handy in case you are
+  using low quality cable on long runs. It tries to match the transceiver
+  to the cable characteristics. This is experimental because the feature
+  is hardly documented by the manufacturer. If unsure, say N.
+
+Support for older RTL-8129/8130 boards
+CONFIG_8139TOO_8129
+ This enables support for the older and uncommon RTL-8129 and
+ RTL-8130 chips, which support MII via an external transceiver, instead
+ of an internal one. Disabling this option will save some memory
+ by making the code size smaller. If unsure, say Y.
+
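For reference, a .config fragment reflecting the "if unsure" recommendations above (N, N, Y) might read:

```
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
CONFIG_8139TOO_8129=y
```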
SiS 900 PCI Fast Ethernet Adapter support
CONFIG_SIS900
This is a driver for the Fast Ethernet PCI network cards based on
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET. So immediately, once the first user maps a given
-physical page into it's address space, by implication the D-cache
+physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
-maps this page at it's virtual address.
+maps this page at its virtual address.
First, I describe the old method to deal with this problem. I am
describing it for documentation purposes, but it is deprecated and the
Admittedly, the author did not think very much when designing this
interface. It does not give the architecture enough information about
-what exactly is going on, and there is not context with which to base
-any judgment about whether an alias is possible at all. The new
-interfaces to deal with D-cache aliasing are meant to address this by
-telling the architecture specific code exactly which is going on at
-the proper points in time.
+what exactly is going on, and there is no context on which to base
+any judgment about whether an alias is possible at all. The new
+interfaces to deal with D-cache aliasing are meant to address this by
+telling the architecture specific code exactly what is going on at
+the proper points in time.
Here is the new interface:
--- /dev/null
+ Hardware driver for Intel i810 Random Number Generator (RNG)
+ Copyright 2000,2001 Jeff Garzik <jgarzik@mandrakesoft.com>
+ Copyright 2000,2001 Philipp Rumpf <prumpf@mandrakesoft.com>
+
+Introduction:
+
+ The i810_rng device driver is software that makes use of a
+ special hardware feature on the Intel i8xx-based chipsets,
+ a Random Number Generator (RNG).
+
+ In order to make effective use of this device driver, you
+ should download the support software as well. Download the
+ latest version of the "intel-rng-tools" package from the
+ i810_rng driver's official Web site:
+
+ http://sourceforge.net/projects/gkernel/
+
+About the Intel RNG hardware, from the firmware hub datasheet:
+
+ The Firmware Hub integrates a Random Number Generator (RNG)
+ using thermal noise generated from inherently random quantum
+ mechanical properties of silicon. When not generating new random
+ bits the RNG circuitry will enter a low power state. Intel will
+ provide a binary software driver to give third party software
+ access to our RNG for use as a security feature. At this time,
+ the RNG is only to be used with a system in an OS-present state.
+
+Theory of operation:
+
+	This is a character driver. Using the standard open()
+ and read() system calls, you can read random data from
+ the i810 RNG device. This data is NOT CHECKED by any
+ fitness tests, and could potentially be bogus (if the
+ hardware is faulty or has been tampered with). Data is only
+ output if the hardware "has-data" flag is set, but nevertheless
+ a security-conscious person would run fitness tests on the
+ data before assuming it is truly random.
+
+ /dev/intel_rng is char device major 10, minor 183.
+
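A user-space consumer of this character device simply opens it and reads. Here is a minimal sketch; the helper name and the parameterized device path are illustrative, and error handling is kept to a minimum:

```c
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* Read up to n bytes of random data from the given character device
 * (e.g. /dev/intel_rng).  Returns the number of bytes actually read,
 * or -1 if the device cannot be opened. */
ssize_t read_rng(const char *dev, unsigned char *buf, size_t n)
{
	int fd = open(dev, O_RDONLY);
	ssize_t total = 0;

	if (fd < 0)
		return -1;
	while ((size_t)total < n) {
		ssize_t r = read(fd, buf + total, n - total);
		if (r <= 0)
			break;	/* EOF or read error: return what we have */
		total += r;
	}
	close(fd);
	return total;
}
```

Remember that, as noted above, the data is not fitness-tested; a security-conscious application should run its own tests on the output.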
+Driver notes:
+
+ * FIXME: support poll(2)
+
+	NOTE: request_mem_region was removed, for three reasons:
+ 1) Only one RNG is supported by this driver, 2) The location
+ used by the RNG is a fixed location in MMIO-addressable memory,
+ 3) users with properly working BIOS e820 handling will always
+ have the region in which the RNG is located reserved, so
+ request_mem_region calls always fail for proper setups.
+ However, for people who use mem=XX, BIOS e820 information is
+ -not- in /proc/iomem, and request_mem_region(RNG_ADDR) can
+ succeed.
+
+Driver details:
+
+ Based on:
+ Intel 82802AB/82802AC Firmware Hub (FWH) Datasheet
+ May 1999 Order Number: 290658-002 R
+
+ Intel 82802 Firmware Hub: Random Number Generator
+ Programmer's Reference Manual
+ December 1999 Order Number: 298029-001 R
+
+ Intel 82802 Firmware HUB Random Number Generator Driver
+ Copyright (c) 2000 Matt Sottek <msottek@quiknet.com>
+
+ Special thanks to Matt Sottek. I did the "guts", he
+ did the "brains" and all the testing.
+
+Change history:
+
+ Version 0.9.5:
+ * Rip out entropy injection via timer. It never ever worked,
+ and a better solution (rngd) is now available.
+
+ Version 0.9.4:
+ * Fix: Remove request_mem_region
+ * Fix: Horrible bugs in FIPS calculation and test execution
+
+ Version 0.9.3:
+ * Clean up rng_read a bit.
+ * Update i810_rng driver Web site URL.
+ * Increase default timer interval to 4 samples per second.
+ * Abort if mem region is not available.
+ * BSS zero-initialization cleanup.
+ * Call misc_register() from rng_init_one.
+ * Fix O_NONBLOCK to occur before we schedule.
+
+ Version 0.9.2:
+ * Simplify open blocking logic
+
+ Version 0.9.1:
+ * Support i815 chipsets too (Matt Sottek)
+ * Fix reference counting when statically compiled (prumpf)
+ * Rewrite rng_dev_read (prumpf)
+ * Make module races less likely (prumpf)
+ * Small miscellaneous bug fixes (prumpf)
+ * Use pci table for PCI id list
+
+ Version 0.9.0:
+ * Don't register a pci_driver, because we are really
+ using PCI bridge vendor/device ids, and someone
+ may want to register a driver for the bridge. (bug fix)
+ * Don't let the usage count go negative (bug fix)
+ * Clean up spinlocks (bug fix)
+ * Enable PCI device, if necessary (bug fix)
+ * iounmap on module unload (bug fix)
+ * If RNG chrdev is already in use when open(2) is called,
+ sleep until it is available.
+ * Remove redundant globals rng_allocated, rng_use_count
+ * Convert numeric globals to unsigned
+ * Module unload cleanup
+
+ Version 0.6.2:
+ * Clean up spinlocks. Since we don't have any interrupts
+ to worry about, but we do have a timer to worry about,
+ we use spin_lock_bh everywhere except the timer function
+ itself.
+ * Fix module load/unload.
+ * Fix timer function and h/w enable/disable logic
+ * New timer interval sysctl
+ * Clean up sysctl names
"8139too" Fast Ethernet driver for Linux
- Improved support for RTL-8139 10/100 Fast Ethernet adapters
+ RTL-8139, -8129, and -8130 10/100 Fast Ethernet adapters
- Copyright 2000 Jeff Garzik <jgarzik@mandrakesoft.com>
+ Copyright 2000,2001 Jeff Garzik <jgarzik@mandrakesoft.com>
+
+ http://sourceforge.net/projects/gkernel/
Architectures supported (all PCI platforms):
Requirements
------------
-Kernel 2.3.41 or later.
+Kernel 2.4.3 or later.
A Fast Ethernet adapter containing an RTL8139-based chip.
AOpen ALN-325C
KTI KF-230TX
KTI KF-230TX/2
+Lantech FastNet TX
+SMC EZNET 10/100
(please add your adapter model to this list)
Submitting Bug Reports
----------------------
Obtain and compile the modified rtl8139-diag source code from the
-8139too driver Web site. This diagnostics programs, originally
-from Donald Becker, has been modified to display all registers
-on your RTL8139 chip, not just the first 0x80.
+8139too driver Web site, http://sourceforge.net/projects/gkernel/ .
+This diagnostic program, originally from Donald Becker, has been
+modified to display all registers on your RTL8139 chip, not just the
+first 0x80.
If possible, send the output of a working and broken driver with
- rtl8139-diag -mmmaaavvveefN > my-output-file.txt
+ rtl8139-diag -mmaaavvveefN > my-output-file.txt
Send "lspci -vvv" or "cat /proc/pci" output for PCI information.
b) The "io" parameter must be specified on the command-line.
-c) In case you can not re-load the driver because Linux system
- returns the "device or resource busy" message, try to re-load it by
- increment the IO port address by one. The driver will write
- commands to the IO base addresses to reset the data port pointer.
- You can specify an I/O address with an address value one greater
- than the configured address. Example, to scan for an adapter
- located at IO base 0x300, specify an IO address of 0x301.
+c) The driver's hardware probe routine is designed to avoid
+ writing to I/O space until it knows that there is a cs89x0
+ card at the written addresses. This could cause problems
+ with device probing. To avoid this behaviour, add one
+ to the `io=' module parameter. This doesn't actually change
+ the I/O address, but it is a flag to tell the driver
+   to partially initialise the hardware before trying to
+ identify the card. This could be dangerous if you are
+ not sure that there is a cs89x0 card at the provided address.
+
+ For example, to scan for an adapter located at IO base 0x300,
+ specify an IO address of 0x301.
d) The "duplex=auto" parameter is only supported for the CS8920.
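As a concrete sketch of item (c), the commands below compute base+1 and print the resulting modprobe invocation rather than executing it (the I/O base 0x300 is simply the example value from above):

```shell
# cs89x0 probe-flag sketch: pass io=<base+1> to request the safer probe
io_base=0x300
probe_io=$(( io_base + 1 ))
printf 'modprobe cs89x0 io=0x%x\n' "$probe_io"
```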
if it is <= 0.
Default: 2
+tcp_rfc1337 - BOOLEAN
+	If set, the TCP stack behaves as specified in RFC 1337. If unset,
+	we do not conform to the RFC, but prevent TCP TIME_WAIT
+	assassination.
+ Default: 0
+
ip_local_port_range - 2 INTEGERS
Defines the local port range that is used by TCP and UDP to
choose the local port. The first number is the first, the
(i.e. by default) range 1024-4999 is enough to issue up to
2000 connections per second to systems supporting timestamps.
+ip_nonlocal_bind - BOOLEAN
+	If set, allows processes to bind() to non-local IP addresses,
+ which can be quite useful - but may break some applications.
+ Default: 0
+
+ip_dynaddr - BOOLEAN
+	If set to a non-zero value, enables support for dynamic addresses.
+	If set to a value larger than 1, a kernel log
+ message will be printed when dynamic address rewriting
+ occurs.
+ Default: 0
+
icmp_echo_ignore_all - BOOLEAN
icmp_echo_ignore_broadcasts - BOOLEAN
If either is set to true, then the kernel will ignore either all
Alpha 1/1024s. See the HZ define in /usr/include/asm/param.h for the exact
value on your system.
+igmp_max_memberships - INTEGER
+ Change the maximum number of multicast groups we can subscribe to.
+ Default: 20
+
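Taken together, the new tunables above could be set persistently with an /etc/sysctl.conf fragment along these lines (the values shown simply restate the documented defaults, with ip_dynaddr enabled as an example):

```
# /etc/sysctl.conf fragment for the tunables described above
net.ipv4.tcp_rfc1337 = 0
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.ip_dynaddr = 1
net.ipv4.igmp_max_memberships = 20
```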
conf/interface/*:
conf/all/* is special and changes the settings for all interfaces.
Change special settings per interface.
Updated by:
Andi Kleen
ak@muc.de
-$Id: ip-sysctl.txt,v 1.17 2000/11/06 07:15:36 davem Exp $
+$Id: ip-sysctl.txt,v 1.18 2001/03/16 06:49:20 davem Exp $
P: Jakub Jelinek
M: jj@sunsite.ms.mff.cuni.cz
P: Anton Blanchard
-M: anton@linuxcare.com
+M: anton@samba.org
L: sparclinux@vger.kernel.org
L: ultralinux@vger.kernel.org
W: http://ultra.linux.cz
L: linux-x25@vger.kernel.org
S: Maintained
+X86 3-LEVEL PAGING (PAE) SUPPORT
+P: Ingo Molnar
+M: mingo@redhat.com
+S: Maintained
+
Z85230 SYNCHRONOUS DRIVER
P: Alan Cox
M: alan@redhat.com
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 3
-EXTRAVERSION =-pre7
+EXTRAVERSION =-pre8
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
contains information about problems that may result from upgrading
your kernel.
+ - The Documentation/DocBook/ subdirectory contains several guides for
+ kernel developers and users. These guides can be rendered in a
+ number of formats: PostScript (.ps), PDF, and HTML, among others.
+ After installation, "make psdocs", "make pdfdocs", or "make htmldocs"
+ will render the documentation in the requested format.
+
INSTALLING the kernel:
- If you install the full sources, put the kernel tarball in a
/* ??? Experimenting with no HAE for CIA. */
#define CIA_DEFAULT_MEM_BASE ((32+2)*1024*1024)
+#define IRONGATE_DEFAULT_MEM_BASE ((256*8-16)*1024*1024)
/*
* A small note about bridges and interrupts. The DECchip 21050 (and
machine_check: nautilus_machine_check,
max_dma_address: ALPHA_NAUTILUS_MAX_DMA_ADDRESS,
min_io_address: DEFAULT_IO_BASE,
- min_mem_address: DEFAULT_MEM_BASE,
+ min_mem_address: IRONGATE_DEFAULT_MEM_BASE,
nr_irqs: 16,
device_interrupt: isa_device_interrupt,
ENTRY(empty_zero_page)
.org 0x5000
-ENTRY(empty_bad_page)
-
-.org 0x6000
-ENTRY(empty_bad_pte_table)
-
-#if CONFIG_X86_PAE
-
- .org 0x7000
- ENTRY(empty_bad_pmd_table)
-
- .org 0x8000
-
-#else
-
- .org 0x7000
-
-#endif
/*
* This starts the data section. Note that the above is all
visws_board_rev = raw;
}
- printk("Silicon Graphics %s (rev %d)\n",
+ printk(KERN_INFO "Silicon Graphics %s (rev %d)\n",
visws_board_type == VISWS_320 ? "320" :
(visws_board_type == VISWS_540 ? "540" :
"unknown"),
int x = e820.nr_map;
if (x == E820MAX) {
- printk("Ooops! Too many entries in the memory map!\n");
+ printk(KERN_ERR "Ooops! Too many entries in the memory map!\n");
return;
}
add_memory_region(0, LOWMEMSIZE(), E820_RAM);
add_memory_region(HIGH_MEMORY, mem_size << 10, E820_RAM);
}
- printk("BIOS-provided physical RAM map:\n");
+ printk(KERN_INFO "BIOS-provided physical RAM map:\n");
print_memory_map(who);
} /* setup_memory_region */
*to = '\0';
*cmdline_p = command_line;
if (usermem) {
- printk("user-defined physical RAM map:\n");
+ printk(KERN_INFO "user-defined physical RAM map:\n");
print_memory_map("user");
}
}
initrd_end = initrd_start+INITRD_SIZE;
}
else {
- printk("initrd extends beyond end of memory "
+ printk(KERN_ERR "initrd extends beyond end of memory "
"(0x%08lx > 0x%08lx)\ndisabling initrd\n",
INITRD_START + INITRD_SIZE,
max_low_pfn << PAGE_SHIFT);
if (n >= 0x80000005) {
cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
- printk("CPU: L1 I Cache: %dK (%d bytes/line), D cache %dK (%d bytes/line)\n",
+ printk(KERN_INFO "CPU: L1 I Cache: %dK (%d bytes/line), D cache %dK (%d bytes/line)\n",
edx>>24, edx&0xFF, ecx>>24, ecx&0xFF);
c->x86_cache_size=(ecx>>24)+(edx>>24);
}
c->x86_cache_size = l2size;
- printk("CPU: L2 Cache: %dK (%d bytes/line)\n",
+ printk(KERN_INFO "CPU: L2 Cache: %dK (%d bytes/line)\n",
l2size, ecx & 0xFF);
}
name="C6";
fcr_set=ECX8|DSMC|EDCTLB|EMMX|ERETSTK;
fcr_clr=DPDC;
- printk("Disabling bugged TSC.\n");
+ printk(KERN_NOTICE "Disabling bugged TSC.\n");
clear_bit(X86_FEATURE_TSC, &c->x86_capability);
break;
case 8:
newlo=(lo|fcr_set) & (~fcr_clr);
if (newlo!=lo) {
- printk("Centaur FCR was 0x%X now 0x%X\n", lo, newlo );
+ printk(KERN_INFO "Centaur FCR was 0x%X now 0x%X\n", lo, newlo );
wrmsr(0x107, newlo, hi );
} else {
- printk("Centaur FCR is 0x%X\n",lo);
+ printk(KERN_INFO "Centaur FCR is 0x%X\n",lo);
}
/* Emulate MTRRs using Centaur's MCR. */
set_bit(X86_FEATURE_CENTAUR_MCR, &c->x86_capability);
max = cpuid_eax(0x80860000);
if ( max >= 0x80860001 ) {
cpuid(0x80860001, &dummy, &cpu_rev, &cpu_freq, &cpu_flags);
- printk("CPU: Processor revision %u.%u.%u.%u, %u MHz\n",
+ printk(KERN_INFO "CPU: Processor revision %u.%u.%u.%u, %u MHz\n",
(cpu_rev >> 24) & 0xff,
(cpu_rev >> 16) & 0xff,
(cpu_rev >> 8) & 0xff,
}
if ( max >= 0x80860002 ) {
cpuid(0x80860002, &dummy, &cms_rev1, &cms_rev2, &dummy);
- printk("CPU: Code Morphing Software revision %u.%u.%u-%u-%u\n",
+ printk(KERN_INFO "CPU: Code Morphing Software revision %u.%u.%u-%u-%u\n",
(cms_rev1 >> 24) & 0xff,
(cms_rev1 >> 16) & 0xff,
(cms_rev1 >> 8) & 0xff,
(void *)&cpu_info[56],
(void *)&cpu_info[60]);
cpu_info[64] = '\0';
- printk("CPU: %s\n", cpu_info);
+ printk(KERN_INFO "CPU: %s\n", cpu_info);
}
/* Unhide possibly hidden capability flags */
c->f00f_bug = 1;
if ( !f00f_workaround_enabled ) {
trap_init_f00f_bug();
- printk(KERN_INFO "Intel Pentium with F0 0F bug - workaround enabled.\n");
+ printk(KERN_NOTICE "Intel Pentium with F0 0F bug - workaround enabled.\n");
f00f_workaround_enabled = 1;
}
}
}
}
if ( l1i || l1d )
- printk("CPU: L1 I cache: %dK, L1 D cache: %dK\n",
+ printk(KERN_INFO "CPU: L1 I cache: %dK, L1 D cache: %dK\n",
l1i, l1d);
if ( l2 )
- printk("CPU: L2 cache: %dK\n", l2);
+ printk(KERN_INFO "CPU: L2 cache: %dK\n", l2);
if ( l3 )
- printk("CPU: L3 cache: %dK\n", l3);
+ printk(KERN_INFO "CPU: L3 cache: %dK\n", l3);
/*
* This assumes the L3 cache is shared; it typically lives in
rdmsr(0x119,lo,hi);
lo |= 0x200000;
wrmsr(0x119,lo,hi);
- printk(KERN_INFO "CPU serial number disabled.\n");
+ printk(KERN_NOTICE "CPU serial number disabled.\n");
clear_bit(X86_FEATURE_PN, &c->x86_capability);
}
}
}
}
- printk("CPU: Before vendor init, caps: %08x %08x %08x, vendor = %d\n",
+ printk(KERN_DEBUG "CPU: Before vendor init, caps: %08x %08x %08x, vendor = %d\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
break;
}
- printk("CPU: After vendor init, caps: %08x %08x %08x %08x\n",
+ printk(KERN_DEBUG "CPU: After vendor init, caps: %08x %08x %08x %08x\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
/* Now the feature flags better reflect actual CPU features! */
- printk("CPU: After generic, caps: %08x %08x %08x %08x\n",
+ printk(KERN_DEBUG "CPU: After generic, caps: %08x %08x %08x %08x\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
boot_cpu_data.x86_capability[i] &= c->x86_capability[i];
}
- printk("CPU: Common caps: %08x %08x %08x %08x\n",
+ printk(KERN_DEBUG "CPU: Common caps: %08x %08x %08x %08x\n",
boot_cpu_data.x86_capability[0],
boot_cpu_data.x86_capability[1],
boot_cpu_data.x86_capability[2],
struct tss_struct * t = &init_tss[nr];
if (test_and_set_bit(nr, &cpu_initialized)) {
- printk("CPU#%d already initialized!\n", nr);
+ printk(KERN_WARNING "CPU#%d already initialized!\n", nr);
for (;;) __sti();
}
- printk("Initializing CPU#%d\n", nr);
+ printk(KERN_INFO "Initializing CPU#%d\n", nr);
if (cpu_has_vme || cpu_has_tsc || cpu_has_de)
clear_in_cr4(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
#ifndef CONFIG_X86_TSC
if (tsc_disable && cpu_has_tsc) {
- printk("Disabling TSC...\n");
+ printk(KERN_NOTICE "Disabling TSC...\n");
/**** FIX-HPA: DOES THIS REALLY BELONG HERE? ****/
clear_bit(X86_FEATURE_TSC, boot_cpu_data.x86_capability);
set_in_cr4(X86_CR4_TSD);
static unsigned long totalram_pages;
static unsigned long totalhigh_pages;
-/*
- * BAD_PAGE is the page that is used for page faults when linux
- * is out-of-memory. Older versions of linux just did a
- * do_exit(), but using this instead means there is less risk
- * for a process dying in kernel mode, possibly leaving an inode
- * unused etc..
- *
- * BAD_PAGETABLE is the accompanying page-table: it is initialized
- * to point to BAD_PAGE entries.
- *
- * ZERO_PAGE is a special page that is used for zero-initialized
- * data and COW.
- */
-
-/*
- * These are allocated in head.S so that we get proper page alignment.
- * If you change the size of these then change head.S as well.
- */
-extern char empty_bad_page[PAGE_SIZE];
-#if CONFIG_X86_PAE
-extern pmd_t empty_bad_pmd_table[PTRS_PER_PMD];
-#endif
-extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
-
-/*
- * We init them before every return and make them writable-shared.
- * This guarantees we get out of the kernel in some more or less sane
- * way.
- */
-#if CONFIG_X86_PAE
-static pmd_t * get_bad_pmd_table(void)
-{
- pmd_t v;
- int i;
-
- set_pmd(&v, __pmd(_PAGE_TABLE + __pa(empty_bad_pte_table)));
-
- for (i = 0; i < PAGE_SIZE/sizeof(pmd_t); i++)
- empty_bad_pmd_table[i] = v;
-
- return empty_bad_pmd_table;
-}
-#endif
-
-static pte_t * get_bad_pte_table(void)
-{
- pte_t v;
- int i;
-
- v = pte_mkdirty(mk_pte_phys(__pa(empty_bad_page), PAGE_SHARED));
-
- for (i = 0; i < PAGE_SIZE/sizeof(pte_t); i++)
- empty_bad_pte_table[i] = v;
-
- return empty_bad_pte_table;
-}
-
-
-
-void __handle_bad_pmd(pmd_t *pmd)
-{
- pmd_ERROR(*pmd);
- set_pmd(pmd, __pmd(_PAGE_TABLE + __pa(get_bad_pte_table())));
-}
-
-void __handle_bad_pmd_kernel(pmd_t *pmd)
-{
- pmd_ERROR(*pmd);
- set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(get_bad_pte_table())));
-}
-
int do_check_pgt_cache(int low, int high)
{
int freed = 0;
freed++;
}
if (pte_quicklist) {
- pte_free_slow(pte_alloc_one_fast());
+ pte_free_slow(pte_alloc_one_fast(0));
freed++;
}
} while(pgtable_cache_size > low);
pgd_base = swapper_pg_dir;
#if CONFIG_X86_PAE
- for (i = 0; i < PTRS_PER_PGD; i++) {
- pgd = pgd_base + i;
- __pgd_clear(pgd);
- }
+ for (i = 0; i < PTRS_PER_PGD; i++)
+ set_pgd(pgd_base + i, __pgd(1 + __pa(empty_zero_page)));
#endif
i = __pgd_offset(PAGE_OFFSET);
pgd = pgd_base + i;
#endif
void MMU_init(void);
-static void *MMU_get_page(void);
+void *early_get_page(void);
unsigned long prep_find_end_of_memory(void);
unsigned long pmac_find_end_of_memory(void);
unsigned long apus_find_end_of_memory(void);
unsigned long m8260_find_end_of_memory(void);
#endif /* CONFIG_8260 */
static void mapin_ram(void);
-void map_page(unsigned long va, unsigned long pa, int flags);
+int map_page(unsigned long va, unsigned long pa, int flags);
void set_phys_avail(unsigned long total_ram);
extern void die_if_kernel(char *,struct pt_regs *,long);
pmd_val(*pmd) = (unsigned long) BAD_PAGETABLE;
}
-pte_t *get_pte_slow(pmd_t *pmd, unsigned long offset)
-{
- pte_t *pte;
-
- if (pmd_none(*pmd)) {
- if (!mem_init_done)
- pte = (pte_t *) MMU_get_page();
- else if ((pte = (pte_t *) __get_free_page(GFP_KERNEL)))
- clear_page(pte);
- if (pte) {
- pmd_val(*pmd) = (unsigned long)pte;
- return pte + offset;
- }
- pmd_val(*pmd) = (unsigned long)BAD_PAGETABLE;
- return NULL;
- }
- if (pmd_bad(*pmd)) {
- __bad_pte(pmd);
- return NULL;
- }
- return (pte_t *) pmd_page(*pmd) + offset;
-}
-
int do_check_pgt_cache(int low, int high)
{
int freed = 0;
- if(pgtable_cache_size > high) {
+ if (pgtable_cache_size > high) {
do {
- if(pgd_quicklist)
- free_pgd_slow(get_pgd_fast()), freed++;
- if(pmd_quicklist)
- free_pmd_slow(get_pmd_fast()), freed++;
- if(pte_quicklist)
- free_pte_slow(get_pte_fast()), freed++;
- } while(pgtable_cache_size > low);
+ if (pgd_quicklist) {
+ free_pgd_slow(get_pgd_fast());
+ freed++;
+ }
+ if (pte_quicklist) {
+ pte_free_slow(pte_alloc_one_fast(0));
+ freed++;
+ }
+ } while (pgtable_cache_size > low);
}
return freed;
}
__ioremap(unsigned long addr, unsigned long size, unsigned long flags)
{
unsigned long p, v, i;
+ int err;
/*
* Choose an address to map it to.
flags |= _PAGE_GUARDED;
/*
- * Is it a candidate for a BAT mapping?
+ * Should check if it is a candidate for a BAT mapping
*/
- for (i = 0; i < size; i += PAGE_SIZE)
- map_page(v+i, p+i, flags);
+
+ spin_lock(&init_mm.page_table_lock);
+ err = 0;
+ for (i = 0; i < size && err == 0; i += PAGE_SIZE)
+ err = map_page(v+i, p+i, flags);
+ spin_unlock(&init_mm.page_table_lock);
+ if (err) {
+ if (mem_init_done)
+ vfree((void *)v);
+ return NULL;
+ }
+
out:
return (void *) (v + (addr & ~PAGE_MASK));
}
return (pte_val(*pg) & PAGE_MASK) | (addr & ~PAGE_MASK);
}
-void
+int
map_page(unsigned long va, unsigned long pa, int flags)
{
pmd_t *pd;
/* Use upper 10 bits of VA to index the first level map */
pd = pmd_offset(pgd_offset_k(va), va);
/* Use middle 10 bits of VA to index the second-level map */
- pg = pte_alloc(pd, va);
+ pg = pte_alloc(&init_mm, pd, va);
+ if (pg == 0)
+ return -ENOMEM;
set_pte(pg, mk_pte_phys(pa & PAGE_MASK, __pgprot(flags)));
if (mem_init_done)
flush_hash_page(0, va);
+ return 0;
}
#ifndef CONFIG_8xx
}
}
-/* In fact this is only called until mem_init is done. */
-static void __init *MMU_get_page(void)
+/* This is only called until mem_init is done. */
+void __init *early_get_page(void)
{
void *p;
- if (mem_init_done) {
- p = (void *) __get_free_page(GFP_KERNEL);
- } else if (init_bootmem_done) {
+ if (init_bootmem_done) {
p = alloc_bootmem_pages(PAGE_SIZE);
} else {
p = mem_pieces_find(PAGE_SIZE, PAGE_SIZE);
}
- if (p == 0)
- panic("couldn't get a page in MMU_get_page");
- __clear_user(p, PAGE_SIZE);
return p;
}
#
# CONFIG_SPARCAUDIO is not set
# CONFIG_SPARCAUDIO_AMD7930 is not set
-# CONFIG_SPARCAUDIO_CS4231 is not set
# CONFIG_SPARCAUDIO_DBRI is not set
+# CONFIG_SPARCAUDIO_CS4231 is not set
# CONFIG_SPARCAUDIO_DUMMY is not set
#
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID5 is not set
# CONFIG_BLK_DEV_LVM is not set
-# CONFIG_LVM_PROC_FS is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=m
CONFIG_AUTOFS4_FS=m
+# CONFIG_REISERFS_FS is not set
+# CONFIG_REISERFS_CHECK is not set
# CONFIG_ADFS_FS is not set
# CONFIG_ADFS_FS_RW is not set
CONFIG_AFFS_FS=m
goto out;
}
#endif
- if(!(child = find_task_by_pid(pid))) {
+ read_lock(&tasklist_lock);
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ read_unlock(&tasklist_lock);
+
+ if (!child) {
pt_error_return(regs, ESRCH);
goto out;
}
* You'll never be able to kill the process. ;-)
*/
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
if((!child->dumpable ||
(current->uid != child->euid) ||
(!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
(current->gid != child->gid)) && !capable(CAP_SYS_PTRACE)) {
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
/* the same process cannot be attached many times */
if (child->ptrace & PT_PTRACED) {
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
child->ptrace |= PT_PTRACED;
write_lock_irqsave(&tasklist_lock, flags);
write_unlock_irqrestore(&tasklist_lock, flags);
send_sig(SIGSTOP, child, 1);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
if (!(child->ptrace & PT_PTRACED)) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
if(child->state != TASK_STOPPED) {
if(request != PTRACE_KILL) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
}
if(child->p_pptr != current) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
switch(request) {
case PTRACE_PEEKTEXT: /* read word at location addr. */
pt_os_succ_return(regs, tmp, (long *)data);
else
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
case PTRACE_PEEKUSR:
read_sunos_user(regs, addr, child, (long *) data);
- goto out;
+ goto out_tsk;
case PTRACE_POKEUSR:
write_sunos_user(regs, addr, child);
- goto out;
+ goto out_tsk;
case PTRACE_POKETEXT: /* write the word at location addr. */
case PTRACE_POKEDATA: {
pt_succ_return(regs, 0);
else
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
case PTRACE_GETREGS: {
rval = verify_area(VERIFY_WRITE, pregs, sizeof(struct pt_regs));
if(rval) {
pt_error_return(regs, -rval);
- goto out;
+ goto out_tsk;
}
__put_user(cregs->psr, (&pregs->psr));
__put_user(cregs->pc, (&pregs->pc));
#ifdef DEBUG_PTRACE
printk ("PC=%x nPC=%x o7=%x\n", cregs->pc, cregs->npc, cregs->u_regs [15]);
#endif
- goto out;
+ goto out_tsk;
}
case PTRACE_SETREGS: {
i = verify_area(VERIFY_READ, pregs, sizeof(struct pt_regs));
if(i) {
pt_error_return(regs, -i);
- goto out;
+ goto out_tsk;
}
__get_user(psr, (&pregs->psr));
__get_user(pc, (&pregs->pc));
for(i = 1; i < 16; i++)
__get_user(cregs->u_regs[i], (&pregs->u_regs[i-1]));
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_GETFPREGS: {
i = verify_area(VERIFY_WRITE, fps, sizeof(struct fps));
if(i) {
pt_error_return(regs, -i);
- goto out;
+ goto out_tsk;
}
for(i = 0; i < 32; i++)
__put_user(child->thread.float_regs[i], (&fps->regs[i]));
__put_user(child->thread.fpqueue[i].insn, (&fps->fpq[i].insn));
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SETFPREGS: {
i = verify_area(VERIFY_READ, fps, sizeof(struct fps));
if(i) {
pt_error_return(regs, -i);
- goto out;
+ goto out_tsk;
}
copy_from_user(&child->thread.float_regs[0], &fps->regs[0], (32 * sizeof(unsigned long)));
__get_user(child->thread.fsr, (&fps->fsr));
__get_user(child->thread.fpqueue[i].insn, (&fps->fpq[i].insn));
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_READTEXT:
if (res == data) {
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/* Partial read is an IO failure */
if (res >= 0)
res = -EIO;
pt_error_return(regs, -res);
- goto out;
+ goto out_tsk;
}
case PTRACE_WRITETEXT:
if (res == data) {
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/* Partial write is an IO failure */
if (res >= 0)
res = -EIO;
pt_error_return(regs, -res);
- goto out;
+ goto out_tsk;
}
case PTRACE_SYSCALL: /* continue and stop at (return from) syscall */
case PTRACE_CONT: { /* restart after signal. */
if ((unsigned long) data > _NSIG) {
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
if (addr != 1) {
if (addr & 3) {
pt_error_return(regs, EINVAL);
- goto out;
+ goto out_tsk;
}
#ifdef DEBUG_PTRACE
printk ("Original: %08lx %08lx\n", child->thread.kregs->pc, child->thread.kregs->npc);
#endif
wake_up_process(child);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/*
case PTRACE_KILL: {
if (child->state == TASK_ZOMBIE) { /* already dead */
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
wake_up_process(child);
child->exit_code = SIGKILL;
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SUNDETACH: { /* detach a process that was attached. */
unsigned long flags;
if ((unsigned long) data > _NSIG) {
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
child->ptrace &= ~(PT_PTRACED|PT_TRACESYS);
wake_up_process(child);
SET_LINKS(child);
write_unlock_irqrestore(&tasklist_lock, flags);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/* PTRACE_DUMPCORE unsupported... */
default:
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
+out_tsk:
+ if (child)
+ free_task_struct(child);
out:
unlock_kernel();
}
-/* $Id: sys_sparc.c,v 1.67 2000/11/30 08:37:31 anton Exp $
+/* $Id: sys_sparc.c,v 1.68 2001/03/24 09:36:10 davem Exp $
* linux/arch/sparc/kernel/sys_sparc.c
*
* This file contains various random system calls that
-/* $Id: sys_sunos.c,v 1.132 2001/02/13 01:16:43 davem Exp $
+/* $Id: sys_sunos.c,v 1.133 2001/03/24 09:36:10 davem Exp $
* sys_sunos.c: SunOS specific syscall compatibility support.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: fault.c,v 1.118 2000/12/29 07:52:41 anton Exp $
+/* $Id: fault.c,v 1.119 2001/03/24 09:36:10 davem Exp $
* fault.c: Page fault handlers for the Sparc.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: init.c,v 1.96 2000/11/30 08:51:50 anton Exp $
+/* $Id: init.c,v 1.97 2001/02/26 02:57:34 anton Exp $
* linux/arch/sparc/mm/init.c
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
* Copyright (C) 1995 Eddie C. Dost (ecd@skynet.be)
* Copyright (C) 1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
- * Copyright (C) 2000 Anton Blanchard (anton@linuxcare.com)
+ * Copyright (C) 2000 Anton Blanchard (anton@linuxcare.com.au)
*/
#include <linux/config.h>
unsigned long highstart_pfn, highend_pfn;
unsigned long totalram_pages;
-static unsigned long totalhigh_pages;
+unsigned long totalhigh_pages;
/*
* BAD_PAGE is the page that is used for page faults when linux
#define DEBUG_BOOTMEM
extern unsigned long cmdline_memory_size;
-extern unsigned long last_valid_pfn;
+unsigned long last_valid_pfn;
+
+unsigned long calc_highpages(void)
+{
+ int i;
+ int nr = 0;
+
+ for (i = 0; sp_banks[i].num_bytes != 0; i++) {
+ unsigned long start_pfn = sp_banks[i].base_addr >> PAGE_SHIFT;
+ unsigned long end_pfn = (sp_banks[i].base_addr + sp_banks[i].num_bytes) >> PAGE_SHIFT;
+
+ if (end_pfn <= max_low_pfn)
+ continue;
+
+ if (start_pfn < max_low_pfn)
+ start_pfn = max_low_pfn;
+
+ nr += end_pfn - start_pfn;
+ }
+
+ return nr;
+}
+
+unsigned long calc_max_low_pfn(void)
+{
+ int i;
+ unsigned long tmp = (SRMMU_MAXMEM >> PAGE_SHIFT);
+ unsigned long curr_pfn, last_pfn;
+
+ last_pfn = (sp_banks[0].base_addr + sp_banks[0].num_bytes) >> PAGE_SHIFT;
+ for (i = 1; sp_banks[i].num_bytes != 0; i++) {
+ curr_pfn = sp_banks[i].base_addr >> PAGE_SHIFT;
+	if (curr_pfn >= tmp) {
+		if (last_pfn < tmp)
+			tmp = last_pfn;
+		break;
+	}
+
+	last_pfn = (sp_banks[i].base_addr + sp_banks[i].num_bytes) >> PAGE_SHIFT;
+	}
+
+	return tmp;
+}
+
-void __init bootmem_init(void)
+unsigned long __init bootmem_init(unsigned long *pages_avail)
{
unsigned long bootmap_size, start_pfn, max_pfn;
unsigned long end_of_phys_memory = 0UL;
- unsigned long bootmap_pfn;
+ unsigned long bootmap_pfn, bytes_avail, size;
int i;
- /* XXX It is a bit ambiguous here, whether we should
- * XXX treat the user specified mem=xxx as total wanted
- * XXX physical memory, or as a limit to the upper
- * XXX physical address we allow. For now it is the
- * XXX latter. -DaveM
- */
#ifdef DEBUG_BOOTMEM
prom_printf("bootmem_init: Scan sp_banks, ");
#endif
+ bytes_avail = 0UL;
for (i = 0; sp_banks[i].num_bytes != 0; i++) {
end_of_phys_memory = sp_banks[i].base_addr +
sp_banks[i].num_bytes;
+ bytes_avail += sp_banks[i].num_bytes;
if (cmdline_memory_size) {
- if (end_of_phys_memory > cmdline_memory_size) {
- if (cmdline_memory_size < sp_banks[i].base_addr) {
- end_of_phys_memory =
- sp_banks[i-1].base_addr +
- sp_banks[i-1].num_bytes;
+ if (bytes_avail > cmdline_memory_size) {
+ unsigned long slack = bytes_avail - cmdline_memory_size;
+
+ bytes_avail -= slack;
+ end_of_phys_memory -= slack;
+
+ sp_banks[i].num_bytes -= slack;
+ if (sp_banks[i].num_bytes == 0) {
sp_banks[i].base_addr = 0xdeadbeef;
- sp_banks[i].num_bytes = 0;
} else {
- sp_banks[i].num_bytes -=
- (end_of_phys_memory -
- cmdline_memory_size);
- end_of_phys_memory = cmdline_memory_size;
- sp_banks[++i].base_addr = 0xdeadbeef;
- sp_banks[i].num_bytes = 0;
+ sp_banks[i+1].num_bytes = 0;
+ sp_banks[i+1].base_addr = 0xdeadbeef;
}
break;
}
highstart_pfn = highend_pfn = max_pfn;
if (max_low_pfn > (SRMMU_MAXMEM >> PAGE_SHIFT)) {
- highstart_pfn = max_low_pfn = (SRMMU_MAXMEM >> PAGE_SHIFT);
- printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
- (highend_pfn - highstart_pfn) >> (20-PAGE_SHIFT));
+ highstart_pfn = (SRMMU_MAXMEM >> PAGE_SHIFT);
+ max_low_pfn = calc_max_low_pfn();
+ printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", calc_highpages() >> (20 - PAGE_SHIFT));
}
#ifdef CONFIG_BLK_DEV_INITRD
prom_printf("init_bootmem(spfn[%lx],bpfn[%lx],mlpfn[%lx])\n",
start_pfn, bootmap_pfn, max_low_pfn);
#endif
- bootmap_size = init_bootmem(bootmap_pfn, max_low_pfn);
+ bootmap_size = init_bootmem_node(NODE_DATA(0), bootmap_pfn, phys_base>>PAGE_SHIFT, max_low_pfn);
/* Now register the available physical memory with the
* allocator.
*/
+ *pages_avail = 0;
for (i = 0; sp_banks[i].num_bytes != 0; i++) {
- unsigned long curr_pfn, last_pfn, size;
+ unsigned long curr_pfn, last_pfn;
curr_pfn = sp_banks[i].base_addr >> PAGE_SHIFT;
if (curr_pfn >= max_low_pfn)
continue;
size = (last_pfn - curr_pfn) << PAGE_SHIFT;
-
+ *pages_avail += last_pfn - curr_pfn;
#ifdef DEBUG_BOOTMEM
prom_printf("free_bootmem: base[%lx] size[%lx]\n",
sp_banks[i].base_addr,
size);
}
- /* Reserve the kernel text/data/bss, the bootmem bitmap and initrd. */
-#ifdef DEBUG_BOOTMEM
#ifdef CONFIG_BLK_DEV_INITRD
- if (initrd_start)
+ if (initrd_start) {
+ size = initrd_end - initrd_start;
+#ifdef DEBUG_BOOTMEM
prom_printf("reserve_bootmem: base[%lx] size[%lx]\n",
- initrd_start, initrd_end - initrd_start);
+ initrd_start, size);
#endif
- prom_printf("reserve_bootmem: base[%lx] size[%lx]\n",
- phys_base, (start_pfn << PAGE_SHIFT) - phys_base);
- prom_printf("reserve_bootmem: base[%lx] size[%lx]\n",
- (bootmap_pfn << PAGE_SHIFT), bootmap_size);
-#endif
-#ifdef CONFIG_BLK_DEV_INITRD
- if (initrd_start) {
- reserve_bootmem(initrd_start, initrd_end - initrd_start);
+ /* Reserve the initrd image area. */
+ reserve_bootmem(initrd_start, size);
+ *pages_avail -= PAGE_ALIGN(size) >> PAGE_SHIFT;
+
initrd_start += PAGE_OFFSET;
initrd_end += PAGE_OFFSET;
}
#endif
- reserve_bootmem(phys_base, (start_pfn << PAGE_SHIFT) - phys_base);
- reserve_bootmem((bootmap_pfn << PAGE_SHIFT), bootmap_size);
+ /* Reserve the kernel text/data/bss. */
+ size = (start_pfn << PAGE_SHIFT) - phys_base;
+#ifdef DEBUG_BOOTMEM
+ prom_printf("reserve_bootmem: base[%lx] size[%lx]\n", phys_base, size);
+#endif
+ reserve_bootmem(phys_base, size);
+ *pages_avail -= PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+ /* Reserve the bootmem map. We do not account for it
+ * in pages_avail because we will release that memory
+ * in free_all_bootmem.
+ */
+ size = bootmap_size;
+#ifdef DEBUG_BOOTMEM
+ prom_printf("reserve_bootmem: base[%lx] size[%lx]\n",
+ (bootmap_pfn << PAGE_SHIFT), size);
+#endif
+ reserve_bootmem((bootmap_pfn << PAGE_SHIFT), size);
+ *pages_avail -= PAGE_ALIGN(size) >> PAGE_SHIFT;
- last_valid_pfn = max_pfn;
+ return max_pfn;
}
/*
extern void srmmu_paging_init(void);
extern void device_scan(void);
-unsigned long last_valid_pfn;
-
void __init paging_init(void)
{
switch(sparc_cpu_model) {
}
}
-void __init free_mem_map_range(struct page *first, struct page *last)
-{
- first = (struct page *) PAGE_ALIGN((unsigned long)first);
- last = (struct page *) ((unsigned long)last & PAGE_MASK);
-#ifdef DEBUG_BOOTMEM
- prom_printf("[%p,%p] ", first, last);
-#endif
- while (first < last) {
- ClearPageReserved(virt_to_page(first));
- set_page_count(virt_to_page(first), 1);
- free_page((unsigned long)first);
- totalram_pages++;
- num_physpages++;
-
- first = (struct page *)((unsigned long)first + PAGE_SIZE);
- }
-}
-
-/* Walk through holes in sp_banks regions, if the mem_map array
- * areas representing those holes consume a page or more, free
- * up such pages. This helps a lot on machines where physical
- * ram is configured such that it begins at some hugh value.
- *
- * The sp_banks array is sorted by base address.
- */
-void __init free_unused_mem_map(void)
-{
- int i;
-
-#ifdef DEBUG_BOOTMEM
- prom_printf("free_unused_mem_map: ");
-#endif
- for (i = 0; sp_banks[i].num_bytes; i++) {
- if (i == 0) {
- struct page *first, *last;
-
- first = mem_map;
- last = &mem_map[sp_banks[i].base_addr >> PAGE_SHIFT];
- free_mem_map_range(first, last);
- } else {
- struct page *first, *last;
- unsigned long prev_end;
-
- prev_end = sp_banks[i-1].base_addr +
- sp_banks[i-1].num_bytes;
- prev_end = PAGE_ALIGN(prev_end);
- first = &mem_map[prev_end >> PAGE_SHIFT];
- last = &mem_map[sp_banks[i].base_addr >> PAGE_SHIFT];
-
- free_mem_map_range(first, last);
-
- if (!sp_banks[i+1].num_bytes) {
- prev_end = sp_banks[i].base_addr +
- sp_banks[i].num_bytes;
- first = &mem_map[prev_end >> PAGE_SHIFT];
- last = &mem_map[last_valid_pfn];
- free_mem_map_range(first, last);
- }
- }
- }
-#ifdef DEBUG_BOOTMEM
- prom_printf("\n");
-#endif
-}
-
void map_high_region(unsigned long start_pfn, unsigned long end_pfn)
{
unsigned long tmp;
/* Saves us work later. */
memset((void *)&empty_zero_page, 0, PAGE_SIZE);
- i = last_valid_pfn >> (8 + 5);
+ i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5);
i += 1;
-
sparc_valid_addr_bitmap = (unsigned long *)
__alloc_bootmem(i << 2, SMP_CACHE_BYTES, 0UL);
taint_real_pages();
- max_mapnr = last_valid_pfn;
+ max_mapnr = last_valid_pfn - (phys_base >> PAGE_SHIFT);
high_memory = __va(max_low_pfn << PAGE_SHIFT);
#ifdef DEBUG_BOOTMEM
#endif
num_physpages = totalram_pages = free_all_bootmem();
-#if 0
- free_unused_mem_map();
-#endif
-
for (i = 0; sp_banks[i].num_bytes != 0; i++) {
unsigned long start_pfn = sp_banks[i].base_addr >> PAGE_SHIFT;
unsigned long end_pfn = (sp_banks[i].base_addr + sp_banks[i].num_bytes) >> PAGE_SHIFT;
initpages << (PAGE_SHIFT-10),
totalhigh_pages << (PAGE_SHIFT-10),
(unsigned long)PAGE_OFFSET, (last_valid_pfn << PAGE_SHIFT));
-
- /* NOTE NOTE NOTE NOTE
- * Please keep track of things and make sure this
- * always matches the code in mm/page_alloc.c -DaveM
- */
- i = nr_free_pages() >> 7;
- if (i < 48)
- i = 48;
- if (i > 256)
- i = 256;
- freepages.min = i;
- freepages.low = i << 1;
- freepages.high = freepages.low + i;
}
void free_initmem (void)
-/* $Id: srmmu.c,v 1.226 2001/02/13 01:16:44 davem Exp $
+/* $Id: srmmu.c,v 1.228 2001/03/16 06:56:20 davem Exp $
* srmmu.c: SRMMU specific routines for memory management.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
unsigned long pstart = (sp_banks[sp_entry].base_addr & SRMMU_PGDIR_MASK);
unsigned long vstart = (vbase & SRMMU_PGDIR_MASK);
unsigned long vend = SRMMU_PGDIR_ALIGN(vbase + sp_banks[sp_entry].num_bytes);
+ /* Map "low" memory only */
+ const unsigned long min_vaddr = PAGE_OFFSET;
+ const unsigned long max_vaddr = PAGE_OFFSET + SRMMU_MAXMEM;
+
+ if (vstart < min_vaddr || vstart >= max_vaddr)
+ return vstart;
+
+ if (vend > max_vaddr || vend < min_vaddr)
+ vend = max_vaddr;
while(vstart < vend) {
do_large_mapping(vstart, pstart);
extern void sparc_context_init(int);
extern int linux_num_cpus;
+extern unsigned long totalhigh_pages;
void (*poke_srmmu)(void) __initdata = NULL;
-extern void bootmem_init(void);
+extern unsigned long bootmem_init(unsigned long *pages_avail);
extern void sun_serial_setup(void);
void __init srmmu_paging_init(void)
pgd_t *pgd;
pmd_t *pmd;
pte_t *pte;
+ unsigned long pages_avail;
sparc_iomap.start = SUN4M_IOBASE_VADDR; /* 16MB of IOSPACE on all sun4m's. */
prom_halt();
}
- bootmem_init();
+ pages_avail = 0;
+ last_valid_pfn = bootmem_init(&pages_avail);
srmmu_nocache_init();
srmmu_inherit_prom_mappings(0xfe400000,(LINUX_OPPROM_ENDVM-PAGE_SIZE));
kmap_init();
{
- unsigned long zones_size[MAX_NR_ZONES] = { 0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES];
+ unsigned long zholes_size[MAX_NR_ZONES];
+ unsigned long npages;
+ int znum;
+
+ for (znum = 0; znum < MAX_NR_ZONES; znum++)
+ zones_size[znum] = zholes_size[znum] = 0;
+
+ npages = max_low_pfn - (phys_base >> PAGE_SHIFT);
+
+ zones_size[ZONE_DMA] = npages;
+ zholes_size[ZONE_DMA] = npages - pages_avail;
+
+ npages = highend_pfn - max_low_pfn;
+ zones_size[ZONE_HIGHMEM] = npages;
+ zholes_size[ZONE_HIGHMEM] = npages - calc_highpages();
- zones_size[ZONE_DMA] = max_low_pfn;
- zones_size[ZONE_HIGHMEM] = highend_pfn - max_low_pfn;
- free_area_init(zones_size);
+ free_area_init_node(0, NULL, NULL, zones_size,
+ phys_base, zholes_size);
}
}
-/* $Id: sun4c.c,v 1.202 2000/12/01 03:17:31 anton Exp $
+/* $Id: sun4c.c,v 1.205 2001/03/16 06:57:41 davem Exp $
* sun4c.c: Doing in software what should be done in hardware.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/bootmem.h>
+#include <linux/highmem.h>
#include <asm/scatterlist.h>
#include <asm/page.h>
#include <asm/openprom.h>
#include <asm/mmu_context.h>
#include <asm/sun4paddr.h>
+#include <asm/highmem.h>
/* Because of our dynamic kernel TLB miss strategy, and how
* our DVMA mapping allocation works, you _MUST_:
extern void sparc_context_init(int);
extern unsigned long end;
-extern void bootmem_init(void);
+extern unsigned long bootmem_init(unsigned long *pages_avail);
extern unsigned long last_valid_pfn;
extern void sun_serial_setup(void);
int i, cnt;
unsigned long kernel_end, vaddr;
extern struct resource sparc_iomap;
- unsigned long end_pfn;
+ unsigned long end_pfn, pages_avail;
kernel_end = (unsigned long) &end;
kernel_end += (SUN4C_REAL_PGDIR_SIZE * 4);
kernel_end = SUN4C_REAL_PGDIR_ALIGN(kernel_end);
- bootmem_init();
+ pages_avail = 0;
+ last_valid_pfn = bootmem_init(&pages_avail);
end_pfn = last_valid_pfn;
/* This does not logically belong here, but we need to
sparc_context_init(num_contexts);
{
- unsigned long zones_size[MAX_NR_ZONES] = { 0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES];
+ unsigned long zholes_size[MAX_NR_ZONES];
+ unsigned long npages;
+ int znum;
- zones_size[ZONE_DMA] = end_pfn;
- free_area_init(zones_size);
+ for (znum = 0; znum < MAX_NR_ZONES; znum++)
+ zones_size[znum] = zholes_size[znum] = 0;
+
+ npages = max_low_pfn - (phys_base >> PAGE_SHIFT);
+
+ zones_size[ZONE_DMA] = npages;
+ zholes_size[ZONE_DMA] = npages - pages_avail;
+
+ npages = highend_pfn - max_low_pfn;
+ zones_size[ZONE_HIGHMEM] = npages;
+ zholes_size[ZONE_HIGHMEM] = npages - calc_highpages();
+
+ free_area_init_node(0, NULL, NULL, zones_size,
+ phys_base, zholes_size);
}
cnt = 0;
-# $Id: config.in,v 1.133 2001/03/07 00:44:36 davem Exp $
+# $Id: config.in,v 1.136 2001/03/24 06:04:24 davem Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#
define_bool CONFIG_SUN_AUXIO y
define_bool CONFIG_SUN_IO y
bool 'PCI support' CONFIG_PCI
+if [ "$CONFIG_PCI" = "y" ] ; then
+ define_bool CONFIG_RTC y
+fi
source drivers/pci/Config.in
tristate 'Openprom tree appears in /proc/openprom' CONFIG_SUN_OPENPROMFS
tristate 'SUNW, envctrl support' CONFIG_ENVCTRL
tristate '7-Segment Display support' CONFIG_DISPLAY7SEG
tristate 'CP1XXX Hardware Watchdog support' CONFIG_WATCHDOG_CP1XXX
+ tristate 'RIO Hardware Watchdog support' CONFIG_WATCHDOG_RIO
fi
endmenu
bool ' Omit support for old Tigon I based AceNICs' CONFIG_ACENIC_OMIT_TIGON_I
fi
tristate 'SysKonnect SK-98xx support' CONFIG_SK98LIN
+ tristate 'Sun GEM support' CONFIG_SUNGEM
fi
tristate 'MyriCOM Gigabit Ethernet support' CONFIG_MYRI_SBUS
endmenu
dep_tristate ' Creator/Creator3D' CONFIG_DRM_FFB $CONFIG_DRM
endmenu
+source drivers/input/Config.in
+
source fs/Config.in
+source drivers/usb/Config.in
+
mainmenu_option next_comment
comment 'Watchdog'
CONFIG_SUN_AUXIO=y
CONFIG_SUN_IO=y
CONFIG_PCI=y
+CONFIG_RTC=y
CONFIG_PCI_NAMES=y
CONFIG_SUN_OPENPROMFS=m
CONFIG_NET=y
CONFIG_ENVCTRL=m
CONFIG_DISPLAY7SEG=m
CONFIG_WATCHDOG_CP1XXX=m
+CONFIG_WATCHDOG_RIO=m
#
# Console drivers
#
# Linux/SPARC audio subsystem (EXPERIMENTAL)
#
-CONFIG_SPARCAUDIO=y
-CONFIG_SPARCAUDIO_CS4231=y
+CONFIG_SPARCAUDIO=m
+CONFIG_SPARCAUDIO_CS4231=m
# CONFIG_SPARCAUDIO_DUMMY is not set
#
# CONFIG_SCSI_NCR53C8XX_PQS_PDS is not set
# CONFIG_SCSI_NCR53C8XX_SYMBIOS_COMPAT is not set
CONFIG_SCSI_QLOGIC_ISP=m
-CONFIG_SCSI_QLOGIC_FC=m
+CONFIG_SCSI_QLOGIC_FC=y
#
# Fibre Channel support
CONFIG_ACENIC=m
# CONFIG_ACENIC_OMIT_TIGON_I is not set
CONFIG_SK98LIN=m
+CONFIG_SUNGEM=y
CONFIG_MYRI_SBUS=m
CONFIG_FDDI=y
CONFIG_SKFP=m
CONFIG_DRM_FFB=m
#
+# Input core support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_KEYBDEV=y
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+
+#
# File systems
#
# CONFIG_QUOTA is not set
# CONFIG_NLS_UTF8 is not set
#
+# USB support
+#
+CONFIG_USB=y
+CONFIG_USB_DEBUG=y
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+
+#
+# USB Controllers
+#
+# CONFIG_USB_UHCI is not set
+# CONFIG_USB_UHCI_ALT is not set
+CONFIG_USB_OHCI=y
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_AUDIO is not set
+CONFIG_USB_BLUETOOTH=m
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+CONFIG_USB_STORAGE_FREECOM=y
+CONFIG_USB_ACM=m
+CONFIG_USB_PRINTER=m
+
+#
+# USB Human Interface Devices (HID)
+#
+CONFIG_USB_HID=y
+CONFIG_USB_WACOM=m
+
+#
+# USB Imaging devices
+#
+CONFIG_USB_DC2XX=m
+CONFIG_USB_MDC800=m
+CONFIG_USB_SCANNER=m
+CONFIG_USB_MICROTEK=m
+
+#
+# USB Multimedia devices
+#
+CONFIG_USB_IBMCAM=m
+CONFIG_USB_OV511=m
+CONFIG_USB_DSBR=m
+CONFIG_USB_DABUSB=m
+
+#
+# USB Network adaptors
+#
+CONFIG_USB_PLUSB=m
+CONFIG_USB_PEGASUS=m
+CONFIG_USB_NET1080=m
+
+#
+# USB port drivers
+#
+CONFIG_USB_USS720=m
+
+#
+# USB Serial Converter support
+#
+CONFIG_USB_SERIAL=m
+# CONFIG_USB_SERIAL_DEBUG is not set
+CONFIG_USB_SERIAL_GENERIC=y
+CONFIG_USB_SERIAL_BELKIN=m
+CONFIG_USB_SERIAL_WHITEHEAT=m
+CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
+CONFIG_USB_SERIAL_EMPEG=m
+CONFIG_USB_SERIAL_FTDI_SIO=m
+CONFIG_USB_SERIAL_VISOR=m
+CONFIG_USB_SERIAL_EDGEPORT=m
+CONFIG_USB_SERIAL_KEYSPAN_PDA=m
+CONFIG_USB_SERIAL_KEYSPAN=m
+# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set
+CONFIG_USB_SERIAL_MCT_U232=m
+CONFIG_USB_SERIAL_OMNINET=m
+
+#
+# USB misc drivers
+#
+CONFIG_USB_RIO500=m
+
+#
# Watchdog
#
# CONFIG_SOFT_WATCHDOG is not set
#include <asm/sbus.h>
#include <asm/ebus.h>
#include <asm/fhc.h>
+#include <asm/spitfire.h>
#include <asm/starfire.h>
/* Probe and map in the Auxiliary I/O register */
return;
}
#endif
- if(central_bus || this_is_starfire) {
+ if (central_bus || this_is_starfire || (tlb_type == cheetah)) {
auxio_register = 0UL;
return;
}
unsigned long pgd_cache;
pgd_cache = ((unsigned long)current->mm->pgd[0])<<11UL;
- __asm__ __volatile__("stxa\t%0, [%1] %2"
+ __asm__ __volatile__("stxa\t%0, [%1] %2\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (pgd_cache),
"r" (TSB_REG), "i" (ASI_DMMU));
{ 0x17, 0x11, 0, "UltraSparc II integrated FPU"},
{ 0x17, 0x12, 0, "UltraSparc IIi integrated FPU"},
{ 0x17, 0x13, 0, "UltraSparc IIe integrated FPU"},
- { 0x17, 0x14, 0, "UltraSparc III integrated FPU"},
+ { 0x3e, 0x14, 0, "UltraSparc III integrated FPU"},
};
#define NSPARCFPU (sizeof(linux_sparc_fpu)/sizeof(struct cpu_fp_info))
#include <asm/oplib.h>
#include <asm/system.h>
#include <asm/smp.h>
+#include <asm/spitfire.h>
struct prom_cpuinfo linux_cpus[64] __initdata = { { 0 } };
unsigned prom_cpu_nodes[64];
if(strcmp(node_str, "cpu") == 0) {
cpu_nds[cpu_ctr] = scan;
linux_cpus[cpu_ctr].prom_node = scan;
- prom_getproperty(scan, "upa-portid",
- (char *) &thismid, sizeof(thismid));
+ thismid = 0;
+ if (tlb_type == spitfire) {
+ prom_getproperty(scan, "upa-portid",
+ (char *) &thismid, sizeof(thismid));
+ } else if (tlb_type == cheetah) {
+ prom_getproperty(scan, "portid",
+ (char *) &thismid, sizeof(thismid));
+ }
linux_cpus[cpu_ctr].mid = thismid;
printk("Found CPU %d (node=%08x,mid=%d)\n",
cpu_ctr, (unsigned) scan, thismid);
-/* $Id: dtlb_base.S,v 1.8 2000/11/10 08:28:45 davem Exp $
+/* $Id: dtlb_base.S,v 1.9 2001/03/22 00:12:32 davem Exp $
* dtlb_base.S: Front end to DTLB miss replacement strategy.
* This is included directly into the trap table.
*
#define VPTE_SHIFT (PAGE_SHIFT - 3)
#define KERN_HIGHBITS ((_PAGE_VALID | _PAGE_SZ4MB) ^ 0xfffff80000000000)
#define KERN_LOWBITS (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_W)
-#define KERN_LOWBITS_IO (_PAGE_E | _PAGE_P | _PAGE_W)
-#define KERN_IOBITS (KERN_LOWBITS ^ KERN_LOWBITS_IO)
/* %g1 TLB_SFSR (%g1 + %g1 == TLB_TAG_ACCESS)
* %g2 (KERN_HIGHBITS | KERN_LOWBITS)
#undef VPTE_SHIFT
#undef KERN_HIGHBITS
#undef KERN_LOWBITS
-#undef KERN_LOWBITS_IO
-#undef KERN_IOBITS
-/* $Id: ebus.c,v 1.57 2001/02/28 03:28:55 davem Exp $
+/* $Id: ebus.c,v 1.60 2001/03/15 02:11:09 davem Exp $
* ebus.c: PCI to EBus bridge device.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
#ifdef CONFIG_SUN_AUXIO
extern void auxio_probe(void);
#endif
+extern void rs_init(void);
static inline void *ebus_alloc(size_t size)
{
dev->num_addrs = len / sizeof(struct linux_prom_registers);
for (i = 0; i < dev->num_addrs; i++) {
- n = (regs[i].which_io - 0x10) >> 2;
+ if (dev->bus->is_rio == 0)
+ n = (regs[i].which_io - 0x10) >> 2;
+ else
+ n = regs[i].which_io;
dev->resource[i].start = dev->bus->self->resource[n].start;
dev->resource[i].start += (unsigned long)regs[i].phys_addr;
struct linux_ebus *ebus;
struct pci_dev *pdev;
struct pcidev_cookie *cookie;
- int nd, ebusnd;
+ int nd, ebusnd, is_rio;
int num_ebus = 0;
if (!pci_present())
return;
+ is_rio = 0;
pdev = pci_find_device(PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_EBUS, 0);
if (!pdev) {
+ pdev = pci_find_device(PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_RIO_EBUS, 0);
+ is_rio = 1;
+ }
+ if (!pdev) {
printk("ebus: No EBus's found.\n");
return;
}
ebus_chain = ebus = ebus_alloc(sizeof(struct linux_ebus));
ebus->next = 0;
+ ebus->is_rio = is_rio;
while (ebusnd) {
/* SUNW,pci-qfe uses four empty ebuses on it.
we'd have to tweak with the ebus_chain
in the runtime after initialization. -jj */
if (!prom_getchild (ebusnd)) {
+ struct pci_dev *orig_pdev = pdev;
+
+ is_rio = 0;
pdev = pci_find_device(PCI_VENDOR_ID_SUN,
- PCI_DEVICE_ID_SUN_EBUS, pdev);
+ PCI_DEVICE_ID_SUN_EBUS, orig_pdev);
+ if (!pdev) {
+ pdev = pci_find_device(PCI_VENDOR_ID_SUN,
+ PCI_DEVICE_ID_SUN_RIO_EBUS, orig_pdev);
+ is_rio = 1;
+ }
if (!pdev) {
if (ebus == ebus_chain) {
ebus_chain = NULL;
}
break;
}
-
+ ebus->is_rio = is_rio;
cookie = pdev->sysdata;
ebusnd = cookie->prom_node;
continue;
next_ebus:
printk("\n");
- pdev = pci_find_device(PCI_VENDOR_ID_SUN,
- PCI_DEVICE_ID_SUN_EBUS, pdev);
- if (!pdev)
- break;
+ {
+ struct pci_dev *orig_pdev = pdev;
+
+ is_rio = 0;
+ pdev = pci_find_device(PCI_VENDOR_ID_SUN,
+ PCI_DEVICE_ID_SUN_EBUS, orig_pdev);
+ if (!pdev) {
+ pdev = pci_find_device(PCI_VENDOR_ID_SUN,
+ PCI_DEVICE_ID_SUN_RIO_EBUS, orig_pdev);
+ is_rio = 1;
+ }
+ if (!pdev)
+ break;
+ }
cookie = pdev->sysdata;
ebusnd = cookie->prom_node;
ebus->next = ebus_alloc(sizeof(struct linux_ebus));
ebus = ebus->next;
ebus->next = 0;
+ ebus->is_rio = is_rio;
++num_ebus;
}
+ rs_init();
#ifdef CONFIG_SUN_AUXIO
auxio_probe();
#endif
-/* $Id: entry.S,v 1.120 2000/09/08 13:58:12 jj Exp $
+/* $Id: entry.S,v 1.127 2001/03/23 07:56:30 davem Exp $
* arch/sparc64/kernel/entry.S: Sparc64 trap low-level entry points.
*
* Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu)
/* This is trivial with the new code... */
.globl do_fpdis
do_fpdis:
- ldub [%g6 + AOFF_task_thread + AOFF_thread_fpsaved], %g5 ! Load Group
sethi %hi(TSTATE_PEF), %g4 ! IEU0
+ rdpr %tstate, %g5
+ andcc %g5, %g4, %g0
+ be,pt %xcc, 1f
+ nop
+ rd %fprs, %g5
+ andcc %g5, FPRS_FEF, %g0
+ be,pt %xcc, 1f
+ nop
+
+ /* Legal state when DCR_IFPOE is set in Cheetah %dcr. */
+ sethi %hi(109f), %g7
+ ba,pt %xcc, etrap
+109: or %g7, %lo(109b), %g7
+ add %g0, %g0, %g0
+ ba,a,pt %xcc, rtrap_clr_l6
+
+1: ldub [%g6 + AOFF_task_thread + AOFF_thread_fpsaved], %g5 ! Load Group
wr %g0, FPRS_FEF, %fprs ! LSU Group+4bubbles
andcc %g5, FPRS_FEF, %g0 ! IEU1 Group
be,a,pt %icc, 1f ! CTI
clr %g7 ! IEU0
- ldub [%g6 + AOFF_task_thread + AOFF_thread_gsr], %g7 ! Load Group
+ ldx [%g6 + AOFF_task_thread + AOFF_thread_gsr], %g7 ! Load Group
1: andcc %g5, FPRS_DL, %g0 ! IEU1
bne,pn %icc, 2f ! CTI
fzero %f0 ! FPA
ldxa [%g3] ASI_DMMU, %g5
add %g6, AOFF_task_fpregs + 0xc0, %g2
stxa %g0, [%g3] ASI_DMMU
+ membar #Sync
faddd %f0, %f2, %f8
fmuld %f0, %f2, %f10
- flush %g6
- membar #StoreLoad | #LoadLoad
ldda [%g1] ASI_BLK_S, %f32 ! grrr, where is ASI_BLK_NUCLEUS 8-(
ldda [%g2] ASI_BLK_S, %f48
faddd %f0, %f2, %f12
ldxa [%g3] ASI_DMMU, %g5
add %g6, AOFF_task_fpregs, %g1
stxa %g0, [%g3] ASI_DMMU
+ membar #Sync
add %g6, AOFF_task_fpregs + 0x40, %g2
faddd %f32, %f34, %f36
fmuld %f32, %f34, %f38
- flush %g6
- membar #StoreLoad | #LoadLoad
ldda [%g1] ASI_BLK_S, %f0 ! grrr, where is ASI_BLK_NUCLEUS 8-(
ldda [%g2] ASI_BLK_S, %f16
faddd %f32, %f34, %f40
fmuld %f32, %f34, %f58
faddd %f32, %f34, %f60
fmuld %f32, %f34, %f62
- b,pt %xcc, fpdis_exit
+ ba,pt %xcc, fpdis_exit
membar #Sync
3: mov SECONDARY_CONTEXT, %g3
add %g6, AOFF_task_fpregs, %g1
ldxa [%g3] ASI_DMMU, %g5
mov 0x40, %g2
stxa %g0, [%g3] ASI_DMMU
- flush %g6
- membar #StoreLoad | #LoadLoad
+ membar #Sync
ldda [%g1] ASI_BLK_S, %f0 ! grrr, where is ASI_BLK_NUCLEUS 8-(
ldda [%g1 + %g2] ASI_BLK_S, %f16
add %g1, 0x80, %g1
membar #Sync
fpdis_exit:
stxa %g5, [%g3] ASI_DMMU
- flush %g6
+ membar #Sync
fpdis_exit2:
wr %g7, 0, %gsr
ldx [%g6 + AOFF_task_thread + AOFF_thread_xfsr], %fsr
wr %g0, FPRS_FEF, %fprs ! clean DU/DL bits
retry
+ .align 32
+fp_other_bounce:
+ call do_fpother
+ add %sp, STACK_BIAS + REGWIN_SZ, %o0
+ ba,pt %xcc, rtrap
+ clr %l6
+
+ .globl do_fpother_check_fitos
+ .align 32
+do_fpother_check_fitos:
+ sethi %hi(fp_other_bounce - 4), %g7
+ or %g7, %lo(fp_other_bounce - 4), %g7
+
+ /* NOTE: Need to preserve %g7 until we fully commit
+ * to the fitos fixup.
+ */
+ stx %fsr, [%g6 + AOFF_task_thread + AOFF_thread_xfsr]
+ rdpr %tstate, %g3
+ andcc %g3, TSTATE_PRIV, %g0
+ bne,pn %xcc, do_fptrap_after_fsr
+ nop
+ ldx [%g6 + AOFF_task_thread + AOFF_thread_xfsr], %g3
+ srlx %g3, 14, %g1
+ and %g1, 7, %g1
+ cmp %g1, 2 ! Unfinished FP-OP
+ bne,pn %xcc, do_fptrap_after_fsr
+ sethi %hi(1 << 23), %g1 ! Inexact
+ andcc %g3, %g1, %g0
+ bne,pn %xcc, do_fptrap_after_fsr
+ rdpr %tpc, %g1
+ lduwa [%g1] ASI_AIUP, %g3 ! This cannot ever fail
+#define FITOS_MASK 0xc1f83fe0
+#define FITOS_COMPARE 0x81a01880
+ sethi %hi(FITOS_MASK), %g1
+ or %g1, %lo(FITOS_MASK), %g1
+ and %g3, %g1, %g1
+ sethi %hi(FITOS_COMPARE), %g2
+ or %g2, %lo(FITOS_COMPARE), %g2
+ cmp %g1, %g2
+ bne,pn %xcc, do_fptrap_after_fsr
+ nop
+ std %f62, [%g6 + AOFF_task_fpregs + (62 * 4)]
+ sethi %hi(fitos_table_1), %g1
+ and %g3, 0x1f, %g2
+ or %g1, %lo(fitos_table_1), %g1
+ sllx %g2, 2, %g2
+ jmpl %g1 + %g2, %g0
+ ba,pt %xcc, fitos_emul_continue
+
+fitos_table_1:
+ fitod %f0, %f62
+ fitod %f1, %f62
+ fitod %f2, %f62
+ fitod %f3, %f62
+ fitod %f4, %f62
+ fitod %f5, %f62
+ fitod %f6, %f62
+ fitod %f7, %f62
+ fitod %f8, %f62
+ fitod %f9, %f62
+ fitod %f10, %f62
+ fitod %f11, %f62
+ fitod %f12, %f62
+ fitod %f13, %f62
+ fitod %f14, %f62
+ fitod %f15, %f62
+ fitod %f16, %f62
+ fitod %f17, %f62
+ fitod %f18, %f62
+ fitod %f19, %f62
+ fitod %f20, %f62
+ fitod %f21, %f62
+ fitod %f22, %f62
+ fitod %f23, %f62
+ fitod %f24, %f62
+ fitod %f25, %f62
+ fitod %f26, %f62
+ fitod %f27, %f62
+ fitod %f28, %f62
+ fitod %f29, %f62
+ fitod %f30, %f62
+ fitod %f31, %f62
+
+fitos_emul_continue:
+ sethi %hi(fitos_table_2), %g1
+ srl %g3, 25, %g2
+ or %g1, %lo(fitos_table_2), %g1
+ and %g2, 0x1f, %g2
+ sllx %g2, 2, %g2
+ jmpl %g1 + %g2, %g0
+ ba,pt %xcc, fitos_emul_fini
+
+fitos_table_2:
+ fdtos %f62, %f0
+ fdtos %f62, %f1
+ fdtos %f62, %f2
+ fdtos %f62, %f3
+ fdtos %f62, %f4
+ fdtos %f62, %f5
+ fdtos %f62, %f6
+ fdtos %f62, %f7
+ fdtos %f62, %f8
+ fdtos %f62, %f9
+ fdtos %f62, %f10
+ fdtos %f62, %f11
+ fdtos %f62, %f12
+ fdtos %f62, %f13
+ fdtos %f62, %f14
+ fdtos %f62, %f15
+ fdtos %f62, %f16
+ fdtos %f62, %f17
+ fdtos %f62, %f18
+ fdtos %f62, %f19
+ fdtos %f62, %f20
+ fdtos %f62, %f21
+ fdtos %f62, %f22
+ fdtos %f62, %f23
+ fdtos %f62, %f24
+ fdtos %f62, %f25
+ fdtos %f62, %f26
+ fdtos %f62, %f27
+ fdtos %f62, %f28
+ fdtos %f62, %f29
+ fdtos %f62, %f30
+ fdtos %f62, %f31
+
+fitos_emul_fini:
+ ldd [%g6 + AOFF_task_fpregs + (62 * 4)], %f62
+ done
+
.globl do_fptrap
.align 32
do_fptrap:
- ldub [%g6 + AOFF_task_thread + AOFF_thread_fpsaved], %g3
stx %fsr, [%g6 + AOFF_task_thread + AOFF_thread_xfsr]
+do_fptrap_after_fsr:
+ ldub [%g6 + AOFF_task_thread + AOFF_thread_fpsaved], %g3
rd %fprs, %g1
or %g3, %g1, %g3
stb %g3, [%g6 + AOFF_task_thread + AOFF_thread_fpsaved]
rd %gsr, %g3
- stb %g3, [%g6 + AOFF_task_thread + AOFF_thread_gsr]
+ stx %g3, [%g6 + AOFF_task_thread + AOFF_thread_gsr]
mov SECONDARY_CONTEXT, %g3
add %g6, AOFF_task_fpregs, %g2
ldxa [%g3] ASI_DMMU, %g5
stxa %g0, [%g3] ASI_DMMU
- flush %g6
- membar #StoreStore | #LoadStore
+ membar #Sync
andcc %g1, FPRS_DL, %g0
be,pn %icc, 4f
mov 0x40, %g3
5: mov SECONDARY_CONTEXT, %g1
membar #Sync
stxa %g5, [%g1] ASI_DMMU
- flush %g6
+ membar #Sync
ba,pt %xcc, etrap
wr %g0, 0, %fprs
.globl do_ivec
do_ivec:
mov 0x40, %g3
- ldxa [%g3 + %g0] ASI_UDB_INTR_R, %g3
+ ldxa [%g3 + %g0] ASI_INTR_R, %g3
sethi %hi(KERNBASE), %g4
cmp %g3, %g4
bgeu,pn %xcc, do_ivec_xcall
do_ivec_xcall:
mov 0x50, %g1
- ldxa [%g1 + %g0] ASI_UDB_INTR_R, %g1
+ ldxa [%g1 + %g0] ASI_INTR_R, %g1
srl %g3, 0, %g3
mov 0x60, %g7
- ldxa [%g7 + %g0] ASI_UDB_INTR_R, %g7
+ ldxa [%g7 + %g0] ASI_INTR_R, %g7
stxa %g0, [%g0] ASI_INTR_RECEIVE
membar #Sync
jmpl %g3, %g0
ldx [%g3 + %lo(timer_tick_offset)], %g3
or %g2, %lo(xtime), %g2
or %g1, %lo(timer_tick_compare), %g1
-1: ldda [%g2] ASI_NUCLEUS_QUAD_LDD, %o4
- rd %tick, %o1
- ldx [%g1], %g7
+1: rdpr %ver, %o2
+ sethi %hi(0x003e0014), %o1
+ srlx %o2, 32, %o2
+ or %o1, %lo(0x003e0014), %o1
+ ldda [%g2] ASI_NUCLEUS_QUAD_LDD, %o4
+ cmp %o2, %o1
+ bne,pt %xcc, 2f
+ nop
+ ba,pt %xcc, 3f
+ rd %asr24, %o1
+2: rd %tick, %o1
+3: ldx [%g1], %g7
ldda [%g2] ASI_NUCLEUS_QUAD_LDD, %o2
xor %o4, %o2, %o2
xor %o5, %o3, %o3
-/* $Id: etrap.S,v 1.43 2000/03/29 09:55:30 davem Exp $
+/* $Id: etrap.S,v 1.44 2001/03/22 00:51:25 davem Exp $
* etrap.S: Preparing for entry into the kernel on Sparc V9.
*
* Copyright (C) 1996, 1997 David S. Miller (davem@caip.rutgers.edu)
nop
#undef TASK_REGOFF
+#undef ETRAP_PSTATE1
+#undef ETRAP_PSTATE2
-/* $Id: head.S,v 1.67 2001/03/04 18:31:00 davem Exp $
+/* $Id: head.S,v 1.75 2001/03/22 09:54:26 davem Exp $
* head.S: Initial boot code for the Sparc64 port of Linux.
*
* Copyright (C) 1996,1997 David S. Miller (davem@caip.rutgers.edu)
nop
cheetah_boot:
- mov DCR_BPE | DCR_RPE | DCR_SI | DCR_MS, %g1
+ mov DCR_BPE | DCR_RPE | DCR_SI | DCR_IFPOE | DCR_MS, %g1
wr %g1, %asr18
sethi %uhi(DCU_ME | DCU_RE | DCU_PE | DCU_HPE | DCU_SPE | DCU_SL | DCU_WE), %g5
or %g5, %ulo(DCU_ME | DCU_RE | DCU_PE | DCU_HPE | DCU_SPE | DCU_SL | DCU_WE), %g5
sllx %g5, 32, %g5
- ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
- or %g1, %g5, %g1
+ or %g5, DCU_DM | DCU_IM | DCU_DC | DCU_IC, %g5
+ ldxa [%g0] ASI_DCU_CONTROL_REG, %g3
+ or %g5, %g3, %g5
stxa %g5, [%g0] ASI_DCU_CONTROL_REG
membar #Sync
*/
mov (LSU_CONTROL_IC|LSU_CONTROL_DC|LSU_CONTROL_IM|LSU_CONTROL_DM), %g1
stxa %g1, [%g0] ASI_LSU_CONTROL
+ membar #Sync
/*
* Make sure we are in privileged mode, have address masking,
cmp %g1, %g2
be,a,pn %xcc, spitfire_got_tlbentry
ldxa [%l0] ASI_ITLB_DATA_ACCESS, %g1
- /* XXX Spitfire dependency... */
cmp %l0, (63 << 3)
blu,pt %xcc, 1b
add %l0, (1 << 3), %l0
nop
stxa %g0, [%l7] ASI_IMMU
stxa %g0, [%l0] ASI_ITLB_DATA_ACCESS
+ membar #Sync
2:
- /* XXX Spitfire dependency... */
cmp %l0, (63 << 3)
blu,pt %xcc, 1b
add %l0, (1 << 3), %l0
nop
stxa %g0, [%l7] ASI_DMMU
stxa %g0, [%l0] ASI_DTLB_DATA_ACCESS
+ membar #Sync
2:
- /* XXX Spitfire dependency... */
cmp %l0, (63 << 3)
blu,pt %xcc, 1b
add %l0, (1 << 3), %l0
*/
sethi %hi(KERNBASE), %g3
- /* XXX Spitfire dependency... */
mov (63 << 3), %g7
stxa %g3, [%l7] ASI_DMMU /* KERNBASE into TLB TAG */
stxa %g5, [%g7] ASI_DTLB_DATA_ACCESS /* TTE into TLB DATA */
mov TLB_TAG_ACCESS, %g2
stxa %g3, [%g2] ASI_IMMU
stxa %g3, [%g2] ASI_DMMU
+ membar #Sync
+
+ rdpr %ver, %g1
+ sethi %hi(0x003e0014), %g5
+ srlx %g1, 32, %g1
+ or %g5, %lo(0x003e0014), %g5
+ cmp %g1, %g5
+ bne,pt %icc, spitfire_tlb_fixup
+ nop
+
+cheetah_tlb_fixup:
+ set (0 << 16) | (15 << 3), %g7
+ ldxa [%g7] ASI_ITLB_DATA_ACCESS, %g1
+ andn %g1, (_PAGE_G), %g1
+ stxa %g1, [%g7] ASI_ITLB_DATA_ACCESS
+ membar #Sync
+
+ ldxa [%g7] ASI_DTLB_DATA_ACCESS, %g1
+ andn %g1, (_PAGE_G), %g1
+ stxa %g1, [%g7] ASI_DTLB_DATA_ACCESS
+ membar #Sync
+
+ /* Kill instruction prefetch queues. */
+ flush %g3
+ membar #Sync
+
+ /* Set TLB type to cheetah. */
+ mov 1, %g2
+ sethi %hi(tlb_type), %g5
+ stw %g2, [%g5 + %lo(tlb_type)]
- /* XXX Spitfire dependency... */
+ /* Patch copy/page operations to cheetah optimized versions. */
+ call cheetah_patch_copyops
+ nop
+ call cheetah_patch_pgcopyops
+ nop
+
+ ba,pt %xcc, tlb_fixup_done
+ nop
+
+spitfire_tlb_fixup:
mov (63 << 3), %g7
ldxa [%g7] ASI_ITLB_DATA_ACCESS, %g1
andn %g1, (_PAGE_G), %g1
flush %g3
membar #Sync
+ /* Set TLB type to spitfire. */
+ mov 0, %g2
+ sethi %hi(tlb_type), %g5
+ stw %g2, [%g5 + %lo(tlb_type)]
+
+tlb_fixup_done:
sethi %hi(init_task_union), %g6
or %g6, %lo(init_task_union), %g6
mov %sp, %l6
wrpr %g0, 0x0, %tl
/* Clear the bss */
- sethi %hi(8191), %l2
- or %l2, %lo(8191), %l2
- sethi %hi(__bss_start), %l0
- or %l0, %lo(__bss_start), %l0
- sethi %hi(_end), %l1
- or %l1, %lo(_end), %l1
- add %l1, %l2, %l1
- andn %l1, %l2, %l1
- add %l2, 1, %l2
- add %l0, %g0, %o0
-1:
- mov %l2, %o1
+ sethi %hi(__bss_start), %o0
+ or %o0, %lo(__bss_start), %o0
+ sethi %hi(_end), %o1
+ or %o1, %lo(_end), %o1
call __bzero
- add %l0, %l2, %l0
- cmp %l0, %l1
- blu,pt %xcc, 1b
- add %l0, %g0, %o0
+ sub %o1, %o0, %o1
/* Now clear empty_zero_page */
- mov %l2, %o1
+ sethi %hi(8192), %o1
+ or %o1, %lo(8192), %o1
+ sethi %hi(KERNBASE), %g3
call __bzero
- mov %g3, %o0
+ or %g3, %lo(KERNBASE), %o0
mov %l6, %o1 ! OpenPROM stack
call prom_init
wrpr %o1, (PSTATE_MG|PSTATE_IE), %pstate
/* Set fixed globals used by dTLB miss handler. */
-#define KERN_HIGHBITS ((_PAGE_VALID | _PAGE_SZ4MB) ^ 0xfffff80000000000)
+#define KERN_HIGHBITS ((_PAGE_VALID|_PAGE_SZ4MB)^0xfffff80000000000)
#define KERN_LOWBITS (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_W)
-#define VPTE_BASE_CHEETAH 0xffe0000000000000
#define VPTE_BASE_SPITFIRE 0xfffffffe00000000
+#if 1
+#define VPTE_BASE_CHEETAH VPTE_BASE_SPITFIRE
+#else
+#define VPTE_BASE_CHEETAH 0xffe0000000000000
+#endif
mov TSB_REG, %g1
stxa %g0, [%g1] ASI_DMMU
sethi %uhi(VPTE_BASE_CHEETAH), %g3
or %g3, %ulo(VPTE_BASE_CHEETAH), %g3
ba,pt %xcc, 2f
- sllx %g3, 32, %g3
+ sllx %g3, 32, %g3
1:
sethi %uhi(VPTE_BASE_SPITFIRE), %g3
or %g3, %ulo(VPTE_BASE_SPITFIRE), %g3
clr %g7
#undef KERN_HIGHBITS
#undef KERN_LOWBITS
-#undef VPTE_BASE
+#undef VPTE_BASE_SPITFIRE
+#undef VPTE_BASE_CHEETAH
/* Setup Interrupt globals */
wrpr %o1, (PSTATE_IG|PSTATE_IE), %pstate
or %g5, %lo(__up_workvec), %g6
#else
/* By definition of where we are, this is boot_cpu. */
- sethi %hi(cpu_data), %g5
- or %g5, %lo(cpu_data), %g5
-
brz,pt %i0, not_starfire
sethi %hi(0x1fff4000), %g1
or %g1, %lo(0x1fff4000), %g1
nop
not_starfire:
+ rdpr %ver, %g1
+ sethi %hi(0x003e0014), %g5
+ srlx %g1, 32, %g1
+	or	%g5, %lo(0x003e0014), %g5
+ cmp %g1, %g5
+ bne,pt %icc, not_cheetah
+ nop
+
+ ldxa [%g0] ASI_SAFARI_CONFIG, %g1
+ srlx %g1, 17, %g1
+ and %g1, 0x3ff, %g1 ! 10bit Safari Agent ID
+
+not_cheetah:
ldxa [%g0] ASI_UPA_CONFIG, %g1
srlx %g1, 17, %g1
and %g1, 0x1f, %g1
/* In theory this is: &(cpu_data[boot_cpu_id].irq_worklists[0]) */
set_worklist:
+ sethi %hi(cpu_data), %g5
+ or %g5, %lo(cpu_data), %g5
sllx %g1, 7, %g1
add %g5, %g1, %g5
add %g5, 64, %g6
/* Kill PROM timer */
wr %g0, 0, %tick_cmpr
+ rdpr %ver, %g1
+ sethi %hi(0x003e0014), %g5
+ srlx %g1, 32, %g1
+	or	%g5, %lo(0x003e0014), %g5
+ cmp %g1, %g5
+ bne,pt %icc, 1f
+ nop
+
+ /* Disable STICK_INT interrupts. */
+ sethi %hi(0x80000000), %g1
+ sllx %g1, 32, %g1
+ wr %g1, %asr25
+
	/* Ok, we're done setting up all the state our trap mechanism needs,
* now get back into normal globals and let the PROM know what is up.
*/
+1:
wrpr %g0, %g0, %wstate
wrpr %o1, PSTATE_IE, %pstate
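The boot code above distinguishes Spitfire from Cheetah by reading %ver and comparing its upper 32 bits against 0x003e0014 (manufacturer in VER[63:48], implementation in VER[47:32]). A minimal C sketch of that check; the helper name is illustrative, not from the kernel:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the rdpr %ver test done in head.S: srlx by 32 leaves the
 * manufacturer/implementation fields, which the boot path compares
 * against the constant 0x003e0014 to detect a Cheetah CPU. */
static int ver_is_cheetah(uint64_t ver)
{
	return (ver >> 32) == 0x003e0014UL;
}
```

The same comparison is repeated at each of the three detection sites in the patch, so a mismatch in any one of them would send a Cheetah down the Spitfire path.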
-/* $Id: ioctl32.c,v 1.107 2001/02/13 01:16:44 davem Exp $
+/* $Id: ioctl32.c,v 1.110 2001/03/22 12:51:25 davem Exp $
* ioctl32.c: Conversion between 32bit and 64bit native ioctls.
*
* Copyright (C) 1997-2000 Jakub Jelinek (jakub@redhat.com)
#include <linux/blkpg.h>
#include <linux/blk.h>
#include <linux/elevator.h>
+#include <linux/rtc.h>
#if defined(CONFIG_BLK_DEV_LVM) || defined(CONFIG_BLK_DEV_LVM_MODULE)
/* Ugh. This header really is not clean */
#define min min
#include <asm/fbio.h>
#include <asm/kbio.h>
#include <asm/vuid_event.h>
-#include <asm/rtc.h>
#include <asm/openpromio.h>
#include <asm/envctrl.h>
#include <asm/audioio.h>
u32 proc;
u32 pv[ABS_MAX_PV + 1];
u32 lv[ABS_MAX_LV + 1];
+ uint8_t vg_uuid[UUID_LEN+1]; /* volume group UUID */
} vg32_t;
typedef struct {
uint32_t pe_stale;
u32 pe;
u32 inode;
+ uint8_t pv_uuid[UUID_LEN+1];
} pv32_t;
typedef struct {
typedef struct {
u32 lv_index;
u32 lv;
+ /* Transfer size because user space and kernel space differ */
+ uint16_t size;
} lv_status_byindex_req32_t;
typedef struct {
+ dev_t dev;
+ u32 lv;
+} lv_status_bydev_req32_t;
+
+typedef struct {
uint8_t lv_name[NAME_LEN];
kdev_t old_dev;
kdev_t new_dev;
if (l->lv_block_exception) {
lbe32 = (lv_block_exception32_t *)A(ptr2);
memset(lbe, 0, size);
- for (i = 0; i < l->lv_remap_end; i++, lbe++, lbe32++) {
- err |= get_user(lbe->rsector_org, &lbe32->rsector_org);
- err |= __get_user(lbe->rdev_org, &lbe32->rdev_org);
- err |= __get_user(lbe->rsector_new, &lbe32->rsector_new);
- err |= __get_user(lbe->rdev_new, &lbe32->rdev_new);
+ for (i = 0; i < l->lv_remap_end; i++, lbe++, lbe32++) {
+ err |= get_user(lbe->rsector_org, &lbe32->rsector_org);
+ err |= __get_user(lbe->rdev_org, &lbe32->rdev_org);
+ err |= __get_user(lbe->rsector_new, &lbe32->rsector_new);
+ err |= __get_user(lbe->rdev_new, &lbe32->rdev_new);
+
}
}
}
err |= __copy_to_user(&ul->lv_remap_ptr, &l->lv_remap_ptr,
((long)&ul->dummy[0]) - ((long)&ul->lv_remap_ptr));
size = l->lv_allocated_le * sizeof(pe_t);
- err |= __copy_to_user((void *)A(ptr1), l->lv_current_pe, size);
- return -EFAULT;
+ if (ptr1)
+ err |= __copy_to_user((void *)A(ptr1), l->lv_current_pe, size);
+ return err ? -EFAULT : 0;
}
static int do_lvm_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg)
lv_req_t lv_req;
le_remap_req_t le_remap;
lv_status_byindex_req_t lv_byindex;
- pv_status_req32_t pv_status;
+ lv_status_bydev_req_t lv_bydev;
+ pv_status_req_t pv_status;
} u;
pv_t p;
int err;
kfree(v);
return -EFAULT;
}
+ if (copy_from_user(v->vg_uuid, ((vg32_t *)arg)->vg_uuid, UUID_LEN+1)) {
+ kfree(v);
+ return -EFAULT;
+ }
+
karg = v;
memset(v->pv, 0, sizeof(v->pv) + sizeof(v->lv));
if (v->pv_max > ABS_MAX_PV || v->lv_max > ABS_MAX_LV)
err = -ENOMEM;
break;
}
- err = copy_from_user(v->pv[i], (void *)A(ptr), sizeof(pv32_t) - 8);
+ err = copy_from_user(v->pv[i], (void *)A(ptr), sizeof(pv32_t) - 8 - UUID_LEN+1);
if (err) {
err = -EFAULT;
break;
}
+ err = copy_from_user(v->pv[i]->pv_uuid, ((pv32_t *)A(ptr))->pv_uuid, UUID_LEN+1);
+ if (err) {
+ err = -EFAULT;
+ break;
+ }
+
+
v->pv[i]->pe = NULL; v->pv[i]->inode = NULL;
}
}
case LV_EXTEND:
case LV_REDUCE:
case LV_REMOVE:
+ case LV_RENAME:
case LV_STATUS_BYNAME:
- err = copy_from_user(&u.pv_status, arg, sizeof(u.pv_status.pv_name));
+ err = copy_from_user(&u.pv_status, arg, sizeof(u.pv_status.pv_name));
if (err) return -EFAULT;
if (cmd != LV_REMOVE) {
err = __get_user(ptr, &((lv_req32_t *)arg)->lv);
} else
u.lv_req.lv = NULL;
break;
+
+
case LV_STATUS_BYINDEX:
err = get_user(u.lv_byindex.lv_index, &((lv_status_byindex_req32_t *)arg)->lv_index);
err |= __get_user(ptr, &((lv_status_byindex_req32_t *)arg)->lv);
if (err) return err;
u.lv_byindex.lv = get_lv_t(ptr, &err);
break;
+ case LV_STATUS_BYDEV:
+		err = get_user(u.lv_bydev.dev, &((lv_status_bydev_req32_t *)arg)->dev);
+		err |= __get_user(ptr, &((lv_status_bydev_req32_t *)arg)->lv);
+		if (err) return err;
+		u.lv_bydev.lv = get_lv_t(ptr, &err);
+ break;
case VG_EXTEND:
- err = copy_from_user(&p, (void *)arg, sizeof(pv32_t) - 8);
+ err = copy_from_user(&p, (void *)arg, sizeof(pv32_t) - 8 - UUID_LEN+1);
+ if (err) return -EFAULT;
+ err = copy_from_user(p.pv_uuid, ((pv32_t *)arg)->pv_uuid, UUID_LEN+1);
if (err) return -EFAULT;
p.pe = NULL; p.inode = NULL;
karg = &p;
break;
- case LE_REMAP:
- err = copy_from_user(&u.le_remap, (void *)arg, sizeof(le_remap_req32_t));
- if (err) return -EFAULT;
- u.le_remap.new_pe = ((le_remap_req32_t *)&u.le_remap)->new_pe;
- u.le_remap.old_pe = ((le_remap_req32_t *)&u.le_remap)->old_pe;
- break;
case PV_CHANGE:
case PV_STATUS:
err = copy_from_user(&u.pv_status, arg, sizeof(u.lv_req.lv_name));
if (err) return err;
u.pv_status.pv = &p;
if (cmd == PV_CHANGE) {
- err = copy_from_user(&p, (void *)A(ptr), sizeof(pv32_t) - 8);
+ err = copy_from_user(&p, (void *)A(ptr), sizeof(pv32_t) - 8 - UUID_LEN+1);
if (err) return -EFAULT;
p.pe = NULL; p.inode = NULL;
}
clear_user(&((vg32_t *)arg)->proc, sizeof(vg32_t) - (long)&((vg32_t *)0)->proc))
err = -EFAULT;
}
+ if (copy_to_user(((vg32_t *)arg)->vg_uuid, v->vg_uuid, UUID_LEN+1)) {
+ err = -EFAULT;
+ }
kfree(v);
break;
case VG_CREATE:
if (!err) err = copy_lv_t(ptr, u.lv_byindex.lv);
put_lv_t(u.lv_byindex.lv);
}
+ break;
case PV_STATUS:
if (!err) {
- err = copy_to_user((void *)A(ptr), &p, sizeof(pv32_t) - 8);
- if (err) return -EFAULT;
+ err = copy_to_user((void *)A(ptr), &p, sizeof(pv32_t) - 8 - UUID_LEN+1);
+ if (err) return -EFAULT;
+			err = copy_to_user(((pv32_t *)A(ptr))->pv_uuid, p.pv_uuid, UUID_LEN + 1);
+ if (err) return -EFAULT;
}
break;
+ case LV_STATUS_BYDEV:
+ if (!err) {
+			err = copy_lv_t(ptr, u.lv_bydev.lv);
+			put_lv_t(u.lv_bydev.lv);
+ }
+ break;
}
return err;
}
COMPATIBLE_IOCTL(_IOR('v' , BASE_VIDIOCPRIVATE+6, int))
COMPATIBLE_IOCTL(_IOR('v' , BASE_VIDIOCPRIVATE+7, int))
/* Little p (/dev/rtc, /dev/envctrl, etc.) */
-COMPATIBLE_IOCTL(RTCGET)
-COMPATIBLE_IOCTL(RTCSET)
+COMPATIBLE_IOCTL(_IOR('p', 20, int[7])) /* RTCGET */
+COMPATIBLE_IOCTL(_IOW('p', 21, int[7])) /* RTCSET */
+COMPATIBLE_IOCTL(RTC_AIE_ON)
+COMPATIBLE_IOCTL(RTC_AIE_OFF)
+COMPATIBLE_IOCTL(RTC_UIE_ON)
+COMPATIBLE_IOCTL(RTC_UIE_OFF)
+COMPATIBLE_IOCTL(RTC_PIE_ON)
+COMPATIBLE_IOCTL(RTC_PIE_OFF)
+COMPATIBLE_IOCTL(RTC_WIE_ON)
+COMPATIBLE_IOCTL(RTC_WIE_OFF)
+COMPATIBLE_IOCTL(RTC_ALM_SET)
+COMPATIBLE_IOCTL(RTC_ALM_READ)
+COMPATIBLE_IOCTL(RTC_RD_TIME)
+COMPATIBLE_IOCTL(RTC_SET_TIME)
+COMPATIBLE_IOCTL(RTC_WKALM_SET)
+COMPATIBLE_IOCTL(RTC_WKALM_RD)
COMPATIBLE_IOCTL(ENVCTRL_RD_WARNING_TEMPERATURE)
COMPATIBLE_IOCTL(ENVCTRL_RD_SHUTDOWN_TEMPERATURE)
COMPATIBLE_IOCTL(ENVCTRL_RD_CPU_TEMPERATURE)
COMPATIBLE_IOCTL(VG_STATUS_GET_COUNT)
COMPATIBLE_IOCTL(VG_STATUS_GET_NAMELIST)
COMPATIBLE_IOCTL(VG_REMOVE)
+COMPATIBLE_IOCTL(VG_RENAME)
COMPATIBLE_IOCTL(VG_REDUCE)
COMPATIBLE_IOCTL(PE_LOCK_UNLOCK)
COMPATIBLE_IOCTL(PV_FLUSH)
COMPATIBLE_IOCTL(LV_SET_ACCESS)
COMPATIBLE_IOCTL(LV_SET_STATUS)
COMPATIBLE_IOCTL(LV_SET_ALLOCATION)
+COMPATIBLE_IOCTL(LE_REMAP)
+COMPATIBLE_IOCTL(LV_BMAP)
+COMPATIBLE_IOCTL(LV_SNAPSHOT_USE_RATE)
#endif /* LVM */
#if defined(CONFIG_DRM) || defined(CONFIG_DRM_MODULE)
COMPATIBLE_IOCTL(DRM_IOCTL_GET_MAGIC)
HANDLE_IOCTL(LV_REMOVE, do_lvm_ioctl)
HANDLE_IOCTL(LV_EXTEND, do_lvm_ioctl)
HANDLE_IOCTL(LV_REDUCE, do_lvm_ioctl)
+HANDLE_IOCTL(LV_RENAME, do_lvm_ioctl)
HANDLE_IOCTL(LV_STATUS_BYNAME, do_lvm_ioctl)
HANDLE_IOCTL(LV_STATUS_BYINDEX, do_lvm_ioctl)
-HANDLE_IOCTL(LE_REMAP, do_lvm_ioctl)
HANDLE_IOCTL(PV_CHANGE, do_lvm_ioctl)
HANDLE_IOCTL(PV_STATUS, do_lvm_ioctl)
#endif /* LVM */
HANDLE_IOCTL(DRM32_IOCTL_DMA, drm32_dma);
HANDLE_IOCTL(DRM32_IOCTL_RES_CTX, drm32_res_ctx);
#endif /* DRM */
+#if 0
+HANDLE_IOCTL(RTC32_IRQP_READ, do_rtc_ioctl)
+HANDLE_IOCTL(RTC32_IRQP_SET, do_rtc_ioctl)
+HANDLE_IOCTL(RTC32_EPOCH_READ, do_rtc_ioctl)
+HANDLE_IOCTL(RTC32_EPOCH_SET, do_rtc_ioctl)
+#endif
IOCTL_TABLE_END
unsigned int ioctl32_hash_table[1024];
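The LVM conversion hunks above exist because a 32-bit process hands the kernel structs whose pointer members are 4 bytes wide, while the native kernel structs use 8-byte pointers; the sizes and field offsets diverge, so each struct must be translated field by field rather than bulk-copied. A self-contained sketch of why, using hypothetical miniature layouts (not the real pv_t/pv32_t definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of the pv_t/pv32_t split: the 32-bit ABI
 * stores the pe and inode pointers as 4-byte handles, so the struct a
 * 32-bit process passes to ioctl(2) is smaller than the kernel's
 * native one, and members after the pointers sit at different
 * offsets.  A single copy_from_user() of either sizeof() would
 * misalign every field past the first pointer. */
struct mini_pv32 {
	uint32_t pe;		/* 32-bit user pointer */
	uint32_t inode;		/* 32-bit user pointer */
	uint8_t  uuid[16 + 1];	/* same bytes in both layouts */
};

struct mini_pv64 {
	uint64_t pe;		/* native kernel pointer */
	uint64_t inode;
	uint8_t  uuid[16 + 1];
};
```

This is why the patch copies the UUID separately from the rest of the struct: it lives at a different offset in each layout.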
-/* $Id: irq.c,v 1.95 2001/02/13 01:16:44 davem Exp $
+/* $Id: irq.c,v 1.99 2001/03/22 02:19:23 davem Exp $
* irq.c: UltraSparc IRQ handling/init/registry.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
if (imap == 0UL)
return;
- if(this_is_starfire == 0) {
+ if (tlb_type == cheetah) {
+ /* We set it to our Safari AID. */
+ __asm__ __volatile__("ldxa [%%g0] %1, %0"
+ : "=r" (tid)
+ : "i" (ASI_SAFARI_CONFIG));
+ tid = ((tid & (0x3ffUL<<17)) << 9);
+ tid &= IMAP_AID_SAFARI;
+ } else if (this_is_starfire == 0) {
/* We set it to our UPA MID. */
__asm__ __volatile__("ldxa [%%g0] %1, %0"
: "=r" (tid)
: "i" (ASI_UPA_CONFIG));
tid = ((tid & UPA_CONFIG_MID) << 9);
+ tid &= IMAP_TID_UPA;
} else {
tid = (starfire_translate(imap, current->processor) << 26);
+ tid &= IMAP_TID_UPA;
}
/* NOTE NOTE NOTE, IGN and INO are read-only, IGN is a product
*
* Things like FFB can now be handled via the new IRQ mechanism.
*/
- upa_writel(IMAP_VALID | (tid & IMAP_TID), imap);
+ upa_writel(tid | IMAP_VALID, imap);
}
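The Cheetah branch above pulls the 10-bit Safari agent ID out of bits 26:17 of the Safari config register and shifts it up by 9 so it lands in the interrupt map's TID field. A small sketch of that bit manipulation, assuming bits 35:26 are what IMAP_AID_SAFARI masks (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the tid computation in the Cheetah case: mask out the
 * 10-bit agent ID at bits 26:17 of the Safari config register, then
 * shift left 9 to place it at bits 35:26 for the IMAP register. */
static uint64_t safari_cfg_to_tid(uint64_t safari_cfg)
{
	return (safari_cfg & (0x3ffUL << 17)) << 9;
}
```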
/* This now gets passed true ino's as well. */
/* Voo-doo programming. */
if (cpu_data[buddy].idle_volume < FORWARD_VOLUME)
should_forward = 0;
+
+ /* This just so happens to be correct on Cheetah
+ * at the moment.
+ */
buddy <<= 26;
}
#endif
/*
* Check for TICK_INT on level 14 softint.
*/
- if ((irq == 14) && (get_softint() & (1UL << 0)))
- irq = 0;
-#endif
+ {
+ unsigned long clr_mask = 1 << irq;
+ unsigned long tick_mask;
+
+ if (SPARC64_USE_STICK)
+ tick_mask = (1UL << 16);
+ else
+ tick_mask = (1UL << 0);
+ if ((irq == 14) && (get_softint() & tick_mask)) {
+ irq = 0;
+ clr_mask = tick_mask;
+ }
+ clear_softint(clr_mask);
+ }
+#else
clear_softint(1 << irq);
+#endif
irq_enter(cpu, irq);
kstat.irqs[cpu][irq]++;
extern void smp_tick_init(void);
#endif
- node = linux_cpus[0].prom_node;
- *clock = prom_getint(node, "clock-frequency");
+ if (!SPARC64_USE_STICK) {
+ node = linux_cpus[0].prom_node;
+ *clock = prom_getint(node, "clock-frequency");
+ } else {
+ node = prom_root_node;
+ *clock = prom_getint(node, "stick-frequency");
+ }
timer_tick_offset = *clock / HZ;
#ifdef CONFIG_SMP
smp_tick_init();
* at the start of an I-cache line, and perform a dummy
* read back from %tick_cmpr right after writing to it. -DaveM
*/
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("
rd %%tick, %%g1
ba,pt %%xcc, 1f
: /* no outputs */
: "r" (timer_tick_offset)
: "g1");
+ } else {
+ /* Let the user get at STICK too. */
+ __asm__ __volatile__("
+ sethi %%hi(0x80000000), %%g1
+ sllx %%g1, 32, %%g1
+ rd %%asr24, %%g2
+ andn %%g2, %%g1, %%g2
+ wr %%g2, 0, %%asr24"
+ : /* no outputs */
+ : /* no inputs */
+ : "g1", "g2");
+
+ __asm__ __volatile__("
+ rd %%asr24, %%g1
+ add %%g1, %0, %%g1
+ wr %%g1, 0x0, %%asr25"
+ : /* no outputs */
+ : "r" (timer_tick_offset)
+ : "g1");
+ }
/* Restore PSTATE_IE. */
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
if (bucket->pil == 12)
return goal_cpu;
- if(this_is_starfire == 0) {
+ if (tlb_type == cheetah) {
+ tid = __cpu_logical_map[goal_cpu] << 26;
+ tid &= IMAP_AID_SAFARI;
+ } else if (this_is_starfire == 0) {
tid = __cpu_logical_map[goal_cpu] << 26;
+ tid &= IMAP_TID_UPA;
} else {
tid = (starfire_translate(imap, __cpu_logical_map[goal_cpu]) << 26);
+ tid &= IMAP_TID_UPA;
}
- upa_writel(IMAP_VALID | (tid & IMAP_TID), imap);
+ upa_writel(tid | IMAP_VALID, imap);
goal_cpu++;
if(goal_cpu >= NR_CPUS ||
stxa %%g0, [%%g0] %0
membar #Sync
" : /* no outputs */
- : "i" (ASI_INTR_RECEIVE), "i" (ASI_UDB_INTR_R)
+ : "i" (ASI_INTR_RECEIVE), "i" (ASI_INTR_R)
: "g1", "g2");
}
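The timer hunks above select a different level-14 softint bit depending on whether the CPU uses the TICK or the STICK (system tick) counter. A one-line sketch of that mask choice, mirroring the tick_mask logic in the handler:

```c
#include <assert.h>

/* Mirrors the tick_mask selection in the level-14 softint handler:
 * STICK_INT is reported in softint bit 16, TICK_INT in bit 0. */
static unsigned long tick_softint_mask(int use_stick)
{
	return use_stick ? (1UL << 16) : (1UL << 0);
}
```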
-/* $Id: pci.c,v 1.22 2001/02/28 05:59:45 davem Exp $
+/* $Id: pci.c,v 1.23 2001/03/14 04:17:14 davem Exp $
* pci.c: UltraSparc PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
} pci_controller_table[] = {
{ "SUNW,sabre", sabre_init },
{ "pci108e,a000", sabre_init },
+ { "pci108e,a001", sabre_init },
{ "SUNW,psycho", psycho_init },
{ "pci108e,8000", psycho_init },
{ "SUNW,schizo", schizo_init },
-/* $Id: pci_iommu.c,v 1.12 2001/01/11 16:26:45 davem Exp $
+/* $Id: pci_iommu.c,v 1.13 2001/03/14 08:42:38 davem Exp $
* pci_iommu.c: UltraSparc PCI controller IOM/STC support.
*
* Copyright (C) 1999 David S. Miller (davem@redhat.com)
first_page += PAGE_SIZE;
}
- if (iommu->iommu_ctxflush) {
- pci_iommu_write(iommu->iommu_ctxflush, ctx);
- } else {
+ {
int i;
u32 daddr = *dma_addrp;
-/* $Id: pci_schizo.c,v 1.8 2001/03/01 08:05:32 davem Exp $
+/* $Id: pci_schizo.c,v 1.13 2001/03/21 00:29:58 davem Exp $
* pci_schizo.c: SCHIZO specific PCI controller support.
*
* Copyright (C) 2001 David S. Miller (davem@redhat.com)
#include <asm/pbm.h>
#include <asm/iommu.h>
#include <asm/irq.h>
+#include <asm/upa.h>
#include "pci_impl.h"
* EBUS devices and PCI controller internal error interrupts.
*/
static unsigned char schizo_pil_table[] = {
-/*0x00*/0, 0, 0, 0, /* PCI slot 0 Int A, B, C, D */
-/*0x04*/0, 0, 0, 0, /* PCI slot 1 Int A, B, C, D */
-/*0x08*/0, 0, 0, 0, /* PCI slot 2 Int A, B, C, D */
-/*0x0c*/0, 0, 0, 0, /* PCI slot 3 Int A, B, C, D */
-/*0x10*/0, 0, 0, 0, /* PCI slot 4 Int A, B, C, D */
-/*0x14*/0, 0, 0, 0, /* PCI slot 5 Int A, B, C, D */
-/*0x18*/0, 0, 0, 0, /* PCI slot 6 Int A, B, C, D */
+/*0x00*/0, 0, 0, 0, /* PCI slot 0 Int A, B, C, D */
+/*0x04*/0, 0, 0, 0, /* PCI slot 1 Int A, B, C, D */
+/*0x08*/0, 0, 0, 0, /* PCI slot 2 Int A, B, C, D */
+/*0x0c*/0, 0, 0, 0, /* PCI slot 3 Int A, B, C, D */
+/*0x10*/0, 0, 0, 0, /* PCI slot 4 Int A, B, C, D */
+/*0x14*/0, 0, 0, 0, /* PCI slot 5 Int A, B, C, D */
+/*0x18*/3, /* SCSI */
+/*0x19*/3, /* second SCSI */
+/*0x1a*/0, /* UNKNOWN */
+/*0x1b*/0, /* UNKNOWN */
/*0x1c*/8, /* Parallel */
-/*0x1d*/0, /* UNKNOWN */
-/*0x1e*/0, /* UNKNOWN */
-/*0x1f*/0, /* UNKNOWN */
+/*0x1d*/5, /* Ethernet */
+/*0x1e*/8, /* Firewire-1394 */
+/*0x1f*/9, /* USB */
/*0x20*/13, /* Audio Record */
/*0x21*/14, /* Audio Playback */
/*0x22*/12, /* Serial */
iclr = p->controller_regs + pbm_off + iclr_off;
iclr += 4;
- if ((ino & 0x20) == 0)
+ if (ino < 0x18)
inofixup = ino & 0x03;
bucket = __bucket(build_irq(pil, inofixup, iclr, imap));
static unsigned long stc_tag_buf[16];
static unsigned long stc_line_buf[16];
+static void schizo_clear_other_err_intr(int irq)
+{
+ struct ino_bucket *bucket = __bucket(irq);
+ unsigned long iclr = bucket->iclr;
+
+ iclr += (SCHIZO_PBM_B_REGS_OFF - SCHIZO_PBM_A_REGS_OFF);
+ upa_writel(ICLR_IDLE, iclr);
+}
+
#define SCHIZO_STC_ERR 0xb800UL /* --> 0xba00 */
#define SCHIZO_STC_TAG 0xba00UL /* --> 0xba80 */
#define SCHIZO_STC_LINE 0xbb00UL /* --> 0xbb80 */
/* Interrogate IOMMU for error status. */
schizo_check_iommu_error(p, UE_ERR);
+
+ schizo_clear_other_err_intr(irq);
}
#define SCHIZO_CE_AFSR 0x10040UL
if (!reported)
printk("(none)");
printk("]\n");
+
+ schizo_clear_other_err_intr(irq);
}
#define SCHIZO_PCI_AFSR 0x2010UL
if (error_bits & (SCHIZO_PCIAFSR_PPERR | SCHIZO_PCIAFSR_SPERR))
pci_scan_for_parity_error(p, pbm, pbm->pci_bus);
+
+ schizo_clear_other_err_intr(irq);
}
#define SCHIZO_SAFARI_ERRLOG 0x10018UL
if (!(errlog & SAFARI_ERROR_UNMAP)) {
printk("SCHIZO%d: Unexpected Safari error interrupt, errlog[%016lx]\n",
p->index, errlog);
+
+ schizo_clear_other_err_intr(irq);
return;
}
printk("SCHIZO%d: Safari interrupt, UNMAPPED error, interrogating IOMMUs.\n",
p->index);
schizo_check_iommu_error(p, SAFARI_ERR);
+
+ schizo_clear_other_err_intr(irq);
}
/* Nearly identical to PSYCHO equivalents... */
#define SCHIZO_PCIA_CTRL (SCHIZO_PBM_A_REGS_OFF + 0x2000UL)
#define SCHIZO_PCIB_CTRL (SCHIZO_PBM_B_REGS_OFF + 0x2000UL)
+#define SCHIZO_PCICTRL_BUNUS (1UL << 63UL)
+#define SCHIZO_PCICTRL_ESLCK (1UL << 51UL)
+#define SCHIZO_PCICTRL_TTO_ERR (1UL << 38UL)
+#define SCHIZO_PCICTRL_RTRY_ERR (1UL << 37UL)
+#define SCHIZO_PCICTRL_DTO_ERR (1UL << 36UL)
#define SCHIZO_PCICTRL_SBH_ERR (1UL << 35UL)
#define SCHIZO_PCICTRL_SERR (1UL << 34UL)
#define SCHIZO_PCICTRL_SBH_INT (1UL << 18UL)
#define SCHIZO_PCICTRL_EEN (1UL << 17UL)
-/* XXX It is not entirely clear if I need to enable the PCI controller interrupts
- * XXX in both PBMs, the documentation is very vague about this point. For now
- * XXX I'll just enable it on PBM A but this needs to be verified! -DaveM
- */
static void __init schizo_register_error_handlers(struct pci_controller_info *p)
{
- struct pci_pbm_info *pbm = &p->pbm_A; /* XXX verify me XXX */
+ struct pci_pbm_info *pbm_a = &p->pbm_A;
+ struct pci_pbm_info *pbm_b = &p->pbm_B;
unsigned long base = p->controller_regs;
unsigned int irq, portid = p->portid;
+ struct ino_bucket *bucket;
u64 tmp;
/* Build IRQs and register handlers. */
- irq = schizo_irq_build(pbm, NULL, (portid << 6) | SCHIZO_UE_INO);
+ irq = schizo_irq_build(pbm_a, NULL, (portid << 6) | SCHIZO_UE_INO);
if (request_irq(irq, schizo_ue_intr,
SA_SHIRQ, "SCHIZO UE", p) < 0) {
prom_printf("SCHIZO%d: Cannot register UE interrupt.\n",
p->index);
prom_halt();
}
+ bucket = __bucket(irq);
+	tmp = upa_readl(bucket->imap);
+ upa_writel(tmp, (base + SCHIZO_PBM_B_REGS_OFF + schizo_imap_offset(SCHIZO_UE_INO) + 4));
- irq = schizo_irq_build(pbm, NULL, (portid << 6) | SCHIZO_CE_INO);
+ irq = schizo_irq_build(pbm_a, NULL, (portid << 6) | SCHIZO_CE_INO);
if (request_irq(irq, schizo_ce_intr,
SA_SHIRQ, "SCHIZO CE", p) < 0) {
prom_printf("SCHIZO%d: Cannot register CE interrupt.\n",
p->index);
prom_halt();
}
+ bucket = __bucket(irq);
+ tmp = upa_readl(bucket->imap);
+ upa_writel(tmp, (base + SCHIZO_PBM_B_REGS_OFF + schizo_imap_offset(SCHIZO_CE_INO) + 4));
- irq = schizo_irq_build(pbm, NULL, (portid << 6) | SCHIZO_PCIERR_A_INO);
+ irq = schizo_irq_build(pbm_a, NULL, (portid << 6) | SCHIZO_PCIERR_A_INO);
if (request_irq(irq, schizo_pcierr_intr,
- SA_SHIRQ, "SCHIZO PCIERR", &p->pbm_A) < 0) {
+ SA_SHIRQ, "SCHIZO PCIERR", pbm_a) < 0) {
prom_printf("SCHIZO%d(PBMA): Cannot register PciERR interrupt.\n",
p->index);
prom_halt();
}
+ bucket = __bucket(irq);
+ tmp = upa_readl(bucket->imap);
+ upa_writel(tmp, (base + SCHIZO_PBM_B_REGS_OFF + schizo_imap_offset(SCHIZO_PCIERR_A_INO) + 4));
- irq = schizo_irq_build(pbm, NULL, (portid << 6) | SCHIZO_PCIERR_B_INO);
+ irq = schizo_irq_build(pbm_a, NULL, (portid << 6) | SCHIZO_PCIERR_B_INO);
if (request_irq(irq, schizo_pcierr_intr,
- SA_SHIRQ, "SCHIZO PCIERR", &p->pbm_B) < 0) {
+ SA_SHIRQ, "SCHIZO PCIERR", pbm_b) < 0) {
prom_printf("SCHIZO%d(PBMB): Cannot register PciERR interrupt.\n",
p->index);
prom_halt();
}
+ bucket = __bucket(irq);
+ tmp = upa_readl(bucket->imap);
+ upa_writel(tmp, (base + SCHIZO_PBM_B_REGS_OFF + schizo_imap_offset(SCHIZO_PCIERR_B_INO) + 4));
- irq = schizo_irq_build(pbm, NULL, (portid << 6) | SCHIZO_SERR_INO);
+ irq = schizo_irq_build(pbm_a, NULL, (portid << 6) | SCHIZO_SERR_INO);
if (request_irq(irq, schizo_safarierr_intr,
SA_SHIRQ, "SCHIZO SERR", p) < 0) {
prom_printf("SCHIZO%d(PBMB): Cannot register SafariERR interrupt.\n",
p->index);
prom_halt();
}
+ bucket = __bucket(irq);
+ tmp = upa_readl(bucket->imap);
+ upa_writel(tmp, (base + SCHIZO_PBM_B_REGS_OFF + schizo_imap_offset(SCHIZO_SERR_INO) + 4));
/* Enable UE and CE interrupts for controller. */
schizo_write(base + SCHIZO_ECC_CTRL,
/* Enable PCI Error interrupts and clear error
* bits for each PBM.
- *
- * XXX More error bits should be cleared, this is
- * XXX just the stuff which is identical on Psycho. -DaveM
*/
tmp = schizo_read(base + SCHIZO_PCIA_CTRL);
- tmp |= (SCHIZO_PCICTRL_SBH_ERR |
+ tmp |= (SCHIZO_PCICTRL_BUNUS |
+ SCHIZO_PCICTRL_ESLCK |
+ SCHIZO_PCICTRL_TTO_ERR |
+ SCHIZO_PCICTRL_RTRY_ERR |
+ SCHIZO_PCICTRL_DTO_ERR |
+ SCHIZO_PCICTRL_SBH_ERR |
SCHIZO_PCICTRL_SERR |
SCHIZO_PCICTRL_SBH_INT |
SCHIZO_PCICTRL_EEN);
schizo_write(base + SCHIZO_PCIA_CTRL, tmp);
tmp = schizo_read(base + SCHIZO_PCIB_CTRL);
- tmp |= (SCHIZO_PCICTRL_SBH_ERR |
+ tmp |= (SCHIZO_PCICTRL_BUNUS |
+ SCHIZO_PCICTRL_ESLCK |
+ SCHIZO_PCICTRL_TTO_ERR |
+ SCHIZO_PCICTRL_RTRY_ERR |
+ SCHIZO_PCICTRL_DTO_ERR |
+ SCHIZO_PCICTRL_SBH_ERR |
SCHIZO_PCICTRL_SERR |
SCHIZO_PCICTRL_SBH_INT |
SCHIZO_PCICTRL_EEN);
schizo_write(base + SCHIZO_PCIB_CTRL, tmp);
+ schizo_write(base + SCHIZO_PBM_A_REGS_OFF + SCHIZO_PCI_AFSR,
+ (SCHIZO_PCIAFSR_PMA | SCHIZO_PCIAFSR_PTA |
+ SCHIZO_PCIAFSR_PRTRY | SCHIZO_PCIAFSR_PPERR |
+ SCHIZO_PCIAFSR_PTTO | SCHIZO_PCIAFSR_PUNUS |
+ SCHIZO_PCIAFSR_SMA | SCHIZO_PCIAFSR_STA |
+ SCHIZO_PCIAFSR_SRTRY | SCHIZO_PCIAFSR_SPERR |
+ SCHIZO_PCIAFSR_STTO | SCHIZO_PCIAFSR_SUNUS));
+ schizo_write(base + SCHIZO_PBM_B_REGS_OFF + SCHIZO_PCI_AFSR,
+ (SCHIZO_PCIAFSR_PMA | SCHIZO_PCIAFSR_PTA |
+ SCHIZO_PCIAFSR_PRTRY | SCHIZO_PCIAFSR_PPERR |
+ SCHIZO_PCIAFSR_PTTO | SCHIZO_PCIAFSR_PUNUS |
+ SCHIZO_PCIAFSR_SMA | SCHIZO_PCIAFSR_STA |
+ SCHIZO_PCIAFSR_SRTRY | SCHIZO_PCIAFSR_SPERR |
+ SCHIZO_PCIAFSR_STTO | SCHIZO_PCIAFSR_SUNUS));
+
/* Make all Safari error conditions fatal except unmapped errors
* which we make generate interrupts.
*/
/* Three OBP regs:
* 1) PBM controller regs
* 2) Schizo front-end controller regs (same for both PBMs)
- * 3) Unknown... (0x7ffec000000 and 0x7ffee000000 on Excalibur)
+ * 3) PBM PCI config space
*/
err = prom_getproperty(node, "reg",
(char *)&pr_regs[0],
-/* $Id: process.c,v 1.114 2001/02/13 01:16:44 davem Exp $
+/* $Id: process.c,v 1.116 2001/03/24 09:36:01 davem Exp $
* arch/sparc64/kernel/process.c
*
* Copyright (C) 1995, 1996 David S. Miller (davem@caip.rutgers.edu)
unsigned long pgd_cache;
if (pgd_none(*pgd0)) {
- pmd_t *page = get_pmd_fast();
+ pmd_t *page = pmd_alloc_one_fast();
if (!page)
- (void) get_pmd_slow(pgd0, 0);
- else
- pgd_set(pgd0, page);
+ page = pmd_alloc_one();
+ pgd_set(pgd0, page);
}
pgd_cache = pgd_val(*pgd0) << 11UL;
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (pgd_cache),
"r" (TSB_REG),
pt_succ_return_linux(struct pt_regs *regs, unsigned long value, long *addr)
{
if (current->thread.flags & SPARC_FLAG_32BIT) {
- if(put_user(value, (unsigned int *)addr))
+ if (put_user(value, (unsigned int *)addr))
return pt_error_return(regs, EFAULT);
} else {
- if(put_user(value, addr))
+ if (put_user(value, addr))
return pt_error_return(regs, EFAULT);
}
regs->u_regs[UREG_I0] = 0;
s, request, pid, addr, data, addr2);
}
#endif
- if(request == PTRACE_TRACEME) {
+ if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED) {
pt_error_return(regs, EPERM);
goto out;
}
#ifndef ALLOW_INIT_TRACING
- if(pid == 1) {
+ if (pid == 1) {
/* Can't dork with init. */
pt_error_return(regs, EPERM);
goto out;
#endif
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
read_unlock(&tasklist_lock);
- if(!child) {
+ if (!child) {
pt_error_return(regs, ESRCH);
goto out;
}
|| (current->personality != PER_SUNOS && request == PTRACE_ATTACH)) {
unsigned long flags;
- if(child == current) {
+ if (child == current) {
/* Try this under SunOS/Solaris, bwa haha
* You'll never be able to kill the process. ;-)
*/
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
- if((!child->dumpable ||
- (current->uid != child->euid) ||
- (current->uid != child->uid) ||
- (current->uid != child->suid) ||
- (current->gid != child->egid) ||
- (current->gid != child->sgid) ||
- (!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
- (current->gid != child->gid)) && !capable(CAP_SYS_PTRACE)) {
+ if ((!child->dumpable ||
+ (current->uid != child->euid) ||
+ (current->uid != child->uid) ||
+ (current->uid != child->suid) ||
+ (current->gid != child->egid) ||
+ (current->gid != child->sgid) ||
+ (!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
+ (current->gid != child->gid)) && !capable(CAP_SYS_PTRACE)) {
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
/* the same process cannot be attached many times */
if (child->ptrace & PT_PTRACED) {
pt_error_return(regs, EPERM);
- goto out;
+ goto out_tsk;
}
child->ptrace |= PT_PTRACED;
write_lock_irqsave(&tasklist_lock, flags);
- if(child->p_pptr != current) {
+ if (child->p_pptr != current) {
REMOVE_LINKS(child);
child->p_pptr = current;
SET_LINKS(child);
write_unlock_irqrestore(&tasklist_lock, flags);
send_sig(SIGSTOP, child, 1);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
if (!(child->ptrace & PT_PTRACED)) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
- if(child->state != TASK_STOPPED) {
- if(request != PTRACE_KILL) {
+ if (child->state != TASK_STOPPED) {
+ if (request != PTRACE_KILL) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
}
- if(child->p_pptr != current) {
+ if (child->p_pptr != current) {
pt_error_return(regs, ESRCH);
- goto out;
+ goto out_tsk;
}
- if(!(child->thread.flags & SPARC_FLAG_32BIT) &&
- ((request == PTRACE_READDATA64) ||
- (request == PTRACE_WRITEDATA64) ||
- (request == PTRACE_READTEXT64) ||
- (request == PTRACE_WRITETEXT64) ||
- (request == PTRACE_PEEKTEXT64) ||
- (request == PTRACE_POKETEXT64) ||
- (request == PTRACE_PEEKDATA64) ||
- (request == PTRACE_POKEDATA64))) {
+ if (!(child->thread.flags & SPARC_FLAG_32BIT) &&
+ ((request == PTRACE_READDATA64) ||
+ (request == PTRACE_WRITEDATA64) ||
+ (request == PTRACE_READTEXT64) ||
+ (request == PTRACE_WRITETEXT64) ||
+ (request == PTRACE_PEEKTEXT64) ||
+ (request == PTRACE_POKETEXT64) ||
+ (request == PTRACE_PEEKDATA64) ||
+ (request == PTRACE_POKEDATA64))) {
addr = regs->u_regs[UREG_G2];
addr2 = regs->u_regs[UREG_G3];
request -= 30; /* wheee... */
if (copied == sizeof(tmp64))
res = 0;
}
- if(res < 0)
+ if (res < 0)
pt_error_return(regs, -res);
else
pt_succ_return(regs, res);
__put_user(cregs->tnpc, (&pregs->npc)) ||
__put_user(cregs->y, (&pregs->y))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
- for(rval = 1; rval < 16; rval++)
+ for (rval = 1; rval < 16; rval++)
if (__put_user(cregs->u_regs[rval], (&pregs->u_regs[rval - 1]))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
pt_succ_return(regs, 0);
#ifdef DEBUG_PTRACE
printk ("PC=%lx nPC=%lx o7=%lx\n", cregs->tpc, cregs->tnpc, cregs->u_regs [15]);
#endif
- goto out;
+ goto out_tsk;
}
case PTRACE_GETREGS64: {
struct pt_regs *pregs = (struct pt_regs *) addr;
struct pt_regs *cregs = child->thread.kregs;
+ unsigned long tpc = cregs->tpc;
int rval;
+ if ((child->thread.flags & SPARC_FLAG_32BIT) != 0)
+ tpc &= 0xffffffff;
if (__put_user(cregs->tstate, (&pregs->tstate)) ||
- __put_user(cregs->tpc, (&pregs->tpc)) ||
+ __put_user(tpc, (&pregs->tpc)) ||
__put_user(cregs->tnpc, (&pregs->tnpc)) ||
__put_user(cregs->y, (&pregs->y))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
- for(rval = 1; rval < 16; rval++)
+ for (rval = 1; rval < 16; rval++)
if (__put_user(cregs->u_regs[rval], (&pregs->u_regs[rval - 1]))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
pt_succ_return(regs, 0);
#ifdef DEBUG_PTRACE
printk ("PC=%lx nPC=%lx o7=%lx\n", cregs->tpc, cregs->tnpc, cregs->u_regs [15]);
#endif
- goto out;
+ goto out_tsk;
}
case PTRACE_SETREGS: {
__get_user(npc, (&pregs->npc)) ||
__get_user(y, (&pregs->y))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
cregs->tstate &= ~(TSTATE_ICC);
cregs->tstate |= psr_to_tstate_icc(psr);
- if(!((pc | npc) & 3)) {
+ if (!((pc | npc) & 3)) {
cregs->tpc = pc;
cregs->tnpc = npc;
}
cregs->y = y;
- for(i = 1; i < 16; i++) {
+ for (i = 1; i < 16; i++) {
if (__get_user(cregs->u_regs[i], (&pregs->u_regs[i-1]))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SETREGS64: {
__get_user(tnpc, (&pregs->tnpc)) ||
__get_user(y, (&pregs->y))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
+ }
+ if ((child->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ tpc &= 0xffffffff;
+ tnpc &= 0xffffffff;
}
tstate &= (TSTATE_ICC | TSTATE_XCC);
cregs->tstate &= ~(TSTATE_ICC | TSTATE_XCC);
cregs->tstate |= tstate;
- if(!((tpc | tnpc) & 3)) {
+ if (!((tpc | tnpc) & 3)) {
cregs->tpc = tpc;
cregs->tnpc = tnpc;
}
cregs->y = y;
- for(i = 1; i < 16; i++) {
+ for (i = 1; i < 16; i++) {
if (__get_user(cregs->u_regs[i], (&pregs->u_regs[i-1]))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_GETFPREGS: {
__put_user(0, (&fps->extra)) ||
clear_user(&fps->fpq[0], 32 * sizeof(unsigned int))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_GETFPREGS64: {
(64 * sizeof(unsigned int))) ||
__put_user(child->thread.xfsr[0], (&fps->fsr))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SETFPREGS: {
(32 * sizeof(unsigned int))) ||
__get_user(fsr, (&fps->fsr))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
child->thread.xfsr[0] &= 0xffffffff00000000UL;
child->thread.xfsr[0] |= fsr;
child->thread.gsr[0] = 0;
child->thread.fpsaved[0] |= (FPRS_FEF | FPRS_DL);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SETFPREGS64: {
(64 * sizeof(unsigned int))) ||
__get_user(child->thread.xfsr[0], (&fps->fsr))) {
pt_error_return(regs, EFAULT);
- goto out;
+ goto out_tsk;
}
if (!(child->thread.fpsaved[0] & FPRS_FEF))
child->thread.gsr[0] = 0;
child->thread.fpsaved[0] |= (FPRS_FEF | FPRS_DL | FPRS_DU);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_READTEXT:
case PTRACE_CONT: { /* restart after signal. */
if (data > _NSIG) {
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
if (addr != 1) {
+ unsigned long pc_mask = ~0UL;
+
+ if ((child->thread.flags & SPARC_FLAG_32BIT) != 0)
+ pc_mask = 0xffffffff;
+
if (addr & 3) {
pt_error_return(regs, EINVAL);
- goto out;
+ goto out_tsk;
}
#ifdef DEBUG_PTRACE
printk ("Original: %016lx %016lx\n", child->thread.kregs->tpc, child->thread.kregs->tnpc);
printk ("Continuing with %016lx %016lx\n", addr, addr+4);
#endif
- child->thread.kregs->tpc = addr;
- child->thread.kregs->tnpc = addr + 4;
+ child->thread.kregs->tpc = (addr & pc_mask);
+ child->thread.kregs->tnpc = ((addr + 4) & pc_mask);
}
if (request == PTRACE_SYSCALL)
#endif
wake_up_process(child);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/*
case PTRACE_KILL: {
if (child->state == TASK_ZOMBIE) { /* already dead */
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
child->exit_code = SIGKILL;
wake_up_process(child);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
case PTRACE_SUNDETACH: { /* detach a process that was attached. */
if ((unsigned long) data > _NSIG) {
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
child->ptrace &= ~(PT_PTRACED|PT_TRACESYS);
child->exit_code = data;
wake_up_process(child);
pt_succ_return(regs, 0);
- goto out;
+ goto out_tsk;
}
/* PTRACE_DUMPCORE unsupported... */
default:
pt_error_return(regs, EIO);
- goto out;
+ goto out_tsk;
}
flush_and_out:
{
unsigned long va;
- for(va = 0; va < (PAGE_SIZE << 1); va += 32)
- spitfire_put_dcache_tag(va, 0x0);
- if (request == PTRACE_PEEKTEXT ||
- request == PTRACE_POKETEXT ||
- request == PTRACE_READTEXT ||
- request == PTRACE_WRITETEXT) {
- for(va = 0; va < (PAGE_SIZE << 1); va += 32)
- spitfire_put_icache_tag(va, 0x0);
- __asm__ __volatile__("flush %g6");
+
+ if (tlb_type == cheetah) {
+ for (va = 0; va < (1 << 16); va += (1 << 5))
+ spitfire_put_dcache_tag(va, 0x0);
+ /* No need to mess with I-cache on Cheetah. */
+ } else {
+ for (va = 0; va < (PAGE_SIZE << 1); va += 32)
+ spitfire_put_dcache_tag(va, 0x0);
+ if (request == PTRACE_PEEKTEXT ||
+ request == PTRACE_POKETEXT ||
+ request == PTRACE_READTEXT ||
+ request == PTRACE_WRITETEXT) {
+ for (va = 0; va < (PAGE_SIZE << 1); va += 32)
+ spitfire_put_icache_tag(va, 0x0);
+ __asm__ __volatile__("flush %g6");
+ }
}
}
+out_tsk:
+ if (child)
+ free_task_struct(child);
out:
unlock_kernel();
}
-/* $Id: rtrap.S,v 1.53 2000/08/06 05:20:35 davem Exp $
+/* $Id: rtrap.S,v 1.54 2001/03/08 22:08:51 davem Exp $
* rtrap.S: Preparing for return from trap on Sparc V9.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
rd %fprs, %g5
wr %g5, FPRS_FEF, %fprs
- ldub [%o1 + %o0], %g5
+ ldx [%o1 + %o5], %g5
add %g6, AOFF_task_thread + AOFF_thread_xfsr, %o1
membar #StoreLoad | #LoadLoad
sll %o0, 8, %o2
-/* $Id: setup.c,v 1.62 2001/03/03 10:34:45 davem Exp $
+/* $Id: setup.c,v 1.63 2001/03/09 22:04:25 davem Exp $
* linux/arch/sparc64/kernel/setup.c
*
* Copyright (C) 1995,1996 David S. Miller (davem@caip.rutgers.edu)
#include <net/ipconfig.h>
#endif
-#undef PROM_DEBUG_CONSOLE
-
struct screen_info screen_info = {
0, 0, /* orig-x, orig-y */
0, /* unused */
/* Exported for mm/init.c:paging_init. */
unsigned long cmdline_memory_size = 0;
-#ifdef PROM_DEBUG_CONSOLE
static struct console prom_debug_console = {
name: "debug",
write: prom_console_write,
flags: CON_PRINTBUFFER,
index: -1,
};
-#endif
/* XXX Implement this at some point... */
void kernel_enter_debugger(void)
prom_printf("boot_flags_init: Halt!\n");
prom_halt();
break;
+ case 'p':
+ /* Use PROM debug console. */
+ register_console(&prom_debug_console);
+ break;
default:
printk("Unknown boot switch (-%c)\n", c);
break;
*cmdline_p = prom_getbootargs();
strcpy(saved_command_line, *cmdline_p);
-#ifdef PROM_DEBUG_CONSOLE
- register_console(&prom_debug_console);
-#endif
-
printk("ARCH: SUN4U\n");
#ifdef CONFIG_DUMMY_CONSOLE
-/* $Id: signal.c,v 1.55 2001/01/24 21:05:13 davem Exp $
+/* $Id: signal.c,v 1.56 2001/03/21 11:46:20 davem Exp $
* arch/sparc64/kernel/signal.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
recalc_sigpending(current);
spin_unlock_irq(¤t->sigmask_lock);
}
+ if ((tp->flags & SPARC_FLAG_32BIT) != 0) {
+ pc &= 0xffffffff;
+ npc &= 0xffffffff;
+ }
regs->tpc = pc;
regs->tnpc = npc;
err |= __get_user(regs->y, &((*grp)[MC_Y]));
grp = &mcp->mc_gregs;
/* Skip over the trap instruction, first. */
- regs->tpc = regs->tnpc;
- regs->tnpc += 4;
-
+ if ((tp->flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc = (regs->tnpc & 0xffffffff);
+ regs->tnpc = (regs->tnpc + 4) & 0xffffffff;
+ } else {
+ regs->tpc = regs->tnpc;
+ regs->tnpc += 4;
+ }
err = 0;
if (_NSIG_WORDS == 1)
err |= __put_user(current->blocked.sig[0],
recalc_sigpending(current);
spin_unlock_irq(¤t->sigmask_lock);
- regs->tpc = regs->tnpc;
- regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc = (regs->tnpc & 0xffffffff);
+ regs->tnpc = (regs->tnpc + 4) & 0xffffffff;
+ } else {
+ regs->tpc = regs->tnpc;
+ regs->tnpc += 4;
+ }
/* Condition codes and return value where set here for sigpause,
* and so got used by setup_frame, which again causes sigreturn()
recalc_sigpending(current);
spin_unlock_irq(¤t->sigmask_lock);
- regs->tpc = regs->tnpc;
- regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc = (regs->tnpc & 0xffffffff);
+ regs->tnpc = (regs->tnpc + 4) & 0xffffffff;
+ } else {
+ regs->tpc = regs->tnpc;
+ regs->tnpc += 4;
+ }
/* Condition codes and return value where set here for sigpause,
* and so got used by setup_frame, which again causes sigreturn()
err = get_user(tpc, &sf->regs.tpc);
err |= __get_user(tnpc, &sf->regs.tnpc);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ tpc &= 0xffffffff;
+ tnpc &= 0xffffffff;
+ }
err |= ((tpc | tnpc) & 3);
/* 2. Restore the state */
/* 5. signal handler */
regs->tpc = (unsigned long) ka->sa.sa_handler;
regs->tnpc = (regs->tpc + 4);
-
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
/* 4. return to kernel instructions */
regs->u_regs[UREG_I7] = (unsigned long)ka->ka_restorer;
return;
-/* $Id: signal32.c,v 1.68 2001/01/24 21:05:13 davem Exp $
+/* $Id: signal32.c,v 1.69 2001/03/21 11:46:20 davem Exp $
* arch/sparc64/kernel/signal32.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
/* Condition codes and return value where set here for sigpause,
* and so got used by setup_frame, which again causes sigreturn()
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
/* Condition codes and return value where set here for sigpause,
* and so got used by setup_frame, which again causes sigreturn()
if ((pc | npc) & 3)
goto segv;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ pc &= 0xffffffff;
+ npc &= 0xffffffff;
+ }
regs->tpc = pc;
regs->tnpc = npc;
recalc_sigpending(current);
spin_unlock_irq(¤t->sigmask_lock);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ pc &= 0xffffffff;
+ npc &= 0xffffffff;
+ }
regs->tpc = pc;
regs->tnpc = npc;
err = __get_user(regs->u_regs[UREG_FP], &scptr->sigc_sp);
if ((pc | npc) & 3)
goto segv;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ pc &= 0xffffffff;
+ npc &= 0xffffffff;
+ }
regs->tpc = pc;
regs->tnpc = npc;
#endif
unsigned psr;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ pc &= 0xffffffff;
+ npc &= 0xffffffff;
+ }
+
synchronize_user_stack();
save_and_clear_fpu();
regs->u_regs[UREG_FP] = (unsigned long) sframep;
regs->tpc = (unsigned long) sa->sa_handler;
regs->tnpc = (regs->tpc + 4);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
return;
sigsegv:
}
/* 2. Save the current process state */
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
err = put_user(regs->tpc, &sf->info.si_regs.pc);
err |= __put_user(regs->tnpc, &sf->info.si_regs.npc);
err |= __put_user(regs->y, &sf->info.si_regs.y);
/* 4. signal handler */
regs->tpc = (unsigned long) ka->sa.sa_handler;
regs->tnpc = (regs->tpc + 4);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
/* 5. return to kernel instructions */
- if (ka->ka_restorer)
+ if (ka->ka_restorer) {
regs->u_regs[UREG_I7] = (unsigned long)ka->ka_restorer;
- else {
+ } else {
/* Flush instruction space. */
unsigned long address = ((unsigned long)&(sf->insns[0]));
pgd_t *pgdp = pgd_offset(current->mm, address);
err |= __copy_to_user(&uc->sigmask, &setv, 2 * sizeof(unsigned));
/* Store registers */
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
err |= __put_user(regs->tpc, &((*gr) [SVR4_PC]));
err |= __put_user(regs->tnpc, &((*gr) [SVR4_NPC]));
psr = tstate_to_psr (regs->tstate);
regs->u_regs[UREG_FP] = (unsigned long) sfp;
regs->tpc = (unsigned long) sa->sa_handler;
regs->tnpc = (regs->tpc + 4);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
#ifdef DEBUG_SIGNALS
printk ("Solaris-frame: %x %x\n", (int) regs->tpc, (int) regs->tnpc);
err |= __copy_to_user(&uc->sigmask, &setv, 2 * sizeof(unsigned));
/* Store registers */
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
err |= __put_user(regs->tpc, &uc->mcontext.greg [SVR4_PC]);
err |= __put_user(regs->tnpc, &uc->mcontext.greg [SVR4_NPC]);
#if 1
spin_unlock_irq(¤t->sigmask_lock);
regs->tpc = pc;
regs->tnpc = npc | 1;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
err |= __get_user(regs->y, &((*gr) [SVR4_Y]));
err |= __get_user(psr, &((*gr) [SVR4_PSR]));
regs->tstate &= ~(TSTATE_ICC|TSTATE_XCC);
}
/* 2. Save the current process state */
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
err = put_user(regs->tpc, &sf->regs.pc);
err |= __put_user(regs->tnpc, &sf->regs.npc);
err |= __put_user(regs->y, &sf->regs.y);
/* 4. signal handler */
regs->tpc = (unsigned long) ka->sa.sa_handler;
regs->tnpc = (regs->tpc + 4);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
/* 5. return to kernel instructions */
if (ka->ka_restorer)
strcpy(buf, "State:\n");
for (i = 0; i < NR_CPUS; i++)
- if(cpu_present_map & (1UL << i))
+ if (cpu_present_map & (1UL << i))
len += sprintf(buf + len,
"CPU%d:\t\tonline\n", i);
return len;
int len = 0, i;
for (i = 0; i < NR_CPUS; i++)
- if(cpu_present_map & (1UL << i))
+ if (cpu_present_map & (1UL << i))
len += sprintf(buf + len,
"Cpu%dBogo\t: %lu.%02lu\n",
i, cpu_data[i].udelay_val / (500000/HZ),
cpu_data[id].pgd_cache = NULL;
cpu_data[id].idle_volume = 1;
- for(i = 0; i < 16; i++)
+ for (i = 0; i < 16; i++)
cpu_data[id].irq_worklists[i] = 0;
}
: /* no inputs */
: "g1", "g2");
+ if (SPARC64_USE_STICK) {
+ /* Let the user get at STICK too. */
+ __asm__ __volatile__("
+ sethi %%hi(0x80000000), %%g1
+ sllx %%g1, 32, %%g1
+ rd %%asr24, %%g2
+ andn %%g2, %%g1, %%g2
+ wr %%g2, 0, %%asr24"
+ : /* no outputs */
+ : /* no inputs */
+ : "g1", "g2");
+ }
+
/* Restore PSTATE_IE. */
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
: /* no outputs */
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
- while(!smp_processors_ready)
+ while (!smp_processors_ready)
membar("#LoadLoad");
}
smp_tune_scheduling();
init_idle();
- if(linux_num_cpus == 1)
+ if (linux_num_cpus == 1)
return;
- for(i = 0; i < NR_CPUS; i++) {
- if(i == boot_cpu_id)
+ for (i = 0; i < NR_CPUS; i++) {
+ if (i == boot_cpu_id)
continue;
- if(cpu_present_map & (1UL << i)) {
+ if (cpu_present_map & (1UL << i)) {
unsigned long entry = (unsigned long)(&sparc64_cpu_startup);
unsigned long cookie = (unsigned long)(&cpu_new_task);
struct task_struct *p;
cpu_new_task = p;
prom_startcpu(linux_cpus[no].prom_node,
entry, cookie);
- for(timeout = 0; timeout < 5000000; timeout++) {
- if(callin_flag)
+ for (timeout = 0; timeout < 5000000; timeout++) {
+ if (callin_flag)
break;
udelay(100);
}
- if(callin_flag) {
+ if (callin_flag) {
__cpu_number_map[i] = cpucount;
__cpu_logical_map[cpucount] = i;
prom_cpu_nodes[i] = linux_cpus[no].prom_node;
prom_printf("FAILED\n");
}
}
- if(!callin_flag) {
+ if (!callin_flag) {
cpu_present_map &= ~(1UL << i);
__cpu_number_map[i] = -1;
}
}
cpu_new_task = NULL;
- if(cpucount == 0) {
+ if (cpucount == 0) {
printk("Error: only one processor found.\n");
cpu_present_map = (1UL << smp_processor_id());
} else {
unsigned long bogosum = 0;
- for(i = 0; i < NR_CPUS; i++) {
- if(cpu_present_map & (1UL << i))
+ for (i = 0; i < NR_CPUS; i++) {
+ if (cpu_present_map & (1UL << i))
bogosum += cpu_data[i].udelay_val;
}
printk("Total of %d processors activated (%lu.%02lu BogoMIPS).\n",
membar("#StoreStore | #StoreLoad");
}
-/* #define XCALL_DEBUG */
-
-static inline void xcall_deliver(u64 data0, u64 data1, u64 data2, u64 pstate, unsigned long cpu)
+static void spitfire_xcall_helper(u64 data0, u64 data1, u64 data2, u64 pstate, unsigned long cpu)
{
u64 result, target;
int stuck, tmp;
}
target = (cpu << 14) | 0x70;
-#ifdef XCALL_DEBUG
- printk("CPU[%d]: xcall(data[%016lx:%016lx:%016lx],tgt[%016lx])\n",
- smp_processor_id(), data0, data1, data2, target);
-#endif
again:
/* Ok, this is the real Spitfire Errata #54.
* One must read back from a UDB internal register
ldxa [%%g1] 0x7f, %%g0
membar #Sync"
: "=r" (tmp)
- : "r" (pstate), "i" (PSTATE_IE), "i" (ASI_UDB_INTR_W),
+ : "r" (pstate), "i" (PSTATE_IE), "i" (ASI_INTR_W),
"r" (data0), "r" (data1), "r" (data2), "r" (target), "r" (0x10), "0" (tmp)
: "g1");
__asm__ __volatile__("ldxa [%%g0] %1, %0"
: "=r" (result)
: "i" (ASI_INTR_DISPATCH_STAT));
- if(result == 0) {
+ if (result == 0) {
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
: : "r" (pstate));
return;
}
stuck -= 1;
- if(stuck == 0)
+ if (stuck == 0)
break;
- } while(result & 0x1);
+ } while (result & 0x1);
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
: : "r" (pstate));
- if(stuck == 0) {
-#ifdef XCALL_DEBUG
+ if (stuck == 0) {
printk("CPU[%d]: mondo stuckage result[%016lx]\n",
smp_processor_id(), result);
-#endif
} else {
-#ifdef XCALL_DEBUG
- printk("CPU[%d]: Penguin %d NACK's master.\n", smp_processor_id(), cpu);
-#endif
udelay(2);
goto again;
}
}
-void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
+static __inline__ void spitfire_xcall_deliver(u64 data0, u64 data1, u64 data2, unsigned long mask)
{
- if(smp_processors_ready) {
- unsigned long mask = (cpu_present_map & ~(1UL<<smp_processor_id()));
- u64 pstate, data0 = (((u64)ctx)<<32 | (((u64)func) & 0xffffffff));
+ int ncpus = smp_num_cpus - 1;
+ int i;
+ u64 pstate;
+
+ __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
+ for (i = 0; (i < NR_CPUS) && ncpus; i++) {
+ if (mask & (1UL << i)) {
+ spitfire_xcall_helper(data0, data1, data2, pstate, i);
+ ncpus--;
+ }
+ }
+}
+
+/* Cheetah now allows us to send the whole 64 bytes of data in the
+ * interrupt packet, but we have no use for that.  However, we do take
+ * advantage of the new pipelining feature (i.e. dispatch to multiple
+ * cpus simultaneously).
+ */
+#if NR_CPUS > 32
+#error Fixup cheetah_xcall_deliver Dave...
+#endif
+static void cheetah_xcall_deliver(u64 data0, u64 data1, u64 data2, unsigned long mask)
+{
+ u64 pstate;
+ int nack_busy_id;
+
+ if (!mask)
+ return;
+
+ __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
+
+retry:
+ __asm__ __volatile__("wrpr %0, %1, %%pstate\n\t"
+ : : "r" (pstate), "i" (PSTATE_IE));
+
+ /* Setup the dispatch data registers. */
+ __asm__ __volatile__("stxa %0, [%3] %6\n\t"
+ "membar #Sync\n\t"
+ "stxa %1, [%4] %6\n\t"
+ "membar #Sync\n\t"
+ "stxa %2, [%5] %6\n\t"
+ "membar #Sync\n\t"
+ : /* no outputs */
+ : "r" (data0), "r" (data1), "r" (data2),
+ "r" (0x40), "r" (0x50), "r" (0x60),
+ "i" (ASI_INTR_W));
+
+ nack_busy_id = 0;
+ {
int i, ncpus = smp_num_cpus - 1;
- __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
- for(i = 0; i < NR_CPUS; i++) {
- if(mask & (1UL << i)) {
- xcall_deliver(data0, data1, data2, pstate, i);
+ for (i = 0; (i < NR_CPUS) && ncpus; i++) {
+ if (mask & (1UL << i)) {
+ u64 target = (i << 14) | 0x70;
+
+ target |= (nack_busy_id++ << 24);
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync\n\t"
+ : /* no outputs */
+ : "r" (target), "i" (ASI_INTR_W));
ncpus--;
}
- if (!ncpus) break;
}
+ }
+
+ /* Now, poll for completion. */
+ {
+ u64 dispatch_stat;
+ long stuck;
+
+ stuck = 100000 * nack_busy_id;
+ do {
+ __asm__ __volatile__("ldxa [%%g0] %1, %0"
+ : "=r" (dispatch_stat)
+ : "i" (ASI_INTR_DISPATCH_STAT));
+ if (dispatch_stat == 0UL) {
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : : "r" (pstate));
+ return;
+ }
+ if (!--stuck)
+ break;
+ } while (dispatch_stat & 0x5555555555555555UL);
+
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : : "r" (pstate));
+
+ if ((stuck & ~(0x5555555555555555UL)) == 0) {
+ /* Busy bits will not clear, continue instead
+ * of freezing up on this cpu.
+ */
+ printk("CPU[%d]: mondo stuckage result[%016lx]\n",
+ smp_processor_id(), dispatch_stat);
+ } else {
+ int i, this_busy_nack = 0;
+
+			/* Delay a short time, scaled by the number of
+			 * dispatches, with interrupts enabled to prevent
+			 * deadlock.
+			 */
+ udelay(2 * nack_busy_id);
+
+ /* Clear out the mask bits for cpus which did not
+ * NACK us.
+ */
+ for (i = 0; i < NR_CPUS; i++) {
+ if (mask & (1UL << i)) {
+ if ((dispatch_stat & (0x2 << this_busy_nack)) == 0)
+ mask &= ~(1UL << i);
+ this_busy_nack += 2;
+ }
+ }
+
+ goto retry;
+ }
+ }
+}
+
+void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
+{
+ if (smp_processors_ready) {
+ unsigned long mask = (cpu_present_map & ~(1UL<<smp_processor_id()));
+ u64 data0 = (((u64)ctx)<<32 | (((u64)func) & 0xffffffff));
+
+ if (tlb_type == spitfire)
+ spitfire_xcall_deliver(data0, data1, data2, mask);
+ else
+ cheetah_xcall_deliver(data0, data1, data2, mask);
+
/* NOTE: Caller runs local copy on master. */
}
}
void smp_receive_signal(int cpu)
{
- if(smp_processors_ready &&
- (cpu_present_map & (1UL<<cpu)) != 0) {
- u64 pstate, data0 = (((u64)&xcall_receive_signal) & 0xffffffff);
- __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
- xcall_deliver(data0, 0, 0, pstate, cpu);
+ if (smp_processors_ready) {
+ unsigned long mask = 1UL << cpu;
+
+ if ((cpu_present_map & mask) != 0) {
+ u64 data0 = (((u64)&xcall_receive_signal) & 0xffffffff);
+
+ if (tlb_type == spitfire)
+ spitfire_xcall_deliver(data0, 0, 0, mask);
+ else
+ cheetah_xcall_deliver(data0, 0, 0, mask);
+ }
}
}
int result = __atomic_add(1, &smp_capture_depth);
membar("#StoreStore | #LoadStore");
- if(result == 1) {
+ if (result == 1) {
int ncpus = smp_num_cpus;
#ifdef CAPTURE_DEBUG
membar("#StoreStore | #LoadStore");
atomic_inc(&smp_capture_registry);
smp_cross_call(&xcall_capture, 0, 0, 0);
- while(atomic_read(&smp_capture_registry) != ncpus)
+ while (atomic_read(&smp_capture_registry) != ncpus)
membar("#LoadLoad");
#ifdef CAPTURE_DEBUG
printk("done\n");
void smp_release(void)
{
- if(smp_processors_ready) {
- if(atomic_dec_and_test(&smp_capture_depth)) {
+ if (smp_processors_ready) {
+ if (atomic_dec_and_test(&smp_capture_depth)) {
#ifdef CAPTURE_DEBUG
printk("CPU[%d]: Giving pardon to imprisoned penguins\n",
smp_processor_id());
prom_world(1);
atomic_inc(&smp_capture_registry);
membar("#StoreLoad | #StoreStore");
- while(penguins_are_doing_time)
+ while (penguins_are_doing_time)
membar("#LoadLoad");
restore_alternate_globals(global_save);
atomic_dec(&smp_capture_registry);
/*
* Check for level 14 softint.
*/
- if (!(get_softint() & (1UL << 0))) {
- extern void handler_irq(int, struct pt_regs *);
+ {
+ unsigned long tick_mask;
- handler_irq(14, regs);
- return;
+ if (SPARC64_USE_STICK)
+ tick_mask = (1UL << 16);
+ else
+ tick_mask = (1UL << 0);
+
+ if (!(get_softint() & tick_mask)) {
+ extern void handler_irq(int, struct pt_regs *);
+
+ handler_irq(14, regs);
+ return;
+ }
+ clear_softint(tick_mask);
}
- clear_softint((1UL << 0));
do {
if (!user)
sparc64_do_profile(regs->tpc, regs->u_regs[UREG_RETPC]);
* that %tick is not prone to this bug, but I am not
* taking any chances.
*/
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("rd %%tick_cmpr, %0\n\t"
"ba,pt %%xcc, 1f\n\t"
" add %0, %2, %0\n\t"
"mov %1, %1"
: "=&r" (compare), "=r" (tick)
: "r" (current_tick_offset));
+ } else {
+ __asm__ __volatile__("rd %%asr25, %0\n\t"
+ "add %0, %2, %0\n\t"
+ "wr %0, 0x0, %%asr25\n\t"
+ "rd %%asr24, %1\n\t"
+ : "=&r" (compare), "=r" (tick)
+ : "r" (current_tick_offset));
+ }
/* Restore PSTATE_IE. */
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
* at the start of an I-cache line, and perform a dummy
* read back from %tick_cmpr right after writing to it. -DaveM
*/
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("
rd %%tick, %%g1
ba,pt %%xcc, 1f
: /* no outputs */
: "r" (current_tick_offset)
: "g1");
+ } else {
+ __asm__ __volatile__("
+ rd %%asr24, %%g1
+ add %%g1, %0, %%g1
+ wr %%g1, 0x0, %%asr25"
+ : /* no outputs */
+ : "r" (current_tick_offset)
+ : "g1");
+ }
/* Restore PSTATE_IE. */
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
boot_cpu_id = hard_smp_processor_id();
current_tick_offset = timer_tick_offset;
cpu_present_map = 0;
- for(i = 0; i < linux_num_cpus; i++)
+ for (i = 0; i < linux_num_cpus; i++)
cpu_present_map |= (1UL << linux_cpus[i].mid);
- for(i = 0; i < NR_CPUS; i++) {
+ for (i = 0; i < NR_CPUS; i++) {
__cpu_number_map[i] = -1;
__cpu_logical_map[i] = -1;
}
size = PAGE_ALIGN(size);
found = size;
base = (unsigned long) page_address(p);
- while(found != 0) {
+ while (found != 0) {
/* Failure. */
- if(p >= (mem_map + max_mapnr))
+ if (p >= (mem_map + max_mapnr))
return 0UL;
- if(PageReserved(p)) {
+ if (PageReserved(p)) {
found = size;
base = (unsigned long) page_address(p);
} else {
unsigned long flags;
int i;
- if((!multiplier) || (timer_tick_offset / multiplier) < 1000)
+ if ((!multiplier) || (timer_tick_offset / multiplier) < 1000)
return -EINVAL;
save_and_cli(flags);
- for(i = 0; i < NR_CPUS; i++) {
- if(cpu_present_map & (1UL << i))
+ for (i = 0; i < NR_CPUS; i++) {
+ if (cpu_present_map & (1UL << i))
prof_multiplier(i) = multiplier;
}
current_tick_offset = (timer_tick_offset / multiplier);
-/* $Id: sparc64_ksyms.c,v 1.100 2001/01/11 15:07:09 davem Exp $
+/* $Id: sparc64_ksyms.c,v 1.102 2001/03/24 09:36:01 davem Exp $
* arch/sparc64/kernel/sparc64_ksyms.c: Sparc64 specific ksyms support.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
EXPORT_SYMBOL(__flushw_user);
+EXPORT_SYMBOL(tlb_type);
+
EXPORT_SYMBOL(flush_icache_range);
EXPORT_SYMBOL(__flush_dcache_page);
/* Should really be in linux/kernel/ksyms.c */
EXPORT_SYMBOL(dump_thread);
EXPORT_SYMBOL(dump_fpu);
-EXPORT_SYMBOL(get_pmd_slow);
-EXPORT_SYMBOL(get_pte_slow);
+EXPORT_SYMBOL(pte_alloc_one);
#ifndef CONFIG_SMP
EXPORT_SYMBOL(pgt_quicklists);
#endif
-/* $Id: sys_sparc.c,v 1.48 2001/02/13 01:16:44 davem Exp $
+/* $Id: sys_sparc.c,v 1.50 2001/03/24 09:36:10 davem Exp $
* linux/arch/sparc64/kernel/sys_sparc.c
*
* This file contains various random system calls that
{
siginfo_t info;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
#ifdef DEBUG_SPARC_BREAKPOINT
printk ("TRAP: Entering kernel PC=%lx, nPC=%lx\n", regs->tpc, regs->tnpc);
#endif
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
if(++count <= 5) {
printk ("For Solaris binary emulation you need solaris module loaded\n");
show_regs (regs);
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
if(++count <= 20)
printk ("SunOS binary emulation not compiled in\n");
force_sig(SIGSEGV, current);
-/* $Id: sys_sparc32.c,v 1.173 2001/02/13 01:16:44 davem Exp $
+/* $Id: sys_sparc32.c,v 1.174 2001/03/24 09:36:10 davem Exp $
* sys_sparc32.c: Conversion between 32bit and 64bit native syscalls.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
-/* $Id: sys_sunos32.c,v 1.57 2001/02/13 01:16:44 davem Exp $
+/* $Id: sys_sunos32.c,v 1.59 2001/03/24 09:36:11 davem Exp $
* sys_sunos32.c: SunOS binary compatability layer on sparc64.
*
* Copyright (C) 1995, 1996, 1997 David S. Miller (davem@caip.rutgers.edu)
static int cnt;
regs = current->thread.kregs;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGSYS;
info.si_errno = 0;
info.si_code = __SI_FAULT|0x100;
-/* $Id: time.c,v 1.33 2001/01/11 15:07:09 davem Exp $
+/* $Id: time.c,v 1.36 2001/03/15 08:51:24 anton Exp $
* time.c: UltraSparc timer and TOD clock support.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
#include <linux/timex.h>
#include <linux/init.h>
#include <linux/ioport.h>
+#include <linux/mc146818rtc.h>
+#include <linux/delay.h>
#include <asm/oplib.h>
#include <asm/mostek.h>
extern rwlock_t xtime_lock;
spinlock_t mostek_lock = SPIN_LOCK_UNLOCKED;
+spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
unsigned long mstk48t02_regs = 0UL;
+#ifdef CONFIG_PCI
+unsigned long ds1287_regs = 0UL;
+#endif
static unsigned long mstk48t08_regs = 0UL;
static unsigned long mstk48t59_regs = 0UL;
extern int rwlock_impl_begin, rwlock_impl_end;
extern int atomic_impl_begin, atomic_impl_end;
extern int __memcpy_begin, __memcpy_end;
+ extern int __bzero_begin, __bzero_end;
extern int __bitops_begin, __bitops_end;
if ((pc >= (unsigned long) &atomic_impl_begin &&
pc < (unsigned long) &rwlock_impl_end) ||
(pc >= (unsigned long) &__memcpy_begin &&
pc < (unsigned long) &__memcpy_end) ||
+ (pc >= (unsigned long) &__bzero_begin &&
+ pc < (unsigned long) &__bzero_end) ||
(pc >= (unsigned long) &__bitops_begin &&
pc < (unsigned long) &__bitops_end))
pc = o7;
* that %tick is not prone to this bug, but I am not
* taking any chances.
*/
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("
rd %%tick_cmpr, %0
ba,pt %%xcc, 1f
mov %1, %1"
: "=&r" (timer_tick_compare), "=r" (ticks)
: "r" (timer_tick_offset));
+ } else {
+ __asm__ __volatile__("
+ rd %%asr25, %0
+ add %0, %2, %0
+ wr %0, 0, %%asr25
+ rd %%asr24, %1"
+ : "=&r" (timer_tick_compare), "=r" (ticks)
+ : "r" (timer_tick_offset));
+ }
/* Restore PSTATE_IE. */
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
/*
* Only keep timer_tick_offset uptodate, but don't set TICK_CMPR.
*/
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("
rd %%tick_cmpr, %0
add %0, %1, %0"
: "=&r" (timer_tick_compare)
: "r" (timer_tick_offset));
+ } else {
+ __asm__ __volatile__("
+ rd %%asr25, %0
+ add %0, %1, %0"
+ : "=&r" (timer_tick_compare)
+ : "r" (timer_tick_offset));
+ }
timer_check_rtc();
return (data1 == data2); /* Was the write blocked? */
}
+#ifndef BCD_TO_BIN
+#define BCD_TO_BIN(val) (((val)&15) + ((val)>>4)*10)
+#endif
+
+#ifndef BIN_TO_BCD
+#define BIN_TO_BCD(val) ((((val)/10)<<4) + (val)%10)
+#endif
/* Probe for the real time clock chip. */
static void __init set_system_time(void)
{
unsigned int year, mon, day, hour, min, sec;
unsigned long mregs = mstk48t02_regs;
+#ifdef CONFIG_PCI
+ unsigned long dregs = ds1287_regs;
+#else
+ unsigned long dregs = 0UL;
+#endif
u8 tmp;
do_get_fast_time = do_gettimeofday;
- if(!mregs) {
+ if (!mregs && !dregs) {
prom_printf("Something wrong, clock regs not mapped yet.\n");
prom_halt();
}
- spin_lock_irq(&mostek_lock);
+ if (mregs) {
+ spin_lock_irq(&mostek_lock);
- tmp = mostek_read(mregs + MOSTEK_CREG);
- tmp |= MSTK_CREG_READ;
- mostek_write(mregs + MOSTEK_CREG, tmp);
+ /* Traditional Mostek chip. */
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp |= MSTK_CREG_READ;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
+
+ sec = MSTK_REG_SEC(mregs);
+ min = MSTK_REG_MIN(mregs);
+ hour = MSTK_REG_HOUR(mregs);
+ day = MSTK_REG_DOM(mregs);
+ mon = MSTK_REG_MONTH(mregs);
+ year = MSTK_CVT_YEAR( MSTK_REG_YEAR(mregs) );
+ } else {
+ int i;
+
+ /* Dallas 12887 RTC chip. */
+
+ /* Stolen from arch/i386/kernel/time.c, see there for
+ * credits and descriptive comments.
+ */
+ for (i = 0; i < 1000000; i++) {
+ if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP)
+ break;
+ udelay(10);
+ }
+ for (i = 0; i < 1000000; i++) {
+ if (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
+ break;
+ udelay(10);
+ }
+ do {
+ sec = CMOS_READ(RTC_SECONDS);
+ min = CMOS_READ(RTC_MINUTES);
+ hour = CMOS_READ(RTC_HOURS);
+ day = CMOS_READ(RTC_DAY_OF_MONTH);
+ mon = CMOS_READ(RTC_MONTH);
+ year = CMOS_READ(RTC_YEAR);
+ } while (sec != CMOS_READ(RTC_SECONDS));
+ if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(sec);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(year);
+ }
+ if ((year += 1900) < 1970)
+ year += 100;
+ }
- sec = MSTK_REG_SEC(mregs);
- min = MSTK_REG_MIN(mregs);
- hour = MSTK_REG_HOUR(mregs);
- day = MSTK_REG_DOM(mregs);
- mon = MSTK_REG_MONTH(mregs);
- year = MSTK_CVT_YEAR( MSTK_REG_YEAR(mregs) );
xtime.tv_sec = mktime(year, mon, day, hour, min, sec);
xtime.tv_usec = 0;
- tmp = mostek_read(mregs + MOSTEK_CREG);
- tmp &= ~MSTK_CREG_READ;
- mostek_write(mregs + MOSTEK_CREG, tmp);
+ if (mregs) {
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp &= ~MSTK_CREG_READ;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
- spin_unlock_irq(&mostek_lock);
+ spin_unlock_irq(&mostek_lock);
+ }
}
void __init clock_probe(void)
busnd = sbus_root->prom_node;
}
- if(busnd == -1) {
+ if (busnd == -1) {
prom_printf("clock_probe: problem, cannot find bus to search.\n");
prom_halt();
}
node = prom_getchild(busnd);
- while(1) {
+ while (1) {
if (!node)
model[0] = 0;
else
prom_getstring(node, "model", model, sizeof(model));
- if(strcmp(model, "mk48t02") &&
- strcmp(model, "mk48t08") &&
- strcmp(model, "mk48t59")) {
+ if (strcmp(model, "mk48t02") &&
+ strcmp(model, "mk48t08") &&
+ strcmp(model, "mk48t59") &&
+ strcmp(model, "ds1287")) {
if (node)
node = prom_getsibling(node);
#ifdef CONFIG_PCI
}
}
#endif
- if(node == 0) {
+ if (node == 0) {
prom_printf("clock_probe: Cannot find timer chip\n");
prom_halt();
}
prom_halt();
}
- mstk48t59_regs = edev->resource[0].start;
- mstk48t02_regs = mstk48t59_regs + MOSTEK_48T59_48T02;
+ if (!strcmp(model, "ds1287")) {
+ ds1287_regs = edev->resource[0].start;
+ } else {
+ mstk48t59_regs = edev->resource[0].start;
+ mstk48t02_regs = mstk48t59_regs + MOSTEK_48T59_48T02;
+ }
break;
}
#endif
break;
}
- /* Report a low battery voltage condition. */
- if (has_low_battery())
- prom_printf("NVRAM: Low battery voltage!\n");
+ if (mstk48t02_regs != 0UL) {
+ /* Report a low battery voltage condition. */
+ if (has_low_battery())
+ prom_printf("NVRAM: Low battery voltage!\n");
- /* Kick start the clock if it is completely stopped. */
- if (mostek_read(mstk48t02_regs + MOSTEK_SEC) & MSTK_STOP)
- kick_start_clock();
+ /* Kick start the clock if it is completely stopped. */
+ if (mostek_read(mstk48t02_regs + MOSTEK_SEC) & MSTK_STOP)
+ kick_start_clock();
+ }
set_system_time();
__restore_flags(flags);
}
-#ifndef BCD_TO_BIN
-#define BCD_TO_BIN(val) (((val)&15) + ((val)>>4)*10)
-#endif
-
-#ifndef BIN_TO_BCD
-#define BIN_TO_BCD(val) ((((val)/10)<<4) + (val)%10)
-#endif
-
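The local BCD helpers removed above are presumably now supplied by a shared RTC header (the DS1287 path below still uses them). For reference, packed-BCD conversion is exactly what the removed macros did:

```c
/* Packed BCD <-> binary, same definitions as the macros removed above:
 * each decimal digit occupies one nibble, so 0x42 encodes the value 42. */
#define BCD_TO_BIN(val)	(((val) & 15) + ((val) >> 4) * 10)
#define BIN_TO_BCD(val)	((((val) / 10) << 4) + (val) % 10)
```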
extern void init_timers(void (*func)(int, void *, struct pt_regs *),
unsigned long *);
{
unsigned long ticks;
+ if (!SPARC64_USE_STICK) {
__asm__ __volatile__("
rd %%tick, %%g1
add %1, %%g1, %0
: "=r" (ticks)
: "r" (timer_tick_offset), "r" (timer_tick_compare)
: "g1", "g2");
+ } else {
+ __asm__ __volatile__("rd %%asr24, %%g1\n\t"
+ "add %1, %%g1, %0\n\t"
+ "sub %0, %2, %0\n\t"
+ : "=&r" (ticks)
+ : "r" (timer_tick_offset), "r" (timer_tick_compare)
+ : "g1");
+ }
return (ticks * timer_ticks_per_usec_quotient) >> 32UL;
}
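The final conversion multiplies the tick delta by timer_ticks_per_usec_quotient and shifts right by 32: the quotient is a 32.32 fixed-point microseconds-per-tick factor, trading a per-call divide for a multiply. A sketch of that arithmetic (the 128 MHz tick rate is an assumption chosen so the values come out exact):

```c
#include <stdint.h>

/* Sketch of the (ticks * quotient) >> 32 conversion above: quotient is
 * microseconds-per-tick in 32.32 fixed point.  The 128 MHz tick rate
 * in the tests is an assumption for illustration. */
static uint32_t usec_quotient(uint64_t ticks_per_sec)
{
	return (uint32_t)(((uint64_t)1000000 << 32) / ticks_per_sec);
}

static uint64_t ticks_to_usec(uint64_t ticks, uint32_t quotient)
{
	return (ticks * (uint64_t)quotient) >> 32;
}
```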
static int set_rtc_mmss(unsigned long nowtime)
{
- int real_seconds, real_minutes, mostek_minutes;
- unsigned long regs = mstk48t02_regs;
+ int real_seconds, real_minutes, chip_minutes;
+ unsigned long mregs = mstk48t02_regs;
+#ifdef CONFIG_PCI
+ unsigned long dregs = ds1287_regs;
+#else
+ unsigned long dregs = 0UL;
+#endif
unsigned long flags;
u8 tmp;
* Not having a register set can lead to trouble.
	 * Also Starfire doesn't have a TOD clock.
*/
- if (!regs)
+ if (!mregs && !dregs)
return -1;
- spin_lock_irqsave(&mostek_lock, flags);
+ if (mregs) {
+ spin_lock_irqsave(&mostek_lock, flags);
- /* Read the current RTC minutes. */
- tmp = mostek_read(regs + MOSTEK_CREG);
- tmp |= MSTK_CREG_READ;
- mostek_write(regs + MOSTEK_CREG, tmp);
+ /* Read the current RTC minutes. */
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp |= MSTK_CREG_READ;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
- mostek_minutes = MSTK_REG_MIN(regs);
+ chip_minutes = MSTK_REG_MIN(mregs);
- tmp = mostek_read(regs + MOSTEK_CREG);
- tmp &= ~MSTK_CREG_READ;
- mostek_write(regs + MOSTEK_CREG, tmp);
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp &= ~MSTK_CREG_READ;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
- /*
- * since we're only adjusting minutes and seconds,
- * don't interfere with hour overflow. This avoids
- * messing with unknown time zones but requires your
- * RTC not to be off by more than 15 minutes
- */
- real_seconds = nowtime % 60;
- real_minutes = nowtime / 60;
- if (((abs(real_minutes - mostek_minutes) + 15)/30) & 1)
- real_minutes += 30; /* correct for half hour time zone */
- real_minutes %= 60;
+ /*
+ * since we're only adjusting minutes and seconds,
+ * don't interfere with hour overflow. This avoids
+ * messing with unknown time zones but requires your
+ * RTC not to be off by more than 15 minutes
+ */
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - chip_minutes) + 15)/30) & 1)
+ real_minutes += 30; /* correct for half hour time zone */
+ real_minutes %= 60;
- if (abs(real_minutes - mostek_minutes) < 30) {
- tmp = mostek_read(regs + MOSTEK_CREG);
- tmp |= MSTK_CREG_WRITE;
- mostek_write(regs + MOSTEK_CREG, tmp);
+ if (abs(real_minutes - chip_minutes) < 30) {
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp |= MSTK_CREG_WRITE;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
- MSTK_SET_REG_SEC(regs,real_seconds);
- MSTK_SET_REG_MIN(regs,real_minutes);
+ MSTK_SET_REG_SEC(mregs,real_seconds);
+ MSTK_SET_REG_MIN(mregs,real_minutes);
- tmp = mostek_read(regs + MOSTEK_CREG);
- tmp &= ~MSTK_CREG_WRITE;
- mostek_write(regs + MOSTEK_CREG, tmp);
+ tmp = mostek_read(mregs + MOSTEK_CREG);
+ tmp &= ~MSTK_CREG_WRITE;
+ mostek_write(mregs + MOSTEK_CREG, tmp);
+
+ spin_unlock_irqrestore(&mostek_lock, flags);
- spin_unlock_irqrestore(&mostek_lock, flags);
+ return 0;
+ } else {
+ spin_unlock_irqrestore(&mostek_lock, flags);
- return 0;
+ return -1;
+ }
} else {
- spin_unlock_irqrestore(&mostek_lock, flags);
+ int retval = 0;
+ unsigned char save_control, save_freq_select;
- return -1;
+ /* Stolen from arch/i386/kernel/time.c, see there for
+ * credits and descriptive comments.
+ */
+ spin_lock_irqsave(&rtc_lock, flags);
+ save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
+ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT); /* stop and reset prescaler */
+ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+
+ chip_minutes = CMOS_READ(RTC_MINUTES);
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ BCD_TO_BIN(chip_minutes);
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - chip_minutes) + 15)/30) & 1)
+ real_minutes += 30;
+ real_minutes %= 60;
+
+ if (abs(real_minutes - chip_minutes) < 30) {
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BIN_TO_BCD(real_seconds);
+ BIN_TO_BCD(real_minutes);
+ }
+ CMOS_WRITE(real_seconds,RTC_SECONDS);
+ CMOS_WRITE(real_minutes,RTC_MINUTES);
+ } else {
+ printk(KERN_WARNING
+ "set_rtc_mmss: can't update from %d to %d\n",
+ chip_minutes, real_minutes);
+ retval = -1;
+ }
+
+ CMOS_WRITE(save_control, RTC_CONTROL);
+ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
+ return retval;
}
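The minutes-only update rule (now shared by the Mostek and DS1287 paths) can be summarized as a pure function; this sketch mirrors the arithmetic above: correct for half-hour time zones, and refuse the write if the chip is still ~30 minutes or more off, since only minutes and seconds are ever touched:

```c
#include <stdlib.h>

/* Sketch of the set_rtc_mmss() minute adjustment: returns 0 and fills
 * in the values to write, or -1 if the chip is too far off to fix
 * without touching the hour. */
static int rtc_adjust_minutes(long nowtime, int chip_minutes,
			      int *real_seconds, int *real_minutes)
{
	*real_seconds = nowtime % 60;
	*real_minutes = nowtime / 60;
	if (((abs(*real_minutes - chip_minutes) + 15) / 30) & 1)
		*real_minutes += 30;	/* correct for half hour time zone */
	*real_minutes %= 60;
	return abs(*real_minutes - chip_minutes) < 30 ? 0 : -1;
}
```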
}
-/* $Id: trampoline.S,v 1.14 2001/03/04 18:31:00 davem Exp $
+/* $Id: trampoline.S,v 1.19 2001/03/22 09:54:26 davem Exp $
* trampoline.S: Jump start slave processors on sparc64.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
nop
cheetah_startup:
- mov DCR_BPE | DCR_RPE | DCR_SI | DCR_MS, %g1
+ mov DCR_BPE | DCR_RPE | DCR_SI | DCR_IFPOE | DCR_MS, %g1
wr %g1, %asr18
sethi %uhi(DCU_ME | DCU_RE | DCU_PE | DCU_HPE | DCU_SPE | DCU_SL | DCU_WE), %g5
or %g5, %ulo(DCU_ME | DCU_RE | DCU_PE | DCU_HPE | DCU_SPE | DCU_SL | DCU_WE), %g5
sllx %g5, 32, %g5
- ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
- or %g1, %g5, %g1
+ or %g5, DCU_DM | DCU_IM | DCU_DC | DCU_IC, %g5
+ ldxa [%g0] ASI_DCU_CONTROL_REG, %g3
+ or %g5, %g3, %g5
stxa %g5, [%g0] ASI_DCU_CONTROL_REG
membar #Sync
+ /* Disable STICK_INT interrupts. */
+ sethi %hi(0x80000000), %g5
+ sllx %g5, 32, %g5
+ wr %g5, %asr25
+
ba,pt %xcc, startup_continue
nop
mov %o2, %g6
wrpr %o1, PSTATE_MG, %pstate
-#define KERN_HIGHBITS ((_PAGE_VALID | _PAGE_SZ4MB) ^ 0xfffff80000000000)
+#define KERN_HIGHBITS ((_PAGE_VALID|_PAGE_SZ4MB)^0xfffff80000000000)
#define KERN_LOWBITS (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_W)
-#define VPTE_BASE_CHEETAH 0xffe0000000000000
#define VPTE_BASE_SPITFIRE 0xfffffffe00000000
+#if 1
+#define VPTE_BASE_CHEETAH VPTE_BASE_SPITFIRE
+#else
+#define VPTE_BASE_CHEETAH 0xffe0000000000000
+#endif
mov TSB_REG, %g1
stxa %g0, [%g1] ASI_DMMU
sethi %uhi(VPTE_BASE_CHEETAH), %g3
or %g3, %ulo(VPTE_BASE_CHEETAH), %g3
ba,pt %xcc, 2f
- sllx %g3, 32, %g3
+ sllx %g3, 32, %g3
1:
sethi %uhi(VPTE_BASE_SPITFIRE), %g3
or %g3, %ulo(VPTE_BASE_SPITFIRE), %g3
clr %g7
#undef KERN_HIGHBITS
#undef KERN_LOWBITS
-#undef VPTE_BASE
+#undef VPTE_BASE_SPITFIRE
+#undef VPTE_BASE_CHEETAH
/* Setup interrupt globals, we are always SMP. */
wrpr %o1, PSTATE_IG, %pstate
-/* $Id: traps.c,v 1.70 2001/02/09 05:46:44 davem Exp $
+/* $Id: traps.c,v 1.73 2001/03/22 07:26:03 davem Exp $
* arch/sparc64/kernel/traps.c
*
* Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu)
#include <asm/uaccess.h>
#include <asm/fpumacro.h>
#include <asm/lsu.h>
+#include <asm/dcu.h>
#include <asm/psrcompat.h>
#ifdef CONFIG_KMOD
#include <linux/kmod.h>
}
if (regs->tstate & TSTATE_PRIV)
die_if_kernel ("Kernel bad trap", regs);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
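This TPC/TNPC truncation for SPARC_FLAG_32BIT (compat) tasks recurs throughout these trap handlers; the operation itself is simply:

```c
/* Truncate trap PCs to 32 bits for a 32-bit compat task, as the
 * recurring SPARC_FLAG_32BIT hunks in these trap handlers do.
 * Standalone sketch. */
static void narrow_trap_pcs(unsigned long *tpc, unsigned long *tnpc,
			    int task_is_32bit)
{
	if (task_is_32bit) {
		*tpc &= 0xffffffff;
		*tnpc &= 0xffffffff;
	}
}
```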
info.si_signo = SIGILL;
info.si_errno = 0;
info.si_code = ILL_ILLTRP;
#endif
die_if_kernel("Iax", regs);
}
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGSEGV;
info.si_errno = 0;
info.si_code = SEGV_MAPERR;
#ifdef CONFIG_PCI
/* This is really pathetic... */
-/* #define DEBUG_PCI_POKES */
extern volatile int pci_poke_in_progress;
extern volatile int pci_poke_faulted;
#endif
/* When access exceptions happen, we must do this. */
-static __inline__ void clean_and_reenable_l1_caches(void)
+static void clean_and_reenable_l1_caches(void)
{
unsigned long va;
- /* Clean 'em. */
- for(va = 0; va < (PAGE_SIZE << 1); va += 32) {
- spitfire_put_icache_tag(va, 0x0);
- spitfire_put_dcache_tag(va, 0x0);
- }
+ if (tlb_type == spitfire) {
+ /* Clean 'em. */
+ for (va = 0; va < (PAGE_SIZE << 1); va += 32) {
+ spitfire_put_icache_tag(va, 0x0);
+ spitfire_put_dcache_tag(va, 0x0);
+ }
- /* Re-enable. */
- __asm__ __volatile__("flush %%g6\n\t"
- "membar #Sync\n\t"
- "stxa %0, [%%g0] %1\n\t"
- "membar #Sync"
- : /* no outputs */
- : "r" (LSU_CONTROL_IC | LSU_CONTROL_DC |
- LSU_CONTROL_IM | LSU_CONTROL_DM),
- "i" (ASI_LSU_CONTROL)
- : "memory");
+ /* Re-enable in LSU. */
+ __asm__ __volatile__("flush %%g6\n\t"
+ "membar #Sync\n\t"
+ "stxa %0, [%%g0] %1\n\t"
+ "membar #Sync"
+ : /* no outputs */
+ : "r" (LSU_CONTROL_IC | LSU_CONTROL_DC |
+ LSU_CONTROL_IM | LSU_CONTROL_DM),
+ "i" (ASI_LSU_CONTROL)
+ : "memory");
+ } else if (tlb_type == cheetah) {
+ /* Flush D-cache */
+ for (va = 0; va < (1 << 16); va += (1 << 5)) {
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
+ : /* no outputs */
+ : "r" (va), "i" (ASI_DCACHE_TAG));
+ }
+ }
}
void do_iae(struct pt_regs *regs)
void do_dae(struct pt_regs *regs)
{
#ifdef CONFIG_PCI
- if(pci_poke_in_progress) {
-#ifdef DEBUG_PCI_POKES
- prom_printf(" (POKE tpc[%016lx] tnpc[%016lx] ",
- regs->tpc, regs->tnpc);
-#endif
+ if (pci_poke_in_progress) {
+ clean_and_reenable_l1_caches();
+
pci_poke_faulted = 1;
- regs->tnpc = regs->tpc + 4;
+	/* Cheetah reports these faults with a different TPC convention,
+	 * so the TPC itself must be advanced past the faulting insn. */
+ if (tlb_type == cheetah)
+ regs->tpc += 4;
-#ifdef DEBUG_PCI_POKES
- prom_printf("PCI) ");
- /* prom_halt(); */
-#endif
- clean_and_reenable_l1_caches();
+ regs->tnpc = regs->tpc + 4;
return;
}
#endif
unsigned long fsr = current->thread.xfsr[0];
siginfo_t info;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_addr = (void *)regs->tpc;
if(regs->tstate & TSTATE_PRIV)
die_if_kernel("Penguin overflow trap from kernel mode", regs);
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGEMT;
info.si_errno = 0;
info.si_code = EMT_TAGOVF;
{
siginfo_t info;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = FPE_INTDIV;
(rw->ins[6] + STACK_BIAS);
}
instruction_dump ((unsigned int *) regs->tpc);
- } else
+ } else {
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
user_instruction_dump ((unsigned int *) regs->tpc);
+ }
#ifdef CONFIG_SMP
smp_report_regs();
#endif
{
siginfo_t info;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
info.si_signo = SIGILL;
info.si_errno = 0;
info.si_code = ILL_PRVOPC;
regs->u_regs[UREG_I0] = tstate_to_psr(regs->tstate);
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
}
void trap_init(void)
-/* $Id: ttable.S,v 1.31 2000/05/09 17:40:14 davem Exp $
+/* $Id: ttable.S,v 1.32 2001/03/23 07:56:30 davem Exp $
* ttable.S: Sparc V9 Trap Table(s) with SpitFire extensions.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
tl0_resv01e: BTRAP(0x1e) BTRAP(0x1f)
tl0_fpdis: TRAP_NOSAVE(do_fpdis)
tl0_fpieee: TRAP_SAVEFPU(do_fpieee)
-tl0_fpother: TRAP_SAVEFPU(do_fpother)
+tl0_fpother: TRAP_NOSAVE(do_fpother_check_fitos)
tl0_tof: TRAP(do_tof)
tl0_cwin: CLEAN_WINDOW
tl0_div0: TRAP(do_div0)
-/* $Id: unaligned.c,v 1.20 2000/04/29 08:05:21 anton Exp $
+/* $Id: unaligned.c,v 1.21 2001/03/21 11:46:20 davem Exp $
* unaligned.c: Unaligned load/store trap handling with special
* cases for the kernel to do them more quickly.
*
{
regs->tpc = regs->tnpc;
regs->tnpc += 4;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ regs->tnpc &= 0xffffffff;
+ }
}
static inline int floating_point_load_or_store_p(unsigned int insn)
-/* $Id: U3copy_in_user.S,v 1.3 2000/11/01 09:29:19 davem Exp $
+/* $Id: U3copy_in_user.S,v 1.4 2001/03/21 05:58:47 davem Exp $
 * U3copy_in_user.S: UltraSparc-III optimized copy within userspace.
*
* Copyright (C) 1999, 2000 David S. Miller (davem@redhat.com)
.align 64
U3copy_in_user_begin:
- prefetch [%o1 + 0x000], #one_read ! MS Group1
- prefetch [%o1 + 0x040], #one_read ! MS Group2
+ prefetcha [%o1 + 0x000] %asi, #one_read ! MS Group1
+ prefetcha [%o1 + 0x040] %asi, #one_read ! MS Group2
andn %o2, (0x40 - 1), %o4 ! A0
- prefetch [%o1 + 0x080], #one_read ! MS Group3
+ prefetcha [%o1 + 0x080] %asi, #one_read ! MS Group3
cmp %o4, 0x140 ! A0
- prefetch [%o1 + 0x0c0], #one_read ! MS Group4
+ prefetcha [%o1 + 0x0c0] %asi, #one_read ! MS Group4
EX(ldda [%o1 + 0x000] %asi, %f0, add %o2, %g0) ! MS Group5 (%f0 results at G8)
bge,a,pt %icc, 1f ! BR
- prefetch [%o1 + 0x100], #one_read ! MS Group6
+ prefetcha [%o1 + 0x100] %asi, #one_read ! MS Group6
1: EX(ldda [%o1 + 0x008] %asi, %f2, add %o2, %g0) ! AX (%f2 results at G9)
cmp %o4, 0x180 ! A1
bge,a,pt %icc, 1f ! BR
- prefetch [%o1 + 0x140], #one_read ! MS Group7
+ prefetcha [%o1 + 0x140] %asi, #one_read ! MS Group7
1: EX(ldda [%o1 + 0x010] %asi, %f4, add %o2, %g0) ! AX (%f4 results at G10)
cmp %o4, 0x1c0 ! A1
bge,a,pt %icc, 1f ! BR
- prefetch [%o1 + 0x180], #one_read ! MS Group8
+ prefetcha [%o1 + 0x180] %asi, #one_read ! MS Group8
1: faligndata %f0, %f2, %f16 ! FGA Group9 (%f16 at G12)
EX(ldda [%o1 + 0x018] %asi, %f6, add %o2, %g0) ! AX (%f6 results at G12)
faligndata %f2, %f4, %f18 ! FGA Group10 (%f18 results at G13)
faligndata %f8, %f10, %f24 ! FGA Group16 (%f24 results at G19)
EXBLK1(ldda [%o1 + 0x040] %asi, %f0) ! AX (%f0 results at G19)
- prefetch [%o1 + 0x180], #one_read ! MS
+ prefetcha [%o1 + 0x180] %asi, #one_read ! MS
faligndata %f10, %f12, %f26 ! FGA Group17 (%f26 results at G20)
subcc %o4, 0x40, %o4 ! A0
add %o1, 0x40, %o1 ! A1
-/* $Id: VISbzero.S,v 1.10 1999/05/25 16:52:56 jj Exp $
+/* $Id: VISbzero.S,v 1.11 2001/03/15 08:51:24 anton Exp $
* VISbzero.S: High speed clear operations utilizing the UltraSparc
* Visual Instruction Set.
*
.text
.align 32
#ifdef __KERNEL__
+ .globl __bzero_begin
+__bzero_begin:
.globl __bzero, __bzero_noasi
__bzero_noasi:
rd %asi, %g5
ba,pt %xcc, VISbzerofixup_ret0
sub %o1, %g2, %o0
#endif
+ .globl __bzero_end
+__bzero_end:
-/* $Id: VISsave.S,v 1.4 1999/07/30 09:35:37 davem Exp $
+/* $Id: VISsave.S,v 1.5 2001/03/08 22:08:51 davem Exp $
* VISsave.S: Code for saving FPU register state for
* VIS routines. One should not call this directly,
* but use macros provided in <asm/visasm.h>.
clr %g1
ba,pt %xcc, 3f
- stb %g3, [%g6 + AOFF_task_thread + AOFF_thread_gsr]
+ stx %g3, [%g6 + AOFF_task_thread + AOFF_thread_gsr]
2: add %g6, %g1, %g3
cmp %o5, FPRS_DU
be,pn %icc, 6f
sll %g1, 3, %g1
stb %o5, [%g3 + AOFF_task_thread + AOFF_thread_fpsaved]
rd %gsr, %g2
- stb %g2, [%g3 + AOFF_task_thread + AOFF_thread_gsr]
+ add %g6, %g1, %g3
+ stx %g2, [%g3 + AOFF_task_thread + AOFF_thread_gsr]
add %g6, %g1, %g2
stx %fsr, [%g2 + AOFF_task_thread + AOFF_thread_xfsr]
stb %g2, [%g3 + AOFF_task_thread + AOFF_thread_fpsaved]
rd %gsr, %g2
- stb %g2, [%g3 + AOFF_task_thread + AOFF_thread_gsr]
+ add %g6, %g1, %g3
+ stx %g2, [%g3 + AOFF_task_thread + AOFF_thread_gsr]
add %g6, %g1, %g2
stx %fsr, [%g2 + AOFF_task_thread + AOFF_thread_xfsr]
sll %g1, 5, %g1
-/* $Id: blockops.S,v 1.27 2000/07/14 01:12:49 davem Exp $
+/* $Id: blockops.S,v 1.30 2001/03/22 13:10:10 davem Exp $
* blockops.S: UltraSparc block zero optimized routines.
*
* Copyright (C) 1996, 1998, 1999, 2000 David S. Miller (davem@redhat.com)
or %g2, %g3, %g2
add %o0, %o3, %o0
add %o0, %o1, %o1
+#define FIX_INSN_1 0x96102068 /* mov (13 << 3), %o3 */
+cheetah_patch_1:
mov TLBTEMP_ENT1, %o3
rdpr %pstate, %g3
wrpr %g3, PSTATE_IE, %pstate
/* Spitfire Errata #32 workaround */
mov 0x8, %o4
stxa %g0, [%o4] ASI_DMMU
- sethi %hi(empty_zero_page), %o4
- flush %o4
+ membar #Sync
ldxa [%o3] ASI_DTLB_TAG_READ, %o4
/* Spitfire Errata #32 workaround */
mov 0x8, %o5
stxa %g0, [%o5] ASI_DMMU
- sethi %hi(empty_zero_page), %o5
- flush %o5
+ membar #Sync
ldxa [%o3] ASI_DTLB_DATA_ACCESS, %o5
stxa %o0, [%o2] ASI_DMMU
/* Spitfire Errata #32 workaround */
mov 0x8, %g5
stxa %g0, [%g5] ASI_DMMU
- sethi %hi(empty_zero_page), %g5
- flush %g5
+ membar #Sync
ldxa [%o3] ASI_DTLB_TAG_READ, %g5
/* Spitfire Errata #32 workaround */
mov 0x8, %g7
stxa %g0, [%g7] ASI_DMMU
- sethi %hi(empty_zero_page), %g7
- flush %g7
+ membar #Sync
ldxa [%o3] ASI_DTLB_DATA_ACCESS, %g7
stxa %o1, [%o2] ASI_DMMU
bne,pn %xcc, copy_page_using_blkcommit
nop
+ rdpr %ver, %g3
+ sllx %g3, 16, %g3
+ srlx %g3, 32 + 16, %g3
+ cmp %g3, 0x14
+ bne,pt %icc, spitfire_copy_user_page
+ nop
+
+cheetah_copy_user_page:
+ mov 121, %o2 ! A0 Group
+ prefetch [%o1 + 0x000], #one_read ! MS
+ prefetch [%o1 + 0x040], #one_read ! MS Group
+ prefetch [%o1 + 0x080], #one_read ! MS Group
+ prefetch [%o1 + 0x0c0], #one_read ! MS Group
+ ldd [%o1 + 0x000], %f0 ! MS Group
+ prefetch [%o1 + 0x100], #one_read ! MS Group
+ ldd [%o1 + 0x008], %f2 ! AX
+ prefetch [%o1 + 0x140], #one_read ! MS Group
+ ldd [%o1 + 0x010], %f4 ! AX
+ prefetch [%o1 + 0x180], #one_read ! MS Group
+ fmovd %f0, %f32 ! FGA Group
+ ldd [%o1 + 0x018], %f6 ! AX
+ fmovd %f2, %f34 ! FGA Group
+ ldd [%o1 + 0x020], %f8 ! MS
+ fmovd %f4, %f36 ! FGA Group
+ ldd [%o1 + 0x028], %f10 ! AX
+ membar #StoreStore ! MS
+ fmovd %f6, %f38 ! FGA Group
+ ldd [%o1 + 0x030], %f12 ! MS
+ fmovd %f8, %f40 ! FGA Group
+ ldd [%o1 + 0x038], %f14 ! AX
+ fmovd %f10, %f42 ! FGA Group
+ ldd [%o1 + 0x040], %f16 ! MS
+1: ldd [%o1 + 0x048], %f2 ! AX (Group)
+ fmovd %f12, %f44 ! FGA
+ ldd [%o1 + 0x050], %f4 ! MS
+ fmovd %f14, %f46 ! FGA Group
+ stda %f32, [%o0] ASI_BLK_P ! MS
+ ldd [%o1 + 0x058], %f6 ! AX
+ fmovd %f16, %f32 ! FGA Group (8-cycle stall)
+ ldd [%o1 + 0x060], %f8 ! MS
+ fmovd %f2, %f34 ! FGA Group
+ ldd [%o1 + 0x068], %f10 ! AX
+ fmovd %f4, %f36 ! FGA Group
+ ldd [%o1 + 0x070], %f12 ! MS
+ fmovd %f6, %f38 ! FGA Group
+ ldd [%o1 + 0x078], %f14 ! AX
+ fmovd %f8, %f40 ! FGA Group
+ ldd [%o1 + 0x080], %f16 ! AX
+ prefetch [%o1 + 0x180], #one_read ! MS
+ fmovd %f10, %f42 ! FGA Group
+ subcc %o2, 1, %o2 ! A0
+ add %o0, 0x40, %o0 ! A1
+ bne,pt %xcc, 1b ! BR
+ add %o1, 0x40, %o1 ! A0 Group
+
+ mov 5, %o2 ! A0 Group
+1: ldd [%o1 + 0x048], %f2 ! AX
+ fmovd %f12, %f44 ! FGA
+ ldd [%o1 + 0x050], %f4 ! MS
+ fmovd %f14, %f46 ! FGA Group
+ stda %f32, [%o0] ASI_BLK_P ! MS
+ ldd [%o1 + 0x058], %f6 ! AX
+ fmovd %f16, %f32 ! FGA Group (8-cycle stall)
+ ldd [%o1 + 0x060], %f8 ! MS
+ fmovd %f2, %f34 ! FGA Group
+ ldd [%o1 + 0x068], %f10 ! AX
+ fmovd %f4, %f36 ! FGA Group
+ ldd [%o1 + 0x070], %f12 ! MS
+ fmovd %f6, %f38 ! FGA Group
+ ldd [%o1 + 0x078], %f14 ! AX
+ fmovd %f8, %f40 ! FGA Group
+ ldd [%o1 + 0x080], %f16 ! MS
+ fmovd %f10, %f42 ! FGA Group
+ subcc %o2, 1, %o2 ! A0
+ add %o0, 0x40, %o0 ! A1
+ bne,pt %xcc, 1b ! BR
+ add %o1, 0x40, %o1 ! A0 Group
+
+ ldd [%o1 + 0x048], %f2 ! AX
+ fmovd %f12, %f44 ! FGA
+ ldd [%o1 + 0x050], %f4 ! MS
+ fmovd %f14, %f46 ! FGA Group
+ stda %f32, [%o0] ASI_BLK_P ! MS
+ ldd [%o1 + 0x058], %f6 ! AX
+ fmovd %f16, %f32 ! FGA Group (8-cycle stall)
+ ldd [%o1 + 0x060], %f8 ! MS
+ fmovd %f2, %f34 ! FGA Group
+ ldd [%o1 + 0x068], %f10 ! AX
+ fmovd %f4, %f36 ! FGA Group
+ ldd [%o1 + 0x070], %f12 ! MS
+ fmovd %f6, %f38 ! FGA Group
+ add %o0, 0x40, %o0 ! A0
+ ldd [%o1 + 0x078], %f14 ! AX
+ fmovd %f8, %f40 ! FGA Group
+ fmovd %f10, %f42 ! FGA Group
+ fmovd %f12, %f44 ! FGA Group
+ fmovd %f14, %f46 ! FGA Group
+ stda %f32, [%o0] ASI_BLK_P ! MS
+ ba,a,pt %xcc, copy_user_page_continue
+
+spitfire_copy_user_page:
ldda [%o1] ASI_BLK_P, %f0
add %o1, 0x40, %o1
ldda [%o1] ASI_BLK_P, %f16
or %g3, (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_L | _PAGE_W), %g3
or %g1, %g3, %g1
add %o0, %o3, %o0
+#define FIX_INSN_2 0x96102070 /* mov (14 << 3), %o3 */
+cheetah_patch_2:
mov TLBTEMP_ENT2, %o3
rdpr %pstate, %g3
wrpr %g3, PSTATE_IE, %pstate
/* Spitfire Errata #32 workaround */
mov 0x8, %g5
stxa %g0, [%g5] ASI_DMMU
- sethi %hi(empty_zero_page), %g5
- flush %g5
+ membar #Sync
ldxa [%o3] ASI_DTLB_TAG_READ, %g5
/* Spitfire Errata #32 workaround */
mov 0x8, %g7
stxa %g0, [%g7] ASI_DMMU
- sethi %hi(empty_zero_page), %g7
- flush %g7
+ membar #Sync
ldxa [%o3] ASI_DTLB_DATA_ACCESS, %g7
stxa %o0, [%o2] ASI_DMMU
membar #Sync
jmpl %o7 + 0x8, %g0
wrpr %g3, 0x0, %pstate
+
+ /* We will write cheetah optimized versions later. */
+ .globl cheetah_patch_pgcopyops
+cheetah_patch_pgcopyops:
+ sethi %hi(FIX_INSN_1), %g1
+ or %g1, %lo(FIX_INSN_1), %g1
+ sethi %hi(cheetah_patch_1), %g2
+ or %g2, %lo(cheetah_patch_1), %g2
+ stw %g1, [%g2]
+ flush %g2
+ sethi %hi(FIX_INSN_2), %g1
+ or %g1, %lo(FIX_INSN_2), %g1
+ sethi %hi(cheetah_patch_2), %g2
+ or %g2, %lo(cheetah_patch_2), %g2
+ stw %g1, [%g2]
+ flush %g2
+ retl
+ nop
+
+#undef FIX_INSN_1
+#undef FIX_INSN_2
-/* $Id: fault.c,v 1.51 2000/09/14 06:22:32 anton Exp $
+/* $Id: fault.c,v 1.54 2001/03/24 09:36:11 davem Exp $
* arch/sparc64/mm/fault.c: Page fault handlers for the 64-bit Sparc.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
{
unsigned long g2;
unsigned char asi = ASI_P;
-
+
if (!insn) {
if (regs->tstate & TSTATE_PRIV) {
if (!regs->tpc || (regs->tpc & 0x3))
if (in_interrupt() || !mm)
goto handle_kernel_fault;
+ if ((current->thread.flags & SPARC_FLAG_32BIT) != 0) {
+ regs->tpc &= 0xffffffff;
+ address &= 0xffffffff;
+ }
+
down_read(&mm->mmap_sem);
vma = find_vma(mm, address);
if (!vma)
if (fault_code & FAULT_CODE_WRITE) {
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
- if ((vma->vm_flags & VM_EXEC) != 0 &&
+
+ /* Spitfire has an icache which does not snoop
+ * processor stores. Later processors do...
+ */
+ if (tlb_type == spitfire &&
+ (vma->vm_flags & VM_EXEC) != 0 &&
vma->vm_file != NULL)
current->thread.use_blkcommit = 1;
} else {
-/* $Id: generic.c,v 1.14 2000/08/09 00:00:15 davem Exp $
+/* $Id: generic.c,v 1.15 2001/03/24 09:36:01 davem Exp $
* generic.c: Generic Sparc mm routines that are not dependent upon
* MMU type but are Sparc specific.
*
end = PGDIR_SIZE;
offset -= address;
do {
- pte_t * pte = pte_alloc(pmd, address);
+ pte_t * pte = pte_alloc(current->mm, pmd, address);
if (!pte)
return -ENOMEM;
spin_lock(¤t->mm->page_table_lock);
dir = pgd_offset(current->mm, from);
flush_cache_range(current->mm, beg, end);
while (from < end) {
- pmd_t *pmd = pmd_alloc(dir, from);
+ pmd_t *pmd = pmd_alloc(current->mm, dir, from);
error = -ENOMEM;
if (!pmd)
break;
-/* $Id: init.c,v 1.164 2001/03/03 10:34:45 davem Exp $
+/* $Id: init.c,v 1.172 2001/03/24 09:36:01 davem Exp $
* arch/sparc64/mm/init.c
*
* Copyright (C) 1996-1999 David S. Miller (davem@caip.rutgers.edu)
free_pgd_slow(get_pgd_fast()), freed++;
#endif
if (pte_quicklist[0])
- free_pte_slow(get_pte_fast(0)), freed++;
+ free_pte_slow(pte_alloc_one_fast(0)), freed++;
if (pte_quicklist[1])
- free_pte_slow(get_pte_fast(1)), freed++;
+ free_pte_slow(pte_alloc_one_fast(1 << (PAGE_SHIFT + 10))), freed++;
} while (pgtable_cache_size > low);
}
#ifndef CONFIG_SMP
if (VALID_PAGE(page) && page->mapping &&
test_bit(PG_dcache_dirty, &page->flags)) {
- __flush_dcache_page(page->virtual, 1);
+ __flush_dcache_page(page->virtual,
+ (tlb_type == spitfire));
clear_bit(PG_dcache_dirty, &page->flags);
}
__update_mmu_cache(vma, address, pte);
void flush_icache_range(unsigned long start, unsigned long end)
{
- unsigned long kaddr;
+ /* Cheetah has coherent I-cache. */
+ if (tlb_type == spitfire) {
+ unsigned long kaddr;
- for (kaddr = start; kaddr < end; kaddr += PAGE_SIZE)
- __flush_icache_page(__get_phys(kaddr));
+ for (kaddr = start; kaddr < end; kaddr += PAGE_SIZE)
+ __flush_icache_page(__get_phys(kaddr));
+ }
}
/*
break;
case cheetah:
- phys_page = cheetah_get_ldtlb_data(sparc64_highest_locked_tlbent());
+ phys_page = cheetah_get_litlb_data(sparc64_highest_locked_tlbent());
break;
};
remap_func((tlb_type == spitfire ?
(spitfire_get_dtlb_data(sparc64_highest_locked_tlbent()) & _PAGE_PADDR) :
- (cheetah_get_ldtlb_data(sparc64_highest_locked_tlbent()) & _PAGE_PADDR)),
+ (cheetah_get_litlb_data(sparc64_highest_locked_tlbent()) & _PAGE_PADDR)),
(unsigned long) &empty_zero_page,
prom_get_mmu_ihandle());
tag = spitfire_get_dtlb_tag(i);
if (((tag & ~(PAGE_MASK)) == 0) &&
((tag & (PAGE_MASK)) >= prom_reserved_base)) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
spitfire_put_dtlb_data(i, 0x0UL);
- membar("#Sync");
}
}
} else if (tlb_type == cheetah) {
- for (i = 0; i < 511; i++) {
+ for (i = 0; i < 512; i++) {
unsigned long tag = cheetah_get_dtlb_tag(i);
if ((tag & ~PAGE_MASK) == 0 &&
(tag & PAGE_MASK) >= prom_reserved_base) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
cheetah_put_dtlb_data(i, 0x0UL);
- membar("#Sync");
}
}
} else {
/* Install PROM world. */
for (i = 0; i < 16; i++) {
if (prom_dtlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: : "r" (prom_dtlb[i].tlb_tag), "r" (TLB_TAG_ACCESS),
"i" (ASI_DMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_dtlb_data(prom_dtlb[i].tlb_ent,
prom_dtlb[i].tlb_data);
else if (tlb_type == cheetah)
cheetah_put_ldtlb_data(prom_dtlb[i].tlb_ent,
prom_dtlb[i].tlb_data);
- membar("#Sync");
}
if (prom_itlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: : "r" (prom_itlb[i].tlb_tag),
"r" (TLB_TAG_ACCESS),
"i" (ASI_IMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_itlb_data(prom_itlb[i].tlb_ent,
prom_itlb[i].tlb_data);
else if (tlb_type == cheetah)
cheetah_put_litlb_data(prom_itlb[i].tlb_ent,
prom_itlb[i].tlb_data);
- membar("#Sync");
}
}
} else {
for (i = 0; i < 16; i++) {
if (prom_dtlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_dtlb_data(prom_dtlb[i].tlb_ent, 0x0UL);
else
cheetah_put_ldtlb_data(prom_dtlb[i].tlb_ent, 0x0UL);
- membar("#Sync");
}
if (prom_itlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS),
"i" (ASI_IMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_itlb_data(prom_itlb[i].tlb_ent, 0x0UL);
else
cheetah_put_litlb_data(prom_itlb[i].tlb_ent, 0x0UL);
- membar("#Sync");
}
}
}
prom_dtlb[dtlb_seen].tlb_tag = tag;
prom_dtlb[dtlb_seen].tlb_data = data;
}
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
spitfire_put_dtlb_data(i, 0x0UL);
- membar("#Sync");
dtlb_seen++;
if (dtlb_seen > 15)
prom_itlb[itlb_seen].tlb_tag = tag;
prom_itlb[itlb_seen].tlb_data = data;
}
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS), "i" (ASI_IMMU));
- membar("#Sync");
spitfire_put_itlb_data(i, 0x0UL);
- membar("#Sync");
itlb_seen++;
if (itlb_seen > 15)
prom_dtlb[dtlb_seen].tlb_tag = tag;
prom_dtlb[dtlb_seen].tlb_data = data;
}
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
cheetah_put_ldtlb_data(i, 0x0UL);
- membar("#Sync");
dtlb_seen++;
if (dtlb_seen > 15)
prom_itlb[itlb_seen].tlb_tag = tag;
prom_itlb[itlb_seen].tlb_data = data;
}
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: : "r" (TLB_TAG_ACCESS), "i" (ASI_IMMU));
- membar("#Sync");
cheetah_put_litlb_data(i, 0x0UL);
- membar("#Sync");
itlb_seen++;
if (itlb_seen > 15)
for (i = 0; i < 16; i++) {
if (prom_dtlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: : "r" (prom_dtlb[i].tlb_tag), "r" (TLB_TAG_ACCESS),
"i" (ASI_DMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_dtlb_data(prom_dtlb[i].tlb_ent,
prom_dtlb[i].tlb_data);
else if (tlb_type == cheetah)
cheetah_put_ldtlb_data(prom_dtlb[i].tlb_ent,
prom_dtlb[i].tlb_data);
- membar("#Sync");
}
if (prom_itlb[i].tlb_ent != -1) {
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: : "r" (prom_itlb[i].tlb_tag),
"r" (TLB_TAG_ACCESS),
"i" (ASI_IMMU));
- membar("#Sync");
if (tlb_type == spitfire)
spitfire_put_itlb_data(prom_itlb[i].tlb_ent,
prom_itlb[i].tlb_data);
else
cheetah_put_litlb_data(prom_itlb[i].tlb_ent,
prom_itlb[i].tlb_data);
- membar("#Sync");
}
}
}
void __flush_dcache_range(unsigned long start, unsigned long end)
{
unsigned long va;
- int n = 0;
- for (va = start; va < end; va += 32) {
- spitfire_put_dcache_tag(va & 0x3fe0, 0x0);
- if (++n >= 512)
- break;
+ if (tlb_type == spitfire) {
+ int n = 0;
+
+ for (va = start; va < end; va += 32) {
+ spitfire_put_dcache_tag(va & 0x3fe0, 0x0);
+ if (++n >= 512)
+ break;
+ }
+ } else {
+ start = __pa(start);
+ end = __pa(end);
+ for (va = start; va < end; va += 32)
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
+ : /* no outputs */
+ : "r" (va),
+ "i" (ASI_DCACHE_INVALIDATE));
}
}
void __flush_cache_all(void)
{
- unsigned long va;
+ /* Cheetah should be fine here too. */
+ if (tlb_type == spitfire) {
+ unsigned long va;
- flushw_all();
- for (va = 0; va < (PAGE_SIZE << 1); va += 32)
- spitfire_put_icache_tag(va, 0x0);
+ flushw_all();
+ for (va = 0; va < (PAGE_SIZE << 1); va += 32)
+ spitfire_put_icache_tag(va, 0x0);
+ __asm__ __volatile__("flush %g6");
+ }
}
/* If not locked, zap it. */
"r" (PRIMARY_CONTEXT), "i" (ASI_DMMU));
if (!(spitfire_get_dtlb_data(i) & _PAGE_L)) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (TLB_TAG_ACCESS), "i" (ASI_DMMU));
- membar("#Sync");
spitfire_put_dtlb_data(i, 0x0UL);
- membar("#Sync");
}
/* Spitfire Errata #32 workaround */
"r" (PRIMARY_CONTEXT), "i" (ASI_DMMU));
if (!(spitfire_get_itlb_data(i) & _PAGE_L)) {
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* no outputs */
: "r" (TLB_TAG_ACCESS), "i" (ASI_IMMU));
- membar("#Sync");
spitfire_put_itlb_data(i, 0x0UL);
- membar("#Sync");
}
}
} else if (tlb_type == cheetah) {
struct pgtable_cache_struct pgt_quicklists;
#endif
-/* For PMDs we don't care about the color, writes are
- * only done via Dcache which is write-thru, so non-Dcache
- * reads will always see correct data.
- */
-pmd_t *get_pmd_slow(pgd_t *pgd, unsigned long offset)
-{
- pmd_t *pmd;
-
- pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
- if (pmd) {
- memset(pmd, 0, PAGE_SIZE);
- pgd_set(pgd, pmd);
- return pmd + offset;
- }
- return NULL;
-}
-
/* OK, we have to color these pages because during DTLB
* protection faults we set the dirty bit via a non-Dcache
* enabled mapping in the VPTE area. The kernel can end
* 3) Process faults back in the page, the old pre-dirtied copy
* is provided and here is the corruption.
*/
-pte_t *get_pte_slow(pmd_t *pmd, unsigned long offset, unsigned long color)
+pte_t *pte_alloc_one(unsigned long address)
{
struct page *page = alloc_pages(GFP_KERNEL, 1);
+ unsigned long color = ((address >> (PAGE_SHIFT + 10)) & 1UL);
if (page) {
unsigned long *to_free;
pte_quicklist[color ^ 0x1] = to_free;
pgtable_cache_size++;
- pmd_set(pmd, pte);
- return pte + offset;
+ return pte;
}
return NULL;
}
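The color computed on the `+` line above replaces the old explicit color argument to get_pte_slow(). As a minimal sketch of that bit extraction (PAGE_SHIFT assumed to be 13, the sparc64 8K page shift; the helper name is ours, not the kernel's):

```c
#include <assert.h>

#define PAGE_SHIFT 13 /* 8K pages, as on sparc64 */

/* Extract the D-cache color bit the way the patched pte_alloc_one()
 * does: bit (PAGE_SHIFT + 10) of the VPTE address. */
static unsigned long pte_color(unsigned long address)
{
        return (address >> (PAGE_SHIFT + 10)) & 1UL;
}
```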
-/* $Id: ultra.S,v 1.49 2001/03/02 03:12:00 davem Exp $
+/* $Id: ultra.S,v 1.54 2001/03/22 07:26:04 davem Exp $
* ultra.S: Don't expand these all over the place...
*
* Copyright (C) 1997, 2000 David S. Miller (davem@redhat.com)
#include <asm/page.h>
#include <asm/spitfire.h>
+ /* Basically, all this madness has to do with the
+ * fact that Cheetah does not support IMMU flushes
+ * out of the secondary context. Someone needs to
+ * throw a south lake birthday party for the folks
+ * in Microelectronics who refused to fix this shit.
+ */
+#define BRANCH_IF_CHEETAH(tmp1, tmp2, label) \
+ rdpr %ver, %tmp1; \
+ sethi %hi(0x003e0014), %tmp2; \
+ srlx %tmp1, 32, %tmp1; \
+ or %tmp2, %lo(0x003e0014), %tmp2; \
+ cmp %tmp1, %tmp2; \
+ be,pn %icc, label; \
+ nop; \
+ nop;
+
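BRANCH_IF_CHEETAH keys off the upper 32 bits of the %ver register (the manufacturer and implementation fields) matching 0x003e0014, the Cheetah/UltraSPARC-III ID. A sketch of the same test in C (the helper and the non-Cheetah sample value are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror the macro: shift %ver right by 32 and compare against the
 * Cheetah manufacturer/implementation word 0x003e0014. */
static int is_cheetah(uint64_t ver)
{
        return (uint32_t)(ver >> 32) == 0x003e0014u;
}
```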
/* This file is meant to be read efficiently by the CPU, not humans.
 * Staraj sie tego nikomu nie pierdolnac... ("Try hard not to screw this up for anybody...")
*/
.align 32
.globl __flush_tlb_page, __flush_tlb_mm, __flush_tlb_range
__flush_tlb_page: /* %o0=(ctx & 0x3ff), %o1=page&PAGE_MASK, %o2=SECONDARY_CONTEXT */
-/*IC1*/ ldxa [%o2] ASI_DMMU, %g2
+/*IC1*/ BRANCH_IF_CHEETAH(g2, g3, __cheetah_flush_tlb_page)
+__spitfire_flush_tlb_page:
+/*IC2*/ ldxa [%o2] ASI_DMMU, %g2
cmp %g2, %o0
- bne,pn %icc, __flush_tlb_page_slow
+ bne,pn %icc, __spitfire_flush_tlb_page_slow
or %o1, 0x10, %g3
stxa %g0, [%g3] ASI_DMMU_DEMAP
stxa %g0, [%g3] ASI_IMMU_DEMAP
retl
flush %g6
+__cheetah_flush_tlb_page:
+/*IC3*/ rdpr %pstate, %g5
+ andn %g5, PSTATE_IE, %g2
+ wrpr %g2, 0x0, %pstate
+ wrpr %g0, 1, %tl
+ mov PRIMARY_CONTEXT, %o2
+ ldxa [%o2] ASI_DMMU, %g2
+ stxa %o0, [%o2] ASI_DMMU
+ stxa %g0, [%o1] ASI_DMMU_DEMAP
+/*IC4*/ stxa %g0, [%o1] ASI_IMMU_DEMAP
+ stxa %g2, [%o2] ASI_DMMU
+ flush %g6
+ wrpr %g0, 0, %tl
+ retl
+ wrpr %g5, 0x0, %pstate
+ nop
+ nop
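The constants fed to ASI_DMMU_DEMAP/ASI_IMMU_DEMAP in these routines (0x10 in the Spitfire page path, 0x40/0x50 for primary/secondary context demaps, 0x80 for Cheetah's demap-all) encode the demap operation in bits [7:6] and the context selection in bits [5:4] of the demap address, per the UltraSPARC manuals. A sketch of the encoding (the helper is ours):

```c
#include <assert.h>

/* Demap address encoding for ASI_DMMU_DEMAP/ASI_IMMU_DEMAP:
 * bits [7:6] = operation (0 = demap page, 1 = demap context,
 * 2 = Cheetah demap-all), bits [5:4] = context (0 = primary,
 * 1 = secondary). Page demaps keep the page-aligned vaddr bits. */
static unsigned long demap_addr(int type, int ctx, unsigned long vaddr)
{
        return (vaddr & ~0xffUL) |
               ((unsigned long)type << 6) |
               ((unsigned long)ctx << 4);
}
```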
__flush_tlb_mm: /* %o0=(ctx & 0x3ff), %o1=SECONDARY_CONTEXT */
-/*IC2*/ ldxa [%o1] ASI_DMMU, %g2
+/*IC5*/ BRANCH_IF_CHEETAH(g2, g3, __cheetah_flush_tlb_mm)
+__spitfire_flush_tlb_mm:
+/*IC6*/ ldxa [%o1] ASI_DMMU, %g2
cmp %g2, %o0
- bne,pn %icc, __flush_tlb_mm_slow
+ bne,pn %icc, __spitfire_flush_tlb_mm_slow
mov 0x50, %g3
stxa %g0, [%g3] ASI_DMMU_DEMAP
stxa %g0, [%g3] ASI_IMMU_DEMAP
retl
flush %g6
+__cheetah_flush_tlb_mm:
+/*IC7*/ rdpr %pstate, %g5
+ andn %g5, PSTATE_IE, %g2
+ wrpr %g2, 0x0, %pstate
+ wrpr %g0, 1, %tl
+ mov PRIMARY_CONTEXT, %o2
+ mov 0x40, %g3
+ ldxa [%o2] ASI_DMMU, %g2
+ stxa %o0, [%o2] ASI_DMMU
+/*IC8*/ stxa %g0, [%g3] ASI_DMMU_DEMAP
+ stxa %g0, [%g3] ASI_IMMU_DEMAP
+ stxa %g2, [%o2] ASI_DMMU
+ flush %g6
+ wrpr %g0, 0, %tl
+ retl
+ wrpr %g5, 0x0, %pstate
+ nop
__flush_tlb_range: /* %o0=(ctx&0x3ff), %o1=start&PAGE_MASK, %o2=SECONDARY_CONTEXT,
* %o3=end&PAGE_MASK, %o4=PAGE_SIZE, %o5=(end - start)
*/
+/*IC9*/ BRANCH_IF_CHEETAH(g2, g3, __cheetah_flush_tlb_range)
+__spitfire_flush_tlb_range:
#define TLB_MAGIC 207 /* Students, do you know how I calculated this? -DaveM */
-/*IC3*/ cmp %o5, %o4
+/*IC10*/cmp %o5, %o4
bleu,pt %xcc, __flush_tlb_page
srlx %o5, 13, %g5
cmp %g5, TLB_MAGIC
- bgeu,pn %icc, __flush_tlb_range_constant_time
+ bgeu,pn %icc, __spitfire_flush_tlb_range_constant_time
or %o1, 0x10, %g5
ldxa [%o2] ASI_DMMU, %g2
cmp %g2, %o0
-__flush_tlb_range_page_by_page:
-/*IC4*/ bne,pn %icc, __flush_tlb_range_pbp_slow
+__spitfire_flush_tlb_range_page_by_page:
+/*IC11*/bne,pn %icc, __spitfire_flush_tlb_range_pbp_slow
sub %o5, %o4, %o5
1: stxa %g0, [%g5 + %o5] ASI_DMMU_DEMAP
stxa %g0, [%g5 + %o5] ASI_IMMU_DEMAP
sub %o5, %o4, %o5
retl
flush %g6
-__flush_tlb_range_constant_time: /* %o0=ctx, %o1=start, %o3=end */
-/*IC5*/ rdpr %pstate, %g1
+__spitfire_flush_tlb_range_constant_time: /* %o0=ctx, %o1=start, %o3=end */
+/*IC12*/rdpr %pstate, %g1
wrpr %g1, PSTATE_IE, %pstate
mov TLB_TAG_ACCESS, %g3
/* XXX Spitfire dependency... */
and %o4, 0x3ff, %o5
cmp %o5, %o0
bne,pt %icc, 2f
-/*IC6*/ andn %o4, 0x3ff, %o4
+/*IC13*/ andn %o4, 0x3ff, %o4
cmp %o4, %o1
blu,pt %xcc, 2f
cmp %o4, %o3
2: ldxa [%g2] ASI_DTLB_TAG_READ, %o4
and %o4, 0x3ff, %o5
cmp %o5, %o0
-/*IC7*/ andn %o4, 0x3ff, %o4
+/*IC14*/andn %o4, 0x3ff, %o4
bne,pt %icc, 3f
cmp %o4, %o1
blu,pt %xcc, 3f
blu,pn %xcc, 5f
nop
3: brnz,pt %g2, 1b
-/*IC8*/ sub %g2, (1 << 3), %g2
+/*IC15*/ sub %g2, (1 << 3), %g2
retl
wrpr %g1, 0x0, %pstate
4: stxa %g0, [%g3] ASI_IMMU
nop
5: stxa %g0, [%g3] ASI_DMMU
-/*IC9*/ stxa %g0, [%g2] ASI_DTLB_DATA_ACCESS
+/*IC16*/stxa %g0, [%g2] ASI_DTLB_DATA_ACCESS
flush %g6
/* Spitfire Errata #32 workaround. */
nop
.align 32
-__flush_tlb_mm_slow:
-/*IC10*/rdpr %pstate, %g1
+__cheetah_flush_tlb_range:
+ cmp %o5, %o4
+ bleu,pt %xcc, __cheetah_flush_tlb_page
+ nop
+/*IC17*/rdpr %pstate, %g5
+ andn %g5, PSTATE_IE, %g2
+ wrpr %g2, 0x0, %pstate
+ wrpr %g0, 1, %tl
+ mov PRIMARY_CONTEXT, %o2
+ sub %o5, %o4, %o5
+ ldxa [%o2] ASI_DMMU, %g2
+ stxa %o0, [%o2] ASI_DMMU
+
+/*IC18*/
+1: stxa %g0, [%o1 + %o5] ASI_DMMU_DEMAP
+ stxa %g0, [%o1 + %o5] ASI_IMMU_DEMAP
+ membar #Sync
+ brnz,pt %o5, 1b
+ sub %o5, %o4, %o5
+
+ stxa %g2, [%o2] ASI_DMMU
+ flush %g6
+ wrpr %g0, 0, %tl
+ retl
+/*IC19*/ wrpr %g5, 0x0, %pstate
+
+__spitfire_flush_tlb_mm_slow:
+ rdpr %pstate, %g1
wrpr %g1, PSTATE_IE, %pstate
stxa %o0, [%o1] ASI_DMMU
stxa %g0, [%g3] ASI_DMMU_DEMAP
stxa %g0, [%g3] ASI_IMMU_DEMAP
flush %g6
stxa %g2, [%o1] ASI_DMMU
- flush %g6
-/*IC11*/retl
+/*IC18*/flush %g6
+ retl
wrpr %g1, 0, %pstate
- .align 32
-__flush_tlb_page_slow:
-/*IC12*/rdpr %pstate, %g1
+__spitfire_flush_tlb_page_slow:
+ rdpr %pstate, %g1
wrpr %g1, PSTATE_IE, %pstate
stxa %o0, [%o2] ASI_DMMU
stxa %g0, [%g3] ASI_DMMU_DEMAP
stxa %g0, [%g3] ASI_IMMU_DEMAP
- flush %g6
+/*IC20*/flush %g6
stxa %g2, [%o2] ASI_DMMU
flush %g6
-/*IC13*/retl
+ retl
wrpr %g1, 0, %pstate
- .align 32
-__flush_tlb_range_pbp_slow:
-/*IC13*/rdpr %pstate, %g1
+__spitfire_flush_tlb_range_pbp_slow:
+ rdpr %pstate, %g1
wrpr %g1, PSTATE_IE, %pstate
stxa %o0, [%o2] ASI_DMMU
+/*IC21*/
2: stxa %g0, [%g5 + %o5] ASI_DMMU_DEMAP
stxa %g0, [%g5 + %o5] ASI_IMMU_DEMAP
brnz,pt %o5, 2b
sub %o5, %o4, %o5
flush %g6
-/*IC14*/stxa %g2, [%o2] ASI_DMMU
+ stxa %g2, [%o2] ASI_DMMU
flush %g6
retl
- wrpr %g1, 0x0, %pstate
+/*IC22*/ wrpr %g1, 0x0, %pstate
.align 32
.globl __flush_icache_page
.globl __flush_dcache_page
__flush_dcache_page: /* %o0=kaddr, %o1=flush_icache */
sub %o0, %g4, %o0
+
+ rdpr %ver, %g1
+ sethi %hi(0x003e0014), %g2
+ srlx %g1, 32, %g1
+ or %g2, %lo(0x003e0014), %g2
+ cmp %g1, %g2
+ bne,pt %icc, flush_dcpage_spitfire
+ nop
+
+flush_dcpage_cheetah:
+ sethi %hi(8192), %o4
+1: subcc %o4, (1 << 5), %o4
+ stxa %g0, [%o0 + %o4] ASI_DCACHE_INVALIDATE
+ membar #Sync
+ bne,pt %icc, 1b
+ nop
+ /* I-cache flush never needed on Cheetah, see callers. */
+ retl
+ nop
+
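The Cheetah branch above invalidates a whole 8K page by stepping backwards through it in 32-byte D-cache line increments (sethi %hi(8192), then subcc by 1 << 5). Replaying that loop shape in C to count the stores it issues:

```c
#include <assert.h>

/* Mirrors the flush_dcpage_cheetah loop: start at the page size,
 * pre-decrement by the line size, store, repeat while nonzero.
 * Returns how many ASI_DCACHE_INVALIDATE stores are issued. */
static int dcache_flush_stores(int page_bytes, int line_bytes)
{
        int off = page_bytes, n = 0;
        do {
                off -= line_bytes;   /* subcc %o4, (1 << 5), %o4 */
                n++;                 /* stxa ... [%o0 + %o4]      */
        } while (off != 0);          /* bne,pt %icc, 1b           */
        return n;
}
```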
+flush_dcpage_spitfire:
clr %o4
srlx %o0, 11, %o0
sethi %hi(1 << 14), %o2
.align 32
.globl xcall_flush_tlb_page, xcall_flush_tlb_mm, xcall_flush_tlb_range
xcall_flush_tlb_page:
- mov SECONDARY_CONTEXT, %g2
- or %g1, 0x10, %g4
+ mov PRIMARY_CONTEXT, %g2
ldxa [%g2] ASI_DMMU, %g3
stxa %g5, [%g2] ASI_DMMU
- stxa %g0, [%g4] ASI_DMMU_DEMAP
- stxa %g0, [%g4] ASI_IMMU_DEMAP
+ stxa %g0, [%g1] ASI_DMMU_DEMAP
+ stxa %g0, [%g1] ASI_IMMU_DEMAP
stxa %g3, [%g2] ASI_DMMU
retry
+ nop
xcall_flush_tlb_mm:
- mov SECONDARY_CONTEXT, %g2
- mov 0x50, %g4
+ mov PRIMARY_CONTEXT, %g2
+ mov 0x40, %g4
ldxa [%g2] ASI_DMMU, %g3
stxa %g5, [%g2] ASI_DMMU
stxa %g0, [%g4] ASI_DMMU_DEMAP
andn %g7, %g2, %g7
sub %g7, %g1, %g3
add %g2, 1, %g2
- orcc %g1, 0x10, %g1
srlx %g3, 13, %g4
-
cmp %g4, 96
+
bgu,pn %icc, xcall_flush_tlb_mm
- mov SECONDARY_CONTEXT, %g4
+ mov PRIMARY_CONTEXT, %g4
ldxa [%g4] ASI_DMMU, %g7
sub %g3, %g2, %g3
stxa %g5, [%g4] ASI_DMMU
nop
nop
+ nop
1: stxa %g0, [%g1 + %g3] ASI_DMMU_DEMAP
stxa %g0, [%g1 + %g3] ASI_IMMU_DEMAP
+ membar #Sync
brnz,pt %g3, 1b
sub %g3, %g2, %g3
stxa %g7, [%g4] ASI_DMMU
/* These two are not performance critical... */
.globl xcall_flush_tlb_all
xcall_flush_tlb_all:
-
+ BRANCH_IF_CHEETAH(g2, g3, __cheetah_xcall_flush_tlb_all)
+__spitfire_xcall_flush_tlb_all:
/* Spitfire Errata #32 workaround. */
sethi %hi(errata32_hwbug), %g4
stx %g0, [%g4 + %lo(errata32_hwbug)]
stx %g0, [%g4 + %lo(errata32_hwbug)]
2: add %g2, 1, %g2
- /* XXX Spitfire dependency... */
cmp %g2, 63
ble,pt %icc, 1b
sll %g2, 3, %g3
flush %g6
retry
+__cheetah_xcall_flush_tlb_all:
+ mov 0x80, %g2
+ stxa %g0, [%g2] ASI_DMMU_DEMAP
+ stxa %g0, [%g2] ASI_IMMU_DEMAP
+ retry
+
.globl xcall_flush_cache_all
xcall_flush_cache_all:
+ BRANCH_IF_CHEETAH(g2, g3, __cheetah_xcall_flush_cache_all)
+__spitfire_xcall_flush_cache_all:
sethi %hi(16383), %g2
or %g2, %lo(16383), %g2
clr %g3
1: stxa %g0, [%g3] ASI_IC_TAG
+ membar #Sync
add %g3, 32, %g3
cmp %g3, %g2
bleu,pt %xcc, 1b
flush %g6
retry
+	/* Cheetah's caches are fully coherent, so there is nothing
+	 * to flush here.  We need to verify this and really just not
+	 * even send out the xcall at the top level.
+	 */
+__cheetah_xcall_flush_cache_all:
+ retry
+
.globl xcall_call_function
xcall_call_function:
mov TLB_TAG_ACCESS, %g5 ! wheee...
-/* $Id: misc.c,v 1.31 2000/12/14 22:57:25 davem Exp $
+/* $Id: misc.c,v 1.32 2001/03/24 09:36:11 davem Exp $
 * misc.c: Miscellaneous syscall emulation for Solaris
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
#endif /* NS_DEBUG_SPINLOCKS */
-/* Version definition *********************************************************/
-/*
-#include <linux/version.h>
-char kernel_version[] = UTS_RELEASE;
-*/
-
/* Function declarations ******************************************************/
static u32 ns_read_sram(ns_dev *card, u32 sram_address);
u32d[0] = NS_RCTE_RAWCELLINTEN;
#else
u32d[0] = 0x00000000;
-#endif RCQ_SUPPORT
+#endif /* RCQ_SUPPORT */
u32d[1] = 0x00000000;
u32d[2] = 0x00000000;
u32d[3] = 0xFFFFFFFF;
-/* $Id: ffb_drv.c,v 1.7 2000/11/12 10:01:41 davem Exp $
+/* $Id: ffb_drv.c,v 1.9 2001/03/23 07:58:39 davem Exp $
* ffb_drv.c: Creator/Creator3D direct rendering driver.
*
* Copyright (C) 2000 David S. Miller (davem@redhat.com)
};
}
-static int __init ffb_init_one(int prom_node, int instance)
+static void __init ffb_apply_upa_parent_ranges(int parent, struct linux_prom64_registers *regs)
+{
+ struct linux_prom64_ranges ranges[PROMREG_MAX];
+ char name[128];
+ int len, i;
+
+ prom_getproperty(parent, "name", name, sizeof(name));
+ if (strcmp(name, "upa") != 0)
+ return;
+
+ len = prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges));
+ if (len <= 0)
+ return;
+
+ len /= sizeof(struct linux_prom64_ranges);
+ for (i = 0; i < len; i++) {
+ struct linux_prom64_ranges *rng = &ranges[i];
+ u64 phys_addr = regs->phys_addr;
+
+ if (phys_addr >= rng->ot_child_base &&
+ phys_addr < (rng->ot_child_base + rng->or_size)) {
+ regs->phys_addr -= rng->ot_child_base;
+ regs->phys_addr += rng->ot_parent_base;
+ return;
+ }
+ }
+
+ return;
+}
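ffb_apply_upa_parent_ranges() implements the standard OpenPROM "ranges" translation: if the child physical address falls inside a range, rebase it into the parent's address space; otherwise leave it alone. A simplified, self-contained version (the struct layout and sample range are ours, not the PROM's):

```c
#include <assert.h>
#include <stdint.h>

struct range { uint64_t child_base, parent_base, size; };

/* Rebase phys from child to parent space if some range covers it;
 * otherwise return it unchanged, as the driver does. */
static uint64_t apply_ranges(uint64_t phys, const struct range *r, int n)
{
        int i;
        for (i = 0; i < n; i++) {
                if (phys >= r[i].child_base &&
                    phys < r[i].child_base + r[i].size)
                        return phys - r[i].child_base + r[i].parent_base;
        }
        return phys;
}

/* Hypothetical single-entry UPA ranges property. */
static const struct range demo_ranges[1] = {
        { 0x0ULL, 0x1ff00000000ULL, 0x10000000ULL }
};
```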
+
+static int __init ffb_init_one(int prom_node, int parent_node, int instance)
{
struct linux_prom64_registers regs[2*PROMREG_MAX];
drm_device_t *dev;
kfree(dev);
return -EINVAL;
}
+	ffb_apply_upa_parent_ranges(parent_node, &regs[0]);
ffb_priv->card_phys_base = regs[0].phys_addr;
ffb_priv->regs = (ffb_fbcPtr)
(regs[0].phys_addr + 0x00600000UL);
return 0;
}
+static int __init ffb_count_siblings(int root)
+{
+ int node, child, count = 0;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
+ count++;
+
+ return count;
+}
+
static int __init ffb_init_dev_table(void)
{
- int root, node;
- int total = 0;
+ int root, total;
+ total = ffb_count_siblings(prom_root_node);
root = prom_getchild(prom_root_node);
- for (node = prom_searchsiblings(root, "SUNW,ffb"); node;
- node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
- total++;
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ total += ffb_count_siblings(root);
+
+ if (!total)
+ return -ENODEV;
ffb_dev_table = kmalloc(sizeof(drm_device_t *) * total, GFP_KERNEL);
if (!ffb_dev_table)
return 0;
}
+static int __init ffb_scan_siblings(int root, int instance)
+{
+ int node, child;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) {
+ ffb_init_one(node, root, instance);
+ instance++;
+ }
+
+ return instance;
+}
+
int __init ffb_init(void)
{
- int root, node, instance, ret;
+ int root, instance, ret;
ret = ffb_init_dev_table();
if (ret)
return ret;
- instance = 0;
+ instance = ffb_scan_siblings(prom_root_node, 0);
+
root = prom_getchild(prom_root_node);
- for (node = prom_searchsiblings(root, "SUNW,ffb"); node;
- node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) {
- ret = ffb_init_one(node, instance);
- if (ret)
- return ret;
- instance++;
- }
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ instance = ffb_scan_siblings(root, instance);
return 0;
}
Driver Web site: http://sourceforge.net/projects/gkernel/
-
-
- Based on:
- Intel 82802AB/82802AC Firmware Hub (FWH) Datasheet
- May 1999 Order Number: 290658-002 R
-
- Intel 82802 Firmware Hub: Random Number Generator
- Programmer's Reference Manual
- December 1999 Order Number: 298029-001 R
-
- Intel 82802 Firmware HUB Random Number Generator Driver
- Copyright (c) 2000 Matt Sottek <msottek@quiknet.com>
-
- Special thanks to Matt Sottek. I did the "guts", he
- did the "brains" and all the testing. (Anybody wanna send
- me an i810 or i820?)
+ Please read Documentation/i810_rng.txt for details on use.
----------------------------------------------------------
This software may be used and distributed according to the terms
of the GNU General Public License, incorporated herein by reference.
- ----------------------------------------------------------
-
- From the firmware hub datasheet:
-
- The Firmware Hub integrates a Random Number Generator (RNG)
- using thermal noise generated from inherently random quantum
- mechanical properties of silicon. When not generating new random
- bits the RNG circuitry will enter a low power state. Intel will
- provide a binary software driver to give third party software
- access to our RNG for use as a security feature. At this time,
- the RNG is only to be used with a system in an OS-present state.
-
- ----------------------------------------------------------
-
- Theory of operation:
-
- This driver has TWO modes of operation:
-
- Mode 1
- ------
- Character driver. Using the standard open()
- and read() system calls, you can read random data from
- the i810 RNG device. This data is NOT CHECKED by any
- fitness tests, and could potentially be bogus (if the
- hardware is faulty or has been tampered with).
-
- /dev/intel_rng is char device major 10, minor 183.
-
-
- Mode 2
- ------
- Injection of entropy into the kernel entropy pool via a
- timer function.
-
- A timer is run at rng_timer_len intervals, reading 8 bits
- of data from the RNG. If the RNG has previously passed a
- FIPS test, then the data will be added to the /dev/random
- entropy pool. Then, those 8 bits are added to an internal
- test data pool. When that pool is full, a FIPS test is
- run to verify that the last N bytes read are decently random.
-
- Thus, the RNG will never be enabled until it passes a
- FIPS test. And, data will stop flowing into the system
- entropy pool if the data is determined to be non-random.
-
- Finally, note that the timer defaults to OFF. This ensures
- that the system entropy pool will not be polluted with
- RNG-originated data unless a conscious decision is made
- by the user.
-
- HOWEVER NOTE THAT UP TO 2499 BYTES OF DATA CAN BE BOGUS
- BEFORE THE SYSTEM WILL NOTICE VIA THE FIPS TEST.
-
- ----------------------------------------------------------
-
- Driver notes:
-
- * You may enable and disable the RNG timer via sysctl:
-
- # disable RNG
- echo 0 > /proc/sys/dev/i810_rng_timer
-
- # enable RNG
- echo 1 > /proc/sys/dev/i810_rng_timer
-
- * The default number of entropy bits added by default is
- the full 8 bits. If you wish to reduce this value for
- paranoia's sake, you can do so via sysctl as well:
-
- # Add only 4 bits of entropy to /dev/random
- echo 4 > /proc/sys/dev/i810_rng_entropy
-
- * The default number of entropy bits can also be set via
- a module parameter "rng_entropy" at module load time.
-
- * When the RNG timer is enabled, the driver reads 1 byte
- from the hardware RNG every N jiffies. By default, every
- half-second. If you would like to change the timer interval,
- do so via another sysctl:
-
- echo 200 > /proc/sys/dev/i810_rng_interval
-
- NOTE THIS VALUE IS IN JIFFIES, NOT SECONDS OR MILLISECONDS.
- Minimum interval is 1 jiffy, maximum interval is 24 hours.
-
- * In order to unload the i810_rng module, you must first
- disable the hardware via sysctl i810_rng_timer, as shown above,
- and make sure all users of the character device have closed
-
- * The timer and the character device may be used simultaneously,
- if desired.
-
- * FIXME: support poll(2)
-
- * FIXME: It is possible for the timer function to read,
- and shove into the kernel entropy pool, 2499 bytes of data
- before the internal FIPS test notices that the data is bad.
- The kernel should handle this (I think???), but we should use a
- 2500-byte array, and re-run the FIPS test for every byte read.
- This will slow things down but guarantee that bad data is
- never passed upstream.
-
- * FIXME: module unload is racy. To fix this, struct ctl_table
- needs an owner member a la struct file_operations.
-
- * FIXME: Timer interval should not be in jiffies, but in a more
- user-understandable value like milliseconds.
-
- * Since the RNG is accessed from a timer as well as normal
- kernel code, but not from interrupts, we use spin_lock_bh
- in regular code, and spin_lock in the timer function, to
- serialize access to the RNG hardware area.
-
- NOTE: request_mem_region was removed, for two reasons:
- 1) Only one RNG is supported by this driver, 2) The location
- used by the RNG is a fixed location in MMIO-addressable memory,
- 3) users with properly working BIOS e820 handling will always
- have the region in which the RNG is located reserved, so
- request_mem_region calls always fail for proper setups.
- However, for people who use mem=XX, BIOS e820 information is
- -not- in /proc/iomem, and request_mem_region(RNG_ADDR) can
- succeed.
-
- ----------------------------------------------------------
-
- Change history:
-
- Version 0.6.2:
- * Clean up spinlocks. Since we don't have any interrupts
- to worry about, but we do have a timer to worry about,
- we use spin_lock_bh everywhere except the timer function
- itself.
- * Fix module load/unload.
- * Fix timer function and h/w enable/disable logic
- * New timer interval sysctl
- * Clean up sysctl names
-
- Version 0.9.0:
- * Don't register a pci_driver, because we are really
- using PCI bridge vendor/device ids, and someone
- may want to register a driver for the bridge. (bug fix)
- * Don't let the usage count go negative (bug fix)
- * Clean up spinlocks (bug fix)
- * Enable PCI device, if necessary (bug fix)
- * iounmap on module unload (bug fix)
- * If RNG chrdev is already in use when open(2) is called,
- sleep until it is available.
- * Remove redundant globals rng_allocated, rng_use_count
- * Convert numeric globals to unsigned
- * Module unload cleanup
-
- Version 0.9.1:
- * Support i815 chipsets too (Matt Sottek)
- * Fix reference counting when statically compiled (prumpf)
- * Rewrite rng_dev_read (prumpf)
- * Make module races less likely (prumpf)
- * Small miscellaneous bug fixes (prumpf)
- * Use pci table for PCI id list
-
- Version 0.9.2:
- * Simplify open blocking logic
-
- Version 0.9.3:
- * Clean up rng_read a bit.
- * Update i810_rng driver Web site URL.
- * Increase default timer interval to 4 samples per second.
- * Abort if mem region is not available.
- * BSS zero-initialization cleanup.
- * Call misc_register() from rng_init_one.
- * Fix O_NONBLOCK to occur before we schedule.
-
- Version 0.9.4:
- * Fix: Remove request_mem_region
- * Fix: Horrible bugs in FIPS calculation and test execution
-
*/
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/random.h>
-#include <linux/sysctl.h>
#include <linux/miscdevice.h>
#include <linux/smp_lock.h>
#include <linux/mm.h>
/*
* core module and version information
*/
-#define RNG_VERSION "0.9.4"
+#define RNG_VERSION "0.9.5"
#define RNG_MODULE_NAME "i810_rng"
#define RNG_DRIVER_NAME RNG_MODULE_NAME " hardware driver " RNG_VERSION
#define PFX RNG_MODULE_NAME ": "
/*
- * prototypes
- */
-static void rng_fips_test_store (int rng_data);
-static void rng_run_fips_test (void);
-
-
-/*
* RNG registers (offsets from rng_mem)
*/
#define RNG_HW_STATUS 0
/*
- * Frequency that data is added to kernel entropy pool
- * HZ>>1 == every quarter-second
- */
-#define RNG_DEF_TIMER_LEN (HZ >> 2)
-
-
-/*
* number of bytes required for a FIPS test.
* do not alter unless you really, I mean
* REALLY know what you are doing.
* as we only support a single RNG device
*/
static int rng_hw_enabled; /* is the RNG h/w enabled? */
-static int rng_timer_enabled; /* is the RNG timer enabled? */
-static int rng_trusted; /* does FIPS trust out data? */
-static int rng_enabled_sysctl; /* sysctl for enabling/disabling RNG */
-static unsigned int rng_entropy = 8; /* number of entropy bits we submit to /dev/random */
-static unsigned int rng_entropy_sysctl; /* sysctl for changing entropy bits */
-static unsigned int rng_interval_sysctl; /* sysctl for changing timer interval */
-static unsigned int rng_fips_counter; /* size of internal FIPS test data pool */
-static unsigned int rng_timer_len = RNG_DEF_TIMER_LEN; /* timer interval, in jiffies */
static void *rng_mem; /* token to our ioremap'd RNG register area */
-static spinlock_t rng_lock = SPIN_LOCK_UNLOCKED; /* hardware lock */
-static struct timer_list rng_timer; /* kernel timer for RNG hardware reads and tests */
static struct pci_dev *rng_pdev; /* Firmware Hub PCI device found during PCI probe */
static struct semaphore rng_open_sem; /* Semaphore for serializing rng_open/release */
static inline int rng_data_present (void)
{
assert (rng_mem != NULL);
- assert (rng_hw_enabled == 1);
+ assert (rng_hw_enabled > 0);
return (readb (rng_mem + RNG_STATUS) & RNG_DATA_PRESENT) ? 1 : 0;
}
static inline int rng_data_read (void)
{
assert (rng_mem != NULL);
- assert (rng_hw_enabled == 1);
+ assert (rng_hw_enabled > 0);
return readb (rng_mem + RNG_DATA);
}
/*
- * rng_timer_ticker - executes every rng_timer_len jiffies,
- * adds a single byte to system entropy
- * and internal FIPS test pools
- */
-static void rng_timer_tick (unsigned long data)
-{
- int rng_data;
-
- spin_lock (&rng_lock);
-
- if (rng_data_present ()) {
- /* gimme some thermal noise, baby */
- rng_data = rng_data_read ();
-
- spin_unlock (&rng_lock);
-
- /*
- * if RNG has been verified in the past, add
- * data just read to the /dev/random pool,
- * with the entropy specified by the user
- * via sysctl (defaults to 8 bits)
- */
- if (rng_trusted)
- batch_entropy_store (rng_data, jiffies, rng_entropy);
-
- /* fitness testing via FIPS, if we have enough data */
- rng_fips_test_store (rng_data);
- rng_fips_counter++;
- if (rng_fips_counter == RNG_FIPS_TEST_THRESHOLD) {
- rng_run_fips_test ();
- rng_fips_counter = 0;
- }
- } else {
- spin_unlock (&rng_lock);
- }
-
- /* run the timer again, if enabled */
- if (rng_timer_enabled) {
- rng_timer.expires = jiffies + rng_timer_len;
- add_timer (&rng_timer);
- }
-}
-
-
-/*
* rng_enable - enable or disable the RNG hardware
*/
static int rng_enable (int enable)
DPRINTK ("ENTER\n");
- spin_lock_bh (&rng_lock);
-
hw_status = rng_hwstatus ();
if (enable) {
new_status = rng_hwstatus ();
- spin_unlock_bh (&rng_lock);
-
- if (action == 1)
- printk (KERN_INFO PFX "RNG h/w enabled\n");
- else if (action == 2)
- printk (KERN_INFO PFX "RNG h/w disabled\n");
-
- /* too bad C doesn't have ^^ */
- if ((!enable) != (!(new_status & RNG_ENABLED))) {
- printk (KERN_ERR PFX "Unable to %sable the RNG\n",
- enable ? "en" : "dis");
- rc = -EIO;
+ if (action == 1) {
+ if (new_status & RNG_ENABLED)
+ printk (KERN_INFO PFX "RNG h/w enabled\n");
+ else
+ printk (KERN_ERR PFX "Unable to enable the RNG\n");
+ } else if (action == 2) {
+ if ((new_status & RNG_ENABLED) == 0)
+ printk (KERN_INFO PFX "RNG h/w disabled\n");
+ else
+ printk (KERN_ERR PFX "Unable to disable the RNG\n");
}
DPRINTK ("EXIT, returning %d\n", rc);
}
-/*
- * rng_handle_sysctl_enable - handle a read or write of our enable/disable sysctl
- */
-
-static int rng_handle_sysctl_enable (ctl_table * table, int write, struct file *filp,
- void *buffer, size_t * lenp)
-{
- int enabled_save, rc;
-
- DPRINTK ("ENTER\n");
-
- MOD_INC_USE_COUNT;
- spin_lock_bh (&rng_lock);
- rng_enabled_sysctl = enabled_save = rng_timer_enabled;
- spin_unlock_bh (&rng_lock);
-
- rc = proc_dointvec (table, write, filp, buffer, lenp);
- if (rc)
- return rc;
-
- spin_lock_bh (&rng_lock);
- if (enabled_save != rng_enabled_sysctl) {
- rng_timer_enabled = rng_enabled_sysctl;
- spin_unlock_bh (&rng_lock);
-
- /* enable/disable hardware */
- rng_enable (rng_enabled_sysctl);
-
- /* enable/disable timer */
- if (rng_enabled_sysctl) {
- rng_timer.expires = jiffies + rng_timer_len;
- add_timer (&rng_timer);
- } else {
- del_timer_sync (&rng_timer);
- }
- } else {
- spin_unlock_bh (&rng_lock);
- }
-
- /* This needs to be in a higher layer */
- MOD_DEC_USE_COUNT;
-
- DPRINTK ("EXIT, returning 0\n");
- return 0;
-}
-
-
-/*
- * rng_handle_sysctl_entropy - handle a read or write of our entropy bits sysctl
- */
-
-static int rng_handle_sysctl_entropy (ctl_table * table, int write, struct file *filp,
- void *buffer, size_t * lenp)
-{
- int entropy_bits_save, rc;
-
- DPRINTK ("ENTER\n");
-
- spin_lock_bh (&rng_lock);
- rng_entropy_sysctl = entropy_bits_save = rng_entropy;
- spin_unlock_bh (&rng_lock);
-
- rc = proc_dointvec (table, write, filp, buffer, lenp);
- if (rc)
- return rc;
-
- if (entropy_bits_save == rng_entropy_sysctl)
- goto out;
-
- if ((rng_entropy_sysctl >= 0) &&
- (rng_entropy_sysctl <= 8)) {
- spin_lock_bh (&rng_lock);
- rng_entropy = rng_entropy_sysctl;
- spin_unlock_bh (&rng_lock);
-
- printk (KERN_INFO PFX "entropy bits now %d\n", rng_entropy_sysctl);
- } else {
- printk (KERN_INFO PFX "ignoring invalid entropy setting (%d)\n",
- rng_entropy_sysctl);
- }
-
-out:
- DPRINTK ("EXIT, returning 0\n");
- return 0;
-}
-
-/*
- * rng_handle_sysctl_interval - handle a read or write of our timer interval len sysctl
- */
-
-static int rng_handle_sysctl_interval (ctl_table * table, int write, struct file *filp,
- void *buffer, size_t * lenp)
-{
- int timer_len_save, rc;
-
- DPRINTK ("ENTER\n");
-
- spin_lock_bh (&rng_lock);
- rng_interval_sysctl = timer_len_save = rng_timer_len;
- spin_unlock_bh (&rng_lock);
-
- rc = proc_dointvec (table, write, filp, buffer, lenp);
- if (rc)
- return rc;
-
- if (timer_len_save == rng_interval_sysctl)
- goto out;
-
- if ((rng_interval_sysctl > 0) &&
- (rng_interval_sysctl < (HZ*86400))) {
- spin_lock_bh (&rng_lock);
- rng_timer_len = rng_interval_sysctl;
- spin_unlock_bh (&rng_lock);
-
- printk (KERN_INFO PFX "timer interval now %d\n", rng_interval_sysctl);
- } else {
- printk (KERN_INFO PFX "ignoring invalid timer interval (%d)\n",
- rng_interval_sysctl);
- }
-
-out:
- DPRINTK ("EXIT, returning 0\n");
- return 0;
-}
-
-
-/*
- * rng_sysctl - add or remove the rng sysctl
- */
-static void rng_sysctl (int add)
-{
-#define DEV_I810_TIMER 1
-#define DEV_I810_ENTROPY 2
-#define DEV_I810_INTERVAL 3
-
- /* Definition of the sysctl */
- /* FIXME: use new field:value style of struct initialization */
- static ctl_table rng_sysctls[] = {
- {DEV_I810_TIMER, /* ID */
- RNG_MODULE_NAME "_timer", /* name in /proc */
- &rng_enabled_sysctl,
- sizeof (rng_enabled_sysctl), /* data ptr, data size */
- 0644, /* mode */
- 0, /* child */
- rng_handle_sysctl_enable, /* proc handler */
- 0, /* strategy */
- 0, /* proc control block */
- 0, 0}
- ,
- {DEV_I810_ENTROPY, /* ID */
- RNG_MODULE_NAME "_entropy", /* name in /proc */
- &rng_entropy_sysctl,
- sizeof (rng_entropy_sysctl), /* data ptr, data size */
- 0644, /* mode */
- 0, /* child */
- rng_handle_sysctl_entropy, /* proc handler */
- 0, /* strategy */
- 0, /* proc control block */
- 0, 0}
- ,
- {DEV_I810_INTERVAL, /* ID */
- RNG_MODULE_NAME "_interval", /* name in /proc */
- &rng_interval_sysctl,
- sizeof (rng_interval_sysctl), /* data ptr, data size */
- 0644, /* mode */
- 0, /* child */
- rng_handle_sysctl_interval, /* proc handler */
- 0, /* strategy */
- 0, /* proc control block */
- 0, 0}
- ,
- {0}
- };
-
- /* Define the parent file : /proc/sys/dev */
- static ctl_table sysctls_root[] = {
- {CTL_DEV,
- "dev",
- NULL, 0,
- 0555,
- rng_sysctls},
- {0}
- };
- static struct ctl_table_header *sysctls_root_header = NULL;
-
- if (add) {
- if (!sysctls_root_header)
- sysctls_root_header = register_sysctl_table (sysctls_root, 0);
- } else if (sysctls_root_header) {
- unregister_sysctl_table (sysctls_root_header);
- sysctls_root_header = NULL;
- }
-}
-
-
static int rng_dev_open (struct inode *inode, struct file *filp)
{
if ((filp->f_mode & FMODE_READ) == 0)
static ssize_t rng_dev_read (struct file *filp, char *buf, size_t size,
loff_t * offp)
{
+ static spinlock_t rng_lock = SPIN_LOCK_UNLOCKED;
int have_data;
u8 data = 0;
ssize_t ret = 0;
while (size) {
- spin_lock_bh (&rng_lock);
+ spin_lock (&rng_lock);
have_data = 0;
if (rng_data_present ()) {
have_data = 1;
}
- spin_unlock_bh (&rng_lock);
+ spin_unlock (&rng_lock);
if (have_data) {
if (put_user (data, buf++)) {
if (filp->f_flags & O_NONBLOCK)
return ret ? : -EAGAIN;
- if (current->need_resched)
- schedule ();
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1);
if (signal_pending (current))
return ret ? : -ERESTARTSYS;
DPRINTK ("ENTER\n");
- if (pci_enable_device (dev))
- return -EIO;
-
rc = misc_register (&rng_miscdev);
if (rc) {
printk (KERN_ERR PFX "cannot register misc device\n");
goto err_out_free_map;
}
- if (rng_entropy < 0 || rng_entropy > RNG_MAX_ENTROPY)
- rng_entropy = RNG_MAX_ENTROPY;
-
- /* init core RNG timer, but do not add it */
- init_timer (&rng_timer);
- rng_timer.function = rng_timer_tick;
-
/* turn RNG h/w off, if it's on */
rc = rng_enable (0);
if (rc) {
goto err_out_free_map;
}
- /* add sysctls */
- rng_sysctl (1);
-
DPRINTK ("EXIT, returning 0\n");
return 0;
* register a pci_driver, because someone else might one day
* want to register another driver on the same PCI id.
*/
-const static struct pci_device_id rng_pci_tbl[] __initdata = {
+static struct pci_device_id rng_pci_tbl[] __initdata = {
{ 0x8086, 0x2418, PCI_ANY_ID, PCI_ANY_ID, },
{ 0x8086, 0x2428, PCI_ANY_ID, PCI_ANY_ID, },
{ 0x8086, 0x1130, PCI_ANY_ID, PCI_ANY_ID, },
MODULE_DEVICE_TABLE (pci, rng_pci_tbl);
-MODULE_AUTHOR("Jeff Garzik, Matt Sottek");
+MODULE_AUTHOR("Jeff Garzik, Philipp Rumpf, Matt Sottek");
MODULE_DESCRIPTION("Intel i8xx chipset Random Number Generator (RNG) driver");
-MODULE_PARM(rng_entropy, "1i");
-MODULE_PARM_DESC(rng_entropy, "Bits of entropy to add to random pool per RNG byte (range: 0-8, default 8)");
/*
{
DPRINTK ("ENTER\n");
- assert (rng_timer_enabled == 0);
assert (rng_hw_enabled == 0);
misc_deregister (&rng_miscdev);
- rng_sysctl (0);
-
iounmap (rng_mem);
rng_pdev = NULL;
module_init (rng_init);
module_exit (rng_cleanup);
-
-
-
-
-/* These are the startup tests suggested by the FIPS 140-1 spec section
-* 4.11.1 (http://csrc.nist.gov/fips/fips1401.htm)
-* The Monobit, Poker, Runs, and Long Runs tests are implemented below.
-* This test is run at periodic intervals to verify
-* data is sufficiently random. If the tests are failed the RNG module
-* will no longer submit data to the entropy pool, but the tests will
-* continue to run at the given interval. If at a later time the RNG
-* passes all tests it will be re-enabled for the next period.
-* The reason for this is that it is not unlikely that at some time
-* during normal operation one of the tests will fail. This does not
-* necessarily mean the RNG is not operating properly; it is just a
-* statistically rare event. In that case we don't want to forever
-* disable the RNG, we will just leave it disabled for the period of
-* time until the tests are rerun and passed.
-*
-* For argument's sake, I tested /dev/urandom with these tests and it
-* took 142,095 tries before I got a failure, and urandom isn't as
-* random as random :)
-*/
-
-static int poker[16], runs[12];
-static int ones, rlength = -1, current_bit, rng_test;
-
-
-/*
- * rng_fips_test_store - store 8 bits of entropy in FIPS
- * internal test data pool
- */
-static void rng_fips_test_store (int rng_data)
-{
- int j;
- static int last_bit = 0;
-
- DPRINTK ("ENTER, rng_data = %d\n", rng_data);
-
- poker[rng_data >> 4]++;
- poker[rng_data & 15]++;
-
- /* Note in the loop below rlength is always one less than the actual
- run length. This makes things easier. */
- last_bit = (rng_data & 128) >> 7;
- for (j = 7; j >= 0; j--) {
- ones += current_bit = (rng_data & 1 << j) >> j;
- if (current_bit != last_bit) {
- /* If runlength is 1-6 count it in correct bucket. 0's go in
- runs[0-5] 1's go in runs[6-11] hence the 6*current_bit below */
- if (rlength < 5) {
- runs[rlength +
- (6 * current_bit)]++;
- } else {
- runs[5 + (6 * current_bit)]++;
- }
-
- /* Check if we just failed longrun test */
- if (rlength >= 33)
- rng_test |= 8;
- rlength = 0;
- /* flip the current run type */
- last_bit = current_bit;
- } else {
- rlength++;
- }
- }
-
- DPRINTK ("EXIT\n");
-}
-
-
-/*
- * now that we have some data, run a FIPS test
- */
-static void rng_run_fips_test (void)
-{
- int j, i;
-
- DPRINTK ("ENTER\n");
-
- /* add in the last (possibly incomplete) run */
- if (rlength < 5)
- runs[rlength + (6 * current_bit)]++;
- else {
- runs[5 + (6 * current_bit)]++;
- if (rlength >= 33)
- rng_test |= 8;
- }
- /* Ones test */
- if ((ones >= 10346) || (ones <= 9654))
- rng_test |= 1;
- /* Poker calcs */
- for (i = 0, j = 0; i < 16; i++)
- j += poker[i] * poker[i];
- if ((j >= 1580457) || (j <= 1562821))
- rng_test |= 2;
- if ((runs[0] < 2267) || (runs[0] > 2733) ||
- (runs[1] < 1079) || (runs[1] > 1421) ||
- (runs[2] < 502) || (runs[2] > 748) ||
- (runs[3] < 223) || (runs[3] > 402) ||
- (runs[4] < 90) || (runs[4] > 223) ||
- (runs[5] < 90) || (runs[5] > 223) ||
- (runs[6] < 2267) || (runs[6] > 2733) ||
- (runs[7] < 1079) || (runs[7] > 1421) ||
- (runs[8] < 502) || (runs[8] > 748) ||
- (runs[9] < 223) || (runs[9] > 402) ||
- (runs[10] < 90) || (runs[10] > 223) ||
- (runs[11] < 90) || (runs[11] > 223)) {
- rng_test |= 4;
- }
-
- rng_test = !rng_test;
- DPRINTK ("FIPS test %sed\n", rng_test ? "pass" : "fail");
-
- /* enable/disable RNG with results of the tests */
- if (rng_test && !rng_trusted)
- printk (KERN_WARNING PFX "FIPS test passed, enabling RNG\n");
- else if (!rng_test && rng_trusted)
- printk (KERN_WARNING PFX "FIPS test failed, disabling RNG\n");
-
- rng_trusted = rng_test;
-
- /* finally, clear out FIPS variables for start of next run */
- memset (poker, 0, sizeof (poker));
- memset (runs, 0, sizeof (runs));
- ones = 0;
- rlength = -1;
- current_bit = 0;
- rng_test = 0;
-
- DPRINTK ("EXIT\n");
-}
extern int ds1286_init(void);
extern int dsp56k_init(void);
extern int radio_init(void);
-extern int pc110pad_init(void);
extern int pmu_device_init(void);
extern int qpmouse_init(void);
extern int tosh_init(void);
#if defined CONFIG_82C710_MOUSE
qpmouse_init();
#endif
-#ifdef CONFIG_PC110_PAD
- pc110pad_init();
-#endif
#ifdef CONFIG_MVME16x
rtc_MK48T08_init();
#endif
#define ENABLE_PCI
#endif
-#define NEW_MODULES
-
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/major.h>
#define _INLINE_ inline
-#ifndef NEW_MODULES
-/*
- * NB. we must include the kernel identification string in order to install the module.
- */
-/*static*/ char kernel_version[] = UTS_RELEASE;
-#endif
-
static struct r_port *rp_table[MAX_RP_PORTS];
static struct tty_struct *rocket_table[MAX_RP_PORTS];
static unsigned int xmit_flags[NUM_BOARDS];
#include <linux/module.h>
#include <linux/kbd_kern.h>
-#if defined(CONFIG_X86) || defined(CONFIG_IA64) || defined(__alpha__) || defined(__mips__)
+#if defined(CONFIG_X86) || defined(CONFIG_IA64) || defined(__alpha__) || defined(__mips__) || defined(CONFIG_SPARC64)
static int x86_sysrq_alt = 0;
+#ifdef CONFIG_SPARC64
+static int sparc_l1_a_state = 0;
+extern void batten_down_hatches(void);
+#endif
static unsigned short x86_keycodes[256] =
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
return 0;
}
+#ifdef CONFIG_SPARC64
+ if (keycode == KEY_A && sparc_l1_a_state) {
+ sparc_l1_a_state = 0;
+ batten_down_hatches();
+ }
+#endif
+
if (x86_keycodes[keycode] & 0x100)
handle_scancode(0xe0, 1);
if (keycode == KEY_LEFTALT || keycode == KEY_RIGHTALT)
x86_sysrq_alt = down;
+#ifdef CONFIG_SPARC64
+ if (keycode == KEY_STOP)
+ sparc_l1_a_state = down;
+#endif
return 0;
}
for (i = 0; i < count; i++) {
- c = buffer[i];
+ if (get_user(c, &buffer[i]))
+ return -EFAULT;
if (c == mousedev_genius_seq[list->genseq]) {
if (++list->genseq == MOUSEDEV_GENIUS_LEN) {
/*
- * $Id: b1.c,v 1.20.6.1 2001/02/13 11:43:29 kai Exp $
+ * $Id: b1.c,v 1.20.6.3 2001/03/21 08:52:20 kai Exp $
*
* Common module for AVM B1 cards.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: b1.c,v $
+ * Revision 1.20.6.3 2001/03/21 08:52:20 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.20.6.2 2001/03/15 15:11:23 kai
+ * *** empty log message ***
+ *
* Revision 1.20.6.1 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include "capicmd.h"
#include "capiutil.h"
-static char *revision = "$Revision: 1.20.6.1 $";
+static char *revision = "$Revision: 1.20.6.3 $";
/* ------------------------------------------------------------- */
static int __init b1_init(void)
{
char *p;
- char rev[10];
+ char rev[32];
- if ((p = strchr(revision, ':'))) {
- strncpy(rev, p + 1, sizeof(rev));
- p = strchr(rev, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
strcpy(rev, "1.0");
/*
- * $Id: b1dma.c,v 1.11.6.1 2001/02/13 11:43:29 kai Exp $
+ * $Id: b1dma.c,v 1.11.6.3 2001/03/21 08:52:21 kai Exp $
*
* Common module for AVM B1 cards that support dma with AMCC
*
* (c) Copyright 2000 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: b1dma.c,v $
+ * Revision 1.11.6.3 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.11.6.2 2001/03/15 15:11:23 kai
+ * *** empty log message ***
+ *
* Revision 1.11.6.1 2001/02/13 11:43:29 kai
* more compatility changes for 2.2.19
*
#include "capicmd.h"
#include "capiutil.h"
-static char *revision = "$Revision: 1.11.6.1 $";
+static char *revision = "$Revision: 1.11.6.3 $";
/* ------------------------------------------------------------- */
int b1dma_init(void)
{
char *p;
- char rev[10];
+ char rev[32];
- if ((p = strchr(revision, ':'))) {
- strncpy(rev, p + 1, sizeof(rev));
- p = strchr(rev, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
strcpy(rev, "1.0");
/*
- * $Id: b1isa.c,v 1.10.6.2 2001/02/16 16:43:23 kai Exp $
+ * $Id: b1isa.c,v 1.10.6.4 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM B1 ISA-card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: b1isa.c,v $
+ * Revision 1.10.6.4 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.10.6.3 2001/03/15 15:11:23 kai
+ * *** empty log message ***
+ *
* Revision 1.10.6.2 2001/02/16 16:43:23 kai
* Changes from -ac16, little bug fixes, typos and the like
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.10.6.2 $";
+static char *revision = "$Revision: 1.10.6.4 $";
/* ------------------------------------------------------------- */
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
- }
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
+ }
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
/*
- * $Id: b1pci.c,v 1.29.6.1 2000/11/28 12:02:45 kai Exp $
+ * $Id: b1pci.c,v 1.29.6.2 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM B1 PCI-card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: b1pci.c,v $
+ * Revision 1.29.6.2 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
* Revision 1.29.6.1 2000/11/28 12:02:45 kai
* MODULE_DEVICE_TABLE for 2.4
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.29.6.1 $";
+static char *revision = "$Revision: 1.29.6.2 $";
/* ------------------------------------------------------------- */
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
+ }
#ifdef CONFIG_ISDN_DRV_AVMB1_B1PCIV4
- p = strchr(revision, ':');
- strncpy(driverv4->revision, p + 1, sizeof(driverv4->revision));
- p = strchr(driverv4->revision, '$');
- *p = 0;
-#endif
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driverv4->revision, p + 2, sizeof(driverv4->revision));
+ driverv4->revision[sizeof(driverv4->revision)-1] = 0;
+ if ((p = strchr(driverv4->revision, '$')) != 0 && p > driverv4->revision)
+ *(p-1) = 0;
}
+#endif
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
/*
- * $Id: b1pcmcia.c,v 1.12.6.2 2001/02/16 16:43:23 kai Exp $
+ * $Id: b1pcmcia.c,v 1.12.6.3 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM B1/M1/M2 PCMCIA-card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: b1pcmcia.c,v $
+ * Revision 1.12.6.3 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
* Revision 1.12.6.2 2001/02/16 16:43:23 kai
* Changes from -ac16, little bug fixes, typos and the like
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.12.6.2 $";
+static char *revision = "$Revision: 1.12.6.3 $";
/* ------------------------------------------------------------- */
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
}
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
/*
- * $Id: c4.c,v 1.20.6.3 2001/02/16 16:43:23 kai Exp $
+ * $Id: c4.c,v 1.20.6.5 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM C4 card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: c4.c,v $
+ * Revision 1.20.6.5 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.20.6.4 2001/03/15 15:11:23 kai
+ * *** empty log message ***
+ *
* Revision 1.20.6.3 2001/02/16 16:43:23 kai
* Changes from -ac16, little bug fixes, typos and the like
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.20.6.3 $";
+static char *revision = "$Revision: 1.20.6.5 $";
#undef CONFIG_C4_DEBUG
#undef CONFIG_C4_POLLDEBUG
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
}
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
/*
- * $Id: capi.c,v 1.44.6.5 2001/02/13 11:43:29 kai Exp $
+ * $Id: capi.c,v 1.44.6.8 2001/03/21 08:52:21 kai Exp $
*
* CAPI 2.0 Interface for Linux
*
* Copyright 1996 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: capi.c,v $
+ * Revision 1.44.6.8 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.44.6.7 2001/03/15 15:11:24 kai
+ * *** empty log message ***
+ *
+ * Revision 1.44.6.6 2001/03/13 16:17:07 kai
+ * spelling fixes from 2.4.3-pre
+ *
* Revision 1.44.6.5 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include "capifs.h"
#endif
-static char *revision = "$Revision: 1.44.6.5 $";
+static char *revision = "$Revision: 1.44.6.8 $";
MODULE_AUTHOR("Carsten Paeth (calle@calle.in-berlin.de)");
callback: lower_callback,
};
-static char rev[10];
+static char rev[32];
static int __init capi_init(void)
{
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strcpy(rev, p + 2);
- p = strchr(rev, '$');
- *(p-1) = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
- strcpy(rev, "???");
+ strcpy(rev, "1.0");
if (devfs_register_chrdev(capi_major, "capi20", &capi_fops)) {
printk(KERN_ERR "capi20: unable to get major %d\n", capi_major);
}
#endif
(void) detach_capi_interface(&cuser);
- printk(KERN_NOTICE "capi: Rev%s: unloaded\n", rev);
+ printk(KERN_NOTICE "capi: Rev %s: unloaded\n", rev);
}
module_init(capi_init);
/*
- * $Id: capidrv.c,v 1.39.6.2 2001/02/13 11:43:29 kai Exp $
+ * $Id: capidrv.c,v 1.39.6.4 2001/03/21 08:52:21 kai Exp $
*
* ISDN4Linux Driver, using capi20 interface (kernelcapi)
*
* Copyright 1997 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: capidrv.c,v $
+ * Revision 1.39.6.4 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.39.6.3 2001/03/13 16:17:07 kai
+ * spelling fixes from 2.4.3-pre
+ *
* Revision 1.39.6.2 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include "capicmd.h"
#include "capidrv.h"
-static char *revision = "$Revision: 1.39.6.2 $";
+static char *revision = "$Revision: 1.39.6.4 $";
static int debugmode = 0;
MODULE_AUTHOR("Carsten Paeth <calle@calle.in-berlin.de>");
{
struct capi_register_params rparam;
capi_profile profile;
- char rev[10];
+ char rev[32];
char *p;
__u32 ncontr, contr;
__u16 errcode;
return -EIO;
}
- if ((p = strchr(revision, ':'))) {
- strcpy(rev, p + 1);
- p = strchr(rev, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
- strcpy(rev, " ??? ");
+ strcpy(rev, "1.0");
rparam.level3cnt = -2; /* number of bchannels twice */
rparam.datablkcnt = 16;
}
proc_init();
- printk(KERN_NOTICE "capidrv: Rev%s: loaded\n", rev);
+ printk(KERN_NOTICE "capidrv: Rev %s: loaded\n", rev);
MOD_DEC_USE_COUNT;
return 0;
/*
- * $Id: capifs.c,v 1.14.6.3 2001/02/13 11:43:29 kai Exp $
+ * $Id: capifs.c,v 1.14.6.5 2001/03/21 08:52:21 kai Exp $
*
* (c) Copyright 2000 by Carsten Paeth (calle@calle.de)
*
* Heavily based on devpts filesystem from H. Peter Anvin
*
* $Log: capifs.c,v $
+ * Revision 1.14.6.5 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.14.6.4 2001/03/15 15:11:24 kai
+ * *** empty log message ***
+ *
* Revision 1.14.6.3 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
MODULE_AUTHOR("Carsten Paeth <calle@calle.de>");
-static char *revision = "$Revision: 1.14.6.3 $";
+static char *revision = "$Revision: 1.14.6.5 $";
struct capifs_ncci {
struct inode *inode;
static int __init capifs_init(void)
{
- char rev[10];
+ char rev[32];
char *p;
int err;
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strcpy(rev, p + 1);
- p = strchr(rev, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
strcpy(rev, "1.0");
return err;
}
#ifdef MODULE
- printk(KERN_NOTICE "capifs: Rev%s: loaded\n", rev);
+ printk(KERN_NOTICE "capifs: Rev %s: loaded\n", rev);
#else
- printk(KERN_NOTICE "capifs: Rev%s: started\n", rev);
+ printk(KERN_NOTICE "capifs: Rev %s: started\n", rev);
#endif
MOD_DEC_USE_COUNT;
return 0;
/*
- * $Id: capiutil.c,v 1.13.6.1 2001/02/13 11:43:29 kai Exp $
+ * $Id: capiutil.c,v 1.13.6.2 2001/03/15 15:11:24 kai Exp $
*
* CAPI 2.0 convert capi message to capi message struct
*
* Rewritten for Linux 1996 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: capiutil.c,v $
+ * Revision 1.13.6.2 2001/03/15 15:11:24 kai
+ * *** empty log message ***
+ *
* Revision 1.13.6.1 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include <linux/init.h>
#include <asm/segment.h>
#include <linux/config.h>
-
#include "capiutil.h"
/* from CAPI2.0 DDK AVM Berlin GmbH */
/*
- * $Id: kcapi.c,v 1.21.6.2 2001/02/13 11:43:29 kai Exp $
+ * $Id: kcapi.c,v 1.21.6.5 2001/03/21 08:52:21 kai Exp $
*
* Kernel CAPI 2.0 Module
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: kcapi.c,v $
+ * Revision 1.21.6.5 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.21.6.4 2001/03/15 15:11:24 kai
+ * *** empty log message ***
+ *
+ * Revision 1.21.6.3 2001/03/13 16:17:08 kai
+ * spelling fixes from 2.4.3-pre
+ *
* Revision 1.21.6.2 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include <linux/b1lli.h>
#endif
-static char *revision = "$Revision: 1.21.6.2 $";
+static char *revision = "$Revision: 1.21.6.5 $";
/* ------------------------------------------------------------- */
static int __init kcapi_init(void)
{
char *p;
- char rev[10];
+ char rev[32];
MOD_INC_USE_COUNT;
proc_capi_init();
- if ((p = strchr(revision, ':'))) {
- strcpy(rev, p + 1);
- p = strchr(rev, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(rev, p + 2, sizeof(rev));
+ rev[sizeof(rev)-1] = 0;
+ if ((p = strchr(rev, '$')) != 0 && p > rev)
+ *(p-1) = 0;
} else
strcpy(rev, "1.0");
#ifdef MODULE
- printk(KERN_NOTICE "CAPI-driver Rev%s: loaded\n", rev);
+ printk(KERN_NOTICE "CAPI-driver Rev %s: loaded\n", rev);
#else
- printk(KERN_NOTICE "CAPI-driver Rev%s: started\n", rev);
+ printk(KERN_NOTICE "CAPI-driver Rev %s: started\n", rev);
#endif
MOD_DEC_USE_COUNT;
return 0;
/*
- * $Id: t1isa.c,v 1.16.6.2 2001/02/16 16:43:24 kai Exp $
+ * $Id: t1isa.c,v 1.16.6.4 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM T1 HEMA-card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: t1isa.c,v $
+ * Revision 1.16.6.4 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
+ * Revision 1.16.6.3 2001/03/15 15:11:24 kai
+ * *** empty log message ***
+ *
* Revision 1.16.6.2 2001/02/16 16:43:24 kai
* Changes from -ac16, little bug fixes, typos and the like
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.16.6.2 $";
+static char *revision = "$Revision: 1.16.6.4 $";
/* ------------------------------------------------------------- */
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
}
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
/*
- * $Id: t1pci.c,v 1.13.6.2 2001/02/13 11:43:29 kai Exp $
+ * $Id: t1pci.c,v 1.13.6.3 2001/03/21 08:52:21 kai Exp $
*
* Module for AVM T1 PCI-card.
*
* (c) Copyright 1999 by Carsten Paeth (calle@calle.in-berlin.de)
*
* $Log: t1pci.c,v $
+ * Revision 1.13.6.3 2001/03/21 08:52:21 kai
+ * merge from main branch: fix buffer for revision string (calle)
+ *
* Revision 1.13.6.2 2001/02/13 11:43:29 kai
 * more compatibility changes for 2.2.19
*
#include "capilli.h"
#include "avmcard.h"
-static char *revision = "$Revision: 1.13.6.2 $";
+static char *revision = "$Revision: 1.13.6.3 $";
#undef CONFIG_T1PCI_DEBUG
#undef CONFIG_T1PCI_POLLDEBUG
MOD_INC_USE_COUNT;
- if ((p = strchr(revision, ':'))) {
- strncpy(driver->revision, p + 1, sizeof(driver->revision));
- p = strchr(driver->revision, '$');
- *p = 0;
+ if ((p = strchr(revision, ':')) != 0 && p[1]) {
+ strncpy(driver->revision, p + 2, sizeof(driver->revision));
+ driver->revision[sizeof(driver->revision)-1] = 0;
+ if ((p = strchr(driver->revision, '$')) != 0 && p > driver->revision)
+ *(p-1) = 0;
}
printk(KERN_INFO "%s: revision %s\n", driver->name, driver->revision);
======================================================================*/
-#include <pcmcia/config.h>
-#include <pcmcia/k_compat.h>
-
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
/* 3c509.c: A 3c509 EtherLink3 ethernet driver for linux. */
/*
- Written 1993-1998 by Donald Becker.
+ Written 1993-2000 by Donald Becker.
- Copyright 1994-1998 by Donald Becker.
+ Copyright 1994-2000 by Donald Becker.
Copyright 1993 United States Government as represented by the
Director, National Security Agency. This software may be used and
distributed according to the terms of the GNU General Public License,
v1.14 10/15/97 Avoided waiting..discard message for fast machines -djb
v1.15 1/31/98 Faster recovery for Tx errors. -djb
v1.16 2/3/98 Different ID port handling to avoid sound cards. -djb
+ v1.18 12Mar2001 Andrew Morton <andrewm@uow.edu.au>
+ - Avoid bogus detect of 3c590's (Andrzej Krzysztofowicz)
+ - Reviewed against 1.18 from scyld.com
*/
-static char *version = "3c509.c:1.16 (2.2) 2/3/98 becker@cesdis.gsfc.nasa.gov.\n";
/* A few values that may be tweaked. */
/* Time in jiffies before concluding the transmitter is hung. */
#include <linux/in.h>
#include <linux/slab.h>
#include <linux/ioport.h>
+#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <asm/io.h>
#include <asm/irq.h>
+static char versionA[] __initdata = "3c509.c:1.18 12Mar2001 becker@scyld.com\n";
+static char versionB[] __initdata = "http://www.scyld.com/network/3c509.html\n";
+
#ifdef EL3_DEBUG
static int el3_debug = EL3_DEBUG;
#else
struct sk_buff *queue[SKB_QUEUE_SIZE];
char mca_slot;
};
-static int id_port = 0x110; /* Start with 0x110 to avoid new sound cards.*/
+static int id_port __initdata = 0x110; /* Start with 0x110 to avoid new sound cards.*/
static struct net_device *el3_root_dev = NULL;
static ushort id_read_eeprom(int index);
int id;
};
-static struct el3_mca_adapters_struct el3_mca_adapters[] = {
+static struct el3_mca_adapters_struct el3_mca_adapters[] __initdata = {
{ "3Com 3c529 EtherLink III (10base2)", 0x627c },
{ "3Com 3c529 EtherLink III (10baseT)", 0x627d },
{ "3Com 3c529 EtherLink III (test mode)", 0x62db },
#endif /* CONFIG_MCA */
#ifdef CONFIG_ISAPNP
-static struct isapnp_device_id el3_isapnp_adapters[] = {
+static struct isapnp_device_id el3_isapnp_adapters[] __initdata = {
{ ISAPNP_ANY_ID, ISAPNP_ANY_ID,
ISAPNP_VENDOR('T', 'C', 'M'), ISAPNP_FUNCTION(0x5090),
(long) "3Com Etherlink III (TP)" },
static int nopnp;
#endif /* CONFIG_ISAPNP */
-int el3_probe(struct net_device *dev)
+int __init el3_probe(struct net_device *dev)
{
struct el3_private *lp;
short lrs_state = 0xff, i;
if (EISA_bus) {
static int eisa_addr = 0x1000;
while (eisa_addr < 0x9000) {
+ int device_id;
+
ioaddr = eisa_addr;
eisa_addr += 0x1000;
if (inw(ioaddr + 0xC80) != 0x6d50)
continue;
+ /* Avoid conflict with 3c590, 3c592, 3c597, etc */
+ device_id = (inb(ioaddr + 0xC82)<<8) + inb(ioaddr + 0xC83);
+ if ((device_id & 0xFF00) == 0x5900) {
+ continue;
+ }
+
/* Change the register set to the configuration window 0. */
outw(SelectWindow | 0, ioaddr + 0xC80 + EL3_CMD);
{
const char *if_names[] = {"10baseT", "AUI", "undefined", "BNC"};
- printk("%s: 3c509 at %#3.3lx, %s port, address ",
+ printk("%s: 3c5x9 at %#3.3lx, %s port, address ",
dev->name, dev->base_addr, if_names[dev->if_port]);
}
el3_root_dev = dev;
if (el3_debug > 0)
- printk(version);
+ printk(KERN_INFO "%s" KERN_INFO "%s", versionA, versionB);
/* The EL3-specific entries in the device structure. */
dev->open = &el3_open;
/* Read a word from the EEPROM using the regular EEPROM access register.
Assume that we are in register window zero.
*/
-static ushort read_eeprom(int ioaddr, int index)
+static ushort __init read_eeprom(int ioaddr, int index)
{
outw(EEPROM_READ + index, ioaddr + 10);
/* Pause for at least 162 us. for the read to take place. */
}
/* Read a word from the EEPROM when in the ISA ID probe state. */
-static ushort id_read_eeprom(int index)
+static ushort __init id_read_eeprom(int index)
{
int bit, word = 0;
problem by having an MMIO register write be immediately followed by
an MMIO register read.
-2) The RTL-8129 is only supported in Donald Becker's rtl8139 driver.
-
*/
#include <linux/config.h>
#include <asm/io.h>
-#define RTL8139_VERSION "0.9.15"
+#define RTL8139_VERSION "0.9.15c"
#define MODNAME "8139too"
#define RTL8139_DRIVER_NAME MODNAME " Fast Ethernet driver " RTL8139_VERSION
#define PFX MODNAME ": "
{0x1500, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DELTA8139 },
{0x4033, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ADDTRON8139 },
{0x1186, 0x1300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DFE538TX },
+
+#ifdef CONFIG_8139TOO_8129
{0x10ec, 0x8129, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8129 },
+#endif
/* some crazy cards report invalid vendor ids like
* 0x0001 here. The other ids are valid and constant,
Cfg1_VPD_Enable = 0x02,
Cfg1_PIO = 0x04,
Cfg1_MMIO = 0x08,
- Cfg1_LWAKE = 0x10,
+ LWAKE = 0x10, /* not on 8139, 8139A */
Cfg1_Driver_Load = 0x20,
Cfg1_LED0 = 0x40,
Cfg1_LED1 = 0x80,
+ SLEEP = (1 << 1), /* only on 8139, 8139A */
+ PWRDN = (1 << 0), /* only on 8139, 8139A */
+};
+
+/* Bits in Config4 */
+enum Config4Bits {
+ LWPTN = (1 << 2), /* not on 8139, 8139A */
};
enum RxConfigBits {
CH_8139C,
} chip_t;
+enum chip_flags {
+ HasPwrDn = (1 << 0),
+ HasLWake = (1 << 1),
+};
+
/* directly indexed by chip_t, above */
const static struct {
const char *name;
u8 version; /* from RTL8139C docs */
u32 RxConfigMask; /* should clear the bits supported by this chip */
+ u32 flags;
} rtl_chip_info[] = {
{ "RTL-8139",
0x40,
0xf0fe0040, /* XXX copied from RTL8139A, verify */
+ HasPwrDn,
},
{ "RTL-8139 rev K",
0x60,
0xf0fe0040,
+ HasPwrDn,
},
{ "RTL-8139A",
0x70,
0xf0fe0040,
+ 0,
},
{ "RTL-8139B",
0x78,
- 0xf0fc0040
+ 0xf0fc0040,
+ HasLWake,
},
{ "RTL-8130",
0x7C,
0xf0fe0040, /* XXX copied from RTL8139A, verify */
+ HasLWake,
},
{ "RTL-8139C",
0x74,
0xf0fc0040, /* XXX copied from RTL8139B, verify */
+ HasLWake,
},
};
(RX_DMA_BURST << RxCfgDMAShift);
+static void __rtl8139_cleanup_dev (struct net_device *dev)
+{
+ struct rtl8139_private *tp;
+ struct pci_dev *pdev;
+
+ assert (dev != NULL);
+ assert (dev->priv != NULL);
+
+ tp = dev->priv;
+ assert (tp->pci_dev != NULL);
+ pdev = tp->pci_dev;
+
+#ifndef USE_IO_OPS
+ if (tp->mmio_addr)
+ iounmap (tp->mmio_addr);
+#endif /* !USE_IO_OPS */
+
+ /* it's ok to call this even if we have no regions to free */
+ pci_release_regions (pdev);
+
+#ifndef RTL8139_NDEBUG
+ /* poison memory before freeing */
+ memset (dev, 0xBC,
+ sizeof (struct net_device) +
+ sizeof (struct rtl8139_private));
+#endif /* RTL8139_NDEBUG */
+
+ kfree (dev);
+
+ pci_set_drvdata (pdev, NULL);
+}
+
+
static int __devinit rtl8139_init_board (struct pci_dev *pdev,
- struct net_device **dev_out,
- void **ioaddr_out)
+ struct net_device **dev_out)
{
- void *ioaddr = NULL;
+ void *ioaddr;
struct net_device *dev;
struct rtl8139_private *tp;
u8 tmp8;
DPRINTK ("ENTER\n");
assert (pdev != NULL);
- assert (ioaddr_out != NULL);
- *ioaddr_out = NULL;
*dev_out = NULL;
- /* dev zeroed in init_etherdev */
- dev = init_etherdev (NULL, sizeof (*tp));
+ /* dev and dev->priv zeroed in alloc_etherdev */
+ dev = alloc_etherdev (sizeof (*tp));
if (dev == NULL) {
- printk (KERN_ERR PFX "unable to alloc new ethernet\n");
+ printk (KERN_ERR PFX "%s: Unable to alloc new net device\n", pdev->slot_name);
DPRINTK ("EXIT, returning -ENOMEM\n");
return -ENOMEM;
}
SET_MODULE_OWNER(dev);
tp = dev->priv;
+ tp->pci_dev = pdev;
/* enable device (incl. PCI PM wakeup and hotplug setup) */
rc = pci_enable_device (pdev);
/* make sure PCI base addr 0 is PIO */
if (!(pio_flags & IORESOURCE_IO)) {
- printk (KERN_ERR PFX "region #0 not a PIO resource, aborting\n");
+ printk (KERN_ERR PFX "%s: region #0 not a PIO resource, aborting\n", pdev->slot_name);
rc = -ENODEV;
goto err_out;
}
/* make sure PCI base addr 1 is MMIO */
if (!(mmio_flags & IORESOURCE_MEM)) {
- printk (KERN_ERR PFX "region #1 not an MMIO resource, aborting\n");
+ printk (KERN_ERR PFX "%s: region #1 not an MMIO resource, aborting\n", pdev->slot_name);
rc = -ENODEV;
goto err_out;
}
/* check for weird/broken PCI region reporting */
if ((pio_len < RTL_MIN_IO_SIZE) ||
(mmio_len < RTL_MIN_IO_SIZE)) {
- printk (KERN_ERR PFX "Invalid PCI region size(s), aborting\n");
+ printk (KERN_ERR PFX "%s: Invalid PCI region size(s), aborting\n", pdev->slot_name);
rc = -ENODEV;
goto err_out;
}
- rc = pci_request_regions (pdev, dev->name);
+ rc = pci_request_regions (pdev, "8139too");
if (rc)
goto err_out;
#ifdef USE_IO_OPS
ioaddr = (void *) pio_start;
+ dev->base_addr = pio_start;
#else
/* ioremap MMIO region */
ioaddr = ioremap (mmio_start, mmio_len);
if (ioaddr == NULL) {
- printk (KERN_ERR PFX "cannot remap MMIO, aborting\n");
+ printk (KERN_ERR PFX "%s: cannot remap MMIO, aborting\n", pdev->slot_name);
rc = -EIO;
- goto err_out_free_res;
+ goto err_out;
}
+ dev->base_addr = (long) ioaddr;
+ tp->mmio_addr = ioaddr;
#endif /* USE_IO_OPS */
+ /* Bring the chip out of low-power mode. */
+ if (rtl_chip_info[tp->chipset].flags & HasPwrDn) {
+ tmp8 = RTL_R8 (Config1);
+ if (tmp8 & (SLEEP|PWRDN)) {
+ RTL_W8_F (Cfg9346, Cfg9346_Unlock);
+ RTL_W8 (Config1, tmp8 & ~(SLEEP|PWRDN));
+ RTL_W8_F (Cfg9346, Cfg9346_Lock);
+ }
+ } else {
+ u8 new_tmp8 = tmp8 = RTL_R8 (Config1);
+ if ((rtl_chip_info[tp->chipset].flags & HasLWake) &&
+ (tmp8 & LWAKE))
+ new_tmp8 &= ~LWAKE;
+ new_tmp8 |= Cfg1_PM_Enable;
+ if (new_tmp8 != tmp8) {
+ RTL_W8_F (Cfg9346, Cfg9346_Unlock);
+ RTL_W8 (Config1, tmp8);
+ RTL_W8_F (Cfg9346, Cfg9346_Lock);
+ }
+ if (rtl_chip_info[tp->chipset].flags & HasLWake) {
+ tmp8 = RTL_R8 (Config4);
+ if (tmp8 & LWPTN)
+ RTL_W8 (Config4, tmp8 & ~LWPTN);
+ }
+ }
+
/* Soft reset the chip. */
RTL_W8 (ChipCmd, (RTL_R8 (ChipCmd) & ChipCmdClear) | CmdReset);
/* Check that the chip has finished the reset. */
- for (i = 1000; i > 0; i--)
+ for (i = 1000; i > 0; i--) {
+ barrier();
+ udelay (10);
if ((RTL_R8 (ChipCmd) & CmdReset) == 0)
break;
- else
- udelay (10);
-
- /* Bring the chip out of low-power mode. */
- if (tp->chipset == CH_8139B) {
- RTL_W8 (Config1, RTL_R8 (Config1) & ~(1<<4));
- RTL_W8 (Config4, RTL_R8 (Config4) & ~(1<<2));
- } else {
- /* handle RTL8139A and RTL8139 cases */
- /* XXX from becker driver. is this right?? */
- RTL_W8 (Config1, 0);
}
/* make sure chip thinks PIO and MMIO are enabled */
tmp8 = RTL_R8 (Config1);
if ((tmp8 & Cfg1_PIO) == 0) {
- printk (KERN_ERR PFX "PIO not enabled, Cfg1=%02X, aborting\n", tmp8);
+ printk (KERN_ERR PFX "%s: PIO not enabled, Cfg1=%02X, aborting\n",
+ pdev->slot_name, tmp8);
rc = -EIO;
- goto err_out_iounmap;
+ goto err_out;
}
if ((tmp8 & Cfg1_MMIO) == 0) {
- printk (KERN_ERR PFX "MMIO not enabled, Cfg1=%02X, aborting\n", tmp8);
+ printk (KERN_ERR PFX "%s: MMIO not enabled, Cfg1=%02X, aborting\n",
+ pdev->slot_name, tmp8);
rc = -EIO;
- goto err_out_iounmap;
+ goto err_out;
}
/* identify chip attached to board */
}
/* if unknown chip, assume array element #0, original RTL-8139 in this case */
- printk (KERN_DEBUG PFX "PCI device %s: unknown chip version, assuming RTL-8139\n",
+ printk (KERN_DEBUG PFX "%s: unknown chip version, assuming RTL-8139\n",
pdev->slot_name);
- printk (KERN_DEBUG PFX "PCI device %s: TxConfig = 0x%lx\n", pdev->slot_name, RTL_R32 (TxConfig));
+ printk (KERN_DEBUG PFX "%s: TxConfig = 0x%lx\n", pdev->slot_name, RTL_R32 (TxConfig));
tp->chipset = 0;
match:
rtl_chip_info[tp->chipset].name);
DPRINTK ("EXIT, returning 0\n");
- *ioaddr_out = ioaddr;
*dev_out = dev;
return 0;
-err_out_iounmap:
- assert (ioaddr > 0);
-#ifndef USE_IO_OPS
- iounmap (ioaddr);
-err_out_free_res:
-#endif /* !USE_IO_OPS */
- pci_release_regions (pdev);
err_out:
- unregister_netdev (dev);
- kfree (dev);
+ __rtl8139_cleanup_dev (dev);
DPRINTK ("EXIT, returning %d\n", rc);
return rc;
}
struct net_device *dev = NULL;
struct rtl8139_private *tp;
int i, addr_len, option;
- void *ioaddr = NULL;
+ void *ioaddr;
static int board_idx = -1;
static int printed_version;
- u8 tmp;
DPRINTK ("ENTER\n");
printed_version = 1;
}
- i = rtl8139_init_board (pdev, &dev, &ioaddr);
+ i = rtl8139_init_board (pdev, &dev);
if (i < 0) {
DPRINTK ("EXIT, returning %d\n", i);
return i;
}
tp = dev->priv;
+ ioaddr = tp->mmio_addr;
assert (ioaddr != NULL);
assert (dev != NULL);
dev->watchdog_timeo = TX_TIMEOUT;
dev->irq = pdev->irq;
- dev->base_addr = (unsigned long) ioaddr;
/* dev->priv/tp zeroed and aligned in init_etherdev */
tp = dev->priv;
/* note: tp->chipset set in rtl8139_init_board */
tp->drv_flags = board_info[ent->driver_data].hw_flags;
- tp->pci_dev = pdev;
tp->mmio_addr = ioaddr;
spin_lock_init (&tp->lock);
init_waitqueue_head (&tp->thr_wait);
init_MUTEX_LOCKED (&tp->thr_exited);
- pci_set_drvdata(pdev, dev);
+ /* dev is fully set up and ready to use now */
+ DPRINTK("about to register device named %s (%p)...\n", dev->name, dev);
+ i = register_netdev (dev);
+ if (i) goto err_out;
+
+ pci_set_drvdata (pdev, dev);
printk (KERN_INFO "%s: %s at 0x%lx, "
"%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
/* Find the connected MII xcvrs.
Doing this in open() would allow detecting external xcvrs later, but
takes too much time. */
+#ifdef CONFIG_8139TOO_8129
if (tp->drv_flags & HAS_MII_XCVR) {
int phy, phy_idx = 0;
for (phy = 0; phy < 32 && phy_idx < sizeof(tp->phys); phy++) {
tp->phys[0] = 32;
}
} else
+#endif
tp->phys[0] = 32;
/* Put the chip into low-power mode. */
- RTL_W8_F (Cfg9346, Cfg9346_Unlock);
-
- tmp = RTL_R8 (Config1) & Config1Clear;
- tmp |= (tp->chipset == CH_8139B) ? 3 : 1; /* Enable PM/VPD */
- RTL_W8_F (Config1, tmp);
-
- RTL_W8_F (HltClk, 'H'); /* 'R' would leave the clock running. */
+ if (rtl_chip_info[tp->chipset].flags & HasPwrDn) {
+ RTL_W8_F (Cfg9346, Cfg9346_Unlock);
+ RTL_W8_F (Config1, RTL_R8 (Config1) | PWRDN);
+ RTL_W8_F (HltClk, 'H'); /* 'R' would leave the clock running. */
+ RTL_W8_F (Cfg9346, Cfg9346_Lock);
+ }
/* The lower four bits are the media type. */
option = (board_idx >= MAX_UNITS) ? 0 : media[board_idx];
DPRINTK ("EXIT - returning 0\n");
return 0;
+
+err_out:
+ __rtl8139_cleanup_dev (dev);
+ DPRINTK ("EXIT - returning %d\n", i);
+ return i;
}
DPRINTK ("ENTER\n");
assert (dev != NULL);
-
- np = (struct rtl8139_private *) (dev->priv);
+ np = dev->priv;
assert (np != NULL);
unregister_netdev (dev);
-#ifndef USE_IO_OPS
- iounmap (np->mmio_addr);
-#endif /* !USE_IO_OPS */
-
- pci_release_regions (pdev);
-
-#ifndef RTL8139_NDEBUG
- /* poison memory before freeing */
- memset (dev, 0xBC,
- sizeof (struct net_device) +
- sizeof (struct rtl8139_private));
-#endif /* RTL8139_NDEBUG */
-
- kfree (dev);
-
- pci_set_drvdata (pdev, NULL);
+ __rtl8139_cleanup_dev (dev);
DPRINTK ("EXIT\n");
}
return location < 8 && mii_2_8139_map[location] ?
readw (tp->mmio_addr + mii_2_8139_map[location]) : 0;
}
+
+#ifdef CONFIG_8139TOO_8129
mdio_sync (mdio_addr);
/* Shift the read command bits out. */
for (i = 15; i >= 0; i--) {
writeb (MDIO_CLK, mdio_addr);
mdio_delay (mdio_addr);
}
+#endif
DPRINTK ("EXIT, returning %d\n", (retval >> 1) & 0xffff);
return (retval >> 1) & 0xffff;
RTL_W16_F (mii_2_8139_map[location], value);
return;
}
+
+#ifdef CONFIG_8139TOO_8129
mdio_sync (mdio_addr);
/* Shift the command bits out. */
writeb (MDIO_CLK, mdio_addr);
mdio_delay (mdio_addr);
}
- return;
+#endif
}
tp->full_duplex ? "full" : "half", mii_reg5);
}
- if (tp->chipset >= CH_8139A) {
- tmp = RTL_R8 (Config1) & Config1Clear;
- tmp |= Cfg1_Driver_Load;
- tmp |= (tp->chipset == CH_8139B) ? 3 : 1; /* Enable PM/VPD */
- RTL_W8_F (Config1, tmp);
- } else {
- u8 foo = RTL_R8 (Config1) & Config1Clear;
- RTL_W8 (Config1, tp->full_duplex ? (foo|0x60) : (foo|0x20));
- }
+ RTL_W8 (Config1, RTL_R8 (Config1) | Cfg1_Driver_Load);
if (tp->chipset >= CH_8139B) {
tmp = RTL_R8 (Config4) & ~(1<<2);
RTL_W8 (Config4, tmp);
/* disable magic packet scanning, which is enabled
- * when PM is enabled above (Config1) */
+ * when PM is enabled in Config1 */
RTL_W8 (Config3, RTL_R8 (Config3) & ~(1<<5));
}
" partner ability of %4.4x.\n", dev->name,
tp->full_duplex ? "full" : "half",
tp->phys[0], mii_reg5);
+#if 0
RTL_W8 (Cfg9346, Cfg9346_Unlock);
RTL_W8 (Config1, tp->full_duplex ? 0x60 : 0x20);
RTL_W8 (Cfg9346, Cfg9346_Lock);
+#endif
}
}
|| tp->duplex_lock;
if (tp->full_duplex != duplex) {
tp->full_duplex = duplex;
+#if 0
RTL_W8 (Cfg9346, Cfg9346_Unlock);
RTL_W8 (Config1, tp->full_duplex ? 0x60 : 0x20);
RTL_W8 (Cfg9346, Cfg9346_Lock);
+#endif
}
status &= ~RxUnderrun;
}
rtl8139_weird_interrupt (dev, tp, ioaddr,
status, link_changed);
- if (status & (RxOK | RxUnderrun | RxOverflow | RxFIFOOver)) /* Rx interrupt */
+ if (netif_running (dev) &&
+ status & (RxOK | RxUnderrun | RxOverflow | RxFIFOOver)) /* Rx interrupt */
rtl8139_rx_interrupt (dev, tp, ioaddr);
- if (status & (TxOK | TxErr)) {
+ if (netif_running (dev) &&
+ status & (TxOK | TxErr)) {
spin_lock (&tp->lock);
rtl8139_tx_interrupt (dev, tp, ioaddr);
spin_unlock (&tp->lock);
/* Green! Put the chip in low-power mode. */
RTL_W8 (Cfg9346, Cfg9346_Unlock);
- RTL_W8 (Config1, 0x03);
- RTL_W8 (HltClk, 'H'); /* 'R' would leave the clock running. */
+
+ if (rtl_chip_info[tp->chipset].flags & HasPwrDn) {
+ RTL_W8 (Config1, 0x03);
+ RTL_W8 (HltClk, 'H'); /* 'R' would leave the clock running. */
+ }
DPRINTK ("EXIT\n");
return 0;
void *ioaddr = tp->mmio_addr;
unsigned long flags;
+ if (!netif_running (dev))
+ return;
+
netif_device_detach (dev);
spin_lock_irqsave (&tp->lock, flags);
{
struct net_device *dev = pci_get_drvdata (pdev);
+ if (!netif_running (dev))
+ return;
netif_device_attach (dev);
rtl8139_hw_start (dev);
}
dep_tristate ' Novell/Eagle/Microdyne NE3210 EISA support (EXPERIMENTAL)' CONFIG_NE3210 $CONFIG_EISA $CONFIG_EXPERIMENTAL
dep_tristate ' Racal-Interlan EISA ES3210 support (EXPERIMENTAL)' CONFIG_ES3210 $CONFIG_EISA $CONFIG_EXPERIMENTAL
dep_tristate ' RealTek RTL-8139 PCI Fast Ethernet Adapter support' CONFIG_8139TOO $CONFIG_PCI
+ dep_mbool ' Use PIO instead of MMIO' CONFIG_8139TOO_PIO $CONFIG_8139TOO
+ dep_mbool ' Support for automatic channel equalization (EXPERIMENTAL)' CONFIG_8139TOO_TUNE_TWISTER $CONFIG_8139TOO $CONFIG_EXPERIMENTAL
+ dep_mbool ' Support for older RTL-8129/8130 boards' CONFIG_8139TOO_8129 $CONFIG_8139TOO
dep_tristate ' SiS 900/7016 PCI Fast Ethernet Adapter support' CONFIG_SIS900 $CONFIG_PCI
dep_tristate ' SMC EtherPower II' CONFIG_EPIC100 $CONFIG_PCI
dep_tristate ' Sundance Alta support' CONFIG_SUNDANCE $CONFIG_PCI
obj-$(CONFIG_SUNQE) += sunqe.o
obj-$(CONFIG_SUNBMAC) += sunbmac.o
obj-$(CONFIG_MYRI_SBUS) += myri_sbus.o
+obj-$(CONFIG_SUNGEM) += sungem.o
obj-$(CONFIG_MACE) += mace.o
obj-$(CONFIG_BMAC) += bmac.o
request_region(isa_ioaddr, AIRONET4X00_IO_SIZE, "aironet4x00 ioaddr");
if (!dev) {
- dev = init_etherdev(dev, 0 );
+ dev = init_etherdev(NULL, 0);
+ if (!dev) {
+ release_region(isa_ioaddr, AIRONET4X00_IO_SIZE);
+ isapnp_cfg_begin(logdev->PNP_BUS->PNP_BUS_NUMBER,
+ logdev->PNP_DEV_NUMBER);
+ isapnp_deactivate(logdev->PNP_DEV_NUMBER);
+ isapnp_cfg_end();
+ return -ENOMEM;
+ }
}
dev->priv = kmalloc(sizeof(struct awc_private),GFP_KERNEL );
memset(dev->priv,0,sizeof(struct awc_private));
printk(KERN_WARNING " Use aironet4500_pnp if any problems(i.e. card malfunctioning). \n");
printk(KERN_WARNING " Note that this isa probe is not friendly... must give exact parameters \n");
- while (irq[card] !=0){
+ while (irq[card] != 0){
isa_ioaddr = io[card];
isa_irq_line = irq[card];
request_region(isa_ioaddr, AIRONET4X00_IO_SIZE, "aironet4x00 ioaddr");
if (!dev) {
- dev = init_etherdev(dev, 0 );
+ dev = init_etherdev(NULL, 0);
+ if (!dev) {
+ release_region(isa_ioaddr, AIRONET4X00_IO_SIZE);
+ return (card == 0) ? -ENOMEM : 0;
+ }
}
dev->priv = kmalloc(sizeof(struct awc_private),GFP_KERNEL );
memset(dev->priv,0,sizeof(struct awc_private));
static void __devinit dfx_bus_init(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
u8 val; /* used for I/O read/writes */
DBG_printk("In dfx_bus_init...\n");
static int __devinit dfx_driver_init(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
int alloc_size; /* total buffer size needed */
char *top_v, *curr_v; /* virtual addrs into memory block */
u32 top_p, curr_p; /* physical addrs into memory block */
static int dfx_open(struct net_device *dev)
{
int ret;
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
DBG_printk("In dfx_open...\n");
static int dfx_close(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
DBG_printk("In dfx_close...\n");
static void dfx_int_common(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *) dev->priv;
+ DFX_board_t *bp = dev->priv;
PI_UINT32 port_status; /* Port Status register */
/* Process xmt interrupts - frequent case, so always call this routine */
static void dfx_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
- struct net_device *dev = (struct net_device *) dev_id;
+ struct net_device *dev = dev_id;
DFX_board_t *bp; /* private board structure pointer */
u8 tmp; /* used for disabling/enabling ints */
/* Get board pointer only if device structure is valid */
- bp = (DFX_board_t *) dev->priv;
+ bp = dev->priv;
spin_lock(&bp->lock);
static struct net_device_stats *dfx_ctl_get_stats(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
/* Fill the bp->stats structure with driver-maintained counters */
static void dfx_ctl_set_multicast_list(struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
int i; /* used as index in for loop */
struct dev_mc_list *dmi; /* ptr to multicast addr entry */
static int dfx_ctl_set_mac_address(struct net_device *dev, void *addr)
{
- DFX_board_t *bp = (DFX_board_t *)dev->priv;
+ DFX_board_t *bp = dev->priv;
struct sockaddr *p_sockaddr = (struct sockaddr *)addr;
/* Copy unicast address to driver-maintained structs and update count */
)
{
- DFX_board_t *bp = (DFX_board_t *) dev->priv;
+ DFX_board_t *bp = dev->priv;
u8 prod; /* local transmit producer index */
PI_XMT_DESCR *p_xmt_descr; /* ptr to transmit descriptor block entry */
XMT_DRIVER_DESCR *p_xmt_drv_descr; /* ptr to transmit driver descriptor */
static void __devexit dfx_remove_one_pci_or_eisa(struct pci_dev *pdev, struct net_device *dev)
{
- DFX_board_t *bp = (DFX_board_t*)dev->priv;
+ DFX_board_t *bp = dev->priv;
unregister_netdev(dev);
release_region(dev->base_addr, pdev ? PFI_K_CSR_IO_LEN : PI_ESIC_K_CSR_IO_LEN );
pci_set_master(pdev);
- dev = init_etherdev(NULL, sizeof (*ep));
+ dev = alloc_etherdev(sizeof (*ep));
if (!dev) {
printk (KERN_ERR "card %d: no memory for eth device\n", card_idx);
return -ENOMEM;
}
SET_MODULE_OWNER(dev);
- if (pci_request_regions(pdev, dev->name))
+ if (pci_request_regions(pdev, "epic100"))
goto err_out_free_netdev;
#ifdef USE_IO_OPS
spin_lock_init (&ep->lock);
- printk(KERN_INFO "%s: %s at %#lx, IRQ %d, ",
- dev->name, pci_id_tbl[chip_idx].name, ioaddr, dev->irq);
-
/* Bring the chip out of low-power mode. */
outl(0x4200, ioaddr + GENCTL);
/* Magic?! If we don't set this bit the MII interface won't work. */
for (i = 0; i < 3; i++)
((u16 *)dev->dev_addr)[i] = le16_to_cpu(inw(ioaddr + LAN0 + i*4));
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x.\n", dev->dev_addr[i]);
-
if (debug > 2) {
- printk(KERN_DEBUG "%s: EEPROM contents\n", dev->name);
+ printk(KERN_DEBUG "epic100(%s): EEPROM contents\n",
+ pdev->slot_name);
for (i = 0; i < 64; i++)
printk(" %4.4x%s", read_eeprom(ioaddr, i),
i % 16 == 15 ? "\n" : "");
int mii_status = mdio_read(dev, phy, 1);
if (mii_status != 0xffff && mii_status != 0x0000) {
ep->phys[phy_idx++] = phy;
- printk(KERN_INFO "%s: MII transceiver #%d control "
+ printk(KERN_INFO "epic100(%s): MII transceiver #%d control "
"%4.4x status %4.4x.\n",
- dev->name, phy, mdio_read(dev, phy, 0), mii_status);
+ pdev->slot_name, phy, mdio_read(dev, phy, 0), mii_status);
}
}
ep->mii_phy_cnt = phy_idx;
if (phy_idx != 0) {
phy = ep->phys[0];
ep->advertising = mdio_read(dev, phy, 4);
- printk(KERN_INFO "%s: Autonegotiation advertising %4.4x link "
+ printk(KERN_INFO "epic100(%s): Autonegotiation advertising %4.4x link "
"partner %4.4x.\n",
- dev->name, ep->advertising, mdio_read(dev, phy, 5));
+ pdev->slot_name, ep->advertising, mdio_read(dev, phy, 5));
} else if ( ! (ep->chip_flags & NO_MII)) {
- printk(KERN_WARNING "%s: ***WARNING***: No MII transceiver found!\n",
- dev->name);
+ printk(KERN_WARNING "epic100(%s): ***WARNING***: No MII transceiver found!\n",
+ pdev->slot_name);
/* Use the known PHY address of the EPII. */
ep->phys[0] = 3;
}
/* The lower four bits are the media type. */
if (duplex) {
ep->duplex_lock = ep->full_duplex = 1;
- printk(KERN_INFO "%s: Forced full duplex operation requested.\n",
- dev->name);
+ printk(KERN_INFO "epic100(%s): Forced full duplex operation requested.\n",
+ pdev->slot_name);
}
dev->if_port = ep->default_port = option;
if (ep->default_port)
dev->watchdog_timeo = TX_TIMEOUT;
dev->tx_timeout = &epic_tx_timeout;
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_unmap_tx;
+
+ printk(KERN_INFO "%s: %s at %#lx, IRQ %d, ",
+ dev->name, pci_id_tbl[chip_idx].name, ioaddr, dev->irq);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x.\n", dev->dev_addr[i]);
+
return 0;
err_out_unmap_tx:
#endif
pci_release_regions(pdev);
err_out_free_netdev:
- unregister_netdev(dev);
kfree(dev);
return -ENODEV;
}
outl(read_cmd, ioaddr + MIICtrl);
/* Typical operation takes 25 loops. */
- for (i = 400; i > 0; i--)
+ for (i = 400; i > 0; i--) {
+ barrier();
if ((inl(ioaddr + MIICtrl) & MII_READOP) == 0) {
/* Work around read failure bug. */
if (phy_id == 1 && location < 6
}
return inw(ioaddr + MIIData);
}
+ }
return 0xffff;
}
outw(value, ioaddr + MIIData);
outl((phy_id << 9) | (loc << 4) | MII_WRITEOP, ioaddr + MIICtrl);
- for (i = 10000; i > 0; i--) {
+ for (i = 10000; i > 0; i--) {
+ barrier();
if ((inl(ioaddr + MIICtrl) & MII_WRITEOP) == 0)
break;
}
after the Tx thread. */
static void epic_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
{
- struct net_device *dev = (struct net_device *)dev_instance;
+ struct net_device *dev = dev_instance;
struct epic_private *ep = dev->priv;
long ioaddr = dev->base_addr;
int status, boguscnt = max_interrupt_work;
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct epic_private *ep = (void *)dev->priv;
+ struct epic_private *ep = dev->priv;
long ioaddr = dev->base_addr;
u16 *data = (u16 *)&rq->ifr_data;
* Enable mii_ioctl. Added interrupt coalescing parameter adjustment.
* 2/19/99 Pete Wyckoff <wyckoff@ca.sandia.gov>
*/
-#define HAVE_PRIVATE_IOCTL
/* play with 64-bit addrlen; seems to be a teensy bit slower --pw */
/* #define ADDRLEN 64 */
static int mdio_read(long ioaddr, int phy_id, int location);
static void mdio_write(long ioaddr, int phy_id, int location, int value);
static int hamachi_open(struct net_device *dev);
-#ifdef HAVE_PRIVATE_IOCTL
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-#endif
static void hamachi_timer(unsigned long data);
static void hamachi_tx_timeout(struct net_device *dev);
static void hamachi_init_ring(struct net_device *dev);
pci_set_master(pdev);
+ i = pci_request_regions(pdev, "hamachi");
+ if (i) return i;
+
irq = pdev->irq;
ioaddr = (long) ioremap(ioaddr, 0x400);
- if (!ioaddr)
+ if (!ioaddr) {
+ pci_release_regions(pdev);
return -ENOMEM;
+ }
- dev = init_etherdev(NULL, sizeof(struct hamachi_private));
+ dev = alloc_etherdev(sizeof(struct hamachi_private));
if (!dev) {
+ pci_release_regions(pdev);
iounmap((char *)ioaddr);
return -ENOMEM;
}
dev->hard_header_len += 8; /* for cksum tag */
#endif
- printk(KERN_INFO "%s: %s type %x at 0x%lx, ",
- dev->name, chip_tbl[chip_id].name, readl(ioaddr + ChipRev),
- ioaddr);
-
for (i = 0; i < 6; i++)
dev->dev_addr[i] = 1 ? read_eeprom(ioaddr, 4 + i)
: readb(ioaddr + StationAddr + i);
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
#if ! defined(final_version)
if (hamachi_debug > 4)
read_eeprom(ioaddr, i), i % 16 != 15 ? " " : "\n");
#endif
-#if 0 /* Moving this until after the force 32 check and reset. */
- i = readb(ioaddr + PCIClkMeas);
- printk(KERN_INFO "%s: %d-bit %d Mhz PCI bus (%d), Virtual Jumpers "
- "%2.2x, LPA %4.4x.\n",
- dev->name, readw(ioaddr + MiscStatus) & 1 ? 64 : 32,
- i ? 2000/(i&0x7f) : 0, i&0x7f, (int)readb(ioaddr + VirtualJumpers),
- readw(ioaddr + ANLinkPartnerAbility));
-#endif
-
hmp = dev->priv;
spin_lock_init(&hmp->lock);
i = readb(ioaddr + PCIClkMeas);
}
- printk(KERN_INFO "%s: %d-bit %d Mhz PCI bus (%d), Virtual Jumpers "
- "%2.2x, LPA %4.4x.\n",
- dev->name, readw(ioaddr + MiscStatus) & 1 ? 64 : 32,
- i ? 2000/(i&0x7f) : 0, i&0x7f, (int)readb(ioaddr + VirtualJumpers),
- readw(ioaddr + ANLinkPartnerAbility));
-
dev->base_addr = ioaddr;
dev->irq = irq;
dev->stop = &hamachi_close;
dev->get_stats = &hamachi_get_stats;
dev->set_multicast_list = &set_rx_mode;
-#ifdef HAVE_PRIVATE_IOCTL
dev->do_ioctl = &mii_ioctl;
-#endif
dev->tx_timeout = &hamachi_tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
if (mtu)
dev->mtu = mtu;
+ i = register_netdev(dev);
+ if (i) {
+ kfree(dev);
+ iounmap((char *)ioaddr);
+ pci_release_regions(pdev);
+ return i;
+ }
+
+ printk(KERN_INFO "%s: %s type %x at 0x%lx, ",
+ dev->name, chip_tbl[chip_id].name, readl(ioaddr + ChipRev),
+ ioaddr);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+ i = readb(ioaddr + PCIClkMeas);
+ printk(KERN_INFO "%s: %d-bit %d Mhz PCI bus (%d), Virtual Jumpers "
+ "%2.2x, LPA %4.4x.\n",
+ dev->name, readw(ioaddr + MiscStatus) & 1 ? 64 : 32,
+ i ? 2000/(i&0x7f) : 0, i&0x7f, (int)readb(ioaddr + VirtualJumpers),
+ readw(ioaddr + ANLinkPartnerAbility));
+
if (chip_tbl[hmp->chip_id].flags & CanHaveMII) {
int phy, phy_idx = 0;
for (phy = 0; phy < 32 && phy_idx < MII_CNT; phy++) {
\f
static int hamachi_open(struct net_device *dev)
{
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
long ioaddr = dev->base_addr;
int i;
u_int32_t rx_int_var, tx_int_var;
static inline int hamachi_tx(struct net_device *dev)
{
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
/* Update the dirty pointer until we find an entry that is
still owned by the card */
static void hamachi_timer(unsigned long data)
{
struct net_device *dev = (struct net_device *)data;
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
long ioaddr = dev->base_addr;
int next_tick = 10*HZ;
static void hamachi_tx_timeout(struct net_device *dev)
{
int i;
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
long ioaddr = dev->base_addr;
printk(KERN_WARNING "%s: Hamachi transmit timed out, status %8.8x,"
writew(0x0002, dev->base_addr + TxCmd); /* STOP Tx */
writew(0x0001, dev->base_addr + TxCmd); /* START Tx */
writew(0x0001, dev->base_addr + RxCmd); /* START Rx */
+
+ netif_wake_queue(dev);
}
/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void hamachi_init_ring(struct net_device *dev)
{
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
int i;
hmp->tx_full = 0;
static int hamachi_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
unsigned entry;
u16 status;
after the Tx thread. */
static void hamachi_interrupt(int irq, void *dev_instance, struct pt_regs *rgs)
{
- struct net_device *dev = (struct net_device *)dev_instance;
+ struct net_device *dev = dev_instance;
struct hamachi_private *hmp;
long ioaddr, boguscnt = max_interrupt_work;
#endif
ioaddr = dev->base_addr;
- hmp = (struct hamachi_private *)dev->priv;
+ hmp = dev->priv;
spin_lock(&hmp->lock);
do {
for clarity and better register allocation. */
static int hamachi_rx(struct net_device *dev)
{
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
int entry = hmp->cur_rx % RX_RING_SIZE;
int boguscnt = (hmp->dirty_rx + RX_RING_SIZE) - hmp->cur_rx;
static void hamachi_error(struct net_device *dev, int intr_status)
{
long ioaddr = dev->base_addr;
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
if (intr_status & (LinkChange|NegotiationChange)) {
if (hamachi_debug > 1)
static int hamachi_close(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
int i;
netif_stop_queue(dev);
static struct net_device_stats *hamachi_get_stats(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct hamachi_private *hmp = (struct hamachi_private *)dev->priv;
+ struct hamachi_private *hmp = dev->priv;
/* We should lock this segment of code for SMP eventually, although
the vulnerability window is very small and statistics are
}
}
-#ifdef HAVE_PRIVATE_IOCTL
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
long ioaddr = dev->base_addr;
return -EOPNOTSUPP;
}
}
-#endif /* HAVE_PRIVATE_IOCTL */
static void __exit hamachi_remove_one (struct pci_dev *pdev)
unregister_netdev(dev);
iounmap((char *)dev->base_addr);
kfree(dev);
+ pci_release_regions(pdev);
pci_set_drvdata(pdev, NULL);
}
}
if (dev) {
dev = init_etherdev(dev, sizeof(struct sonic_local));
+ if (!dev)
+ return -ENOMEM;
/* methinks this will always be true but better safe than sorry */
- if (dev->priv == NULL)
+ if (dev->priv == NULL) {
dev->priv = kmalloc(sizeof(struct sonic_local), GFP_KERNEL);
+ if (!dev->priv) /* FIXME: kfree dev if necessary */
+ return -ENOMEM;
+ }
} else {
dev = init_etherdev(NULL, sizeof(struct sonic_local));
}
-/* natsemi.c: A Linux PCI Ethernet driver for the NatSemi DP83810 series. */
+/* natsemi.c: A Linux PCI Ethernet driver for the NatSemi DP8381x series. */
/*
- Written/copyright 1999-2000 by Donald Becker.
+ Written/copyright 1999-2001 by Donald Becker.
This software may be used and distributed according to the terms of
the GNU General Public License (GPL), incorporated herein by reference.
- Call netif_start_queue from dev->tx_timeout
- wmb() in start_tx() to flush data
- Update Tx locking
+ - Clean up PCI enable (davej)
+ Version 1.0.4:
+ - Merge Donald Becker's natsemi.c version 1.07
*/
/* These identify the driver base version and may not be removed. */
static const char version1[] =
-"natsemi.c:v1.05 8/7/2000 Written by Donald Becker <becker@scyld.com>\n";
+"natsemi.c:v1.07 1/9/2001 Written by Donald Becker <becker@scyld.com>\n";
static const char version2[] =
" http://www.scyld.com/network/natsemi.html\n";
static const char version3[] =
-" (unofficial 2.4.x kernel port, version 1.0.3, January 21, 2001 Jeff Garzik, Tjeerd Mulder)\n";
+" (unofficial 2.4.x kernel port, version 1.0.4, February 26, 2001 Jeff Garzik, Tjeerd Mulder)\n";
/* Updated to recommendations in pci-skeleton v2.03. */
/* Automatically extracted configuration info:
probe-func: natsemi_probe
-config-in: tristate 'National Semiconductor DP83810 series PCI Ethernet support' CONFIG_NATSEMI
+config-in: tristate 'National Semiconductor DP8381x series PCI Ethernet support' CONFIG_NATSEMI
-c-help-name: National Semiconductor DP83810 series PCI Ethernet support
+c-help-name: National Semiconductor DP8381x series PCI Ethernet support
c-help-symbol: CONFIG_NATSEMI
-c-help: This driver is for the National Semiconductor DP83810 series,
+c-help: This driver is for the National Semiconductor DP8381x series,
c-help: including the 83815 chip.
c-help: More specific information and updates are available from
c-help: http://www.scyld.com/network/natsemi.html
bonding and packet priority.
There are no ill effects from too-large receive rings. */
#define TX_RING_SIZE 16
-#define TX_QUEUE_LEN 10 /* Limit ring entries actually used. */
+#define TX_QUEUE_LEN 10 /* Limit ring entries actually used, min 4. */
#define RX_RING_SIZE 32
/* Operational parameters that usually are not changed. */
#define le32desc_to_virt(addr) bus_to_virt(le32_to_cpu(addr))
MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
-MODULE_DESCRIPTION("National Semiconductor DP83810 series PCI Ethernet driver");
+MODULE_DESCRIPTION("National Semiconductor DP8381x series PCI Ethernet driver");
MODULE_PARM(max_interrupt_work, "i");
MODULE_PARM(mtu, "i");
MODULE_PARM(debug, "i");
The send packet thread has partial control over the Tx ring and 'dev->tbusy'
flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
-queue slot is empty, it clears the tbusy flag when finished. Under 2.4, the
-"tbusy flag" is now controlled by netif_{start,stop,wake}_queue() and tested
-by netif_queue_stopped().
+queue slot is empty, it clears the tbusy flag when finished otherwise it sets
+the 'lp->tx_full' flag.
The interrupt handler has exclusive control over the Rx ring and records stats
from the Tx ring. After reaping the stats, it marks the Tx queue entry as
-empty by incrementing the dirty_tx mark. Iff Tx queueing is stopped and Tx
-entries were reaped, the Tx queue is started and scheduled.
+empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it
+clears both the tx_full and tbusy flags.
IV. Notes
WOLCmd=0x40, PauseCmd=0x44, RxFilterAddr=0x48, RxFilterData=0x4C,
BootRomAddr=0x50, BootRomData=0x54, StatsCtrl=0x5C, StatsData=0x60,
RxPktErrs=0x60, RxMissed=0x68, RxCRCErrs=0x64,
+ PCIPM = 0x44,
};
/* Bit in ChipCmd. */
IntrRxDone=0x0001, IntrRxIntr=0x0002, IntrRxErr=0x0004, IntrRxEarly=0x0008,
IntrRxIdle=0x0010, IntrRxOverrun=0x0020,
IntrTxDone=0x0040, IntrTxIntr=0x0080, IntrTxErr=0x0100,
- IntrTxIdle=0x0200, IntrTxOverrun=0x0400,
+ IntrTxIdle=0x0200, IntrTxUnderrun=0x0400,
StatsMax=0x0800, LinkChange=0x4000,
WOLPkt=0x2000,
RxResetDone=0x1000000, TxResetDone=0x2000000,
IntrPCIErr=0x00f00000,
- IntrAbnormalSummary=0xCD20,
+ IntrNormalSummary=0x0251, IntrAbnormalSummary=0xED20,
};
/* Bits in the RxMode register. */
enum rx_mode_bits {
- EnableFilter = 0x80000000,
- AcceptBroadcast = 0x40000000,
- AcceptAllMulticast = 0x20000000,
- AcceptAllPhys = 0x10000000,
- AcceptMyPhys = 0x08000000,
- AcceptMulticast = 0x00200000,
- AcceptErr=0x20, /* these 2 are in another register */
- AcceptRunt=0x10,/* and are not used in this driver */
+ AcceptErr=0x20, AcceptRunt=0x10,
+ AcceptBroadcast=0xC0000000,
+ AcceptMulticast=0x00200000, AcceptAllMulticast=0x20000000,
+ AcceptAllPhys=0x10000000, AcceptMyPhys=0x08000000,
};
/* The Rx and Tx buffer descriptors. */
u32 cur_rx_mode;
u32 rx_filter[16];
/* FIFO and PCI burst thresholds. */
- int tx_config, rx_config;
+ u32 tx_config, rx_config;
/* original contents of ClkRun register */
- int SavedClkRun;
+ u32 SavedClkRun;
/* MII transceiver section. */
u16 advertising; /* NWay media advertisement */
static int eeprom_read(long ioaddr, int location);
static int mdio_read(struct net_device *dev, int phy_id, int location);
-static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
static int netdev_open(struct net_device *dev);
static void check_duplex(struct net_device *dev);
static void netdev_timer(unsigned long data);
{
struct net_device *dev;
struct netdev_private *np;
- int i, option, irq = pdev->irq, chip_idx = ent->driver_data;
+ int i, option, irq, chip_idx = ent->driver_data;
static int find_cnt = -1;
static int printed_version;
unsigned long ioaddr, iosize;
const int pcibar = 1; /* PCI base address register */
+ int prev_eedata;
+ u32 tmp;
if ((debug <= 1) && !printed_version++)
printk(KERN_INFO "%s" KERN_INFO "%s" KERN_INFO "%s",
version1, version2, version3);
+ i = pci_enable_device(pdev);
+ if (i) return i;
+
+ /* natsemi has a non-standard PM control register
+ * in PCI config space. Some boards apparently need
+ * to be brought to D0 in this manner.
+ */
+ pci_read_config_dword(pdev, PCIPM, &tmp);
+ if (tmp & (0x03|0x100)) {
+ /* force D0 state, disable PME assertion */
+ u32 newtmp = tmp & ~(0x03|0x100);
+ pci_write_config_dword(pdev, PCIPM, newtmp);
+ }
+
find_cnt++;
option = find_cnt < MAX_UNITS ? options[find_cnt] : 0;
ioaddr = pci_resource_start(pdev, pcibar);
iosize = pci_resource_len(pdev, pcibar);
-
- if (pci_enable_device(pdev))
- return -EIO;
+ irq = pdev->irq;
+
if (natsemi_pci_info[chip_idx].flags & PCI_USES_MASTER)
pci_set_master(pdev);
- dev = init_etherdev(NULL, sizeof (struct netdev_private));
+ dev = alloc_etherdev(sizeof (struct netdev_private));
if (!dev)
return -ENOMEM;
SET_MODULE_OWNER(dev);
+ i = pci_request_regions(pdev, dev->name);
+ if (i) {
+ kfree(dev);
+ return i;
+ }
+
{
- void *mmio;
- if (request_mem_region(ioaddr, iosize, dev->name) == NULL) {
- unregister_netdev(dev);
- kfree(dev);
- return -EBUSY;
- }
- mmio = ioremap (ioaddr, iosize);
+ void *mmio = ioremap (ioaddr, iosize);
if (!mmio) {
- release_mem_region(ioaddr, iosize);
- unregister_netdev(dev);
+ pci_release_regions(pdev);
kfree(dev);
return -ENOMEM;
}
ioaddr = (unsigned long) mmio;
}
- printk(KERN_INFO "%s: %s at 0x%lx, ",
- dev->name, natsemi_pci_info[chip_idx].name, ioaddr);
-
- for (i = 0; i < ETH_ALEN/2; i++) {
- /* weird organization */
- unsigned short a;
- a = (le16_to_cpu(eeprom_read(ioaddr, i + 6)) >> 15) +
- (le16_to_cpu(eeprom_read(ioaddr, i + 7)) << 1);
- ((u16 *)dev->dev_addr)[i] = a;
+ /* Work around the dropped serial bit. */
+ prev_eedata = eeprom_read(ioaddr, 6);
+ for (i = 0; i < 3; i++) {
+ int eedata = eeprom_read(ioaddr, i + 7);
+ dev->dev_addr[i*2] = (eedata << 1) + (prev_eedata >> 15);
+ dev->dev_addr[i*2+1] = eedata >> 7;
+ prev_eedata = eedata;
}
- for (i = 0; i < ETH_ALEN-1; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
-
-#if ! defined(final_version) /* Dump the EEPROM contents during development. */
- if (debug > 4)
- for (i = 0; i < 64; i++)
- printk("%4.4x%s",
- eeprom_read(ioaddr, i), i % 16 != 15 ? " " : "\n");
-#endif
/* Reset the chip to erase previous misconfiguration. */
writel(ChipReset, ioaddr + ChipCmd);
np = dev->priv;
np->pci_dev = pdev;
- pdev->driver_data = dev;
+ pci_set_drvdata(pdev, dev);
np->iosize = iosize;
spin_lock_init(&np->lock);
if (mtu)
dev->mtu = mtu;
- np->advertising = readl(ioaddr + 0x90);
+ i = register_netdev(dev);
+ if (i) {
+ pci_release_regions(pdev);
+ kfree(dev);
+ pci_set_drvdata(pdev, NULL);
+ return i;
+ }
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, natsemi_pci_info[chip_idx].name, ioaddr);
+ for (i = 0; i < ETH_ALEN-1; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
+ np->advertising = mdio_read(dev, 1, 4);
+ if ((readl(ioaddr + ChipConfig) & 0xe000) != 0xe000) {
+ u32 chip_config = readl(ioaddr + ChipConfig);
+ printk(KERN_INFO "%s: Transceiver default autonegotiation %s "
+ "10%s %s duplex.\n",
+ dev->name,
+ chip_config & 0x2000 ? "enabled, advertise" : "disabled, force",
+ chip_config & 0x4000 ? "0" : "",
+ chip_config & 0x8000 ? "full" : "half");
+ }
printk(KERN_INFO "%s: Transceiver status 0x%4.4x advertising %4.4x.\n",
dev->name, (int)readl(ioaddr + 0x84), np->advertising);
eeprom_delay(ee_addr);
}
writel(EE_ChipSelect, ee_addr);
+ eeprom_delay(ee_addr);
- for (i = 16; i > 0; i--) {
+ for (i = 0; i < 16; i++) {
writel(EE_ChipSelect | EE_ShiftClk, ee_addr);
eeprom_delay(ee_addr);
- /* data bits are LSB first */
- retval = (retval >> 1) | ((readl(ee_addr) & EE_DataOut) ? 0x8000 : 0);
+ retval |= (readl(ee_addr) & EE_DataOut) ? 1 << i : 0;
writel(EE_ChipSelect, ee_addr);
eeprom_delay(ee_addr);
}
return 0xffff;
}
-static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
-{
- if (phy_id == 1 && location < 32)
- writew(value, dev->base_addr + 0x80 + (location<<2));
-}
-
\f
static int netdev_open(struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
int i;
/* Initialize other registers. */
/* Configure the PCI bus bursts and FIFO thresholds. */
/* Configure for standard, in-spec Ethernet. */
- np->tx_config = (1<<28) + /* Automatic transmit padding */
- (1<<23) + /* Excessive collision retry */
- (0x0<<20) + /* Max DMA burst = 512 byte */
- (8<<8) + /* fill threshold = 256 byte */
- 2; /* drain threshold = 64 byte */
+
+ if (readl(ioaddr + ChipConfig) & 0x20000000) { /* Full duplex */
+ np->tx_config = 0xD0801002;
+ np->rx_config = 0x10000020;
+ } else {
+ np->tx_config = 0x10801002;
+ np->rx_config = 0x0020;
+ }
writel(np->tx_config, ioaddr + TxConfig);
- np->rx_config = (0x0<<20) /* Max DMA burst = 512 byte */ +
- (0x8<<1); /* Drain Threshold = 64 byte */
writel(np->rx_config, ioaddr + RxConfig);
if (dev->if_port == 0)
dev->if_port = np->default_port;
- /* Disable PME */
+ /* Disable PME:
+ * The PME bit is initialized from the EEPROM contents.
+ * PCI cards probably have PME disabled, but motherboard
+ * implementations may have PME set to enable WakeOnLan.
+ * With PME set the chip will scan incoming packets but
+ * nothing will be written to memory. */
np->SavedClkRun = readl(ioaddr + ClkRun);
writel(np->SavedClkRun & ~0x100, ioaddr + ClkRun);
check_duplex(dev);
set_rx_mode(dev);
- /* Enable interrupts by setting the interrupt mask.
- * We don't listen for TxDone interrupts and rely on TxIdle. */
- writel(IntrAbnormalSummary | IntrTxIdle | IntrRxIdle | IntrRxDone,
- ioaddr + IntrMask);
+ /* Enable interrupts by setting the interrupt mask. */
+ writel(IntrNormalSummary | IntrAbnormalSummary | 0x1f, ioaddr + IntrMask);
writel(1, ioaddr + IntrEnable);
writel(RxOn | TxOn, ioaddr + ChipCmd);
static void check_duplex(struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
int duplex;
static void netdev_timer(unsigned long data)
{
struct net_device *dev = (struct net_device *)data;
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
int next_tick = 60*HZ;
static void tx_timeout(struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
printk(KERN_WARNING "%s: Transmit timed out, status %8.8x,"
dev->trans_start = jiffies;
np->stats.tx_errors++;
- netif_start_queue(dev);
+ netif_wake_queue(dev);
}
/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void init_ring(struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
int i;
np->cur_rx = np->cur_tx = 0;
skb->dev = dev; /* Mark as being used by this device. */
np->rx_ring[i].addr = virt_to_le32desc(skb->tail);
np->rx_ring[i].cmd_status =
- cpu_to_le32(np->rx_buf_sz);
+ cpu_to_le32(DescIntr | np->rx_buf_sz);
}
np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
static int start_tx(struct sk_buff *skb, struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
unsigned entry;
/* Note: Ordering is important here, set the field with the
np->tx_skbuff[entry] = skb;
np->tx_ring[entry].addr = virt_to_le32desc(skb->data);
- np->tx_ring[entry].cmd_status = cpu_to_le32(DescOwn | skb->len);
+ np->tx_ring[entry].cmd_status = cpu_to_le32(DescOwn|DescIntr | skb->len);
np->cur_tx++;
/* StrongARM: Explicitly cache flush np->tx_ring and skb->data,skb->len. */
if (intr_status == 0)
break;
- if (intr_status & (IntrRxDone | IntrRxErr | IntrRxIdle | IntrRxOverrun))
+ if (intr_status & (IntrRxDone | IntrRxIntr))
netdev_rx(dev);
spin_lock(&np->lock);
for clarity and better register allocation. */
static int netdev_rx(struct net_device *dev)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
int entry = np->cur_rx % RX_RING_SIZE;
int boguscnt = np->dirty_rx + RX_RING_SIZE - np->cur_rx;
s32 desc_status = le32_to_cpu(np->rx_head_desc->cmd_status);
entry, desc_status);
if (--boguscnt < 0)
break;
-
if ((desc_status & (DescMore|DescPktOK|RxTooLong)) != DescPktOK) {
if (desc_status & DescMore) {
printk(KERN_WARNING "%s: Oversized(?) Ethernet frame spanned "
np->rx_ring[entry].addr = virt_to_le32desc(skb->tail);
}
np->rx_ring[entry].cmd_status =
- cpu_to_le32(np->rx_buf_sz);
+ cpu_to_le32(DescIntr | np->rx_buf_sz);
}
/* Restart Rx engine if stopped. */
static void netdev_error(struct net_device *dev, int intr_status)
{
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
if (intr_status & LinkChange) {
if (intr_status & StatsMax) {
get_stats(dev);
}
- if ((intr_status & ~(LinkChange|StatsMax|RxResetDone|TxResetDone|0x83ff))
+ if (intr_status & IntrTxUnderrun) {
+ if ((np->tx_config & 0x3f) < 62)
+ np->tx_config += 2;
+ writel(np->tx_config, ioaddr + TxConfig);
+ }
+ if (intr_status & WOLPkt) {
+ int wol_status = readl(ioaddr + WOLCmd);
+ printk(KERN_NOTICE "%s: Link wake-up event %8.8x\n",
+ dev->name, wol_status);
+ }
+ if ((intr_status & ~(LinkChange|StatsMax|RxResetDone|TxResetDone|0xA7ff))
&& debug)
printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
dev->name, intr_status);
static struct net_device_stats *get_stats(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
/* We should lock this segment of code for SMP eventually, although
the vulnerability window is very small and statistics are
static void set_rx_mode(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct netdev_private *np = (struct netdev_private *)dev->priv;
- u16 mc_filter[32]; /* Multicast hash filter */
+ struct netdev_private *np = dev->priv;
+ u8 mc_filter[64]; /* Multicast hash filter */
u32 rx_mode;
if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
set_bit(ether_crc_le(ETH_ALEN, mclist->dmi_addr) & 0x1ff,
mc_filter);
}
- for (i = 0; i < 32; i++) {
- writew(0x200 + (i<<1), ioaddr + RxFilterAddr);
- writew(cpu_to_be16(mc_filter[i]), ioaddr + RxFilterData);
- }
rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ for (i = 0; i < 64; i += 2) {
+ writew(0x200 + i, ioaddr + RxFilterAddr);
+ writew((mc_filter[i+1]<<8) + mc_filter[i], ioaddr + RxFilterData);
+ }
}
- writel(rx_mode | EnableFilter, ioaddr + RxFilterAddr);
+ writel(rx_mode, ioaddr + RxFilterAddr);
np->cur_rx_mode = rx_mode;
}
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
+ struct netdev_private *np = dev->priv;
u16 *data = (u16 *)&rq->ifr_data;
switch(cmd) {
case SIOCDEVPRIVATE+2: /* Write the specified MII register */
if (!capable(CAP_NET_ADMIN))
return -EPERM;
- mdio_write(dev, data[0] & 0x1f, data[1] & 0x1f, data[2]);
+ if (data[0] == 1) {
+ u16 miireg = data[1] & 0x1f;
+ u16 value = data[2];
+ writew(value, dev->base_addr + 0x80 + (miireg << 2));
+ switch (miireg) {
+ case 0:
+ /* Check for autonegotiation on or reset. */
+ np->duplex_lock = (value & 0x9000) ? 0 : 1;
+ if (np->duplex_lock)
+ np->full_duplex = (value & 0x0100) ? 1 : 0;
+ break;
+ case 4: np->advertising = value; break;
+ }
+ }
return 0;
default:
return -EOPNOTSUPP;
static int netdev_close(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct netdev_private *np = (struct netdev_private *)dev->priv;
+ struct netdev_private *np = dev->priv;
int i;
netif_stop_queue(dev);
\f
static void __devexit natsemi_remove1 (struct pci_dev *pdev)
{
- struct net_device *dev = pdev->driver_data;
- struct netdev_private *np = (struct netdev_private *)dev->priv;
- const int pcibar = 1; /* PCI base address register */
+ struct net_device *dev = pci_get_drvdata(pdev);
unregister_netdev (dev);
- release_mem_region(pci_resource_start(pdev, pcibar), np->iosize);
+ pci_release_regions (pdev);
iounmap ((char *) dev->base_addr);
kfree (dev);
+ pci_set_drvdata(pdev, NULL);
}
static struct pci_driver natsemi_driver = {
}
}
- dev = init_etherdev(NULL, 0);
+ dev = alloc_etherdev(0);
if (!dev) {
printk (KERN_ERR "ne2k-pci: cannot allocate ethernet device\n");
goto err_out_free_res;
/* Allocate dev->priv and fill in 8390 specific dev fields. */
if (ethdev_init(dev)) {
- printk (KERN_ERR "%s: unable to get memory for dev->priv.\n", dev->name);
+ printk (KERN_ERR "ne2k-pci(%s): unable to get memory for dev->priv.\n",
+ pdev->slot_name);
goto err_out_free_netdev;
}
- printk("%s: %s found at %#lx, IRQ %d, ",
- dev->name, pci_clone_list[chip_idx].name, ioaddr, dev->irq);
- for(i = 0; i < 6; i++) {
- printk("%2.2X%s", SA_prom[i], i == 5 ? ".\n": ":");
- dev->dev_addr[i] = SA_prom[i];
- }
-
ei_status.name = pci_clone_list[chip_idx].name;
ei_status.tx_start_page = start_page;
ei_status.stop_page = stop_page;
dev->open = &ne2k_pci_open;
dev->stop = &ne2k_pci_close;
NS8390_init(dev, 0);
+
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_free_8390;
+
+ printk("%s: %s found at %#lx, IRQ %d, ",
+ dev->name, pci_clone_list[chip_idx].name, ioaddr, dev->irq);
+ for(i = 0; i < 6; i++) {
+ printk("%2.2X%s", SA_prom[i], i == 5 ? ".\n": ":");
+ dev->dev_addr[i] = SA_prom[i];
+ }
+
return 0;
+err_out_free_8390:
+ kfree(dev->priv);
err_out_free_netdev:
- unregister_netdev (dev);
kfree (dev);
err_out_free_res:
release_region (ioaddr, NE_IO_EXTENT);
+ pci_set_drvdata (pdev, NULL);
return -ENODEV;
}
up. We now share common code and have regularised name
allocation setups. Abolished the 16 card limits.
03/19/2000 - jgarzik and Urban Widmark: init_etherdev 32-byte align
+ 03/21/2001 - jgarzik: alloc_etherdev and friends
*/
#include <linux/config.h>
+#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/types.h>
*/
+static struct net_device *alloc_netdev(int sizeof_priv, const char *mask,
+ void (*setup)(struct net_device *))
+{
+ struct net_device *dev;
+ int alloc_size;
+
+ /* ensure 32-byte alignment of the private area */
+ alloc_size = sizeof (*dev) + sizeof_priv + 31;
+
+ dev = (struct net_device *) kmalloc (alloc_size, GFP_KERNEL);
+ if (dev == NULL)
+ {
+ printk(KERN_ERR "alloc_netdev: Unable to allocate device memory.\n");
+ return NULL;
+ }
+
+ memset(dev, 0, alloc_size);
+
+ if (sizeof_priv)
+ dev->priv = (void *) (((long)(dev + 1) + 31) & ~31);
+
+ setup(dev);
+ strcpy(dev->name, mask);
+
+ return dev;
+}
+
static struct net_device *init_alloc_dev(int sizeof_priv)
{
struct net_device *dev;
return dev;
}
+static int __register_netdev(struct net_device *dev)
+{
+ dev_init_buffers(dev);
+
+ if (dev->init && dev->init(dev) != 0) {
+ unregister_netdev(dev);
+ return -EIO;
+ }
+ return 0;
+}
+
/**
* init_etherdev - Register ethernet device
* @dev: An ethernet device structure to be filled in, or %NULL if a new
return init_netdev(dev, sizeof_priv, "eth%d", ether_setup);
}
+/**
+ * alloc_etherdev - Register ethernet device
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this ethernet device
+ *
+ * Fill in the fields of the device structure with ethernet-generic values.
+ *
+ * Constructs a new net device, complete with a private data area of
+ * size @sizeof_priv. A 32-byte (not bit) alignment is enforced for
+ * this private data area.
+ */
+
+struct net_device *alloc_etherdev(int sizeof_priv)
+{
+ return alloc_netdev(sizeof_priv, "eth%d", ether_setup);
+}
+
+EXPORT_SYMBOL(init_etherdev);
+EXPORT_SYMBOL(alloc_etherdev);
static int eth_mac_addr(struct net_device *dev, void *p)
{
#ifdef CONFIG_FDDI
+/**
+ * init_fddidev - Register FDDI device
+ * @dev: A FDDI device structure to be filled in, or %NULL if a new
+ * struct should be allocated.
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this ethernet device
+ *
+ * Fill in the fields of the device structure with FDDI-generic values.
+ *
+ * If no device structure is passed, a new one is constructed, complete with
+ * a private data area of size @sizeof_priv. A 32-byte (not bit)
+ * alignment is enforced for this private data area.
+ *
+ * If an empty string area is passed as dev->name, or a new structure is made,
+ * a new name string is constructed.
+ */
+
struct net_device *init_fddidev(struct net_device *dev, int sizeof_priv)
{
return init_netdev(dev, sizeof_priv, "fddi%d", fddi_setup);
}
+/**
+ * alloc_fddidev - Register FDDI device
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this FDDI device
+ *
+ * Fill in the fields of the device structure with FDDI-generic values.
+ *
+ * Constructs a new net device, complete with a private data area of
+ * size @sizeof_priv. A 32-byte (not bit) alignment is enforced for
+ * this private data area.
+ */
+
+struct net_device *alloc_fddidev(int sizeof_priv)
+{
+ return alloc_netdev(sizeof_priv, "fddi%d", fddi_setup);
+}
+
+EXPORT_SYMBOL(init_fddidev);
+EXPORT_SYMBOL(alloc_fddidev);
+
static int fddi_change_mtu(struct net_device *dev, int new_mtu)
{
if ((new_mtu < FDDI_K_SNAP_HLEN) || (new_mtu > FDDI_K_SNAP_DLEN))
}
+/**
+ * init_hippi_dev - Register HIPPI device
+ * @dev: A HIPPI device structure to be filled in, or %NULL if a new
+ * struct should be allocated.
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this ethernet device
+ *
+ * Fill in the fields of the device structure with HIPPI-generic values.
+ *
+ * If no device structure is passed, a new one is constructed, complete with
+ * a private data area of size @sizeof_priv. A 32-byte (not bit)
+ * alignment is enforced for this private data area.
+ *
+ * If an empty string area is passed as dev->name, or a new structure is made,
+ * a new name string is constructed.
+ */
+
struct net_device *init_hippi_dev(struct net_device *dev, int sizeof_priv)
{
return init_netdev(dev, sizeof_priv, "hip%d", hippi_setup);
}
+/**
+ * alloc_hippi_dev - Register HIPPI device
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this HIPPI device
+ *
+ * Fill in the fields of the device structure with HIPPI-generic values.
+ *
+ * Constructs a new net device, complete with a private data area of
+ * size @sizeof_priv. A 32-byte (not bit) alignment is enforced for
+ * this private data area.
+ */
+
+struct net_device *alloc_hippi_dev(int sizeof_priv)
+{
+ return alloc_netdev(sizeof_priv, "hip%d", hippi_setup);
+}
+
+int register_hipdev(struct net_device *dev)
+{
+ return __register_netdev(dev);
+}
void unregister_hipdev(struct net_device *dev)
{
- rtnl_lock();
- unregister_netdevice(dev);
- rtnl_unlock();
+ unregister_netdev(dev);
}
+EXPORT_SYMBOL(init_hippi_dev);
+EXPORT_SYMBOL(alloc_hippi_dev);
+EXPORT_SYMBOL(register_hipdev);
+EXPORT_SYMBOL(unregister_hipdev);
static int hippi_neigh_setup_dev(struct net_device *dev, struct neigh_parms *p)
{
dev_init_buffers(dev);
}
+EXPORT_SYMBOL(ether_setup);
#ifdef CONFIG_FDDI
return;
}
+EXPORT_SYMBOL(fddi_setup);
#endif /* CONFIG_FDDI */
dev_init_buffers(dev);
}
+EXPORT_SYMBOL(hippi_setup);
#endif /* CONFIG_HIPPI */
#if defined(CONFIG_ATALK) || defined(CONFIG_ATALK_MODULE)
dev_init_buffers(dev);
}
+EXPORT_SYMBOL(ltalk_setup);
#endif /* CONFIG_ATALK || CONFIG_ATALK_MODULE */
if (strchr(dev->name, '%'))
{
- err = -EBUSY;
- if(dev_alloc_name(dev, dev->name)<0)
+ err = dev_alloc_name(dev, dev->name);
+ if (err < 0)
goto out;
}
if (dev->name[0]==0 || dev->name[0]==' ')
{
- err = -EBUSY;
- if(dev_alloc_name(dev, "eth%d")<0)
+ err = dev_alloc_name(dev, "eth%d");
+ if (err < 0)
goto out;
}
-
-
- err = -EIO;
- if (register_netdevice(dev))
- goto out;
- err = 0;
+ err = register_netdevice(dev);
out:
rtnl_unlock();
rtnl_unlock();
}
+EXPORT_SYMBOL(register_netdev);
+EXPORT_SYMBOL(unregister_netdev);
#ifdef CONFIG_TR
-static void tr_configure(struct net_device *dev)
+void tr_setup(struct net_device *dev)
{
/*
* Configure and register
dev->flags = IFF_BROADCAST | IFF_MULTICAST ;
}
+/**
+ * init_trdev - Register token ring device
+ * @dev: A token ring device structure to be filled in, or %NULL if a new
+ * struct should be allocated.
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this ethernet device
+ *
+ * Fill in the fields of the device structure with token ring-generic values.
+ *
+ * If no device structure is passed, a new one is constructed, complete with
+ * a private data area of size @sizeof_priv. A 32-byte (not bit)
+ * alignment is enforced for this private data area.
+ *
+ * If an empty string area is passed as dev->name, or a new structure is made,
+ * a new name string is constructed.
+ */
+
struct net_device *init_trdev(struct net_device *dev, int sizeof_priv)
{
- return init_netdev(dev, sizeof_priv, "tr%d", tr_configure);
+ return init_netdev(dev, sizeof_priv, "tr%d", tr_setup);
}
-void tr_setup(struct net_device *dev)
+/**
+ * alloc_trdev - Register token ring device
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this token ring device
+ *
+ * Fill in the fields of the device structure with token ring-generic values.
+ *
+ * Constructs a new net device, complete with a private data area of
+ * size @sizeof_priv. A 32-byte (not bit) alignment is enforced for
+ * this private data area.
+ */
+
+struct net_device *alloc_trdev(int sizeof_priv)
{
+ return alloc_netdev(sizeof_priv, "tr%d", tr_setup);
}
int register_trdev(struct net_device *dev)
{
- dev_init_buffers(dev);
-
- if (dev->init && dev->init(dev) != 0) {
- unregister_trdev(dev);
- return -EIO;
- }
- return 0;
+ return __register_netdev(dev);
}
void unregister_trdev(struct net_device *dev)
{
- rtnl_lock();
- unregister_netdevice(dev);
- rtnl_unlock();
+ unregister_netdev(dev);
}
+
+EXPORT_SYMBOL(tr_setup);
+EXPORT_SYMBOL(init_trdev);
+EXPORT_SYMBOL(alloc_trdev);
+EXPORT_SYMBOL(register_trdev);
+EXPORT_SYMBOL(unregister_trdev);
+
#endif /* CONFIG_TR */
/* New-style flags. */
dev->flags = IFF_BROADCAST;
dev_init_buffers(dev);
- return;
}
+/**
+ * init_fcdev - Register fibre channel device
+ * @dev: A fibre channel device structure to be filled in, or %NULL if a new
+ * struct should be allocated.
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this ethernet device
+ *
+ * Fill in the fields of the device structure with fibre channel-generic values.
+ *
+ * If no device structure is passed, a new one is constructed, complete with
+ * a private data area of size @sizeof_priv. A 32-byte (not bit)
+ * alignment is enforced for this private data area.
+ *
+ * If an empty string area is passed as dev->name, or a new structure is made,
+ * a new name string is constructed.
+ */
struct net_device *init_fcdev(struct net_device *dev, int sizeof_priv)
{
return init_netdev(dev, sizeof_priv, "fc%d", fc_setup);
}
+/**
+ * alloc_fcdev - Register fibre channel device
+ * @sizeof_priv: Size of additional driver-private structure to be allocated
+ * for this fibre channel device
+ *
+ * Fill in the fields of the device structure with fibre channel-generic values.
+ *
+ * Constructs a new net device, complete with a private data area of
+ * size @sizeof_priv. A 32-byte (not bit) alignment is enforced for
+ * this private data area.
+ */
+
+struct net_device *alloc_fcdev(int sizeof_priv)
+{
+ return alloc_netdev(sizeof_priv, "fc%d", fc_setup);
+}
+
int register_fcdev(struct net_device *dev)
{
- dev_init_buffers(dev);
- if (dev->init && dev->init(dev) != 0) {
- unregister_fcdev(dev);
- return -EIO;
- }
- return 0;
+ return __register_netdev(dev);
}
void unregister_fcdev(struct net_device *dev)
{
- rtnl_lock();
- unregister_netdevice(dev);
- rtnl_unlock();
+ unregister_netdev(dev);
}
+EXPORT_SYMBOL(fc_setup);
+EXPORT_SYMBOL(init_fcdev);
+EXPORT_SYMBOL(alloc_fcdev);
+EXPORT_SYMBOL(register_fcdev);
+EXPORT_SYMBOL(unregister_fcdev);
+
#endif /* CONFIG_NET_FC */
static void netdrv_interrupt (int irq, void *dev_instance,
struct pt_regs *regs);
static int netdrv_close (struct net_device *dev);
-static int mii_ioctl (struct net_device *dev, struct ifreq *rq, int cmd);
+static int netdrv_ioctl (struct net_device *dev, struct ifreq *rq, int cmd);
static struct net_device_stats *netdrv_get_stats (struct net_device *dev);
static inline u32 ether_crc (int length, unsigned char *data);
static void netdrv_set_rx_mode (struct net_device *dev);
*dev_out = NULL;
/* dev zeroed in init_etherdev */
- dev = init_etherdev (NULL, sizeof (*tp));
+ dev = alloc_etherdev (sizeof (*tp));
if (dev == NULL) {
printk (KERN_ERR PFX "unable to alloc new ethernet\n");
DPRINTK ("EXIT, returning -ENOMEM\n");
goto err_out;
}
- rc = pci_request_regions (pdev, dev->name);
+ rc = pci_request_regions (pdev, "pci-skeleton");
if (rc)
goto err_out;
tp->chipset,
rtl_chip_info[tp->chipset].name);
+ i = register_netdev (dev);
+ if (i)
+ goto err_out_unmap;
+
DPRINTK ("EXIT, returning 0\n");
*ioaddr_out = ioaddr;
*dev_out = dev;
return 0;
+err_out_unmap:
#ifndef USE_IO_OPS
+ iounmap(ioaddr);
err_out_free_res:
+#endif
pci_release_regions (pdev);
-#endif /* !USE_IO_OPS */
err_out:
- unregister_netdev (dev);
kfree (dev);
DPRINTK ("EXIT, returning %d\n", rc);
return rc;
dev->stop = netdrv_close;
dev->get_stats = netdrv_get_stats;
dev->set_multicast_list = netdrv_set_rx_mode;
- dev->do_ioctl = mii_ioctl;
+ dev->do_ioctl = netdrv_ioctl;
dev->tx_timeout = netdrv_tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
}
#endif
- /* E. Gill */
- /* Note from BSD driver:
- * Here's a totally undocumented fact for you. When the
- * RealTek chip is in the process of copying a packet into
- * RAM for you, the length will be 0xfff0. If you spot a
- * packet header with this value, you need to stop. The
- * datasheet makes absolutely no mention of this and
- * RealTek should be shot for this.
- */
- if (rx_size == 0xfff0)
- break;
-
/* If Rx err or invalid rx_size/rx_status received
* (which happens if we get lost in the ring),
* Rx process gets reset, so we abort any further
}
-static int mii_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
+static int netdrv_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
{
struct netdrv_private *tp = dev->priv;
u16 *data = (u16 *) & rq->ifr_data;
bool ' Pcmcia Wireless LAN' CONFIG_NET_PCMCIA_RADIO
if [ "$CONFIG_NET_PCMCIA_RADIO" = "y" ]; then
dep_tristate ' Aviator/Raytheon 2.4MHz wireless support' CONFIG_PCMCIA_RAYCS $CONFIG_PCMCIA
- dep_tristate ' Hermes (AT&T/Lucent/Orinoco/3com) wireless support' CONFIG_PCMCIA_HERMES $CONFIG_PCMCIA
+ dep_tristate ' Hermes support (Orinoco/WavelanIEEE/PrismII/Symbol 802.11b cards)' CONFIG_PCMCIA_HERMES $CONFIG_PCMCIA
dep_tristate ' Xircom Netwave AirSurfer wireless support' CONFIG_PCMCIA_NETWAVE $CONFIG_PCMCIA
dep_tristate ' AT&T/Lucent Wavelan wireless support' CONFIG_PCMCIA_WAVELAN $CONFIG_PCMCIA
dep_tristate ' Aironet 4500/4800 PCMCIA support' CONFIG_AIRONET4500_CS $CONFIG_AIRONET4500 $CONFIG_PCMCIA
obj- :=
# Things that need to export symbols
-export-objs := ray_cs.o
+export-objs := ray_cs.o hermes.o
# 16-bit client drivers
obj-$(CONFIG_PCMCIA_3C589) += 3c589_cs.o
static const char *version = "hermes.c: 12 Dec 2000 David Gibson <hermes@gibson.dropbear.id.au>";
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/smp.h>
return -EINVAL;
err = hermes_docmd_wait(hw, HERMES_CMD_ALLOC, size, &resp);
- if (err)
+ if (err) {
+ printk(KERN_WARNING "hermes @ 0x%x: Frame allocation command failed (0x%X).\n",
+ hw->iobase, err);
return err;
+ }
reg = hermes_read_regn(hw, EVSTAT);
k = ALLOC_COMPL_TIMEOUT;
#define HERMES_RID_CNF_PM_ENABLE ((uint16_t)0xfc09)
#define HERMES_RID_CNF_PM_MCAST_RX ((uint16_t)0xfc0b)
#define HERMES_RID_CNF_PM_PERIOD ((uint16_t)0xfc0c)
+#define HERMES_RID_CNF_PM_HOLDOVER ((uint16_t)0xfc0d)
#define HERMES_RID_CNF_NICKNAME ((uint16_t)0xfc0e)
#define HERMES_RID_CNF_WEP_ON ((uint16_t)0xfc20)
#define HERMES_RID_CNF_MWO_ROBUST ((uint16_t)0xfc25)
#define HERMES_RID_CNF_PRISM2_KEY1 ((uint16_t)0xfc25)
#define HERMES_RID_CNF_PRISM2_KEY2 ((uint16_t)0xfc26)
#define HERMES_RID_CNF_PRISM2_KEY3 ((uint16_t)0xfc27)
+#define HERMES_RID_CNF_SYMBOL_AUTH_TYPE ((uint16_t)0xfc2A)
+/* This one is read only */
+#define HERMES_RID_CNF_SYMBOL_KEY_LENGTH ((uint16_t)0xfc2B)
+#define HERMES_RID_CNF_SYMBOL_BASIC_RATES ((uint16_t)0xfc8A)
/*
* Information RIDs
(hermes_read_ltv((hw),(bap),(rid), sizeof(*buf), NULL, (buf)))
#define HERMES_WRITE_RECORD(hw, bap, rid, buf) \
(hermes_write_ltv((hw),(bap),(rid),HERMES_BYTES_TO_RECLEN(sizeof(*buf)),(buf)))
+#define HERMES_WRITE_RECORD_LEN(hw, bap, rid, buf, len) \
+ (hermes_write_ltv((hw),(bap),(rid),HERMES_BYTES_TO_RECLEN(len),(buf)))
static inline int hermes_read_wordrec(hermes_t *hw, int bap, uint16_t rid, uint16_t *word)
{
-/* dldwd_cs.c 0.01
+/* orinoco_cs.c 0.03 - (formerly known as dldwd_cs.c)
*
* A driver for "Hermes" chipset based PCMCIA wireless adaptors, such
- * as the Lucent/Orinoco cards and the Cabletron RoamAbout. It should
- * also be usable on Prism II based cards such as the Farallon Skyline
- * and the 3Com AirConnect.
+ * as the Lucent WavelanIEEE/Orinoco cards and their OEM (Cabletron/
+ * EnteraSys RoamAbout 802.11, ELSA Airlancer, Melco Buffalo and others).
+ * It should also be usable on various Prism II based cards such as the
+ * Linksys, D-Link and Farallon Skyline. It should also work on Symbol
+ * cards such as the 3Com AirConnect and Ericsson WLAN.
*
* Copyright (C) 2000 David Gibson, Linuxcare Australia <hermes@gibson.dropbear.id.au>
+ * With some help from :
+ * Copyright (C) 2001 Jean Tourrilhes, HP Labs <jt@hpl.hp.com>
*
* Based on dummy_cs.c 1.27 2000/06/12 21:27:25
*
* INTERRUPTS! so it shouldn't be used except for resets, when we
* don't care about that.*/
+/*
+ * Tentative changelog...
+ *
+ * v0.01 -> v0.02 - 21/3/2001 - Jean II
+ * o Allow use of the regular ethX device name instead of dldwdX
+ * o Warning on IBSS with ESSID=any for firmware 6.06
+ * o Put proper range.throughput values (optimistic)
+ * o IWSPY support (IOCTL and stat gather in Rx path)
+ * o Allow setting frequency in Ad-Hoc mode
+ * o Disable WEP setting if !has_wep to work on old firmware
+ * o Fix txpower range
+ * o Start adding support for Samsung/Compaq firmware
+ *
+ * v0.02 -> v0.03 - 23/3/2001 - Jean II
+ * o Start adding Symbol support - need to check all that
+ * o Fix Prism2/Symbol WEP to accept 128 bits keys
+ * o Add Symbol WEP (add authentication type)
+ * o Add Prism2/Symbol rate
+ * o Add PM timeout (holdover duration)
+ * o Enable "iwconfig eth0 key off" and friends (toggle flags)
+ * o Enable "iwconfig eth0 power unicast/all" (toggle flags)
+ * o Tried with an Intel card. It reports firmware 1.01 and behaves like
+ * an antiquated firmware; however, on Windows it says 2.00. Yuck !
+ * o Workaround firmware bug in allocate buffer (Intel 1.01)
+ * o Finish external renaming to orinoco...
+ * o Testing with various Wavelan firmwares
+ *
+ * TODO - Jean II
+ * o inline functions (lots of candidates, need to reorder code)
+ * o Separate PCMCIA-specific code to help Airport/Mini PCI driver
+ * o Test PrismII/Symbol cards & firmware versions
+ */
+
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#ifdef PCMCIA_DEBUG
static int pc_debug = PCMCIA_DEBUG;
-static char *version = "dldwd_cs.c 0.01 (David Gibson <hermes@gibson.dropbear.id.au>)";
+static char *version = "orinoco_cs.c 0.03 (David Gibson <hermes@gibson.dropbear.id.au>)";
MODULE_PARM(pc_debug, "i");
#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args)
#define DEBUGMORE(n, args...) do { if (pc_debug>(n)) printk(args); } while (0)
#if (! defined (WIRELESS_EXT)) || (WIRELESS_EXT < 10)
-#error "dldwd_cs requires Wireless extensions v10 or later."
+#error "orinoco_cs requires Wireless extensions v10 or later."
#endif /* (! defined (WIRELESS_EXT)) || (WIRELESS_EXT < 10) */
+#define WIRELESS_SPY /* enable iwspy support */
/*====================================================================*/
static uint irq_mask = 0xdeb8;
/* Newer, simpler way of listing specific interrupts */
static int irq_list[4] = { -1 };
+/* Control device name allocation. 0 -> dldwdX ; 1 -> ethX */
+static int eth = 1;
MODULE_PARM(irq_mask, "i");
MODULE_PARM(irq_list, "1-4i");
+MODULE_PARM(eth, "i");
/*====================================================================*/
#define DLDWD_MACPORT 0
#define IRQ_LOOP_MAX 10
#define TX_NICBUF_SIZE 2048
+#define TX_NICBUF_SIZE_BUG 1585 /* Bug in Intel firmware */
#define MAX_KEYS 4
#define MAX_KEY_SIZE 14
#define LARGE_KEY_SIZE 13
/* Capabilities of the hardware/firmware */
hermes_identity_t firmware_info;
- int has_ibss, has_port3, prefer_port3;
- int has_wep, has_big_wep, wep_type;
-#define WEP_TYPE_LUCENT 1
-#define WEP_TYPE_PRISM 2
+ int firmware_type;
+#define FIRMWARE_TYPE_LUCENT 1
+#define FIRMWARE_TYPE_PRISM2 2
+#define FIRMWARE_TYPE_SYMBOL 3
+ int has_ibss, has_port3, prefer_port3, has_ibss_any;
+ int has_wep, has_big_wep;
int has_mwo;
int has_pm;
- int broken_reset;
+ int broken_reset, broken_allocate;
uint16_t channel_mask;
/* Current configuration */
uint32_t iw_mode;
int port_type, allow_ibss;
- uint16_t wep_on, tx_key;
+ uint16_t wep_on, wep_auth, tx_key;
dldwd_keys_t keys;
char nick[IW_ESSID_MAX_SIZE+1];
char desired_essid[IW_ESSID_MAX_SIZE+1];
uint16_t channel;
uint16_t ap_density, rts_thresh;
uint16_t tx_rate_ctrl;
- uint16_t pm_on, pm_mcast, pm_period;
+ uint16_t pm_on, pm_mcast, pm_period, pm_timeout;
int promiscuous, allmulti, mc_count;
+#ifdef WIRELESS_SPY
+ int spy_number;
+ u_char spy_address[IW_MAX_SPY][ETH_ALEN];
+ struct iw_quality spy_stat[IW_MAX_SPY];
+#endif
+
/* /proc based debugging stuff */
struct proc_dir_entry *dir_dev;
struct proc_dir_entry *dir_regs;
static struct net_device_stats *dldwd_get_stats(struct net_device *dev);
static struct iw_statistics *dldwd_get_wireless_stats(struct net_device *dev);
+static void dldwd_stat_gather(struct net_device *dev,
+ struct sk_buff *skb,
+ struct dldwd_frame_hdr *hdr);
static int dldwd_ioctl_getiwrange(struct net_device *dev, struct iw_point *rrq);
static int dldwd_ioctl_setiwencode(struct net_device *dev, struct iw_point *erq);
hermes_t *hw = &priv->hw;
int err = 0;
hermes_id_t idbuf;
+ int frame_size;
TRACE_ENTER(priv->ndev.name);
if (err)
goto out;
- err = hermes_allocate(hw, TX_NICBUF_SIZE, &priv->txfid);
+ frame_size = TX_NICBUF_SIZE;
+ /* This stupid bug is present in Intel firmware 1.10, and
+ * may be fixed in later firmwares - Jean II */
+ if(priv->broken_allocate)
+ frame_size = TX_NICBUF_SIZE_BUG;
+ err = hermes_allocate(hw, frame_size, &priv->txfid);
if (err)
goto out;
priv->allow_ibss);
if (err)
goto out;
+ if((strlen(priv->desired_essid) == 0) && (priv->allow_ibss)
+ && (!priv->has_ibss_any)) {
+ printk(KERN_WARNING "%s: This firmware requires an "
+ "ESSID in IBSS-Ad-Hoc mode.\n", dev->name);
+ /* With wvlan_cs, in this case, we would crash.
+ * Hopefully, this driver will behave better...
+ * Jean II */
+ }
}
/* Set up encryption */
- err = __dldwd_hw_setup_wep(priv);
- if (err)
- goto out;
+ if (priv->has_wep) {
+ err = __dldwd_hw_setup_wep(priv);
+ if (err)
+ goto out;
+ }
/* Set the desired ESSID */
idbuf.len = cpu_to_le16(strlen(priv->desired_essid));
priv->pm_period);
if (err)
goto out;
+ err = hermes_write_wordrec(hw, USER_BAP, HERMES_RID_CNF_PM_HOLDOVER,
+ priv->pm_timeout);
+ if (err)
+ goto out;
}
/* Set promiscuity / multicast*/
hermes_t *hw = &priv->hw;
int err = 0;
- switch (priv->wep_type) {
- case 1: /* Lucent style WEP */
+ switch (priv->firmware_type) {
+ case FIRMWARE_TYPE_LUCENT: /* Lucent style WEP */
if (priv->wep_on) {
err = hermes_write_wordrec(hw, USER_BAP, HERMES_RID_CNF_TX_KEY, priv->tx_key);
if (err)
return err;
break;
- case 2: /* Prism II style WEP */
+ case FIRMWARE_TYPE_PRISM2: /* Prism II style WEP */
+ case FIRMWARE_TYPE_SYMBOL: /* Symbol style WEP */
if (priv->wep_on) {
- char keybuf[SMALL_KEY_SIZE+1];
+ char keybuf[LARGE_KEY_SIZE+1];
+ int keylen;
+ int i;
err = hermes_write_wordrec(hw, USER_BAP, HERMES_RID_CNF_PRISM2_TX_KEY,
priv->tx_key);
if (err)
return err;
- keybuf[SMALL_KEY_SIZE] = '\0';
-
- memcpy(keybuf, priv->keys[0].data, SMALL_KEY_SIZE);
- err = HERMES_WRITE_RECORD(hw, USER_BAP, HERMES_RID_CNF_PRISM2_KEY0, &keybuf);
- if (err)
- return err;
- memcpy(keybuf, priv->keys[1].data, SMALL_KEY_SIZE);
- err = HERMES_WRITE_RECORD(hw, USER_BAP, HERMES_RID_CNF_PRISM2_KEY1, &keybuf);
- if (err)
- return err;
- memcpy(keybuf, priv->keys[2].data, SMALL_KEY_SIZE);
- err = HERMES_WRITE_RECORD(hw, USER_BAP, HERMES_RID_CNF_PRISM2_KEY2, &keybuf);
- if (err)
- return err;
- memcpy(keybuf, priv->keys[3].data, SMALL_KEY_SIZE);
- err = HERMES_WRITE_RECORD(hw, USER_BAP, HERMES_RID_CNF_PRISM2_KEY3, &keybuf);
- if (err)
- return err;
+ keybuf[LARGE_KEY_SIZE] = '\0';
+
+ /* Write all 4 keys */
+ for(i = 0; i < MAX_KEYS; i++) {
+ keylen = priv->keys[i].len;
+ keybuf[SMALL_KEY_SIZE] = '\0';
+ memcpy(keybuf, priv->keys[i].data, keylen);
+ err = HERMES_WRITE_RECORD_LEN(hw, USER_BAP, HERMES_RID_CNF_PRISM2_KEY0 + i, &keybuf, keylen);
+ if (err)
+ return err;
+ }
+ /* Symbol cards : set the authentication :
+ * 0 -> no encryption, 1 -> open,
+ * 2 -> shared key, 3 -> shared key 128bit only */
+ if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL) {
+ err = hermes_write_wordrec(hw, USER_BAP, HERMES_RID_CNF_SYMBOL_AUTH_TYPE, priv->wep_auth);
+ if (err)
+ return err;
+ }
}
err = hermes_write_wordrec(hw, USER_BAP, HERMES_RID_CNF_PRISM2_WEP_ON, priv->wep_on);
}
return 0;
- return 0;
}
static int dldwd_hw_get_bssid(dldwd_priv_t *priv, char buf[ETH_ALEN])
than from priv->desired_essid, just in case the
firmware is allowed to change it on us. I'm not
sure about this */
+ /* My guess is that OWN_SSID should always be whatever
+ * we set on the card, whereas CURRENT_SSID is the one that
+ * may change... - Jean II */
uint16_t rid;
*active = 1;
static void __dldwd_ev_tick(dldwd_priv_t *priv, hermes_t *hw)
{
- struct net_device *dev = &priv->ndev;
-
- printk(KERN_DEBUG "%s: TICK\n", dev->name);
+ printk(KERN_DEBUG "%s: TICK\n", priv->ndev.name);
}
static void __dldwd_ev_wterr(dldwd_priv_t *priv, hermes_t *hw)
{
- struct net_device *dev = &priv->ndev;
-
/* This seems to happen a fair bit under load, but ignoring it
seems to work fine...*/
- DEBUG(1, "%s: MAC controller error (WTERR). Ignoring.\n", dev->name);
+ DEBUG(1, "%s: MAC controller error (WTERR). Ignoring.\n",
+ priv->ndev.name);
}
static void __dldwd_ev_infdrop(dldwd_priv_t *priv, hermes_t *hw)
{
- struct net_device *dev = &priv->ndev;
-
- printk(KERN_WARNING "%s: Information frame lost.\n", dev->name);
+ printk(KERN_WARNING "%s: Information frame lost.\n", priv->ndev.name);
}
static void __dldwd_ev_info(dldwd_priv_t *priv, hermes_t *hw)
{
- struct net_device *dev = &priv->ndev;
-
- DEBUG(3, "%s: Information frame received.\n", dev->name);
+ DEBUG(3, "%s: Information frame received.\n", priv->ndev.name);
/* We don't actually do anything about it - we assume the MAC
controller can deal with it */
}
/* Yes, you heard right, that's le16. 802.2 and 802.3 are
big-endian, but 802.11 is little-endian believe it or
not. */
+ /* Correct. 802.3 is big-endian byte order and little endian bit
+ * order, whereas 802.11 is little endian for both byte and bit
+ * order. That's specified in the 802.11 spec. - Jean II */
/* Sanity check */
if (length > MAX_FRAME_SIZE) {
skb->protocol = eth_type_trans(skb, dev);
skb->ip_summed = CHECKSUM_NONE;
-
+ /* Process the wireless stats if needed */
+ dldwd_stat_gather(dev, skb, &hdr);
+
+ /* Pass the packet to the networking stack */
netif_rx(skb);
stats->rx_packets++;
stats->rx_bytes += length;
switch (priv->firmware_info.vendor) {
case 0x1:
- /* Lucent Wavelan IEEE, Orinoco or Cabletron card */
+ /* Lucent Wavelan IEEE, Lucent Orinoco, Cabletron RoamAbout,
+ * ELSA, Melco, HP, IBM, Dell 1150 cards */
vendor_str = "Lucent";
+ /* Lucent MAC : 00:60:1D:* & 00:02:2D:* */
+ priv->firmware_type = FIRMWARE_TYPE_LUCENT;
priv->broken_reset = 0;
+ priv->broken_allocate = 0;
priv->has_port3 = 1;
priv->has_ibss = (firmver >= 0x60006);
+ priv->has_ibss_any = (firmver >= 0x60010);
priv->has_wep = (firmver >= 0x40020);
priv->has_big_wep = 1; /* FIXME: this is wrong - how do we tell
Gold cards from the others? */
- priv->wep_type = WEP_TYPE_LUCENT;
priv->has_mwo = (firmver >= 0x60000);
priv->has_pm = (firmver >= 0x40020);
+ /* Tested with Lucent firmware :
+ * 1.16 ; 4.08 ; 4.52 ; 6.04 ; 6.16 => Jean II
+ * Tested CableTron firmware : 4.32 => Anton */
break;
case 0x2:
vendor_str = "Generic Prism II";
+ /* Note : my Intel card reports this value, but I can't do
+ * much with it, so I guess it's broken - Jean II */
+ priv->firmware_type = FIRMWARE_TYPE_PRISM2;
priv->broken_reset = 0;
- priv->has_port3 = 1; /* FIXME: no idea if this is right */
+ priv->broken_allocate = (firmver <= 0x10001);
+ priv->has_port3 = 1;
priv->has_ibss = 0; /* FIXME: no idea if this is right */
- priv->has_wep = 1;
- priv->has_big_wep = 0;
- priv->wep_type = WEP_TYPE_PRISM;
+ priv->has_wep = (firmver >= 0x20000);
+ priv->has_big_wep = 1;
priv->has_mwo = 0;
- priv->has_pm = 1;
+ priv->has_pm = (firmver >= 0x20000);
+ /* Tested with Intel firmware : 1.01 => Jean II */
+ /* Note : firmware 1.01 is *seriously* broken */
+ break;
+ case 0x3:
+ vendor_str = "Samsung";
+ /* To check - Should cover Samsung & Compaq */
+
+ priv->firmware_type = FIRMWARE_TYPE_PRISM2;
+ priv->broken_reset = 0;
+ priv->broken_allocate = 0;
+ priv->has_port3 = 1;
+ priv->has_ibss = 0; /* FIXME: available in later firmwares */
+ priv->has_wep = (firmver >= 0x20000); /* FIXME */
+ priv->has_big_wep = 0; /* FIXME */
+ priv->has_mwo = 0;
+ priv->has_pm = (firmver >= 0x20000); /* FIXME */
break;
case 0x6:
- vendor_str = "LinkSys";
+ vendor_str = "LinkSys/D-Link";
+ /* To check */
+ priv->firmware_type = FIRMWARE_TYPE_PRISM2;
priv->broken_reset = 0;
+ priv->broken_allocate = 0;
priv->has_port3 = 1;
- priv->has_ibss = 0;
- priv->has_wep = 1;
+ priv->has_ibss = 0; /* FIXME: available in later firmwares */
+ priv->has_wep = (firmver >= 0x20000); /* FIXME */
priv->has_big_wep = 0;
- priv->wep_type = WEP_TYPE_PRISM;
priv->has_mwo = 0;
- priv->has_pm = 1;
+ priv->has_pm = (firmver >= 0x20000); /* FIXME */
break;
+#if 0
+ case 0x???: /* Could someone help here ??? */
+ vendor_str = "Symbol";
+ /* Symbol , 3Com AirConnect, Ericsson WLAN */
+
+ priv->firmware_type = FIRMWARE_TYPE_SYMBOL;
+ priv->broken_reset = 0;
+ priv->broken_allocate = 0;
+ priv->has_port3 = 1;
+ priv->has_ibss = 0; /* FIXME: available in later firmwares */
+ priv->has_wep = (firmver >= 0x20000); /* FIXME */
+ priv->has_big_wep = 1; /* Probably RID_SYMBOL_KEY_LENGTH */
+ priv->has_mwo = 0;
+ priv->has_pm = (firmver >= 0x20000);
+ break;
+#endif
default:
vendor_str = "UNKNOWN";
+ priv->firmware_type = 0;
priv->broken_reset = 0;
+ priv->broken_allocate = 0;
priv->has_port3 = 0;
priv->has_ibss = 0;
priv->has_wep = 0;
priv->has_big_wep = 0;
- priv->wep_type = 0;
priv->has_mwo = 0;
priv->has_pm = 0;
}
dev->name, priv->firmware_info.id, priv->firmware_info.vendor,
vendor_str, priv->firmware_info.major, priv->firmware_info.minor);
+ if ((priv->broken_reset) || (priv->broken_allocate))
+ printk(KERN_INFO "%s: Buggy firmware, please upgrade ASAP.\n", dev->name);
if (priv->has_port3)
- printk(KERN_INFO "%s: Lucent ad-hoc demo mode supported.\n", dev->name);
+ printk(KERN_INFO "%s: Ad-hoc demo mode supported.\n", dev->name);
if (priv->has_ibss)
printk(KERN_INFO "%s: IEEE standard IBSS ad-hoc mode supported.\n",
dev->name);
dev->name);
goto out;
}
+ err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CNF_PM_HOLDOVER,
+ &priv->pm_timeout);
+ if (err) {
+ printk(KERN_ERR "%s: failed to read power management timeout!\n",
+ dev->name);
+ goto out;
+ }
}
/* Set up the default configuration */
if (priv->port_type == 3) {
memset(&wstats->qual, 0, sizeof(wstats->qual));
+#ifdef WIRELESS_SPY
+ /* If a spy address is defined, we report stats of the
+ * first spy address - Jean II */
+ if (priv->spy_number > 0) {
+ wstats->qual.qual = priv->spy_stat[0].qual;
+ wstats->qual.level = priv->spy_stat[0].level;
+ wstats->qual.noise = priv->spy_stat[0].noise;
+ wstats->qual.updated = priv->spy_stat[0].updated;
+ }
+#endif /* WIRELESS_SPY */
} else {
err = hermes_read_commsqual(hw, USER_BAP, &cq);
+ DEBUG(3, "%s: Global stats = %X-%X-%X\n", dev->name,
+ cq.qual, cq.signal, cq.noise);
+
+ /* Why are we using MIN/MAX ? We don't really care
+ * if the value goes above max, because we export the
+ * raw dBm values anyway. The normalisation should be done
+ * in user space - Jean II */
wstats->qual.qual = MAX(MIN(cq.qual, 0x8b-0x2f), 0);
wstats->qual.level = MAX(MIN(cq.signal, 0x8a), 0x2f) - 0x95;
wstats->qual.noise = MAX(MIN(cq.noise, 0x8a), 0x2f) - 0x95;
return wstats;
}
+#ifdef WIRELESS_SPY
+static inline void dldwd_spy_gather(struct net_device *dev,
+ u_char *mac,
+ hermes_commsqual_t *cq)
+{
+ dldwd_priv_t *priv = (dldwd_priv_t *)dev->priv;
+ int i;
+
+ /* Gather wireless spy statistics: for each packet, compare the
+ * source address with our list and, on a match, record the stats... */
+ for (i = 0; i < priv->spy_number; i++)
+ if (!memcmp(mac, priv->spy_address[i], ETH_ALEN)) {
+ priv->spy_stat[i].qual = MAX(MIN(cq->qual, 0x8b-0x2f), 0);
+ priv->spy_stat[i].level = MAX(MIN(cq->signal, 0x8a), 0x2f) - 0x95;
+ priv->spy_stat[i].noise = MAX(MIN(cq->noise, 0x8a), 0x2f) - 0x95;
+ priv->spy_stat[i].updated = 7;
+ }
+}
+#endif /* WIRELESS_SPY */
+
+static void dldwd_stat_gather(struct net_device *dev,
+ struct sk_buff *skb,
+ struct dldwd_frame_hdr *hdr)
+{
+ dldwd_priv_t *priv = (dldwd_priv_t *)dev->priv;
+ hermes_commsqual_t cq;
+
+ /* Using spy support with lots of Rx packets, like in an
+ * infrastructure (AP), will really slow down everything, because
+ * the MAC address must be compared to each entry of the spy list.
+ * If the user really asks for it (set some address in the
+ * spy list), we do it, but he will pay the price.
+ * Note that to get here, you need both WIRELESS_SPY
+ * compiled in AND some addresses in the list !!!
+ */
+#ifdef WIRELESS_EXT
+ /* Note : gcc will optimise the whole section away if
+ * WIRELESS_SPY is not defined... - Jean II */
+ if (
+#ifdef WIRELESS_SPY
+ (priv->spy_number > 0) ||
+#endif
+ 0 )
+ {
+ u_char *stats = (u_char *) &(hdr->desc.q_info);
+ /* This code may look strange. Everywhere we are using 16 bit
+ * ints except here. I've verified that these are the
+ * correct values. Please check on PPC - Jean II */
+ cq.signal = stats[1]; /* High order byte */
+ cq.noise = stats[0]; /* Low order byte */
+ cq.qual = stats[0] - stats[1]; /* Better than nothing */
+
+ DEBUG(3, "%s: Packet stats = %X-%X-%X\n", dev->name,
+ cq.qual, cq.signal, cq.noise);
+
+#ifdef WIRELESS_SPY
+ dldwd_spy_gather(dev, skb->mac.raw + ETH_ALEN, &cq);
+#endif
+ }
+#endif /* WIRELESS_EXT */
+}
+
struct p8022_hdr encaps_hdr = {
0xaa, 0xaa, 0x03, {0x00, 0x00, 0xf8}
};
/* Much of this shamelessly taken from wvlan_cs.c. No idea
* what it all means -dgibson */
- range.throughput = 0.5 * 1024 * 1024; /* TCP throughput measured with socklib.
- I'm hoping MB/s are the right units. */
range.min_nwid = range.max_nwid = 0; /* We don't use nwids */
-
+ /* Set available channels/frequencies */
range.num_channels = NUM_CHANNELS;
- if (err)
- return err;
k = 0;
for (i = 0; i < NUM_CHANNELS; i++) {
if (priv->channel_mask & (1 << i)) {
range.sensitivity = 3;
- if (ptype == 3) {
+ if ((ptype == 3) && (priv->spy_number == 0)) {
/* Quality stats meaningless in ad-hoc mode */
range.max_qual.qual = 0;
range.max_qual.level = 0;
return err;
range.num_bitrates = numrates;
+ /* Set an indication of the max TCP throughput in bit/s that we can
+ * expect using this interface. May be used for QoS stuff...
+ * Jean II */
+ if(numrates > 2)
+ range.throughput = 5 * 1000 * 1000; /* ~5 Mb/s */
+ else
+ range.throughput = 1.5 * 1000 * 1000; /* ~1.5 Mb/s */
+
range.min_rts = 0;
range.max_rts = 2347;
range.min_frag = 256;
range.min_pmp = 0;
range.max_pmp = 65535000;
+ range.min_pmt = 0;
+ range.max_pmt = 65535 * 1000; /* ??? */
range.pmp_flags = IW_POWER_PERIOD;
- range.pmt_flags = 0;
- range.pm_capa = IW_POWER_PERIOD | IW_POWER_UNICAST_R;
+ range.pmt_flags = IW_POWER_TIMEOUT;
+ range.pm_capa = IW_POWER_PERIOD | IW_POWER_TIMEOUT | IW_POWER_UNICAST_R;
- range.num_txpower = 0;
+ range.num_txpower = 1;
range.txpower[0] = 15; /* 15dBm */
range.txpower_capa = IW_TXPOW_DBM;
{
dldwd_priv_t *priv = dev->priv;
int index = (erq->flags & IW_ENCODE_INDEX) - 1;
- int enable = 0;
+ int setindex = priv->tx_key;
+ int enable = priv->wep_on;
+ int auth = priv->wep_auth;
uint16_t xlen = 0;
int err = 0;
char keybuf[MAX_KEY_SIZE];
- if (erq->flags & IW_ENCODE_RESTRICTED)
- return -EINVAL;
-
if (erq->pointer) {
/* We actually have a key to set */
} else
xlen = 0;
- if ( (index == priv->tx_key) && (xlen > 0) )
+ /* Switch on WEP if off */
+ if ((!enable) && (xlen > 0)) {
+ setindex = index;
enable = 1;
+ }
} else {
+ /* Important note : if the user does "iwconfig eth0 enc off",
+ * we will arrive here with an index of -1. This is valid
+ * but needs to be taken care of... Jean II */
if ((index < 0) || (index >= MAX_KEYS)) {
- err = -EINVAL;
- goto out;
+ if((index != -1) || (erq->flags == 0)) {
+ err = -EINVAL;
+ goto out;
+ }
+ } else {
+ /* Set the index : Check that the key is valid */
+ if(priv->keys[index].len == 0) {
+ err = -EINVAL;
+ goto out;
+ }
+ setindex = index;
}
-
- enable = 1;
}
if (erq->flags & IW_ENCODE_DISABLED)
enable = 0;
-
+ /* Only for Symbol cards (so far) - Jean II */
+ if (erq->flags & IW_ENCODE_OPEN)
+ auth = 1;
+ if (erq->flags & IW_ENCODE_RESTRICTED)
+ auth = 2; /* If all keys are 128-bit -> should be 3 ??? */
+ /* Agree with master wep setting */
+ if (enable == 0)
+ auth = 0;
+ else if(auth == 0)
+ auth = 1; /* Encryption requires some authentication */
+
if (erq->pointer) {
priv->keys[index].len = cpu_to_le16(xlen);
memset(priv->keys[index].data, 0, sizeof(priv->keys[index].data));
memcpy(priv->keys[index].data, keybuf, erq->length);
}
- priv->tx_key = index;
+ priv->tx_key = setindex;
priv->wep_on = enable;
+ priv->wep_auth = auth;
out:
dldwd_unlock(priv);
erq->flags |= IW_ENCODE_DISABLED;
erq->flags |= index + 1;
+ /* Only for Symbol cards - Jean II */
+ if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL) {
+ switch(priv->wep_auth) {
+ case 1:
+ erq->flags |= IW_ENCODE_OPEN;
+ break;
+ case 2:
+ case 3:
+ erq->flags |= IW_ENCODE_RESTRICTED;
+ break;
+ case 0:
+ default:
+ break;
+ }
+ }
+
xlen = le16_to_cpu(priv->keys[index].len);
erq->length = xlen;
dldwd_priv_t *priv = dev->priv;
char essidbuf[IW_ESSID_MAX_SIZE+1];
+ /* Note : ESSID is ignored in Ad-Hoc demo mode, but we can set it
+ * anyway... - Jean II */
+
memset(&essidbuf, 0, sizeof(essidbuf));
if (erq->flags) {
dldwd_priv_t *priv = dev->priv;
int chan = -1;
- if (priv->iw_mode == IW_MODE_ADHOC)
+ /* We can only use this in Ad-Hoc demo mode to set the operating
+ * frequency, or in IBSS mode to set the frequency where the IBSS
+ * will be created - Jean II */
+ if (priv->iw_mode != IW_MODE_ADHOC)
return -EOPNOTSUPP;
if ( (frq->e == 0) && (frq->m <= 1000) ) {
dldwd_priv_t *priv = dev->priv;
int err = 0;
int rate_ctrl = -1;
+ int fixed, upto;
int brate;
int i;
dldwd_lock(priv);
+ /* Normalise value */
brate = rrq->value / 500000;
- if (! rrq->fixed) {
- if (brate > 0)
- brate = -brate;
- else
- brate = -22;
- }
+
+ switch (priv->firmware_type) {
+ case FIRMWARE_TYPE_LUCENT: /* Lucent style rate */
+ if (! rrq->fixed) {
+ if (brate > 0)
+ brate = -brate;
+ else
+ brate = -22;
+ }
- for (i = 0; i < NUM_RATES; i++)
- if (rate_list[i] == brate) {
- rate_ctrl = i;
+ for (i = 0; i < NUM_RATES; i++)
+ if (rate_list[i] == brate) {
+ rate_ctrl = i;
+ break;
+ }
+
+ if ( (rate_ctrl < 1) || (rate_ctrl >= NUM_RATES) )
+ err = -EINVAL;
+ else
+ priv->tx_rate_ctrl = rate_ctrl;
+ break;
+ case FIRMWARE_TYPE_PRISM2: /* Prism II style rate */
+ case FIRMWARE_TYPE_SYMBOL: /* Symbol style rate */
+ switch(brate) {
+ case 0:
+ fixed = 0x0;
+ upto = 0x15;
+ break;
+ case 2:
+ fixed = 0x1;
+ upto = 0x1;
+ break;
+ case 4:
+ fixed = 0x2;
+ upto = 0x3;
break;
+ case 11:
+ fixed = 0x4;
+ upto = 0x7;
+ break;
+ case 22:
+ fixed = 0x8;
+ upto = 0x15;
+ break;
+ default:
+ fixed = 0x0;
+ upto = 0x0;
}
-
- if ( (rate_ctrl < 1) || (rate_ctrl >= NUM_RATES) )
- err = -EINVAL;
- else
- priv->tx_rate_ctrl = rate_ctrl;
+ if (rrq->fixed)
+ rate_ctrl = fixed;
+ else
+ rate_ctrl = upto;
+ if (rate_ctrl == 0)
+ err = -EINVAL;
+ else
+ priv->tx_rate_ctrl = rate_ctrl;
+ break;
+ }
dldwd_unlock(priv);
hermes_t *hw = &priv->hw;
int err = 0;
uint16_t val;
- int brate;
+ int brate = 0;
dldwd_lock(priv);
err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CNF_TX_RATE_CTRL, &val);
if (err)
goto out;
- brate = rate_list[val];
+ switch (priv->firmware_type) {
+ case FIRMWARE_TYPE_LUCENT: /* Lucent style rate */
+ brate = rate_list[val];
- if (brate < 0) {
- rrq->fixed = 0;
+ if (brate < 0) {
+ rrq->fixed = 0;
- err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CURRENT_TX_RATE, &val);
- if (err)
- goto out;
+ err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CURRENT_TX_RATE, &val);
+ if (err)
+ goto out;
- if (val == 6)
+ if (val == 6)
+ brate = 11;
+ else
+ brate = 2*val;
+ } else
+ rrq->fixed = 1;
+ break;
+ case FIRMWARE_TYPE_PRISM2: /* Prism II style rate */
+ case FIRMWARE_TYPE_SYMBOL: /* Symbol style rate */
+ /* Check if auto or fixed (crude approximation) */
+ if((val & 0x1) && (val > 1)) {
+ rrq->fixed = 0;
+
+ err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CURRENT_TX_RATE, &val);
+ if (err)
+ goto out;
+ } else
+ rrq->fixed = 1;
+
+ if(val >= 8)
+ brate = 22;
+ else if(val >= 4)
brate = 11;
+ else if(val >= 2)
+ brate = 4;
else
- brate = 2*val;
- } else
- rrq->fixed = 1;
+ brate = 2;
+ break;
+ }
rrq->value = brate * 500000;
rrq->disabled = 0;
{
dldwd_priv_t *priv = dev->priv;
int err = 0;
- int mcast = 1;
dldwd_lock(priv);
} else {
switch (prq->flags & IW_POWER_MODE) {
case IW_POWER_UNICAST_R:
- mcast = 0;
+ priv->pm_mcast = 0;
+ priv->pm_on = 1;
break;
case IW_POWER_ALL_R:
- mcast = 1;
+ priv->pm_mcast = 1;
+ priv->pm_on = 1;
break;
case IW_POWER_ON:
- mcast = priv->pm_mcast;
+ /* No flags : but we may have a value - Jean II */
break;
default:
err = -EINVAL;
goto out;
if (prq->flags & IW_POWER_TIMEOUT) {
- err = -EINVAL;
- goto out;
+ priv->pm_on = 1;
+ priv->pm_timeout = prq->value / 1000;
}
-
if (prq->flags & IW_POWER_PERIOD) {
priv->pm_on = 1;
- priv->pm_mcast = mcast;
priv->pm_period = prq->value / 1000;
- } else {
+ }
+ /* It's valid to not have a value if we are just toggling
+ * the flags... Jean II */
+ if(!priv->pm_on) {
err = -EINVAL;
goto out;
}
-
}
out:
dldwd_priv_t *priv = dev->priv;
hermes_t *hw = &priv->hw;
int err = 0;
- uint16_t enable, period, mcast;
+ uint16_t enable, period, timeout, mcast;
dldwd_lock(priv);
if (err)
goto out;
+ err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CNF_PM_HOLDOVER, &timeout);
+ if (err)
+ goto out;
+
err = hermes_read_wordrec(hw, USER_BAP, HERMES_RID_CNF_PM_MCAST_RX, &mcast);
if (err)
goto out;
prq->disabled = !enable;
- prq->flags = IW_POWER_PERIOD;
+ /* Note : by default, display the period */
+ if ((prq->flags & IW_POWER_TYPE) == IW_POWER_TIMEOUT) {
+ prq->flags = IW_POWER_TIMEOUT;
+ prq->value = timeout * 1000;
+ } else {
+ prq->flags = IW_POWER_PERIOD;
+ prq->value = period * 1000;
+ }
if (mcast)
prq->flags |= IW_POWER_ALL_R;
else
prq->flags |= IW_POWER_UNICAST_R;
- prq->value = period * 1000;
out:
dldwd_unlock(priv);
return 0;
}
+/* Spy is used for link quality/strength measurements in Ad-Hoc mode
+ * Jean II */
+static int dldwd_ioctl_setspy(struct net_device *dev, struct iw_point *srq)
+{
+ dldwd_priv_t *priv = dev->priv;
+ struct sockaddr address[IW_MAX_SPY];
+ int number = srq->length;
+ int i;
+ int err = 0;
+
+ /* Check the number of addresses */
+ if (number > IW_MAX_SPY)
+ return -E2BIG;
+
+ /* Get the data in the driver */
+ if (srq->pointer) {
+ if (copy_from_user(address, srq->pointer,
+ sizeof(struct sockaddr) * number))
+ return -EFAULT;
+ }
+
+ /* Make sure nobody messes with the structure while we do */
+ dldwd_lock(priv);
+
+ /* dldwd_lock() doesn't disable interrupts, so make sure the
+ * interrupt rx path doesn't get confused while we copy */
+ priv->spy_number = 0;
+
+ if (number > 0) {
+ /* Extract the addresses */
+ for (i = 0; i < number; i++)
+ memcpy(priv->spy_address[i], address[i].sa_data,
+ ETH_ALEN);
+ /* Reset stats */
+ memset(priv->spy_stat, 0,
+ sizeof(struct iw_quality) * IW_MAX_SPY);
+ /* Set number of addresses */
+ priv->spy_number = number;
+ }
+
+ /* Time to show what we have done... */
+ DEBUG(0, "%s: New spy list:\n", dev->name);
+ for (i = 0; i < number; i++) {
+ DEBUG(0, "%s: %d - %02x:%02x:%02x:%02x:%02x:%02x\n",
+ dev->name, i+1,
+ priv->spy_address[i][0], priv->spy_address[i][1],
+ priv->spy_address[i][2], priv->spy_address[i][3],
+ priv->spy_address[i][4], priv->spy_address[i][5]);
+ }
+
+ /* Now, let the others play */
+ dldwd_unlock(priv);
+
+ return err;
+}
+
+static int dldwd_ioctl_getspy(struct net_device *dev, struct iw_point *srq)
+{
+ dldwd_priv_t *priv = dev->priv;
+ struct sockaddr address[IW_MAX_SPY];
+ struct iw_quality spy_stat[IW_MAX_SPY];
+ int number;
+ int i;
+
+ dldwd_lock(priv);
+
+ number = priv->spy_number;
+ if ((number > 0) && (srq->pointer)) {
+ /* Create address struct */
+ for (i = 0; i < number; i++) {
+ memcpy(address[i].sa_data, priv->spy_address[i],
+ ETH_ALEN);
+ address[i].sa_family = AF_UNIX;
+ }
+ /* Copy stats */
+ /* In theory, we should disable irqs while copying the stats
+ * because the rx path might update it in the middle...
+ * Bah, who cares? - Jean II */
+ memcpy(&spy_stat, priv->spy_stat,
+ sizeof(struct iw_quality) * IW_MAX_SPY);
+ for (i=0; i < number; i++)
+ priv->spy_stat[i].updated = 0;
+ }
+
+ dldwd_unlock(priv);
+
+ /* Push stuff to user space */
+ srq->length = number;
+ if(copy_to_user(srq->pointer, address,
+ sizeof(struct sockaddr) * number))
+ return -EFAULT;
+ if(copy_to_user(srq->pointer + (sizeof(struct sockaddr)*number),
+ &spy_stat, sizeof(struct iw_quality) * number))
+ return -EFAULT;
+
+ return 0;
+}
+
static int dldwd_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
dldwd_priv_t *priv = dev->priv;
wrq->u.txpower.flags = IW_TXPOW_DBM;
break;
+ case SIOCSIWSPY:
+ DEBUG(1, "%s: SIOCSIWSPY\n", dev->name);
+
+ err = dldwd_ioctl_setspy(dev, &wrq->u.data);
+ break;
+
+ case SIOCGIWSPY:
+ DEBUG(1, "%s: SIOCGIWSPY\n", dev->name);
+
+ err = dldwd_ioctl_getspy(dev, &wrq->u.data);
+ break;
+
case SIOCGIWPRIV:
DEBUG(1, "%s: SIOCGIWPRIV\n", dev->name);
if (wrq->u.data.pointer) {
TRACE_ENTER("dldwd");
/* create the directory for it to sit in */
- dir_base = create_proc_entry("dldwd", S_IFDIR, &proc_root);
+ dir_base = create_proc_entry("hermes", S_IFDIR, &proc_root);
if (dir_base == NULL) {
- printk(KERN_ERR "Unable to initialise /proc/dldwd.\n");
+ printk(KERN_ERR "Unable to initialise /proc/hermes.\n");
dldwd_proc_cleanup();
err = -ENOMEM;
}
dev->dir_dev = create_proc_entry(dev->node.dev_name, S_IFDIR | S_IRUGO | S_IXUGO,
dir_base);
if (dev->dir_dev == NULL) {
- printk(KERN_ERR "Unable to initialise /proc/dldwd/%s.\n", dev->node.dev_name);
+ printk(KERN_ERR "Unable to initialise /proc/hermes/%s.\n", dev->node.dev_name);
goto fail;
}
dev->dir_regs = create_proc_read_entry("regs", S_IFREG | S_IRUGO,
dev->dir_dev, dldwd_proc_get_hermes_regs, dev);
if (dev->dir_regs == NULL) {
- printk(KERN_ERR "Unable to initialise /proc/dldwd/%s/regs.\n", dev->node.dev_name);
+ printk(KERN_ERR "Unable to initialise /proc/hermes/%s/regs.\n", dev->node.dev_name);
goto fail;
}
dev->dir_recs = create_proc_read_entry("recs", S_IFREG | S_IRUGO,
dev->dir_dev, dldwd_proc_get_hermes_recs, dev);
if (dev->dir_recs == NULL) {
- printk(KERN_ERR "Unable to initialise /proc/dldwd/%s/recs.\n", dev->node.dev_name);
+ printk(KERN_ERR "Unable to initialise /proc/hermes/%s/recs.\n", dev->node.dev_name);
goto fail;
}
TRACE_ENTER("dldwd");
if (dir_base) {
- remove_proc_entry("dldwd", &proc_root);
+ remove_proc_entry("hermes", &proc_root);
dir_base = NULL;
}
database.
*/
-static dev_info_t dev_info = "dldwd_cs";
+static dev_info_t dev_info = "orinoco_cs";
/*
A linked list of "instances" of the dummy device. Each actual
*/
if (link->state & DEV_CONFIG) {
#ifdef PCMCIA_DEBUG
- printk(KERN_DEBUG "dldwd_cs: detach postponed, '%s' "
+ printk(KERN_DEBUG "orinoco_cs: detach postponed, '%s' "
"still locked\n", link->dev->dev_name);
#endif
link->state |= DEV_STALE_LINK;
/* Unlink device structure, and free it */
*linkp = link->next;
- DEBUG(0, "dldwd_cs: detach: link=%p link->dev=%p\n", link, link->dev);
+ DEBUG(0, "orinoco_cs: detach: link=%p link->dev=%p\n", link, link->dev);
if (link->dev) {
- DEBUG(0, "dldwd_cs: About to unregister net device %p\n",
+ DEBUG(0, "orinoco_cs: About to unregister net device %p\n",
&priv->ndev);
unregister_netdev(&priv->ndev);
}
CS_CHECK(RequestIRQ, link->handle, &link->irq);
}
- sprintf(priv->node.dev_name, "dldwd%d", priv->instance);
-
/* We initialize the hermes structure before completing PCMCIA
configuration just in case the interrupt handler gets
called. */
*/
CS_CHECK(RequestConfiguration, link->handle, &link->conf);
+ ndev->base_addr = link->io.BasePort1;
+ ndev->irq = link->irq.AssignedIRQ;
+
+ /* Instance name : by default, use hermesX, on demand use the
+ * regular ethX (less risky) - Jean II */
+ if(!eth)
+ sprintf(ndev->name, "hermes%d", priv->instance);
+ else
+ ndev->name[0] = '\0';
+ /* Tell the stack we exist */
+ if (register_netdev(ndev) != 0) {
+ printk(KERN_ERR "orinoco_cs: register_netdev() failed\n");
+ goto failed;
+ }
+ strcpy(priv->node.dev_name, ndev->name);
+
/* Finally, report what we've done */
printk(KERN_INFO "%s: index 0x%02x: Vcc %d.%d",
priv->node.dev_name, link->conf.ConfigIndex,
/* And give us the proc nodes for debugging */
if (dldwd_proc_dev_init(priv) != 0) {
- printk(KERN_ERR "dldwd_cs: Failed to create /proc node for %s\n",
+ printk(KERN_ERR "orinoco_cs: Failed to create /proc node for %s\n",
priv->node.dev_name);
goto failed;
}
- ndev->base_addr = link->io.BasePort1;
- ndev->irq = link->irq.AssignedIRQ;
-
- strcpy(ndev->name, priv->node.dev_name);
- if (register_netdev(ndev) != 0) {
- printk(KERN_ERR "dldwd_cs: register_netdev() failed\n");
- goto failed;
- }
-
/*
At this point, the dev_node_t structure(s) need to be
initialized and arranged in a linked list at link->dev.
no one will try to access the device or its data structures.
*/
if (link->open) {
- DEBUG(0, "dldwd_cs: release postponed, '%s' still open\n",
+ DEBUG(0, "orinoco_cs: release postponed, '%s' still open\n",
link->dev->dev_name);
link->state |= DEV_STALE_CONFIG;
return;
DEBUG(0, "%s\n", version);
CardServices(GetCardServicesInfo, &serv);
if (serv.Revision != CS_RELEASE_CODE) {
- printk(KERN_NOTICE "dldwd_cs: Card Services release "
+ printk(KERN_NOTICE "orinoco_cs: Card Services release "
"does not match!\n");
return -1;
}
unregister_pccard_driver(&dev_info);
if (dev_list)
- DEBUG(0, "dldwd_cs: Removing leftover devices.\n");
+ DEBUG(0, "orinoco_cs: Removing leftover devices.\n");
while (dev_list != NULL) {
del_timer(&dev_list->release);
if (dev_list->state & DEV_CONFIG)
if (tulip_debug > 0 && did_version++ == 0)
printk(KERN_INFO "%s", version);
- dev = init_etherdev(NULL, 0);
+ dev = alloc_etherdev(0);
if (!dev)
return NULL;
if (tulip_tbl[chip_idx].flags & HAS_ACPI)
pci_write_config_dword(pdev, 0x40, 0x00000000);
- printk(KERN_INFO "%s: %s rev %d at %#3lx,",
- dev->name, tulip_tbl[chip_idx].chip_name, chip_rev, ioaddr);
-
/* Stop the chip's Tx and Rx processes. */
outl_CSR6(inl(ioaddr + CSR6) & ~0x2002, ioaddr, chip_idx);
/* Clear the missed-packet counter. */
(volatile int)inl(ioaddr + CSR8);
- if (chip_idx == DC21041) {
- if (inl(ioaddr + CSR9) & 0x8000) {
- printk(" 21040 compatible mode,");
- chip_idx = DC21040;
- } else {
- printk(" 21041 mode,");
- }
- }
-
/* The station address ROM is read byte serially. The register must
be polled, waiting for the value to be read bit serially from the
EEPROM.
#endif
}
- for (i = 0; i < 6; i++)
- printk("%c%2.2X", i ? ':' : ' ', last_phys_addr[i] = dev->dev_addr[i]);
- printk(", IRQ %d.\n", irq);
last_irq = irq;
/* We do a request_region() only to register /proc/ioports info. */
- /* Note that proper size is tulip_tbl[chip_idx].chip_name, but... */
- request_region(ioaddr, tulip_tbl[chip_idx].io_size, dev->name);
+ request_region(ioaddr, tulip_tbl[chip_idx].io_size, "xircom_tulip_cb");
dev->base_addr = ioaddr;
dev->irq = irq;
else if (chip_idx == AX88140)
tp->csr0 |= 0x2000;
-#ifdef TULIP_FULL_DUPLEX
- tp->full_duplex = 1;
- tp->full_duplex_lock = 1;
-#endif
-#ifdef TULIP_DEFAULT_MEDIA
- tp->default_port = TULIP_DEFAULT_MEDIA;
-#endif
-#ifdef TULIP_NO_MEDIA_SWITCH
- tp->medialock = 1;
-#endif
-
/* The lower four bits are the media type. */
if (board_idx >= 0 && board_idx < MAX_UNITS) {
tp->default_port = options[board_idx] & 15;
int reg4 = ((mii_status>>6) & tp->to_advertise) | 1;
tp->phys[phy_idx] = phy;
tp->advertising[phy_idx++] = reg4;
- printk(KERN_INFO "%s: MII transceiver #%d "
+ printk(KERN_INFO "xircom(%s): MII transceiver #%d "
"config %4.4x status %4.4x advertising %4.4x.\n",
- dev->name, phy, mii_reg0, mii_status, mii_advert);
+ pdev->slot_name, phy, mii_reg0, mii_status, mii_advert);
/* Fixup for DLink with miswired PHY. */
if (mii_advert != reg4) {
- printk(KERN_DEBUG "%s: Advertising %4.4x on PHY %d,"
+ printk(KERN_DEBUG "xircom(%s): Advertising %4.4x on PHY %d,"
" previously advertising %4.4x.\n",
- dev->name, reg4, phy, mii_advert);
+ pdev->slot_name, reg4, phy, mii_advert);
mdio_write(dev, phy, 4, reg4);
}
/* Enable autonegotiation: some boards default to off. */
}
tp->mii_cnt = phy_idx;
if (tp->mtable && tp->mtable->has_mii && phy_idx == 0) {
- printk(KERN_INFO "%s: ***WARNING***: No MII transceiver found!\n",
- dev->name);
+ printk(KERN_INFO "xircom(%s): ***WARNING***: No MII transceiver found!\n",
+ pdev->slot_name);
tp->phys[0] = 1;
}
}
/* Reset the xcvr interface and turn on heartbeat. */
switch (chip_idx) {
- case DC21041:
- outl(0x00000000, ioaddr + CSR13);
- outl(0xFFFFFFFF, ioaddr + CSR14);
- outl(0x00000008, ioaddr + CSR15); /* Listen on AUI also. */
- outl_CSR6(inl(ioaddr + CSR6) | 0x0200, ioaddr, chip_idx);
- outl(0x0000EF05, ioaddr + CSR13);
- break;
- case DC21040:
- outl(0x00000000, ioaddr + CSR13);
- outl(0x00000004, ioaddr + CSR13);
- break;
case DC21140: default:
if (tp->mtable)
outl(tp->mtable->csr12dir | 0x100, ioaddr + CSR12);
break;
- case DC21142:
- case PNIC2:
- if (tp->mii_cnt || media_cap[dev->if_port] & MediaIsMII) {
- outl_CSR6(0x82020000, ioaddr, chip_idx);
- outl(0x0000, ioaddr + CSR13);
- outl(0x0000, ioaddr + CSR14);
- outl_CSR6(0x820E0000, ioaddr, chip_idx);
- } else {
- outl_CSR6(0x82420200, ioaddr, chip_idx);
- outl(0x0001, ioaddr + CSR13);
- outl(0x0003FFFF, ioaddr + CSR14);
- outl(0x0008, ioaddr + CSR15);
- outl(0x0001, ioaddr + CSR13);
- outl(0x1301, ioaddr + CSR12); /* Start NWay. */
- }
- break;
case X3201_3:
outl(0x0008, ioaddr + CSR15);
udelay(5); /* The delays are Xircom recommended to give the
udelay(5);
outl_CSR6(0x32000200, ioaddr, chip_idx);
break;
- case LC82C168:
- if ( ! tp->mii_cnt) {
- outl_CSR6(0x00420000, ioaddr, chip_idx);
- outl(0x30, ioaddr + CSR12);
- outl(0x0001F078, ioaddr + 0xB8);
- outl(0x0201F078, ioaddr + 0xB8); /* Turn on autonegotiation. */
- }
- break;
- case MX98713: case COMPEX9881:
- outl_CSR6(0x00000000, ioaddr, chip_idx);
- outl(0x000711C0, ioaddr + CSR14); /* Turn on NWay. */
- outl(0x00000001, ioaddr + CSR13);
- break;
- case MX98715: case MX98725:
- outl_CSR6(0x01a80000, ioaddr, chip_idx);
- outl(0xFFFFFFFF, ioaddr + CSR14);
- outl(0x00001000, ioaddr + CSR12);
- break;
- case COMET:
- /* No initialization necessary. */
- break;
}
+ if (register_netdev(dev)) {
+		release_region(ioaddr, tulip_tbl[chip_idx].io_size);
+ if (tp->mtable)
+ kfree(tp->mtable);
+ kfree(dev->priv);
+ kfree(dev);
+ return NULL;
+ }
+
+ printk(KERN_INFO "%s: %s rev %d at %#3lx,",
+ dev->name, tulip_tbl[chip_idx].chip_name, chip_rev, ioaddr);
+ for (i = 0; i < 6; i++)
+ printk("%c%2.2X", i ? ':' : ' ',
+ last_phys_addr[i] = dev->dev_addr[i]);
+ printk(", IRQ %d.\n", irq);
+
return dev;
}
\f
}
/* Put the setup frame on the Tx list. */
- tp->tx_ring[0].length = 0x08000000 | 192;
+ tp->tx_ring[tp->cur_tx].length = 0x08000000 | 192;
/* Lie about the address of our setup frame to make the */
/* chip happy */
- tp->tx_ring[0].buffer1 = virt_to_bus(tp->setup_frame);
- tp->tx_ring[0].status = DescOwned;
+ tp->tx_ring[tp->cur_tx].buffer1 = virt_to_bus(tp->setup_frame);
+ tp->tx_ring[tp->cur_tx].status = DescOwned;
tp->cur_tx++;
}
#ifdef CARDBUS
if (tp->chip_id == X3201_3)
tp->tx_aligned_skbuff[i] = dev_alloc_skb(PKT_BUF_SZ);
-#endif CARDBUS
+#endif /* CARDBUS */
}
tp->tx_ring[i-1].buffer2 = virt_to_bus(&tp->tx_ring[0]);
}
request_region(ioaddr, PCNET32_TOTAL_SIZE, chipname);
/* pci_alloc_consistent returns page-aligned memory, so we do not have to check the alignment */
- if ((lp = (struct pcnet32_private *)pci_alloc_consistent(pdev, sizeof(*lp), &lp_dma_addr)) == NULL)
+ if ((lp = pci_alloc_consistent(pdev, sizeof(*lp), &lp_dma_addr)) == NULL)
return -ENOMEM;
memset(lp, 0, sizeof(*lp));
static int
pcnet32_open(struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
unsigned long ioaddr = dev->base_addr;
u16 val;
int i;
static void
pcnet32_purge_tx_ring(struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
int i;
for (i = 0; i < TX_RING_SIZE; i++) {
static int
pcnet32_init_ring(struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
int i;
lp->tx_full = 0;
static void
pcnet32_restart(struct net_device *dev, unsigned int csr0_bits)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
unsigned long ioaddr = dev->base_addr;
int i;
static void
pcnet32_tx_timeout (struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
unsigned int ioaddr = dev->base_addr;
/* Transmitter timeout, serious problems. */
static int
pcnet32_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
unsigned int ioaddr = dev->base_addr;
u16 status;
int entry;
static void
pcnet32_interrupt(int irq, void *dev_id, struct pt_regs * regs)
{
- struct net_device *dev = (struct net_device *)dev_id;
+ struct net_device *dev = dev_id;
struct pcnet32_private *lp;
unsigned long ioaddr;
u16 csr0,rap;
}
ioaddr = dev->base_addr;
- lp = (struct pcnet32_private *)dev->priv;
+ lp = dev->priv;
spin_lock(&lp->lock);
static int
pcnet32_rx(struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
int entry = lp->cur_rx & RX_RING_MOD_MASK;
/* If we own the next entry, it's a new packet. Send it up. */
pcnet32_close(struct net_device *dev)
{
unsigned long ioaddr = dev->base_addr;
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
int i;
netif_stop_queue(dev);
static struct net_device_stats *
pcnet32_get_stats(struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
unsigned long ioaddr = dev->base_addr;
u16 saved_addr;
unsigned long flags;
/* taken from the sunlance driver, which it took from the depca driver */
static void pcnet32_load_multicast (struct net_device *dev)
{
- struct pcnet32_private *lp = (struct pcnet32_private *) dev->priv;
+ struct pcnet32_private *lp = dev->priv;
volatile struct pcnet32_init_block *ib = &lp->init_block;
volatile u16 *mcast_table = (u16 *)&ib->filter;
struct dev_mc_list *dmi=dev->mc_list;
static void pcnet32_set_multicast_list(struct net_device *dev)
{
unsigned long ioaddr = dev->base_addr;
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
if (dev->flags&IFF_PROMISC) {
/* Log any net taps. */
static int pcnet32_mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
unsigned long ioaddr = dev->base_addr;
- struct pcnet32_private *lp = (struct pcnet32_private *)dev->priv;
+ struct pcnet32_private *lp = dev->priv;
u16 *data = (u16 *)&rq->ifr_data;
int phyaddr = lp->a.read_bcr (ioaddr, 33);
/* No need to check MOD_IN_USE, as sys_delete_module() checks. */
while (pcnet32_dev) {
- struct pcnet32_private *lp = (struct pcnet32_private *) pcnet32_dev->priv;
+ struct pcnet32_private *lp = pcnet32_dev->priv;
next_dev = lp->next;
unregister_netdev(pcnet32_dev);
release_region(pcnet32_dev->base_addr, PCNET32_TOTAL_SIZE);
RCopen(struct net_device *dev)
{
int post_buffers = MAX_NMBR_RCV_BUFFERS;
- PDPA pDpa = (PDPA) dev->priv;
+ PDPA pDpa = dev->priv;
int count = 0;
int requested = 0;
int error;
RC_xmit_packet(struct sk_buff *skb, struct net_device *dev)
{
- PDPA pDpa = (PDPA) dev->priv;
+ PDPA pDpa = dev->priv;
singleTCB tcb;
psingleTCB ptcb = &tcb;
RC_RETURN status = 0;
{
PDPA pDpa;
- struct net_device *dev = (struct net_device *)(dev_id);
+ struct net_device *dev = dev_id;
- pDpa = (PDPA) (dev->priv);
+ pDpa = dev->priv;
if (pDpa->shutdown)
dprintk("shutdown: service irq\n");
static void rc_timer(unsigned long data)
{
struct net_device *dev = (struct net_device *)data;
- PDPA pDpa = (PDPA) (dev->priv);
+ PDPA pDpa = dev->priv;
int init_status;
static int retry;
int post_buffers = MAX_NMBR_RCV_BUFFERS;
static int
RCclose(struct net_device *dev)
{
- PDPA pDpa = (PDPA) dev->priv;
+ PDPA pDpa = dev->priv;
netif_stop_queue(dev);
irq = pci_dev->irq;
ioaddr = pci_resource_start(pci_dev, 0);
- net_dev = init_etherdev(NULL, sizeof(struct sis900_private));
+ net_dev = alloc_etherdev(sizeof(struct sis900_private));
if (!net_dev)
return -ENOMEM;
SET_MODULE_OWNER(net_dev);
- if (!request_region(ioaddr, SIS900_TOTAL_SIZE, net_dev->name)) {
- printk(KERN_ERR "sis900.c: can't allocate I/O space at 0x%lX\n", ioaddr);
- ret = -EBUSY;
+ ret = pci_request_regions(pci_dev, "sis900");
+ if (ret)
goto err_out;
- }
pci_read_config_byte(pci_dev, PCI_CLASS_REVISION, &revision);
if (revision == SIS630E_900_REV || revision == SIS630EA1_900_REV)
goto err_out_region;
}
- /* print some information about our NIC */
- printk(KERN_INFO "%s: %s at %#lx, IRQ %d, ", net_dev->name,
- card_name, ioaddr, irq);
- for (i = 0; i < 5; i++)
- printk("%2.2x:", (u8)net_dev->dev_addr[i]);
- printk("%2.2x.\n", net_dev->dev_addr[i]);
-
sis_priv = net_dev->priv;
/* We do a request_region() to register /proc/ioports info. */
net_dev->tx_timeout = sis900_tx_timeout;
net_dev->watchdog_timeo = TX_TIMEOUT;
+ ret = register_netdev(net_dev);
+ if (ret)
+ goto err_out_cleardev;
+
+ /* print some information about our NIC */
+ printk(KERN_INFO "%s: %s at %#lx, IRQ %d, ", net_dev->name,
+ card_name, ioaddr, irq);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", (u8)net_dev->dev_addr[i]);
+ printk("%2.2x.\n", net_dev->dev_addr[i]);
+
return 0;
+err_out_cleardev:
+ pci_set_drvdata(pci_dev, NULL);
err_out_region:
- release_region(ioaddr, SIS900_TOTAL_SIZE);
+ pci_release_regions(pci_dev);
err_out:
- unregister_netdev(net_dev);
kfree(net_dev);
return ret;
}
static int __init sis900_mii_probe (struct net_device * net_dev)
{
- struct sis900_private * sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private * sis_priv = net_dev->priv;
int phy_addr;
u8 revision;
static int
sis900_open(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
u8 revision;
int ret;
static void
sis900_init_tx_ring(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
int i;
static void
sis900_init_rx_ring(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
int i;
static void sis630_set_eq(struct net_device *net_dev, u8 revision)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
u16 reg14h, eq_value, max_value=0, min_value=0;
u8 host_bridge_rev;
int i, maxcount=10;
static void sis900_timer(unsigned long data)
{
struct net_device *net_dev = (struct net_device *)data;
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
struct mii_phy *mii_phy = sis_priv->mii;
static int next_tick = 5*HZ;
u16 status;
static void sis900_check_mode (struct net_device *net_dev, struct mii_phy *mii_phy)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
int speed, duplex;
u32 tx_flags = 0, rx_flags = 0;
static void sis900_tx_timeout(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
unsigned long flags;
int i;
static int
sis900_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
unsigned int entry;
unsigned long flags;
static void sis900_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
{
- struct net_device *net_dev = (struct net_device *)dev_instance;
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct net_device *net_dev = dev_instance;
+ struct sis900_private *sis_priv = net_dev->priv;
int boguscnt = max_interrupt_work;
long ioaddr = net_dev->base_addr;
u32 status;
static int sis900_rx(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
long ioaddr = net_dev->base_addr;
unsigned int entry = sis_priv->cur_rx % NUM_RX_DESC;
u32 rx_status = sis_priv->rx_ring[entry].cmdsts;
static void sis900_finish_xmit (struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
for (; sis_priv->dirty_tx < sis_priv->cur_tx; sis_priv->dirty_tx++) {
unsigned int entry;
sis900_close(struct net_device *net_dev)
{
long ioaddr = net_dev->base_addr;
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
int i;
netif_stop_queue(net_dev);
static int mii_ioctl(struct net_device *net_dev, struct ifreq *rq, int cmd)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
u16 *data = (u16 *)&rq->ifr_data;
switch(cmd) {
static struct net_device_stats *
sis900_get_stats(struct net_device *net_dev)
{
- struct sis900_private *sis_priv = (struct sis900_private *)net_dev->priv;
+ struct sis900_private *sis_priv = net_dev->priv;
return &sis_priv->stats;
}
static int sis900_set_config(struct net_device *dev, struct ifmap *map)
{
- struct sis900_private *sis_priv = (struct sis900_private *)dev->priv;
+ struct sis900_private *sis_priv = dev->priv;
struct mii_phy *mii_phy = sis_priv->mii;
u16 status;
struct net_device *net_dev = pci_get_drvdata(pci_dev);
unregister_netdev(net_dev);
- release_region(net_dev->base_addr, SIS900_TOTAL_SIZE);
kfree(net_dev);
+ pci_release_regions(pci_dev);
pci_set_drvdata(pci_dev, NULL);
}
#define SKFDDI_PSZ 32 /* address PROM size */
/*
- * address transmision from logical to physical offset address on board
+ * address transmission from logical to physical offset address on board
*/
#define FMA(a) (0x0400|((a)<<1)) /* FORMAC+ (r/w) */
#define P1A(a) (0x0800|((a)<<1)) /* PLC1 (r/w) */
#define SA_PMD_TYPE (8) /* start addr. PMD-Type */
/*
- * address transmision from logical to physical offset address on board
+ * address transmission from logical to physical offset address on board
*/
#define FMA(a) (0x0100|((a)<<1)) /* FORMAC+ (r/w) */
#define P2(a) (0x00c0|((a)<<1)) /* PLC2 (r/w) (DAS) */
#ifdef ISA
/*
- * address transmision from logic NPADDR6-0 to physical offset address on board
+ * address transmission from logic NPADDR6-0 to physical offset address on board
*/
#define FMA(a) (0x8000|(((a)&0x07)<<1)|(((a)&0x78)<<7)) /* FORMAC+ (r/w) */
#define PRA(a) (0x1000|(((a)&0x07)<<1)|(((a)&0x18)<<7)) /* PROM (read only)*/
/* PCI_SUB_ID 16 bit Subsystem ID */
/* PCI_BASE_ROM 32 bit Expansion ROM Base Address */
-#define PCI_ROMBASE 0xfffe0000L /* Bit 31..17: ROM BASE addres (1st) */
+#define PCI_ROMBASE 0xfffe0000L /* Bit 31..17: ROM BASE address (1st) */
#define PCI_ROMBASZ 0x0001c000L /* Bit 16..14: Treat as BASE or SIZE */
#define PCI_ROMSIZE 0x00003800L /* Bit 13..11: ROM Size Requirements */
#define PCI_ROMEN 0x00000001L /* Bit 0: Address Decode enable */
#define PCI_PROG_INTFC 0x00 /* PCI programming Interface (=0) */
/*
- * address transmision from logical to physical offset address on board
+ * address transmission from logical to physical offset address on board
*/
#define FMA(a) (0x0400|((a)<<2)) /* FORMAC+ (r/w) (SN3) */
#define P1(a) (0x0380|((a)<<2)) /* PLC1 (r/w) (DAS) */
struct s_srf_evc *evc ;
int cond_asserted = 0 ;
int cond_deasserted = 0 ;
- int event_occured = 0 ;
+ int event_occurred = 0 ;
int tsr ;
int T_Limit = 2*TICKS_PER_SECOND ;
*evc->evc_multiple = FALSE ;
}
smc->srf.any_report = TRUE ;
- event_occured = TRUE ;
+ event_occurred = TRUE ;
}
#ifdef FDDI_MIB
snmp_srf_event(smc,evc) ;
break ;
}
/* SR01c */
- if (event_occured && tsr < T_Limit) {
+ if (event_occurred && tsr < T_Limit) {
smc->srf.sr_state = SR1_HOLDOFF ;
break ;
}
break ;
}
/* SR00d */
- if (event_occured && tsr >= T_Limit) {
+ if (event_occurred && tsr >= T_Limit) {
smc->srf.TSR = smt_get_time() ;
smt_send_srf(smc) ;
break ;
irq = pdev->irq;
- dev = init_etherdev(NULL, sizeof(*np));
+ dev = alloc_etherdev(sizeof(*np));
if (!dev)
return -ENOMEM;
SET_MODULE_OWNER(dev);
- if (pci_request_regions(pdev, dev->name))
+ if (pci_request_regions(pdev, "sundance"))
goto err_out_netdev;
#ifdef USE_IO_OPS
goto err_out_iomem;
#endif
- printk(KERN_INFO "%s: %s at 0x%lx, ",
- dev->name, pci_id_tbl[chip_idx].name, ioaddr);
-
for (i = 0; i < 3; i++)
((u16 *)dev->dev_addr)[i] =
le16_to_cpu(eeprom_read(ioaddr, i + EEPROM_SA_OFFSET));
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
dev->base_addr = ioaddr;
dev->irq = irq;
if (mtu)
dev->mtu = mtu;
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_cleardev;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, pci_id_tbl[chip_idx].name, ioaddr);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
if (1) {
int phy, phy_idx = 0;
np->phys[0] = 1; /* Default setting */
card_idx++;
return 0;
+err_out_cleardev:
+ pci_set_drvdata(pdev, NULL);
#ifndef USE_IO_OPS
+ iounmap((void *)ioaddr);
err_out_iomem:
- pci_release_regions(pdev);
#endif
+ pci_release_regions(pdev);
err_out_netdev:
- unregister_netdev (dev);
kfree (dev);
return -ENODEV;
}
--- /dev/null
+/* $Id: sungem.c,v 1.8 2001/03/22 22:48:51 davem Exp $
+ * sungem.c: Sun GEM ethernet driver.
+ *
+ * Copyright (C) 2000, 2001 David S. Miller (davem@redhat.com)
+ */
+
+#include <linux/module.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+#include <linux/fcntl.h>
+#include <linux/interrupt.h>
+#include <linux/ptrace.h>
+#include <linux/ioport.h>
+#include <linux/in.h>
+#include <linux/malloc.h>
+#include <linux/string.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/io.h>
+#include <asm/byteorder.h>
+
+#ifdef __sparc__
+#include <asm/idprom.h>
+#include <asm/openprom.h>
+#include <asm/oplib.h>
+#include <asm/pbm.h>
+#endif
+
+#include "sungem.h"
+
+static char version[] __devinitdata =
+ "sungem.c:v0.75 21/Mar/01 David S. Miller (davem@redhat.com)\n";
+
+MODULE_AUTHOR("David S. Miller (davem@redhat.com)");
+MODULE_DESCRIPTION("Sun GEM Gbit ethernet driver");
+MODULE_PARM(gem_debug, "i");
+
+#define GEM_MODULE_NAME "gem"
+#define PFX GEM_MODULE_NAME ": "
+
+#ifdef GEM_DEBUG
+int gem_debug = GEM_DEBUG;
+#else
+int gem_debug = 1;
+#endif
+
+static struct pci_device_id gem_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_GEM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
+
+ /* These models only differ from the original GEM in
+ * that their tx/rx fifos are of a different size and
+ * they only support 10/100 speeds. -DaveM
+ */
+ { PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_RIO_GEM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
+#if 0
+ /* Need to figure this one out. */
+ { PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_PPC_GEM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
+#endif
+ {0, }
+};
+
+MODULE_DEVICE_TABLE(pci, gem_pci_tbl);
+
+static u16 phy_read(struct gem *gp, int reg)
+{
+ u32 cmd;
+ int limit = 10000;
+
+ cmd = (1 << 30);
+ cmd |= (2 << 28);
+ cmd |= (gp->mii_phy_addr << 23) & MIF_FRAME_PHYAD;
+ cmd |= (reg << 18) & MIF_FRAME_REGAD;
+ cmd |= (MIF_FRAME_TAMSB);
+ writel(cmd, gp->regs + MIF_FRAME);
+
+ while (limit--) {
+ cmd = readl(gp->regs + MIF_FRAME);
+ if (cmd & MIF_FRAME_TALSB)
+ break;
+
+ udelay(10);
+ }
+
+	if (limit < 0)	/* while (limit--) exits with limit == -1 on timeout */
+ cmd = 0xffff;
+
+ return cmd & MIF_FRAME_DATA;
+}
+
+static void phy_write(struct gem *gp, int reg, u16 val)
+{
+ u32 cmd;
+ int limit = 10000;
+
+ cmd = (1 << 30);
+ cmd |= (1 << 28);
+ cmd |= (gp->mii_phy_addr << 23) & MIF_FRAME_PHYAD;
+ cmd |= (reg << 18) & MIF_FRAME_REGAD;
+ cmd |= (MIF_FRAME_TAMSB);
+ cmd |= (val & MIF_FRAME_DATA);
+ writel(cmd, gp->regs + MIF_FRAME);
+
+ while (limit--) {
+ cmd = readl(gp->regs + MIF_FRAME);
+ if (cmd & MIF_FRAME_TALSB)
+ break;
+
+ udelay(10);
+ }
+}
+
+static void gem_handle_mif_event(struct gem *gp, u32 reg_val, u32 changed_bits)
+{
+}
+
+static int gem_pcs_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 pcs_istat = readl(gp->regs + PCS_ISTAT);
+ u32 pcs_miistat;
+
+ if (!(pcs_istat & PCS_ISTAT_LSC)) {
+ printk(KERN_ERR "%s: PCS irq but no link status change???\n",
+ dev->name);
+ return 0;
+ }
+
+ /* The link status bit latches on zero, so you must
+ * read it twice in such a case to see a transition
+ * to the link being up.
+ */
+ pcs_miistat = readl(gp->regs + PCS_MIISTAT);
+ if (!(pcs_miistat & PCS_MIISTAT_LS))
+ pcs_miistat |=
+ (readl(gp->regs + PCS_MIISTAT) &
+ PCS_MIISTAT_LS);
+
+ if (pcs_miistat & PCS_MIISTAT_ANC) {
+ /* The remote-fault indication is only valid
+ * when autoneg has completed.
+ */
+ if (pcs_miistat & PCS_MIISTAT_RF)
+ printk(KERN_INFO "%s: PCS AutoNEG complete, "
+ "RemoteFault\n", dev->name);
+ else
+ printk(KERN_INFO "%s: PCS AutoNEG complete.\n",
+ dev->name);
+ }
+
+ if (pcs_miistat & PCS_MIISTAT_LS)
+ printk(KERN_INFO "%s: PCS link is now up.\n",
+ dev->name);
+ else
+ printk(KERN_INFO "%s: PCS link is now down.\n",
+ dev->name);
+
+ return 0;
+}
+
+static int gem_txmac_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 txmac_stat = readl(gp->regs + MAC_TXSTAT);
+
+ /* Defer timer expiration is quite normal,
+ * don't even log the event.
+ */
+ if ((txmac_stat & MAC_TXSTAT_DTE) &&
+ !(txmac_stat & ~MAC_TXSTAT_DTE))
+ return 0;
+
+ if (txmac_stat & MAC_TXSTAT_URUN) {
+ printk("%s: TX MAC xmit underrun.\n",
+ dev->name);
+ gp->net_stats.tx_fifo_errors++;
+ }
+
+ if (txmac_stat & MAC_TXSTAT_MPE) {
+ printk("%s: TX MAC max packet size error.\n",
+ dev->name);
+ gp->net_stats.tx_errors++;
+ }
+
+ /* The rest are all cases of one of the 16-bit TX
+ * counters expiring.
+ */
+ if (txmac_stat & MAC_TXSTAT_NCE)
+ gp->net_stats.collisions += 0x10000;
+
+ if (txmac_stat & MAC_TXSTAT_ECE) {
+ gp->net_stats.tx_aborted_errors += 0x10000;
+ gp->net_stats.collisions += 0x10000;
+ }
+
+ if (txmac_stat & MAC_TXSTAT_LCE) {
+ gp->net_stats.tx_aborted_errors += 0x10000;
+ gp->net_stats.collisions += 0x10000;
+ }
+
+ /* We do not keep track of MAC_TXSTAT_FCE and
+ * MAC_TXSTAT_PCE events.
+ */
+ return 0;
+}
+
+static int gem_rxmac_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 rxmac_stat = readl(gp->regs + MAC_RXSTAT);
+
+ if (rxmac_stat & MAC_RXSTAT_OFLW) {
+ printk("%s: RX MAC fifo overflow.\n",
+ dev->name);
+ gp->net_stats.rx_over_errors++;
+ gp->net_stats.rx_fifo_errors++;
+ }
+
+ if (rxmac_stat & MAC_RXSTAT_ACE)
+ gp->net_stats.rx_frame_errors += 0x10000;
+
+ if (rxmac_stat & MAC_RXSTAT_CCE)
+ gp->net_stats.rx_crc_errors += 0x10000;
+
+ if (rxmac_stat & MAC_RXSTAT_LCE)
+ gp->net_stats.rx_length_errors += 0x10000;
+
+ /* We do not track MAC_RXSTAT_FCE and MAC_RXSTAT_VCE
+ * events.
+ */
+ return 0;
+}
+
+static int gem_mac_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 mac_cstat = readl(gp->regs + MAC_CSTAT);
+
+ /* This interrupt is just for pause frame and pause
+ * tracking. It is useful for diagnostics and debug
+ * but probably by default we will mask these events.
+ */
+ if (mac_cstat & MAC_CSTAT_PS)
+ gp->pause_entered++;
+
+ if (mac_cstat & MAC_CSTAT_PRCV)
+ gp->pause_last_time_recvd = (mac_cstat >> 16);
+
+ return 0;
+}
+
+static int gem_mif_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 mif_status = readl(gp->regs + MIF_STATUS);
+ u32 reg_val, changed_bits;
+
+ reg_val = (mif_status & MIF_STATUS_DATA) >> 16;
+ changed_bits = (mif_status & MIF_STATUS_STAT);
+
+ gem_handle_mif_event(gp, reg_val, changed_bits);
+
+ return 0;
+}
+
+static int gem_pci_interrupt(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ u32 pci_estat = readl(gp->regs + GREG_PCIESTAT);
+
+ if (gp->pdev->device == PCI_DEVICE_ID_SUN_GEM) {
+ printk(KERN_ERR "%s: PCI error [%04x] ",
+ dev->name, pci_estat);
+
+ if (pci_estat & GREG_PCIESTAT_BADACK)
+ printk("<No ACK64# during ABS64 cycle> ");
+ if (pci_estat & GREG_PCIESTAT_DTRTO)
+ printk("<Delayed transaction timeout> ");
+ if (pci_estat & GREG_PCIESTAT_OTHER)
+ printk("<other>");
+ printk("\n");
+ } else {
+ pci_estat |= GREG_PCIESTAT_OTHER;
+ printk(KERN_ERR "%s: PCI error\n", dev->name);
+ }
+
+ if (pci_estat & GREG_PCIESTAT_OTHER) {
+ u16 pci_cfg_stat;
+
+ /* Interrogate PCI config space for the
+ * true cause.
+ */
+ pci_read_config_word(gp->pdev, PCI_STATUS,
+ &pci_cfg_stat);
+ printk(KERN_ERR "%s: Read PCI cfg space status [%04x]\n",
+ dev->name, pci_cfg_stat);
+ if (pci_cfg_stat & PCI_STATUS_PARITY)
+ printk(KERN_ERR "%s: PCI parity error detected.\n",
+ dev->name);
+ if (pci_cfg_stat & PCI_STATUS_SIG_TARGET_ABORT)
+ printk(KERN_ERR "%s: PCI target abort.\n",
+ dev->name);
+ if (pci_cfg_stat & PCI_STATUS_REC_TARGET_ABORT)
+ printk(KERN_ERR "%s: PCI master acks target abort.\n",
+ dev->name);
+ if (pci_cfg_stat & PCI_STATUS_REC_MASTER_ABORT)
+ printk(KERN_ERR "%s: PCI master abort.\n",
+ dev->name);
+ if (pci_cfg_stat & PCI_STATUS_SIG_SYSTEM_ERROR)
+ printk(KERN_ERR "%s: PCI system error SERR#.\n",
+ dev->name);
+ if (pci_cfg_stat & PCI_STATUS_DETECTED_PARITY)
+ printk(KERN_ERR "%s: PCI parity error.\n",
+ dev->name);
+
+ /* Write the error bits back to clear them. */
+ pci_cfg_stat &= (PCI_STATUS_PARITY |
+ PCI_STATUS_SIG_TARGET_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT |
+ PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_SIG_SYSTEM_ERROR |
+ PCI_STATUS_DETECTED_PARITY);
+ pci_write_config_word(gp->pdev,
+ PCI_STATUS, pci_cfg_stat);
+ }
+
+ /* For all PCI errors, we should reset the chip. */
+ return 1;
+}
+
+static void gem_stop(struct gem *, unsigned long);
+static void gem_init_rings(struct gem *, int);
+static void gem_init_hw(struct gem *);
+
+/* All non-normal interrupt conditions get serviced here.
+ * Returns non-zero if we should just exit the interrupt
+ * handler right now (ie. if we reset the card which invalidates
+ * all of the other original irq status bits).
+ */
+static int gem_abnormal_irq(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ if (gem_status & GREG_STAT_RXNOBUF) {
+ /* Frame arrived, no free RX buffers available. */
+ gp->net_stats.rx_dropped++;
+ }
+
+ if (gem_status & GREG_STAT_RXTAGERR) {
+ /* corrupt RX tag framing */
+ gp->net_stats.rx_errors++;
+
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_PCS) {
+ if (gem_pcs_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_TXMAC) {
+ if (gem_txmac_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_RXMAC) {
+ if (gem_rxmac_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_MAC) {
+ if (gem_mac_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_MIF) {
+ if (gem_mif_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ if (gem_status & GREG_STAT_PCIERR) {
+ if (gem_pci_interrupt(dev, gp, gem_status))
+ goto do_reset;
+ }
+
+ return 0;
+
+do_reset:
+ gem_stop(gp, gp->regs);
+ gem_init_rings(gp, 1);
+ gem_init_hw(gp);
+ return 1;
+}
+
+static __inline__ void gem_tx(struct net_device *dev, struct gem *gp, u32 gem_status)
+{
+ int entry, limit;
+
+ entry = gp->tx_old;
+ limit = ((gem_status & GREG_STAT_TXNR) >> GREG_STAT_TXNR_SHIFT);
+ while (entry != limit) {
+ struct sk_buff *skb;
+ struct gem_txd *txd;
+ u32 dma_addr;
+
+ txd = &gp->init_block->txd[entry];
+ skb = gp->tx_skbs[entry];
+ dma_addr = (u32) le64_to_cpu(txd->buffer);
+ pci_unmap_single(gp->pdev, dma_addr,
+ skb->len, PCI_DMA_TODEVICE);
+ gp->tx_skbs[entry] = NULL;
+
+ gp->net_stats.tx_bytes += skb->len;
+ gp->net_stats.tx_packets++;
+
+ dev_kfree_skb_irq(skb);
+
+ entry = NEXT_TX(entry);
+ }
+ gp->tx_old = entry;
+
+ if (netif_queue_stopped(dev) &&
+ TX_BUFFS_AVAIL(gp) > 0)
+ netif_wake_queue(dev);
+}
+
+static __inline__ void gem_post_rxds(struct gem *gp, int limit)
+{
+ int cluster_start, curr, count, kick;
+
+ cluster_start = curr = (gp->rx_new & ~(4 - 1));
+ count = 0;
+ kick = -1;
+ while (curr != limit) {
+ curr = NEXT_RX(curr);
+ if (++count == 4) {
+ struct gem_rxd *rxd =
+ &gp->init_block->rxd[cluster_start];
+ for (;;) {
+ rxd->status_word = cpu_to_le64(RXDCTRL_FRESH);
+ rxd++;
+ cluster_start = NEXT_RX(cluster_start);
+ if (cluster_start == curr)
+ break;
+ }
+ kick = curr;
+ count = 0;
+ }
+ }
+ if (kick >= 0)
+ writel(kick, gp->regs + RXDMA_KICK);
+}
+
+static void gem_rx(struct gem *gp)
+{
+ int entry, drops;
+
+ entry = gp->rx_new;
+ drops = 0;
+ for (;;) {
+ struct gem_rxd *rxd = &gp->init_block->rxd[entry];
+ struct sk_buff *skb;
+		u64 status = le64_to_cpu(rxd->status_word);
+ u32 dma_addr;
+ int len;
+
+ if ((status & RXDCTRL_OWN) != 0)
+ break;
+
+ skb = gp->rx_skbs[entry];
+
+ len = (status & RXDCTRL_BUFSZ) >> 16;
+ if ((len < ETH_ZLEN) || (status & RXDCTRL_BAD)) {
+ gp->net_stats.rx_errors++;
+ if (len < ETH_ZLEN)
+ gp->net_stats.rx_length_errors++;
+			if (status & RXDCTRL_BAD)
+ gp->net_stats.rx_crc_errors++;
+
+ /* We'll just return it to GEM. */
+ drop_it:
+ gp->net_stats.rx_dropped++;
+ goto next;
+ }
+
+		dma_addr = (u32) le64_to_cpu(rxd->buffer);
+ if (len > RX_COPY_THRESHOLD) {
+ struct sk_buff *new_skb;
+
+ new_skb = gem_alloc_skb(RX_BUF_ALLOC_SIZE, GFP_ATOMIC);
+ if (new_skb == NULL) {
+ drops++;
+ goto drop_it;
+ }
+ pci_unmap_single(gp->pdev, dma_addr,
+ RX_BUF_ALLOC_SIZE, PCI_DMA_FROMDEVICE);
+ gp->rx_skbs[entry] = new_skb;
+ new_skb->dev = gp->dev;
+ skb_put(new_skb, (ETH_FRAME_LEN + RX_OFFSET));
+ rxd->buffer = cpu_to_le64(pci_map_single(gp->pdev,
+ new_skb->data,
+ RX_BUF_ALLOC_SIZE,
+ PCI_DMA_FROMDEVICE));
+ skb_reserve(new_skb, RX_OFFSET);
+
+ /* Trim the original skb for the netif. */
+ skb_trim(skb, len);
+ } else {
+ struct sk_buff *copy_skb = dev_alloc_skb(len + 2);
+
+ if (copy_skb == NULL) {
+ drops++;
+ goto drop_it;
+ }
+
+ copy_skb->dev = gp->dev;
+ skb_reserve(copy_skb, 2);
+ skb_put(copy_skb, len);
+ pci_dma_sync_single(gp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE);
+ memcpy(copy_skb->data, skb->data, len);
+
+ /* We'll reuse the original ring buffer. */
+ skb = copy_skb;
+ }
+
+ skb->protocol = eth_type_trans(skb, gp->dev);
+ netif_rx(skb);
+
+ gp->net_stats.rx_packets++;
+ gp->net_stats.rx_bytes += len;
+
+ next:
+ entry = NEXT_RX(entry);
+ }
+
+ gem_post_rxds(gp, entry);
+
+ gp->rx_new = entry;
+
+ if (drops)
+ printk(KERN_INFO "%s: Memory squeeze, deferring packet.\n",
+ gp->dev->name);
+}
+
+static void gem_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+	struct net_device *dev = dev_id;
+	struct gem *gp = dev->priv;
+ u32 gem_status = readl(gp->regs + GREG_STAT);
+
+ spin_lock(&gp->lock);
+
+ if (gem_status & GREG_STAT_ABNORMAL) {
+ if (gem_abnormal_irq(dev, gp, gem_status))
+ goto out;
+ }
+ if (gem_status & (GREG_STAT_TXALL | GREG_STAT_TXINTME))
+ gem_tx(dev, gp, gem_status);
+ if (gem_status & GREG_STAT_RXDONE)
+ gem_rx(gp);
+
+out:
+ spin_unlock(&gp->lock);
+}
+
+static int gem_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct gem *gp = dev->priv;
+ long len;
+ int entry, avail;
+ u32 mapping;
+
+ len = skb->len;
+ mapping = pci_map_single(gp->pdev, skb->data, len, PCI_DMA_TODEVICE);
+
+ spin_lock_irq(&gp->lock);
+ entry = gp->tx_new;
+ gp->tx_skbs[entry] = skb;
+
+ gp->tx_new = NEXT_TX(entry);
+ avail = TX_BUFFS_AVAIL(gp);
+ if (avail <= 0)
+ netif_stop_queue(dev);
+
+ {
+ struct gem_txd *txd = &gp->init_block->txd[entry];
+ u64 ctrl = (len & TXDCTRL_BUFSZ) | TXDCTRL_EOF | TXDCTRL_SOF;
+
+ txd->control_word = cpu_to_le64(ctrl);
+ txd->buffer = cpu_to_le64(mapping);
+ }
+
+ writel(gp->tx_new, gp->regs + TXDMA_KICK);
+ spin_unlock_irq(&gp->lock);
+
+ dev->trans_start = jiffies;
+
+ return 0;
+}
+
+#define STOP_TRIES 32
+
+static void gem_stop(struct gem *gp, unsigned long regs)
+{
+ int limit;
+ u32 val;
+
+ writel(GREG_SWRST_TXRST | GREG_SWRST_RXRST, regs + GREG_SWRST);
+
+ limit = STOP_TRIES;
+
+ do {
+ udelay(20);
+ val = readl(regs + GREG_SWRST);
+ if (limit-- <= 0)
+ break;
+ } while (val & (GREG_SWRST_TXRST | GREG_SWRST_RXRST));
+
+ if (limit <= 0)
+		printk(KERN_ERR "gem: SW reset did not complete.\n");
+}
+
+/* A link-up condition has occurred, initialize and enable the
+ * rest of the chip.
+ */
+static void gem_set_link_modes(struct gem *gp)
+{
+ u32 val;
+ int full_duplex, speed;
+
+ full_duplex = 0;
+ speed = 10;
+ if (gp->phy_type == phy_mii_mdio0 ||
+ gp->phy_type == phy_mii_mdio1) {
+ if (gp->lstate == aneg_wait) {
+ val = phy_read(gp, PHY_LPA);
+ if (val & (PHY_LPA_10FULL | PHY_LPA_100FULL))
+ full_duplex = 1;
+ if (val & (PHY_LPA_100FULL | PHY_LPA_100HALF))
+ speed = 100;
+ } else {
+ val = phy_read(gp, PHY_CTRL);
+ if (val & PHY_CTRL_FDPLX)
+ full_duplex = 1;
+ if (val & PHY_CTRL_SPD100)
+ speed = 100;
+ }
+ } else {
+ u32 pcs_lpa = readl(gp->regs + PCS_MIILP);
+
+ if (pcs_lpa & PCS_MIIADV_FD)
+ full_duplex = 1;
+ speed = 1000;
+ }
+
+ printk(KERN_INFO "%s: Link is up at %d Mbps, %s-duplex.\n",
+ gp->dev->name, speed, (full_duplex ? "full" : "half"));
+
+ val = (MAC_TXCFG_EIPG0 | MAC_TXCFG_NGU);
+ if (full_duplex) {
+ val |= (MAC_TXCFG_ICS | MAC_TXCFG_ICOLL);
+ } else {
+ /* MAC_TXCFG_NBO must be zero. */
+ }
+ writel(val, gp->regs + MAC_TXCFG);
+
+ val = (MAC_XIFCFG_OE | MAC_XIFCFG_LLED);
+ if (!full_duplex &&
+ (gp->phy_type == phy_mii_mdio0 ||
+ gp->phy_type == phy_mii_mdio1)) {
+ val |= MAC_XIFCFG_DISE;
+ } else if (full_duplex) {
+ val |= MAC_XIFCFG_FLED;
+ }
+ writel(val, gp->regs + MAC_XIFCFG);
+
+ if (gp->phy_type == phy_serialink ||
+ gp->phy_type == phy_serdes) {
+ u32 pcs_lpa = readl(gp->regs + PCS_MIILP);
+
+ val = readl(gp->regs + MAC_MCCFG);
+ if (pcs_lpa & (PCS_MIIADV_SP | PCS_MIIADV_AP))
+ val |= (MAC_MCCFG_SPE | MAC_MCCFG_RPE);
+ else
+ val &= ~(MAC_MCCFG_SPE | MAC_MCCFG_RPE);
+ writel(val, gp->regs + MAC_MCCFG);
+
+ /* XXX Set up PCS MII Control and Serialink Control
+ * XXX registers.
+ */
+
+ if (!full_duplex)
+ writel(512, gp->regs + MAC_STIME);
+ else
+ writel(64, gp->regs + MAC_STIME);
+ } else {
+ /* Set slot-time of 64. */
+ writel(64, gp->regs + MAC_STIME);
+ }
+
+ /* We are ready to rock, turn everything on. */
+ val = readl(gp->regs + TXDMA_CFG);
+ writel(val | TXDMA_CFG_ENABLE, gp->regs + TXDMA_CFG);
+ val = readl(gp->regs + RXDMA_CFG);
+ writel(val | RXDMA_CFG_ENABLE, gp->regs + RXDMA_CFG);
+ val = readl(gp->regs + MAC_TXCFG);
+ writel(val | MAC_TXCFG_ENAB, gp->regs + MAC_TXCFG);
+ val = readl(gp->regs + MAC_RXCFG);
+ writel(val | MAC_RXCFG_ENAB, gp->regs + MAC_RXCFG);
+}
+
+static int gem_mdio_link_not_up(struct gem *gp)
+{
+ if (gp->lstate == aneg_wait) {
+ u16 val = phy_read(gp, PHY_CTRL);
+
+ /* Try forced modes. */
+ val &= ~(PHY_CTRL_ANRES | PHY_CTRL_ANENAB);
+ val &= ~(PHY_CTRL_FDPLX);
+ val |= PHY_CTRL_SPD100;
+ phy_write(gp, PHY_CTRL, val);
+ gp->timer_ticks = 0;
+ gp->lstate = force_wait;
+ return 1;
+ } else {
+ /* Downgrade from 100 to 10 Mbps if necessary.
+ * If already at 10Mbps, warn user about the
+ * situation every 10 ticks.
+ */
+ u16 val = phy_read(gp, PHY_CTRL);
+ if (val & PHY_CTRL_SPD100) {
+ val &= ~PHY_CTRL_SPD100;
+ phy_write(gp, PHY_CTRL, val);
+ gp->timer_ticks = 0;
+ return 1;
+ } else {
+ printk(KERN_ERR "%s: Link down, cable problem?\n",
+ gp->dev->name);
+ val |= (PHY_CTRL_ANRES | PHY_CTRL_ANENAB);
+ phy_write(gp, PHY_CTRL, val);
+ gp->timer_ticks = 1;
+ gp->lstate = aneg_wait;
+ return 1;
+ }
+ }
+}
+
+static void gem_link_timer(unsigned long data)
+{
+ struct gem *gp = (struct gem *) data;
+ int restart_timer = 0;
+
+ gp->timer_ticks++;
+ if (gp->phy_type == phy_mii_mdio0 ||
+ gp->phy_type == phy_mii_mdio1) {
+ u16 val = phy_read(gp, PHY_STAT);
+
+ if (val & PHY_STAT_LSTAT) {
+ gem_set_link_modes(gp);
+ } else if (gp->timer_ticks < 10) {
+ restart_timer = 1;
+ } else {
+ restart_timer = gem_mdio_link_not_up(gp);
+ }
+ } else {
+ /* XXX Code PCS support... XXX */
+ }
+
+ if (restart_timer) {
+ gp->link_timer.expires = jiffies + ((12 * HZ) / 10);
+ add_timer(&gp->link_timer);
+ }
+}
+
+static void gem_clean_rings(struct gem *gp)
+{
+ struct gem_init_block *gb = gp->init_block;
+ struct sk_buff *skb;
+ int i;
+ u32 dma_addr;
+
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct gem_rxd *rxd;
+
+ rxd = &gb->rxd[i];
+ if (gp->rx_skbs[i] != NULL) {
+
+ skb = gp->rx_skbs[i];
+ dma_addr = (u32) le64_to_cpu(rxd->buffer);
+ pci_unmap_single(gp->pdev, dma_addr,
+ RX_BUF_ALLOC_SIZE,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb_any(skb);
+ gp->rx_skbs[i] = NULL;
+ }
+ rxd->status_word = 0;
+ rxd->buffer = 0;
+ }
+
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ if (gp->tx_skbs[i] != NULL) {
+ struct gem_txd *txd;
+
+ skb = gp->tx_skbs[i];
+ txd = &gb->txd[i];
+ dma_addr = (u32) le64_to_cpu(txd->buffer);
+ pci_unmap_single(gp->pdev, dma_addr,
+ skb->len, PCI_DMA_TODEVICE);
+ dev_kfree_skb_any(skb);
+ gp->tx_skbs[i] = NULL;
+ }
+ }
+}
+
+static void gem_init_rings(struct gem *gp, int from_irq)
+{
+ struct gem_init_block *gb = gp->init_block;
+ struct net_device *dev = gp->dev;
+ int i, gfp_flags = GFP_KERNEL;
+ u32 dma_addr;
+
+ if (from_irq)
+ gfp_flags = GFP_ATOMIC;
+
+ gp->rx_new = gp->rx_old = gp->tx_new = gp->tx_old = 0;
+
+ gem_clean_rings(gp);
+
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct sk_buff *skb;
+ struct gem_rxd *rxd = &gb->rxd[i];
+
+ skb = gem_alloc_skb(RX_BUF_ALLOC_SIZE, gfp_flags);
+ if (!skb) {
+ rxd->buffer = 0;
+ rxd->status_word = 0;
+ continue;
+ }
+
+ gp->rx_skbs[i] = skb;
+ skb->dev = dev;
+ skb_put(skb, (ETH_FRAME_LEN + RX_OFFSET));
+ dma_addr = pci_map_single(gp->pdev, skb->data,
+ RX_BUF_ALLOC_SIZE,
+ PCI_DMA_FROMDEVICE);
+ rxd->buffer = cpu_to_le64(dma_addr);
+ rxd->status_word = cpu_to_le64(RXDCTRL_FRESH);
+ skb_reserve(skb, RX_OFFSET);
+ }
+
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ struct gem_txd *txd = &gb->txd[i];
+
+ txd->control_word = 0;
+ txd->buffer = 0;
+ }
+}
+
+static void gem_init_phy(struct gem *gp)
+{
+ if (gp->pdev->device == PCI_DEVICE_ID_SUN_GEM) {
+ /* Init datapath mode register. */
+ if (gp->phy_type == phy_mii_mdio0 ||
+ gp->phy_type == phy_mii_mdio1) {
+ writel(PCS_DMODE_MGM, gp->regs + PCS_DMODE);
+ } else if (gp->phy_type == phy_serialink) {
+ writel(PCS_DMODE_SM, gp->regs + PCS_DMODE);
+ } else {
+ writel(PCS_DMODE_ESM, gp->regs + PCS_DMODE);
+ }
+ }
+
+ if (gp->phy_type == phy_mii_mdio0 ||
+ gp->phy_type == phy_mii_mdio1) {
+ u16 val = phy_read(gp, PHY_CTRL);
+ int limit = 10000;
+
+		/* Take PHY out of isolate mode and reset it. */
+ val &= ~PHY_CTRL_ISO;
+ val |= PHY_CTRL_RST;
+ phy_write(gp, PHY_CTRL, val);
+
+ while (limit--) {
+ val = phy_read(gp, PHY_CTRL);
+ if ((val & PHY_CTRL_RST) == 0)
+ break;
+ udelay(10);
+ }
+
+ /* Init advertisement and enable autonegotiation. */
+ phy_write(gp, PHY_ADV,
+ (PHY_ADV_10HALF | PHY_ADV_10FULL |
+ PHY_ADV_100HALF | PHY_ADV_100FULL));
+
+ val |= (PHY_CTRL_ANRES | PHY_CTRL_ANENAB);
+ phy_write(gp, PHY_CTRL, val);
+ } else {
+ /* XXX Implement me XXX */
+ }
+}
+
+static void gem_init_dma(struct gem *gp)
+{
+ u32 val;
+
+ val = (TXDMA_CFG_BASE | (0x4ff << 10) | TXDMA_CFG_PMODE);
+ writel(val, gp->regs + TXDMA_CFG);
+
+ writel(0, gp->regs + TXDMA_DBHI);
+ writel(gp->gblock_dvma, gp->regs + TXDMA_DBLOW);
+
+ writel(0, gp->regs + TXDMA_KICK);
+
+ val = (RXDMA_CFG_BASE | (RX_OFFSET << 10) |
+ ((14 / 2) << 13) | RXDMA_CFG_FTHRESH_128);
+ writel(val, gp->regs + RXDMA_CFG);
+
+ writel(0, gp->regs + RXDMA_DBHI);
+ writel((gp->gblock_dvma +
+ (TX_RING_SIZE * sizeof(struct gem_txd))),
+ gp->regs + RXDMA_DBLOW);
+
+ writel(RX_RING_SIZE - 4, gp->regs + RXDMA_KICK);
+
+ val = (((gp->rx_pause_off / 64) << 0) & RXDMA_PTHRESH_OFF);
+ val |= (((gp->rx_pause_on / 64) << 12) & RXDMA_PTHRESH_ON);
+ writel(val, gp->regs + RXDMA_PTHRESH);
+
+ if (readl(gp->regs + GREG_BIFCFG) & GREG_BIFCFG_M66EN)
+ writel(((5 & RXDMA_BLANK_IPKTS) |
+ ((8 << 12) & RXDMA_BLANK_ITIME)),
+ gp->regs + RXDMA_BLANK);
+ else
+ writel(((5 & RXDMA_BLANK_IPKTS) |
+ ((4 << 12) & RXDMA_BLANK_ITIME)),
+ gp->regs + RXDMA_BLANK);
+}
+
+#define CRC_POLYNOMIAL_LE 0xedb88320UL /* Ethernet CRC, little endian */
+
+static void gem_init_mac(struct gem *gp)
+{
+ unsigned char *e = &gp->dev->dev_addr[0];
+ u32 rxcfg;
+
+ if (gp->pdev->device == PCI_DEVICE_ID_SUN_GEM)
+ writel(0x1bf0, gp->regs + MAC_SNDPAUSE);
+
+ writel(0x00, gp->regs + MAC_IPG0);
+ writel(0x08, gp->regs + MAC_IPG1);
+ writel(0x04, gp->regs + MAC_IPG2);
+ writel(0x40, gp->regs + MAC_STIME);
+ writel(0x40, gp->regs + MAC_MINFSZ);
+ writel(0x5ee, gp->regs + MAC_MAXFSZ);
+ writel(0x07, gp->regs + MAC_PASIZE);
+ writel(0x04, gp->regs + MAC_JAMSIZE);
+ writel(0x10, gp->regs + MAC_ATTLIM);
+ writel(0x8808, gp->regs + MAC_MCTYPE);
+
+ writel((e[5] | (e[4] << 8)) & 0x3ff, gp->regs + MAC_RANDSEED);
+
+ writel((e[4] << 8) | e[5], gp->regs + MAC_ADDR0);
+ writel((e[2] << 8) | e[3], gp->regs + MAC_ADDR1);
+ writel((e[0] << 8) | e[1], gp->regs + MAC_ADDR2);
+
+ writel(0, gp->regs + MAC_ADDR3);
+ writel(0, gp->regs + MAC_ADDR4);
+ writel(0, gp->regs + MAC_ADDR5);
+
+ writel(0x0001, gp->regs + MAC_ADDR6);
+ writel(0xc200, gp->regs + MAC_ADDR7);
+ writel(0x0180, gp->regs + MAC_ADDR8);
+
+ writel(0, gp->regs + MAC_AFILT0);
+ writel(0, gp->regs + MAC_AFILT1);
+ writel(0, gp->regs + MAC_AFILT2);
+ writel(0, gp->regs + MAC_AF21MSK);
+ writel(0, gp->regs + MAC_AF0MSK);
+
+ rxcfg = 0;
+ if ((gp->dev->flags & IFF_ALLMULTI) ||
+ (gp->dev->mc_count > 256)) {
+ writel(0xffff, gp->regs + MAC_HASH0);
+ writel(0xffff, gp->regs + MAC_HASH1);
+ writel(0xffff, gp->regs + MAC_HASH2);
+ writel(0xffff, gp->regs + MAC_HASH3);
+ writel(0xffff, gp->regs + MAC_HASH4);
+ writel(0xffff, gp->regs + MAC_HASH5);
+ writel(0xffff, gp->regs + MAC_HASH6);
+ writel(0xffff, gp->regs + MAC_HASH7);
+ writel(0xffff, gp->regs + MAC_HASH8);
+ writel(0xffff, gp->regs + MAC_HASH9);
+ writel(0xffff, gp->regs + MAC_HASH10);
+ writel(0xffff, gp->regs + MAC_HASH11);
+ writel(0xffff, gp->regs + MAC_HASH12);
+ writel(0xffff, gp->regs + MAC_HASH13);
+ writel(0xffff, gp->regs + MAC_HASH14);
+ writel(0xffff, gp->regs + MAC_HASH15);
+ } else if (gp->dev->flags & IFF_PROMISC) {
+ rxcfg |= MAC_RXCFG_PROM;
+ } else {
+ u16 hash_table[16];
+ u32 crc, poly = CRC_POLYNOMIAL_LE;
+ struct dev_mc_list *dmi = gp->dev->mc_list;
+ int i, j, bit, byte;
+
+ for (i = 0; i < 16; i++)
+ hash_table[i] = 0;
+
+ for (i = 0; i < gp->dev->mc_count; i++) {
+ char *addrs = dmi->dmi_addr;
+
+ dmi = dmi->next;
+
+ if (!(*addrs & 1))
+ continue;
+
+ crc = 0xffffffffU;
+ for (byte = 0; byte < 6; byte++) {
+ for (bit = *addrs++, j = 0; j < 8; j++, bit >>= 1) {
+ int test;
+
+ test = ((bit ^ crc) & 0x01);
+ crc >>= 1;
+ if (test)
+ crc = crc ^ poly;
+ }
+ }
+ crc >>= 24;
+ hash_table[crc >> 4] |= 1 << (crc & 0xf);
+ }
+ writel(hash_table[0], gp->regs + MAC_HASH0);
+ writel(hash_table[1], gp->regs + MAC_HASH1);
+ writel(hash_table[2], gp->regs + MAC_HASH2);
+ writel(hash_table[3], gp->regs + MAC_HASH3);
+ writel(hash_table[4], gp->regs + MAC_HASH4);
+ writel(hash_table[5], gp->regs + MAC_HASH5);
+ writel(hash_table[6], gp->regs + MAC_HASH6);
+ writel(hash_table[7], gp->regs + MAC_HASH7);
+ writel(hash_table[8], gp->regs + MAC_HASH8);
+ writel(hash_table[9], gp->regs + MAC_HASH9);
+ writel(hash_table[10], gp->regs + MAC_HASH10);
+ writel(hash_table[11], gp->regs + MAC_HASH11);
+ writel(hash_table[12], gp->regs + MAC_HASH12);
+ writel(hash_table[13], gp->regs + MAC_HASH13);
+ writel(hash_table[14], gp->regs + MAC_HASH14);
+ writel(hash_table[15], gp->regs + MAC_HASH15);
+ }
+
+ writel(0, gp->regs + MAC_NCOLL);
+ writel(0, gp->regs + MAC_FASUCC);
+ writel(0, gp->regs + MAC_ECOLL);
+ writel(0, gp->regs + MAC_LCOLL);
+ writel(0, gp->regs + MAC_DTIMER);
+ writel(0, gp->regs + MAC_PATMPS);
+ writel(0, gp->regs + MAC_RFCTR);
+ writel(0, gp->regs + MAC_LERR);
+ writel(0, gp->regs + MAC_AERR);
+ writel(0, gp->regs + MAC_FCSERR);
+ writel(0, gp->regs + MAC_RXCVERR);
+
+ /* Clear RX/TX/MAC/XIF config, we will set these up and enable
+ * them once a link is established.
+ */
+ writel(0, gp->regs + MAC_TXCFG);
+ writel(rxcfg, gp->regs + MAC_RXCFG);
+ writel(0, gp->regs + MAC_MCCFG);
+ writel(0, gp->regs + MAC_XIFCFG);
+
+ writel((MAC_TXSTAT_URUN | MAC_TXSTAT_MPE |
+ MAC_TXSTAT_NCE | MAC_TXSTAT_ECE |
+ MAC_TXSTAT_LCE | MAC_TXSTAT_FCE |
+ MAC_TXSTAT_DTE | MAC_TXSTAT_PCE), gp->regs + MAC_TXMASK);
+ writel((MAC_RXSTAT_OFLW | MAC_RXSTAT_FCE |
+ MAC_RXSTAT_ACE | MAC_RXSTAT_CCE |
+ MAC_RXSTAT_LCE | MAC_RXSTAT_VCE), gp->regs + MAC_RXMASK);
+ writel(0, gp->regs + MAC_MCMASK);
+}
+
+static void gem_init_hw(struct gem *gp)
+{
+ gem_init_phy(gp);
+ gem_init_dma(gp);
+ gem_init_mac(gp);
+
+ writel(GREG_STAT_TXDONE, gp->regs + GREG_IMASK);
+
+ gp->timer_ticks = 0;
+ gp->lstate = aneg_wait;
+ gp->link_timer.expires = jiffies + ((12 * HZ) / 10);
+ add_timer(&gp->link_timer);
+}
+
+static int gem_open(struct net_device *dev)
+{
+ struct gem *gp = (struct gem *) dev->priv;
+ unsigned long regs = gp->regs;
+
+ del_timer(&gp->link_timer);
+
+ if (request_irq(gp->pdev->irq, gem_interrupt,
+ SA_SHIRQ, dev->name, (void *)dev))
+ return -EAGAIN;
+
+ gem_stop(gp, regs);
+ gem_init_rings(gp, 0);
+ gem_init_hw(gp);
+
+ return 0;
+}
+
+static int gem_close(struct net_device *dev)
+{
+ struct gem *gp = dev->priv;
+
+ free_irq(gp->pdev->irq, (void *)dev);
+ return 0;
+}
+
+static struct net_device_stats *gem_get_stats(struct net_device *dev)
+{
+ struct gem *gp = dev->priv;
+ struct net_device_stats *stats = &gp->net_stats;
+
+ stats->rx_crc_errors += readl(gp->regs + MAC_FCSERR);
+ writel(0, gp->regs + MAC_FCSERR);
+
+ stats->rx_frame_errors += readl(gp->regs + MAC_AERR);
+ writel(0, gp->regs + MAC_AERR);
+
+ stats->rx_length_errors += readl(gp->regs + MAC_LERR);
+ writel(0, gp->regs + MAC_LERR);
+
+ stats->tx_aborted_errors += readl(gp->regs + MAC_ECOLL);
+ stats->collisions +=
+ (readl(gp->regs + MAC_ECOLL) +
+ readl(gp->regs + MAC_LCOLL));
+ writel(0, gp->regs + MAC_ECOLL);
+ writel(0, gp->regs + MAC_LCOLL);
+
+ return &gp->net_stats;
+}
+
+static void gem_set_multicast(struct net_device *dev)
+{
+ struct gem *gp = dev->priv;
+
+ netif_stop_queue(dev);
+
+ if ((gp->dev->flags & IFF_ALLMULTI) ||
+ (gp->dev->mc_count > 256)) {
+ writel(0xffff, gp->regs + MAC_HASH0);
+ writel(0xffff, gp->regs + MAC_HASH1);
+ writel(0xffff, gp->regs + MAC_HASH2);
+ writel(0xffff, gp->regs + MAC_HASH3);
+ writel(0xffff, gp->regs + MAC_HASH4);
+ writel(0xffff, gp->regs + MAC_HASH5);
+ writel(0xffff, gp->regs + MAC_HASH6);
+ writel(0xffff, gp->regs + MAC_HASH7);
+ writel(0xffff, gp->regs + MAC_HASH8);
+ writel(0xffff, gp->regs + MAC_HASH9);
+ writel(0xffff, gp->regs + MAC_HASH10);
+ writel(0xffff, gp->regs + MAC_HASH11);
+ writel(0xffff, gp->regs + MAC_HASH12);
+ writel(0xffff, gp->regs + MAC_HASH13);
+ writel(0xffff, gp->regs + MAC_HASH14);
+ writel(0xffff, gp->regs + MAC_HASH15);
+ } else if (gp->dev->flags & IFF_PROMISC) {
+ u32 rxcfg = readl(gp->regs + MAC_RXCFG);
+ int limit = 10000;
+
+ writel(rxcfg & ~MAC_RXCFG_ENAB, gp->regs + MAC_RXCFG);
+ while (readl(gp->regs + MAC_RXCFG) & MAC_RXCFG_ENAB) {
+ if (!limit--)
+ break;
+ udelay(10);
+ }
+
+ rxcfg |= MAC_RXCFG_PROM;
+ writel(rxcfg, gp->regs + MAC_RXCFG);
+ } else {
+ u16 hash_table[16];
+ u32 crc, poly = CRC_POLYNOMIAL_LE;
+ struct dev_mc_list *dmi = gp->dev->mc_list;
+ int i, j, bit, byte;
+
+ for (i = 0; i < 16; i++)
+ hash_table[i] = 0;
+
+ for (i = 0; i < dev->mc_count; i++) {
+ char *addrs = dmi->dmi_addr;
+
+ dmi = dmi->next;
+
+ if (!(*addrs & 1))
+ continue;
+
+ crc = 0xffffffffU;
+ for (byte = 0; byte < 6; byte++) {
+ for (bit = *addrs++, j = 0; j < 8; j++, bit >>= 1) {
+ int test;
+
+ test = ((bit ^ crc) & 0x01);
+ crc >>= 1;
+ if (test)
+ crc = crc ^ poly;
+ }
+ }
+ crc >>= 24;
+ hash_table[crc >> 4] |= 1 << (crc & 0xf);
+ }
+ writel(hash_table[0], gp->regs + MAC_HASH0);
+ writel(hash_table[1], gp->regs + MAC_HASH1);
+ writel(hash_table[2], gp->regs + MAC_HASH2);
+ writel(hash_table[3], gp->regs + MAC_HASH3);
+ writel(hash_table[4], gp->regs + MAC_HASH4);
+ writel(hash_table[5], gp->regs + MAC_HASH5);
+ writel(hash_table[6], gp->regs + MAC_HASH6);
+ writel(hash_table[7], gp->regs + MAC_HASH7);
+ writel(hash_table[8], gp->regs + MAC_HASH8);
+ writel(hash_table[9], gp->regs + MAC_HASH9);
+ writel(hash_table[10], gp->regs + MAC_HASH10);
+ writel(hash_table[11], gp->regs + MAC_HASH11);
+ writel(hash_table[12], gp->regs + MAC_HASH12);
+ writel(hash_table[13], gp->regs + MAC_HASH13);
+ writel(hash_table[14], gp->regs + MAC_HASH14);
+ writel(hash_table[15], gp->regs + MAC_HASH15);
+ }
+
+ netif_wake_queue(dev);
+}
+
+static int gem_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ return -EINVAL;
+}
+
+static int __devinit gem_check_invariants(struct gem *gp)
+{
+ struct pci_dev *pdev = gp->pdev;
+ u32 mif_cfg = readl(gp->regs + MIF_CFG);
+
+ if (pdev->device == PCI_DEVICE_ID_SUN_RIO_GEM
+#if 0
+ || pdev->device == PCI_DEVICE_ID_SUN_PPC_GEM
+#endif
+ ) {
+ /* One of the MII PHYs _must_ be present
+ * as these chip versions have no gigabit
+ * PHY.
+ */
+ if ((mif_cfg & (MIF_CFG_MDI0 | MIF_CFG_MDI1)) == 0) {
+ printk(KERN_ERR PFX "RIO GEM lacks MII phy, mif_cfg[%08x]\n",
+ mif_cfg);
+ return -1;
+ }
+ }
+
+ /* Determine initial PHY interface type guess. MDIO1 is the
+ * external PHY and thus takes precedence over MDIO0.
+ */
+ if (mif_cfg & MIF_CFG_MDI1)
+ gp->phy_type = phy_mii_mdio1;
+ else if (mif_cfg & MIF_CFG_MDI0)
+ gp->phy_type = phy_mii_mdio0;
+ else
+ gp->phy_type = phy_serialink;
+
+ if (gp->phy_type == phy_mii_mdio1 ||
+ gp->phy_type == phy_mii_mdio0) {
+ int i;
+
+ for (i = 0; i < 32; i++) {
+ gp->mii_phy_addr = i;
+ if (phy_read(gp, PHY_CTRL) != 0xffff)
+ break;
+ }
+ }
+
+ /* Fetch the FIFO configurations now too. */
+ gp->tx_fifo_sz = readl(gp->regs + TXDMA_FSZ) * 64;
+ gp->rx_fifo_sz = readl(gp->regs + RXDMA_FSZ) * 64;
+
+ if (pdev->device == PCI_DEVICE_ID_SUN_GEM) {
+ if (gp->tx_fifo_sz != (9 * 1024) ||
+ gp->rx_fifo_sz != (20 * 1024)) {
+ printk(KERN_ERR PFX "GEM has bogus fifo sizes tx(%d) rx(%d)\n",
+ gp->tx_fifo_sz, gp->rx_fifo_sz);
+ return -1;
+ }
+ } else {
+ if (gp->tx_fifo_sz != (2 * 1024) ||
+ gp->rx_fifo_sz != (2 * 1024)) {
+ printk(KERN_ERR PFX "RIO GEM has bogus fifo sizes tx(%d) rx(%d)\n",
+ gp->tx_fifo_sz, gp->rx_fifo_sz);
+ return -1;
+ }
+ }
+
+ /* Calculate pause thresholds. Setting the OFF threshold to the
+ * full RX fifo size effectively disables PAUSE generation which
+ * is what we do for 10/100 only GEMs which have FIFOs too small
+ * to make real gains from PAUSE.
+ */
+ if (gp->rx_fifo_sz <= (2 * 1024)) {
+ gp->rx_pause_off = gp->rx_pause_on = gp->rx_fifo_sz;
+ } else {
+ int off = ((gp->rx_fifo_sz * 3) / 4);
+ int on = off - (1 * 1024);
+
+ gp->rx_pause_off = off;
+ gp->rx_pause_on = on;
+ }
+
+ {
+ u32 bifcfg = readl(gp->regs + GREG_BIFCFG);
+
+ bifcfg |= GREG_BIFCFG_B64DIS;
+ writel(bifcfg, gp->regs + GREG_BIFCFG);
+ }
+
+ return 0;
+}
+
+static int __devinit gem_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ static int gem_version_printed = 0;
+ unsigned long gemreg_base, gemreg_len;
+ struct net_device *dev;
+ struct gem *gp;
+ int i;
+
+ if (gem_version_printed++ == 0)
+ printk(KERN_INFO "%s", version);
+
+ gemreg_base = pci_resource_start(pdev, 0);
+ gemreg_len = pci_resource_len(pdev, 0);
+
+ if ((pci_resource_flags(pdev, 0) & IORESOURCE_IO) != 0) {
+ printk(KERN_ERR PFX "Cannot find proper PCI device "
+ "base address, aborting.\n");
+ return -ENODEV;
+ }
+
+ dev = init_etherdev(NULL, sizeof(*gp));
+ if (!dev) {
+ printk(KERN_ERR PFX "Etherdev init failed, aborting.\n");
+ return -ENOMEM;
+ }
+
+ if (!request_mem_region(gemreg_base, gemreg_len, dev->name)) {
+ printk(KERN_ERR PFX "MMIO resource (0x%lx@0x%lx) unavailable, "
+ "aborting.\n", gemreg_base, gemreg_len);
+ goto err_out_free_netdev;
+ }
+
+ if (pci_enable_device(pdev)) {
+ printk(KERN_ERR PFX "Cannot enable MMIO operation, "
+ "aborting.\n");
+ goto err_out_free_mmio_res;
+ }
+
+ pci_set_master(pdev);
+
+ gp = dev->priv;
+ memset(gp, 0, sizeof(*gp));
+
+ gp->pdev = pdev;
+ dev->base_addr = (long) pdev;
+
+ spin_lock_init(&gp->lock);
+
+ gp->regs = (unsigned long) ioremap(gemreg_base, gemreg_len);
+ if (gp->regs == 0UL) {
+ printk(KERN_ERR PFX "Cannot map device registers, "
+ "aborting.\n");
+ goto err_out_free_mmio_res;
+ }
+
+ if (gem_check_invariants(gp))
+ goto err_out_iounmap;
+
+	/* It is guaranteed that the returned buffer will be at least
+ * PAGE_SIZE aligned.
+ */
+ gp->init_block = (struct gem_init_block *)
+ pci_alloc_consistent(pdev, sizeof(struct gem_init_block),
+ &gp->gblock_dvma);
+ if (!gp->init_block) {
+ printk(KERN_ERR PFX "Cannot allocate init block, "
+ "aborting.\n");
+ goto err_out_iounmap;
+ }
+
+ pci_set_drvdata(pdev, dev);
+
+ printk(KERN_INFO "%s: Sun GEM (PCI) 10/100/1000BaseT Ethernet ",
+ dev->name);
+
+#ifdef __sparc__
+ memcpy(dev->dev_addr, idprom->id_ethaddr, 6);
+#endif
+
+ for (i = 0; i < 6; i++)
+ printk("%2.2x%c", dev->dev_addr[i],
+ i == 5 ? ' ' : ':');
+ printk("\n");
+
+ init_timer(&gp->link_timer);
+ gp->link_timer.function = gem_link_timer;
+ gp->link_timer.data = (unsigned long) gp;
+
+ gp->dev = dev;
+ dev->open = gem_open;
+ dev->stop = gem_close;
+ dev->hard_start_xmit = gem_start_xmit;
+ dev->get_stats = gem_get_stats;
+ dev->set_multicast_list = gem_set_multicast;
+ dev->do_ioctl = gem_ioctl;
+ dev->irq = pdev->irq;
+ dev->dma = 0;
+
+ return 0;
+
+err_out_iounmap:
+ iounmap((void *) gp->regs);
+
+err_out_free_mmio_res:
+ release_mem_region(gemreg_base, gemreg_len);
+
+err_out_free_netdev:
+ unregister_netdev(dev);
+ kfree(dev);
+
+ return -ENODEV;
+
+}
+
+static void gem_suspend(struct pci_dev *pdev)
+{
+}
+
+static void gem_resume(struct pci_dev *pdev)
+{
+}
+
+static void __devexit gem_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+
+ if (dev) {
+ struct gem *gp = dev->priv;
+
+ unregister_netdev(dev);
+
+ pci_free_consistent(pdev,
+ sizeof(struct gem_init_block),
+ gp->init_block,
+ gp->gblock_dvma);
+ iounmap((void *) gp->regs);
+ release_mem_region(pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ kfree(dev);
+
+ pci_set_drvdata(pdev, NULL);
+ }
+}
+
+static struct pci_driver gem_driver = {
+ name: GEM_MODULE_NAME,
+ id_table: gem_pci_tbl,
+ probe: gem_init_one,
+ remove: gem_remove_one,
+ suspend: gem_suspend,
+ resume: gem_resume,
+};
+
+static int __init gem_init(void)
+{
+ return pci_module_init(&gem_driver);
+}
+
+static void __exit gem_cleanup(void)
+{
+ pci_unregister_driver(&gem_driver);
+}
+
+module_init(gem_init);
+module_exit(gem_cleanup);
--- /dev/null
+/* $Id: sungem.h,v 1.5 2001/03/21 23:02:04 davem Exp $
+ * sungem.h: Definitions for Sun GEM ethernet driver.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+#ifndef _SUNGEM_H
+#define _SUNGEM_H
+
+/* Global Registers */
+#define GREG_SEBSTATE 0x0000UL /* SEB State Register */
+#define GREG_CFG 0x0004UL /* Configuration Register */
+#define GREG_STAT 0x000CUL /* Status Register */
+#define GREG_IMASK 0x0010UL /* Interrupt Mask Register */
+#define GREG_IACK 0x0014UL /* Interrupt ACK Register */
+#define GREG_STAT2 0x001CUL /* Alias of GREG_STAT */
+#define GREG_PCIESTAT 0x1000UL /* PCI Error Status Register */
+#define GREG_PCIEMASK 0x1004UL /* PCI Error Mask Register */
+#define GREG_BIFCFG 0x1008UL /* BIF Configuration Register */
+#define GREG_BIFDIAG 0x100CUL /* BIF Diagnostics Register */
+#define GREG_SWRST 0x1010UL /* Software Reset Register */
+
+/* Global SEB State Register */
+#define GREG_SEBSTATE_ARB 0x00000003 /* State of Arbiter */
+#define GREG_SEBSTATE_RXWON 0x00000004 /* RX won internal arbitration */
+
+/* Global Configuration Register */
+#define GREG_CFG_IBURST 0x00000001 /* Infinite Burst */
+#define GREG_CFG_TXDMALIM 0x0000003e /* TX DMA grant limit */
+#define GREG_CFG_RXDMALIM 0x000007c0 /* RX DMA grant limit */
+
+/* Global Interrupt Status Register.
+ *
+ * Reading this register automatically clears bits 0 through 6.
+ * This auto-clearing does not occur when the alias at GREG_STAT2
+ * is read instead. The rest of the interrupt bits only clear when
+ * the secondary interrupt status register corresponding to that
+ * bit is read (ie. if GREG_STAT_PCS is set, it will be cleared by
+ * reading PCS_ISTAT).
+ */
+#define GREG_STAT_TXINTME 0x00000001 /* TX INTME frame transferred */
+#define GREG_STAT_TXALL 0x00000002 /* All TX frames transferred */
+#define GREG_STAT_TXDONE 0x00000004 /* One TX frame transferred */
+#define GREG_STAT_RXDONE 0x00000010 /* One RX frame arrived */
+#define GREG_STAT_RXNOBUF 0x00000020 /* No free RX buffers available */
+#define GREG_STAT_RXTAGERR 0x00000040 /* RX tag framing is corrupt */
+#define GREG_STAT_PCS 0x00002000 /* PCS signalled interrupt */
+#define GREG_STAT_TXMAC 0x00004000 /* TX MAC signalled interrupt */
+#define GREG_STAT_RXMAC 0x00008000 /* RX MAC signalled interrupt */
+#define GREG_STAT_MAC 0x00010000 /* MAC Control signalled irq */
+#define GREG_STAT_MIF 0x00020000 /* MIF signalled interrupt */
+#define GREG_STAT_PCIERR 0x00040000 /* PCI Error interrupt */
+#define GREG_STAT_TXNR 0xfff80000 /* == TXDMA_TXDONE reg val */
+#define GREG_STAT_TXNR_SHIFT 19
+
+#define GREG_STAT_ABNORMAL (GREG_STAT_RXNOBUF | GREG_STAT_RXTAGERR | \
+ GREG_STAT_PCS | GREG_STAT_TXMAC | GREG_STAT_RXMAC | \
+ GREG_STAT_MAC | GREG_STAT_MIF | GREG_STAT_PCIERR)
+
+/* The layout of GREG_IMASK and GREG_IACK is identical to GREG_STAT.
+ * Bits set in GREG_IMASK will prevent that interrupt type from being
+ * signalled to the cpu. GREG_IACK can be used to clear specific top-level
+ * interrupt conditions in GREG_STAT, ie. it only works for bits 0 through 6.
+ * Setting a bit will clear that interrupt; clear bits have no effect
+ * on GREG_STAT.
+ */
+
+/* Global PCI Error Status Register */
+#define GREG_PCIESTAT_BADACK 0x00000001 /* No ACK64# during ABS64 cycle */
+#define GREG_PCIESTAT_DTRTO 0x00000002 /* Delayed transaction timeout */
+#define GREG_PCIESTAT_OTHER 0x00000004 /* Other PCI error, check cfg space */
+
+/* The layout of the GREG_PCIEMASK is identical to that of GREG_PCIESTAT.
+ * Bits set in GREG_PCIEMASK will prevent that interrupt type from being
+ * signalled to the cpu.
+ */
+
+/* Global BIF Configuration Register */
+#define GREG_BIFCFG_SLOWCLK 0x00000001 /* Set if PCI runs < 25Mhz */
+#define GREG_BIFCFG_B64DIS 0x00000002 /* Disable 64bit wide data cycle*/
+#define GREG_BIFCFG_M66EN 0x00000004 /* Set if on 66Mhz PCI segment */
+
+/* Global BIF Diagnostics Register */
+#define GREG_BIFDIAG_BURSTSM 0x007f0000 /* PCI Burst state machine */
+#define GREG_BIFDIAG_BIFSM 0xff000000 /* BIF state machine */
+
+/* Global Software Reset Register.
+ *
+ * This register is used to perform a global reset of the RX and TX portions
+ * of the GEM asic. Setting the RX or TX reset bit will start the reset.
+ * The driver _MUST_ poll these bits until they clear. One may not attempt
+ * to program any other part of GEM until the bits clear.
+ */
+#define GREG_SWRST_TXRST 0x00000001 /* TX Software Reset */
+#define GREG_SWRST_RXRST 0x00000002 /* RX Software Reset */
+#define GREG_SWRST_RSTOUT 0x00000004 /* Force RST# pin active */
+
+/* TX DMA Registers */
+#define TXDMA_KICK 0x2000UL /* TX Kick Register */
+#define TXDMA_CFG 0x2004UL /* TX Configuration Register */
+#define TXDMA_DBLOW 0x2008UL /* TX Desc. Base Low */
+#define TXDMA_DBHI 0x200CUL /* TX Desc. Base High */
+#define TXDMA_FWPTR 0x2014UL /* TX FIFO Write Pointer */
+#define TXDMA_FSWPTR 0x2018UL /* TX FIFO Shadow Write Pointer */
+#define TXDMA_FRPTR 0x201CUL /* TX FIFO Read Pointer */
+#define TXDMA_FSRPTR 0x2020UL /* TX FIFO Shadow Read Pointer */
+#define TXDMA_PCNT 0x2024UL /* TX FIFO Packet Counter */
+#define TXDMA_SMACHINE 0x2028UL /* TX State Machine Register */
+#define TXDMA_DPLOW 0x2030UL /* TX Data Pointer Low */
+#define TXDMA_DPHI 0x2034UL /* TX Data Pointer High */
+#define TXDMA_TXDONE 0x2100UL /* TX Completion Register */
+#define TXDMA_FADDR 0x2104UL /* TX FIFO Address */
+#define TXDMA_FTAG 0x2108UL /* TX FIFO Tag */
+#define TXDMA_DLOW 0x210CUL /* TX FIFO Data Low */
+#define TXDMA_DHIT1 0x2110UL /* TX FIFO Data HighT1 */
+#define TXDMA_DHIT0 0x2114UL /* TX FIFO Data HighT0 */
+#define TXDMA_FSZ 0x2118UL /* TX FIFO Size */
+
+/* TX Kick Register.
+ *
+ * This 13-bit register is programmed by the driver to hold the descriptor
+ * entry index which follows the last valid transmit descriptor.
+ */
+
+/* TX Completion Register.
+ *
+ * This 13-bit register is updated by GEM to hold the descriptor entry index
+ * which follows the last descriptor already processed by GEM. Note that
+ * this value is mirrored in GREG_STAT which eliminates the need to even
+ * access this register in the driver during interrupt processing.
+ */
+
+/* TX Configuration Register.
+ *
+ * Note that TXDMA_CFG_FTHRESH, the TX FIFO Threshold, is an obsolete feature
+ * that was meant to be used with jumbo packets. It should be set to the
+ * maximum value of 0x4ff, else one risks getting TX MAC Underrun errors.
+ */
+#define TXDMA_CFG_ENABLE 0x00000001 /* Enable TX DMA channel */
+#define TXDMA_CFG_RINGSZ 0x0000001e /* TX descriptor ring size */
+#define TXDMA_CFG_RINGSZ_32 0x00000000 /* 32 TX descriptors */
+#define TXDMA_CFG_RINGSZ_64 0x00000002 /* 64 TX descriptors */
+#define TXDMA_CFG_RINGSZ_128 0x00000004 /* 128 TX descriptors */
+#define TXDMA_CFG_RINGSZ_256 0x00000006 /* 256 TX descriptors */
+#define TXDMA_CFG_RINGSZ_512 0x00000008 /* 512 TX descriptors */
+#define TXDMA_CFG_RINGSZ_1K 0x0000000a /* 1024 TX descriptors */
+#define TXDMA_CFG_RINGSZ_2K 0x0000000c /* 2048 TX descriptors */
+#define TXDMA_CFG_RINGSZ_4K 0x0000000e /* 4096 TX descriptors */
+#define TXDMA_CFG_RINGSZ_8K 0x00000010 /* 8192 TX descriptors */
+#define TXDMA_CFG_PIOSEL 0x00000020 /* Enable TX FIFO PIO from cpu */
+#define TXDMA_CFG_FTHRESH 0x001ffc00 /* TX FIFO Threshold, obsolete */
+#define TXDMA_CFG_PMODE 0x00200000 /* TXALL irq means TX FIFO empty*/
+
+/* TX Descriptor Base Low/High.
+ *
+ * These two registers store the 53 most significant bits of the base address
+ * of the TX descriptor table. The 11 least significant bits are always
+ * zero. As a result, the TX descriptor table must be 2K aligned.
+ */
+
+/* The rest of the TXDMA_* registers are for diagnostics and debug, I will document
+ * them later. -DaveM
+ */
+
+/* Receive DMA Registers */
+#define RXDMA_CFG 0x4000UL /* RX Configuration Register */
+#define RXDMA_DBLOW 0x4004UL /* RX Descriptor Base Low */
+#define RXDMA_DBHI 0x4008UL /* RX Descriptor Base High */
+#define RXDMA_FWPTR 0x400CUL /* RX FIFO Write Pointer */
+#define RXDMA_FSWPTR 0x4010UL /* RX FIFO Shadow Write Pointer */
+#define RXDMA_FRPTR 0x4014UL /* RX FIFO Read Pointer */
+#define RXDMA_PCNT 0x4018UL /* RX FIFO Packet Counter */
+#define RXDMA_SMACHINE 0x401CUL /* RX State Machine Register */
+#define RXDMA_PTHRESH 0x4020UL /* Pause Thresholds */
+#define RXDMA_DPLOW 0x4024UL /* RX Data Pointer Low */
+#define RXDMA_DPHI 0x4028UL /* RX Data Pointer High */
+#define RXDMA_KICK 0x4100UL /* RX Kick Register */
+#define RXDMA_DONE 0x4104UL /* RX Completion Register */
+#define RXDMA_BLANK 0x4108UL /* RX Blanking Register */
+#define RXDMA_FADDR 0x410CUL /* RX FIFO Address */
+#define RXDMA_FTAG 0x4110UL /* RX FIFO Tag */
+#define RXDMA_DLOW 0x4114UL /* RX FIFO Data Low */
+#define RXDMA_DHIT1	0x4118UL	/* RX FIFO Data HighT1		*/
+#define RXDMA_DHIT0	0x411CUL	/* RX FIFO Data HighT0		*/
+#define RXDMA_FSZ 0x4120UL /* RX FIFO Size */
+
+/* RX Configuration Register. */
+#define RXDMA_CFG_ENABLE 0x00000001 /* Enable RX DMA channel */
+#define RXDMA_CFG_RINGSZ 0x0000001e /* RX descriptor ring size */
+#define RXDMA_CFG_RINGSZ_32 0x00000000 /* - 32 entries */
+#define RXDMA_CFG_RINGSZ_64 0x00000002 /* - 64 entries */
+#define RXDMA_CFG_RINGSZ_128 0x00000004 /* - 128 entries */
+#define RXDMA_CFG_RINGSZ_256 0x00000006 /* - 256 entries */
+#define RXDMA_CFG_RINGSZ_512 0x00000008 /* - 512 entries */
+#define RXDMA_CFG_RINGSZ_1K 0x0000000a /* - 1024 entries */
+#define RXDMA_CFG_RINGSZ_2K 0x0000000c /* - 2048 entries */
+#define RXDMA_CFG_RINGSZ_4K 0x0000000e /* - 4096 entries */
+#define RXDMA_CFG_RINGSZ_8K 0x00000010 /* - 8192 entries */
+#define RXDMA_CFG_RINGSZ_BDISAB 0x00000020 /* Disable RX desc batching */
+#define RXDMA_CFG_FBOFF 0x00001c00 /* Offset of first data byte */
+#define RXDMA_CFG_CSUMOFF 0x000fe000 /* Skip bytes before csum calc */
+#define RXDMA_CFG_FTHRESH 0x07000000 /* RX FIFO dma start threshold */
+#define RXDMA_CFG_FTHRESH_64 0x00000000 /* - 64 bytes */
+#define RXDMA_CFG_FTHRESH_128 0x01000000 /* - 128 bytes */
+#define RXDMA_CFG_FTHRESH_256 0x02000000 /* - 256 bytes */
+#define RXDMA_CFG_FTHRESH_512 0x03000000 /* - 512 bytes */
+#define RXDMA_CFG_FTHRESH_1K 0x04000000 /* - 1024 bytes */
+#define RXDMA_CFG_FTHRESH_2K 0x05000000 /* - 2048 bytes */
+
+/* RX Descriptor Base Low/High.
+ *
+ * These two registers store the 53 most significant bits of the base address
+ * of the RX descriptor table. The 11 least significant bits are always
+ * zero. As a result, the RX descriptor table must be 2K aligned.
+ */
+
+/* RX PAUSE Thresholds.
+ *
+ * These values determine when XOFF and XON PAUSE frames are emitted by
+ * GEM. The thresholds measure RX FIFO occupancy in units of 64 bytes.
+ */
+#define RXDMA_PTHRESH_OFF 0x000001ff /* XOFF emitted w/FIFO > this */
+#define RXDMA_PTHRESH_ON 0x001ff000 /* XON emitted w/FIFO < this */
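For illustration, the two threshold fields could be packed by a helper like the following sketch (the function name is hypothetical, not driver code; thresholds are given in 64-byte units and the shifts follow the masks above):

```c
#include <assert.h>

/* Pack XOFF/XON pause thresholds (in 64-byte units) into the layout of
 * the RXDMA_PTHRESH register: OFF in bits 8:0, ON in bits 20:12.
 * A sketch only -- rx_pthresh_val() is not a real driver function. */
static unsigned int rx_pthresh_val(unsigned int off_64b, unsigned int on_64b)
{
	return (off_64b & 0x1ff) | ((on_64b & 0x1ff) << 12);
}
```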
+
+/* RX Kick Register.
+ *
+ * This 13-bit register is written by the host CPU and holds the last
+ * valid RX descriptor number plus one. That is, if 'N' is written to
+ * this register, it means that all RX descriptors up to but excluding
+ * 'N' are valid.
+ *
+ * The hardware requires that RX descriptors are posted in increments
+ * of 4. This means 'N' must be a multiple of four. For the best
+ * performance, the first new descriptor being posted should be (PCI)
+ * cache line aligned.
+ */
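A driver posting new descriptors therefore has to round the kick index down to a multiple of four before writing it. A minimal standalone sketch (the helper name is hypothetical; the ring size is assumed to be a power of two):

```c
#include <assert.h>

/* Round a candidate RX kick index down to a multiple of 4 and wrap it
 * into the ring, since the hardware only accepts descriptors posted in
 * increments of four.  Hypothetical helper, not part of the driver. */
static unsigned int rx_kick_round(unsigned int next_free, unsigned int ring_size)
{
	return (next_free & ~3u) & (ring_size - 1);
}
```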
+
+/* RX Completion Register.
+ *
+ * This 13-bit register is updated by GEM to indicate which RX descriptors
+ * have already been used for receive frames. All descriptors up to but
+ * excluding the value in this register are ready to be processed. GEM
+ * updates this register value after the RX FIFO empties completely into
+ * the RX descriptor's buffer, but before the RX_DONE bit is set in the
+ * interrupt status register.
+ */
+
+/* RX Blanking Register. */
+#define RXDMA_BLANK_IPKTS 0x000001ff /* RX_DONE asserted after this
+ * many packets received since
+ * previous RX_DONE.
+ */
+#define RXDMA_BLANK_ITIME 0x000ff000 /* RX_DONE asserted after this
+ * many intervals (each 2048
+ * PCI clocks) have elapsed since
+ * the previous RX_DONE.
+ */
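As a sketch of how the ITIME field might be derived, assuming a 66 MHz PCI clock (the clock rate and the helper name are illustrative assumptions, not taken from the driver):

```c
#include <assert.h>

/* Convert a desired interrupt holdoff in microseconds into the ITIME
 * field (bits 19:12).  One ITIME unit is 2048 PCI clocks; the PCI
 * clock rate in MHz is a caller-supplied assumption. */
static unsigned int rx_blank_itime(unsigned int usecs, unsigned int pci_mhz)
{
	unsigned int units = (usecs * pci_mhz) / 2048;

	if (units > 0xff)
		units = 0xff;		/* field is 8 bits wide */
	return units << 12;
}
```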
+
+/* RX FIFO Size.
+ *
+ * This 11-bit read-only register indicates how large, in units of 64-bytes,
+ * the RX FIFO is. The driver uses this to properly configure the RX PAUSE
+ * thresholds.
+ */
+
+/* The rest of the RXDMA_* registers are for diagnostics and debug, I will document
+ * them later. -DaveM
+ */
+
+/* MAC Registers */
+#define MAC_TXRST 0x6000UL /* TX MAC Software Reset Command*/
+#define MAC_RXRST 0x6004UL /* RX MAC Software Reset Command*/
+#define MAC_SNDPAUSE 0x6008UL /* Send Pause Command Register */
+#define MAC_TXSTAT 0x6010UL /* TX MAC Status Register */
+#define MAC_RXSTAT 0x6014UL /* RX MAC Status Register */
+#define MAC_CSTAT 0x6018UL /* MAC Control Status Register */
+#define MAC_TXMASK 0x6020UL /* TX MAC Mask Register */
+#define MAC_RXMASK 0x6024UL /* RX MAC Mask Register */
+#define MAC_MCMASK 0x6028UL /* MAC Control Mask Register */
+#define MAC_TXCFG 0x6030UL /* TX MAC Configuration Register*/
+#define MAC_RXCFG 0x6034UL /* RX MAC Configuration Register*/
+#define MAC_MCCFG 0x6038UL /* MAC Control Config Register */
+#define MAC_XIFCFG 0x603CUL /* XIF Configuration Register */
+#define MAC_IPG0 0x6040UL /* InterPacketGap0 Register */
+#define MAC_IPG1 0x6044UL /* InterPacketGap1 Register */
+#define MAC_IPG2 0x6048UL /* InterPacketGap2 Register */
+#define MAC_STIME 0x604CUL /* SlotTime Register */
+#define MAC_MINFSZ 0x6050UL /* MinFrameSize Register */
+#define MAC_MAXFSZ 0x6054UL /* MaxFrameSize Register */
+#define MAC_PASIZE 0x6058UL /* PA Size Register */
+#define MAC_JAMSIZE 0x605CUL /* JamSize Register */
+#define MAC_ATTLIM 0x6060UL /* Attempt Limit Register */
+#define MAC_MCTYPE 0x6064UL /* MAC Control Type Register */
+#define MAC_ADDR0 0x6080UL /* MAC Address 0 Register */
+#define MAC_ADDR1 0x6084UL /* MAC Address 1 Register */
+#define MAC_ADDR2 0x6088UL /* MAC Address 2 Register */
+#define MAC_ADDR3 0x608CUL /* MAC Address 3 Register */
+#define MAC_ADDR4 0x6090UL /* MAC Address 4 Register */
+#define MAC_ADDR5 0x6094UL /* MAC Address 5 Register */
+#define MAC_ADDR6 0x6098UL /* MAC Address 6 Register */
+#define MAC_ADDR7 0x609CUL /* MAC Address 7 Register */
+#define MAC_ADDR8 0x60A0UL /* MAC Address 8 Register */
+#define MAC_AFILT0 0x60A4UL /* Address Filter 0 Register */
+#define MAC_AFILT1 0x60A8UL /* Address Filter 1 Register */
+#define MAC_AFILT2 0x60ACUL /* Address Filter 2 Register */
+#define MAC_AF21MSK 0x60B0UL /* Address Filter 2&1 Mask Reg */
+#define MAC_AF0MSK 0x60B4UL /* Address Filter 0 Mask Reg */
+#define MAC_HASH0 0x60C0UL /* Hash Table 0 Register */
+#define MAC_HASH1 0x60C4UL /* Hash Table 1 Register */
+#define MAC_HASH2 0x60C8UL /* Hash Table 2 Register */
+#define MAC_HASH3 0x60CCUL /* Hash Table 3 Register */
+#define MAC_HASH4 0x60D0UL /* Hash Table 4 Register */
+#define MAC_HASH5 0x60D4UL /* Hash Table 5 Register */
+#define MAC_HASH6 0x60D8UL /* Hash Table 6 Register */
+#define MAC_HASH7 0x60DCUL /* Hash Table 7 Register */
+#define MAC_HASH8 0x60E0UL /* Hash Table 8 Register */
+#define MAC_HASH9 0x60E4UL /* Hash Table 9 Register */
+#define MAC_HASH10 0x60E8UL /* Hash Table 10 Register */
+#define MAC_HASH11 0x60ECUL /* Hash Table 11 Register */
+#define MAC_HASH12 0x60F0UL /* Hash Table 12 Register */
+#define MAC_HASH13 0x60F4UL /* Hash Table 13 Register */
+#define MAC_HASH14 0x60F8UL /* Hash Table 14 Register */
+#define MAC_HASH15 0x60FCUL /* Hash Table 15 Register */
+#define MAC_NCOLL 0x6100UL /* Normal Collision Counter */
+#define MAC_FASUCC 0x6104UL /* First Attmpt. Succ Coll Ctr. */
+#define MAC_ECOLL 0x6108UL /* Excessive Collision Counter */
+#define MAC_LCOLL 0x610CUL /* Late Collision Counter */
+#define MAC_DTIMER 0x6110UL /* Defer Timer */
+#define MAC_PATMPS 0x6114UL /* Peak Attempts Register */
+#define MAC_RFCTR 0x6118UL /* Receive Frame Counter */
+#define MAC_LERR 0x611CUL /* Length Error Counter */
+#define MAC_AERR 0x6120UL /* Alignment Error Counter */
+#define MAC_FCSERR 0x6124UL /* FCS Error Counter */
+#define MAC_RXCVERR 0x6128UL /* RX code Violation Error Ctr */
+#define MAC_RANDSEED 0x6130UL /* Random Number Seed Register */
+#define MAC_SMACHINE 0x6134UL /* State Machine Register */
+
+/* TX MAC Software Reset Command. */
+#define MAC_TXRST_CMD 0x00000001 /* Start sw reset, self-clears */
+
+/* RX MAC Software Reset Command. */
+#define MAC_RXRST_CMD 0x00000001 /* Start sw reset, self-clears */
+
+/* Send Pause Command. */
+#define MAC_SNDPAUSE_TS 0x0000ffff /* The pause_time operand used in
+ * Send_Pause and flow-control
+ * handshakes.
+ */
+#define MAC_SNDPAUSE_SP 0x00010000 /* Setting this bit instructs the MAC
+ * to send a Pause Flow Control
+ * frame onto the network.
+ */
+
+/* TX MAC Status Register. */
+#define MAC_TXSTAT_XMIT 0x00000001 /* Frame Transmitted */
+#define MAC_TXSTAT_URUN 0x00000002 /* TX Underrun */
+#define MAC_TXSTAT_MPE 0x00000004 /* Max Packet Size Error */
+#define MAC_TXSTAT_NCE 0x00000008 /* Normal Collision Cntr Expire */
+#define MAC_TXSTAT_ECE 0x00000010 /* Excess Collision Cntr Expire */
+#define MAC_TXSTAT_LCE 0x00000020 /* Late Collision Cntr Expire */
+#define MAC_TXSTAT_FCE 0x00000040 /* First Collision Cntr Expire */
+#define MAC_TXSTAT_DTE 0x00000080 /* Defer Timer Expire */
+#define MAC_TXSTAT_PCE 0x00000100 /* Peak Attempts Cntr Expire */
+
+/* RX MAC Status Register. */
+#define MAC_RXSTAT_RCV 0x00000001 /* Frame Received */
+#define MAC_RXSTAT_OFLW 0x00000002 /* Receive Overflow */
+#define MAC_RXSTAT_FCE 0x00000004 /* Frame Cntr Expire */
+#define MAC_RXSTAT_ACE 0x00000008 /* Align Error Cntr Expire */
+#define MAC_RXSTAT_CCE 0x00000010 /* CRC Error Cntr Expire */
+#define MAC_RXSTAT_LCE 0x00000020 /* Length Error Cntr Expire */
+#define MAC_RXSTAT_VCE 0x00000040 /* Code Violation Cntr Expire */
+
+/* MAC Control Status Register. */
+#define MAC_CSTAT_PRCV 0x00000001 /* Pause Received */
+#define MAC_CSTAT_PS 0x00000002 /* Paused State */
+#define MAC_CSTAT_NPS 0x00000004 /* Not Paused State */
+#define MAC_CSTAT_PTR 0xffff0000 /* Pause Time Received */
+
+/* The layout of the MAC_{TX,RX,C}MASK registers is identical to that
+ * of MAC_{TX,RX,C}STAT. Bits set in MAC_{TX,RX,C}MASK will prevent
+ * that interrupt type from being signalled to the front end of GEM. For
+ * the interrupt to actually get sent to the cpu, it is necessary to
+ * properly set the appropriate GREG_IMASK_{TX,RX,}MAC bits as well.
+ */
+
+/* TX MAC Configuration Register.
+ *
+ * NOTE: The TX MAC Enable bit must be cleared and polled until
+ * zero before any other bits in this register are changed.
+ *
+ * Also, enabling the Carrier Extension feature of GEM is
+ * a 3 step process 1) Set TX Carrier Extension 2) Set
+ * RX Carrier Extension 3) Set Slot Time to 0x200. This
+ * mode must be enabled when in half-duplex at 1Gbps, else
+ * it must be disabled.
+ */
+#define MAC_TXCFG_ENAB 0x00000001 /* TX MAC Enable */
+#define MAC_TXCFG_ICS 0x00000002 /* Ignore Carrier Sense */
+#define MAC_TXCFG_ICOLL 0x00000004 /* Ignore Collisions */
+#define MAC_TXCFG_EIPG0 0x00000008 /* Enable IPG0 */
+#define MAC_TXCFG_NGU 0x00000010 /* Never Give Up */
+#define MAC_TXCFG_NGUL 0x00000020 /* Never Give Up Limit */
+#define MAC_TXCFG_NBO 0x00000040 /* No Backoff */
+#define MAC_TXCFG_SD 0x00000080 /* Slow Down */
+#define MAC_TXCFG_NFCS 0x00000100 /* No FCS */
+#define MAC_TXCFG_TCE 0x00000200 /* TX Carrier Extension */
+
+/* RX MAC Configuration Register.
+ *
+ * NOTE: The RX MAC Enable bit must be cleared and polled until
+ * zero before any other bits in this register are changed.
+ *
+ * Similar rules apply to the Hash Filter Enable bit when
+ * programming the hash table registers, and the Address Filter
+ * Enable bit when programming the address filter registers.
+ */
+#define MAC_RXCFG_ENAB 0x00000001 /* RX MAC Enable */
+#define MAC_RXCFG_SPAD 0x00000002 /* Strip Pad */
+#define MAC_RXCFG_SFCS 0x00000004 /* Strip FCS */
+#define MAC_RXCFG_PROM 0x00000008 /* Promiscuous Mode */
+#define MAC_RXCFG_PGRP 0x00000010 /* Promiscuous Group */
+#define MAC_RXCFG_HFE 0x00000020 /* Hash Filter Enable */
+#define MAC_RXCFG_AFE 0x00000040 /* Address Filter Enable */
+#define MAC_RXCFG_DDE 0x00000080 /* Disable Discard on Error */
+#define MAC_RXCFG_RCE 0x00000100 /* RX Carrier Extension */
+
+/* MAC Control Config Register. */
+#define MAC_MCCFG_SPE 0x00000001 /* Send Pause Enable */
+#define MAC_MCCFG_RPE 0x00000002 /* Receive Pause Enable */
+#define MAC_MCCFG_PMC 0x00000004 /* Pass MAC Control */
+
+/* XIF Configuration Register.
+ *
+ * NOTE: When leaving or entering loopback mode, a global hardware
+ * init of GEM should be performed.
+ */
+#define MAC_XIFCFG_OE 0x00000001 /* MII TX Output Driver Enable */
+#define MAC_XIFCFG_LBCK 0x00000002 /* Loopback TX to RX */
+#define MAC_XIFCFG_DISE 0x00000004 /* Disable RX path during TX */
+#define MAC_XIFCFG_GMII 0x00000008 /* Use GMII clocks + datapath */
+#define MAC_XIFCFG_MBOE 0x00000010 /* Controls MII_BUF_EN pin */
+#define MAC_XIFCFG_LLED 0x00000020 /* Force LINKLED# active (low) */
+#define MAC_XIFCFG_FLED 0x00000040 /* Force FDPLXLED# active (low) */
+
+/* InterPacketGap0 Register. This 8-bit value is used as an extension
+ * to the InterPacketGap1 Register. Specifically it contributes to the
+ * timing of the RX-to-TX IPG. This value is ignored and presumed to
+ * be zero for TX-to-TX IPG calculations and/or when the Enable IPG0 bit
+ * is cleared in the TX MAC Configuration Register.
+ *
+ * The value in this register is in units of media byte time.
+ *
+ * Recommended value: 0x00
+ */
+
+/* InterPacketGap1 Register. This 8-bit value defines the first 2/3
+ * portion of the Inter Packet Gap.
+ *
+ * The value in this register is in units of media byte time.
+ *
+ * Recommended value: 0x08
+ */
+
+/* InterPacketGap2 Register. This 8-bit value defines the second 1/3
+ * portion of the Inter Packet Gap.
+ *
+ * The value in this register is in units of media byte time.
+ *
+ * Recommended value: 0x04
+ */
+
+/* Slot Time Register. This 10-bit value specifies the slot time
+ * parameter in units of media byte time. It determines the physical
+ * span of the network.
+ *
+ * Recommended value: 0x40
+ */
+
+/* Minimum Frame Size Register. This 10-bit register specifies the
+ * smallest sized frame the TXMAC will send onto the medium, and the
+ * RXMAC will receive from the medium.
+ *
+ * Recommended value: 0x40
+ */
+
+/* Maximum Frame and Burst Size Register.
+ *
+ * This register specifies two things. First it specifies the maximum
+ * sized frame the TXMAC will send and the RXMAC will recognize as
+ * valid. Second, it specifies the maximum run length of a burst of
+ * packets sent in half-duplex gigabit modes.
+ *
+ * Recommended value: 0x200005ee
+ */
+#define MAC_MAXFSZ_MFS 0x00007fff /* Max Frame Size */
+#define MAC_MAXFSZ_MBS 0x7fff0000 /* Max Burst Size */
+
+/* PA Size Register. This 10-bit register specifies the number of preamble
+ * bytes which will be transmitted at the beginning of each frame. A
+ * value of two or greater should be programmed here.
+ *
+ * Recommended value: 0x07
+ */
+
+/* Jam Size Register. This 4-bit register specifies the duration of
+ * the jam in units of media byte time.
+ *
+ * Recommended value: 0x04
+ */
+
+/* Attempts Limit Register. This 8-bit register specifies the number
+ * of attempts that the TXMAC will make to transmit a frame, before it
+ * resets its Attempts Counter. After reaching the Attempts Limit the
+ * TXMAC may or may not drop the frame, as determined by the NGU
+ * (Never Give Up) and NGUL (Never Give Up Limit) bits in the TXMAC
+ * Configuration Register.
+ *
+ * Recommended value: 0x10
+ */
+
+/* MAC Control Type Register. This 16-bit register specifies the
+ * "type" field of a MAC Control frame. The TXMAC uses this field to
+ * encapsulate the MAC Control frame for transmission, and the RXMAC
+ * uses it for decoding valid MAC Control frames received from the
+ * network.
+ *
+ * Recommended value: 0x8808
+ */
+
+/* MAC Address Registers. Each of these registers specify the
+ * ethernet MAC of the interface, 16-bits at a time. Register
+ * 2 specifies bits [47:32], register 1 bits [31:16], and register
+ * 0 bits [15:0], matching the programming example below.
+ *
+ * Registers 3 through and including 5 specify an alternate
+ * MAC address for the interface.
+ *
+ * Registers 6 through and including 8 specify the MAC Control
+ * Address, which must be the reserved multicast address for MAC
+ * Control frames.
+ *
+ * Example: To program primary station address a:b:c:d:e:f into
+ * the chip.
+ * MAC_Address_2 = (a << 8) | b
+ * MAC_Address_1 = (c << 8) | d
+ * MAC_Address_0 = (e << 8) | f
+ */
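The example above translates directly into a small packing helper (a sketch; the function name is hypothetical):

```c
#include <assert.h>

/* Pack a 6-byte station address a:b:c:d:e:f into the three 16-bit
 * MAC address register values, following the example above. */
static void mac_addr_pack(const unsigned char mac[6], unsigned short regs[3])
{
	regs[2] = (mac[0] << 8) | mac[1];	/* MAC_Address_2 */
	regs[1] = (mac[2] << 8) | mac[3];	/* MAC_Address_1 */
	regs[0] = (mac[4] << 8) | mac[5];	/* MAC_Address_0 */
}
```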
+
+/* Address Filter Registers. Registers 0 through 2 specify bit
+ * fields [47:32] through [15:0], respectively, of the address
+ * filter. The Address Filter 2&1 Mask Register denotes the 8-bit
+ * nibble mask for Address Filter Registers 2 and 1. The Address
+ * Filter 0 Mask Register denotes the 16-bit mask for the Address
+ * Filter Register 0.
+ */
+
+/* Hash Table Registers. Registers 0 through 15 specify bit fields
+ * [255:240] through [15:0], respectively, of the hash table.
+ */
+
+/* Statistics Registers. All of these registers are 16-bits and
+ * track occurrences of a specific event. GEM can be configured
+ * to interrupt the host cpu when any of these counters overflow.
+ * They should all be explicitly initialized to zero when the interface
+ * is brought up.
+ */
+
+/* Random Number Seed Register. This 10-bit value is used as the
+ * RNG seed inside GEM for the CSMA/CD backoff algorithm. It is
+ * recommended to program this register with the 10 LSBs of the
+ * interface's MAC address.
+ */
+
+/* Pause Timer, read-only. This 16-bit timer is used to time the pause
+ * interval as indicated by a received pause flow control frame.
+ * A non-zero value in this timer indicates that the MAC is currently in
+ * the paused state.
+ */
+
+/* MIF Registers */
+#define MIF_BBCLK 0x6200UL /* MIF Bit-Bang Clock */
+#define MIF_BBDATA 0x6204UL /* MIF Bit-Bang Data */
+#define MIF_BBOENAB 0x6208UL /* MIF Bit-Bang Output Enable */
+#define MIF_FRAME 0x620CUL /* MIF Frame/Output Register */
+#define MIF_CFG 0x6210UL /* MIF Configuration Register */
+#define MIF_MASK 0x6214UL /* MIF Mask Register */
+#define MIF_STATUS 0x6218UL /* MIF Status Register */
+#define MIF_SMACHINE 0x621CUL /* MIF State Machine Register */
+
+/* MIF Bit-Bang Clock. This 1-bit register is used to generate the
+ * MDC clock waveform on the MII Management Interface when the MIF is
+ * programmed in the "Bit-Bang" mode. Writing a '1' after a '0' into
+ * this register will create a rising edge on the MDC, while writing
+ * a '0' after a '1' will create a falling edge. For every bit that
+ * is transferred on the management interface, both edges have to be
+ * generated.
+ */
+
+/* MIF Bit-Bang Data. This 1-bit register is used to generate the
+ * outgoing data (MDO) on the MII Management Interface when the MIF
+ * is programmed in the "Bit-Bang" mode. The data will be steered to the
+ * appropriate MDIO based on the state of the PHY_Select bit in the MIF
+ * Configuration Register.
+ */
+
+/* MIF Bit-Bang Output Enable. This 1-bit register is used to enable
+ * ('1') or disable ('0') the bi-directional driver on the MII when the
+ * MIF is programmed in the "Bit-Bang" mode. The MDIO should be enabled
+ * when data bits are transferred from the MIF to the transceiver, and it
+ * should be disabled when the interface is idle or when data bits are
+ * transferred from the transceiver to the MIF (data portion of a read
+ * instruction). Only one MDIO will be enabled at a given time, depending
+ * on the state of the PHY_Select bit in the MIF Configuration Register.
+ */
+
+/* MIF Configuration Register. This 15-bit register controls the operation
+ * of the MIF.
+ */
+#define MIF_CFG_PSELECT 0x00000001 /* Xcvr slct: 0=mdio0 1=mdio1 */
+#define MIF_CFG_POLL 0x00000002 /* Enable polling mechanism */
+#define MIF_CFG_BBMODE 0x00000004 /* 1=bit-bang 0=frame mode */
+#define MIF_CFG_PRADDR 0x000000f8 /* Xcvr poll register address */
+#define MIF_CFG_MDI0 0x00000100 /* MDIO_0 present or read-bit */
+#define MIF_CFG_MDI1 0x00000200 /* MDIO_1 present or read-bit */
+#define MIF_CFG_PPADDR 0x00007c00 /* Xcvr poll PHY address */
+
+/* MIF Frame/Output Register. This 32-bit register allows the host to
+ * communicate with a transceiver in frame mode (as opposed to bit-bang
+ * mode). Writes by the host specify an instruction. After being issued
+ * the host must poll this register for completion. Also, after
+ * completion this register holds the data returned by the transceiver
+ * if applicable.
+ */
+#define MIF_FRAME_ST 0xc0000000 /* STart of frame */
+#define MIF_FRAME_OP 0x30000000 /* OPcode */
+#define MIF_FRAME_PHYAD 0x0f800000 /* PHY ADdress */
+#define MIF_FRAME_REGAD 0x007c0000 /* REGister ADdress */
+#define MIF_FRAME_TAMSB 0x00020000 /* Turn Around MSB */
+#define MIF_FRAME_TALSB 0x00010000 /* Turn Around LSB */
+#define MIF_FRAME_DATA 0x0000ffff /* Instruction Payload */
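As a sketch, a read instruction could be assembled like this, following IEEE 802.3 clause 22 framing (start = 01, read opcode = 10); the helper name is hypothetical and the shift amounts are derived from the field masks above:

```c
#include <assert.h>

/* Build a frame-mode read instruction word for the MIF Frame register.
 * ST=01, OP=10 (read), then PHY and register addresses and the
 * turn-around MSB.  Shift amounts follow the field masks above. */
static unsigned int mif_frame_read(unsigned int phy_addr, unsigned int reg)
{
	unsigned int cmd = 0x40000000 | 0x20000000;	/* ST=01, OP=10 */

	cmd |= (phy_addr << 23) & 0x0f800000;		/* MIF_FRAME_PHYAD */
	cmd |= (reg << 18) & 0x007c0000;		/* MIF_FRAME_REGAD */
	cmd |= 0x00020000;				/* MIF_FRAME_TAMSB */
	return cmd;
}
```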
+
+/* MIF Status Register. This register reports status when the MIF is
+ * operating in the poll mode. The poll status field is auto-clearing
+ * on read.
+ */
+#define MIF_STATUS_DATA 0xffff0000 /* Live image of XCVR reg */
+#define MIF_STATUS_STAT 0x0000ffff /* Which bits have changed */
+
+/* MIF Mask Register. This 16-bit register is used when in poll mode
+ * to say which bits of the polled register will cause an interrupt
+ * when changed.
+ */
+
+/* PCS/Serialink Registers */
+#define PCS_MIICTRL 0x9000UL /* PCS MII Control Register */
+#define PCS_MIISTAT 0x9004UL /* PCS MII Status Register */
+#define PCS_MIIADV 0x9008UL /* PCS MII Advertisement Reg */
+#define PCS_MIILP 0x900CUL /* PCS MII Link Partner Ability */
+#define PCS_CFG 0x9010UL /* PCS Configuration Register */
+#define PCS_SMACHINE 0x9014UL /* PCS State Machine Register */
+#define PCS_ISTAT 0x9018UL /* PCS Interrupt Status Reg */
+#define PCS_DMODE 0x9050UL /* Datapath Mode Register */
+#define PCS_SCTRL 0x9054UL /* Serialink Control Register */
+#define PCS_SOS 0x9058UL /* Shared Output Select Reg */
+#define PCS_SSTATE 0x905CUL /* Serialink State Register */
+
+/* PCS MII Control Register. */
+#define PCS_MIICTRL_SPD 0x00000040 /* Read as one, writes ignored */
+#define PCS_MIICTRL_CT 0x00000080 /* Force COL signal active */
+#define PCS_MIICTRL_DM 0x00000100 /* Duplex mode, forced low */
+#define PCS_MIICTRL_RAN 0x00000200 /* Restart auto-neg, self clear */
+#define PCS_MIICTRL_ISO 0x00000400 /* Read as zero, writes ignored */
+#define PCS_MIICTRL_PD 0x00000800 /* Read as zero, writes ignored */
+#define PCS_MIICTRL_ANE 0x00001000 /* Auto-neg enable */
+#define PCS_MIICTRL_SS 0x00002000 /* Read as zero, writes ignored */
+#define PCS_MIICTRL_WB 0x00004000 /* Wrapback, loopback at 10-bit
+ * input side of Serialink
+ */
+#define PCS_MIICTRL_RST 0x00008000 /* Resets PCS, self clearing */
+
+/* PCS MII Status Register. */
+#define PCS_MIISTAT_EC 0x00000001 /* Ext Capability: Read as zero */
+#define PCS_MIISTAT_JD 0x00000002 /* Jabber Detect: Read as zero */
+#define PCS_MIISTAT_LS 0x00000004 /* Link Status: 1=up 0=down */
+#define PCS_MIISTAT_ANA 0x00000008 /* Auto-neg Ability, always 1 */
+#define PCS_MIISTAT_RF 0x00000010 /* Remote Fault */
+#define PCS_MIISTAT_ANC 0x00000020 /* Auto-neg complete */
+#define PCS_MIISTAT_ES 0x00000100 /* Extended Status, always 1 */
+
+/* PCS MII Advertisement Register. */
+#define PCS_MIIADV_FD 0x00000020 /* Advertise Full Duplex */
+#define PCS_MIIADV_HD 0x00000040 /* Advertise Half Duplex */
+#define PCS_MIIADV_SP 0x00000080 /* Advertise Symmetric Pause */
+#define PCS_MIIADV_AP 0x00000100 /* Advertise Asymmetric Pause */
+#define PCS_MIIADV_RF 0x00003000 /* Remote Fault */
+#define PCS_MIIADV_ACK 0x00004000 /* Read-only */
+#define PCS_MIIADV_NP 0x00008000 /* Next-page, forced low */
+
+/* PCS MII Link Partner Ability Register. This register is equivalent
+ * to the Link Partner Ability Register of the standard MII register set.
+ * Its layout corresponds to the PCS MII Advertisement Register.
+ */
+
+/* PCS Configuration Register. */
+#define PCS_CFG_ENABLE 0x00000001 /* Must be zero while changing
+ * PCS MII advertisement reg.
+ */
+#define PCS_CFG_SDO 0x00000002 /* Signal detect override */
+#define PCS_CFG_SDL 0x00000004 /* Signal detect active low */
+#define PCS_CFG_JS 0x00000018 /* Jitter-study:
+ * 0 = normal operation
+ * 1 = high-frequency test pattern
+ * 2 = low-frequency test pattern
+ * 3 = reserved
+ */
+#define PCS_CFG_TO 0x00000020 /* 10ms auto-neg timer override */
+
+/* PCS Interrupt Status Register. This register is self-clearing
+ * when read.
+ */
+#define PCS_ISTAT_LSC 0x00000004 /* Link Status Change */
+
+/* Datapath Mode Register. */
+#define PCS_DMODE_SM 0x00000001 /* 1 = use internal Serialink */
+#define PCS_DMODE_ESM 0x00000002 /* External SERDES mode */
+#define PCS_DMODE_MGM 0x00000004 /* MII/GMII mode */
+#define PCS_DMODE_GMOE 0x00000008 /* GMII Output Enable */
+
+/* Serialink Control Register. */
+#define PCS_SCTRL_LOOP 0x00000001 /* Loopback enable */
+#define PCS_SCTRL_ESCD 0x00000002 /* Enable sync char detection */
+#define PCS_SCTRL_LOCK 0x00000004 /* Lock to reference clock */
+#define PCS_SCTRL_EMP 0x00000018 /* Output driver emphasis */
+#define PCS_SCTRL_STEST 0x000001c0 /* Self test patterns */
+#define PCS_SCTRL_PDWN 0x00000200 /* Software power-down */
+#define PCS_SCTRL_RXZ 0x00000c00 /* PLL input to Serialink */
+#define PCS_SCTRL_RXP 0x00003000 /* PLL input to Serialink */
+#define PCS_SCTRL_TXZ 0x0000c000 /* PLL input to Serialink */
+#define PCS_SCTRL_TXP 0x00030000 /* PLL input to Serialink */
+
+/* Shared Output Select Register. For test and debug, allows multiplexing
+ * test outputs into the PROM address pins. Set to zero for normal
+ * operation.
+ */
+#define PCS_SOS_PADDR 0x00000003 /* PROM Address */
+
+/* PROM Image Space */
+#define PROM_START 0x100000UL /* Expansion ROM run time access*/
+#define PROM_SIZE 0x0fffffUL /* Size of ROM */
+#define PROM_END 0x200000UL /* End of ROM */
+
+/* MII phy registers */
+#define PHY_CTRL 0x00
+#define PHY_STAT 0x01
+#define PHY_ADV 0x04
+#define PHY_LPA 0x05
+
+#define PHY_CTRL_FDPLX 0x0100 /* Full duplex */
+#define PHY_CTRL_ISO 0x0400 /* Isolate MII from PHY */
+#define PHY_CTRL_ANRES 0x0200 /* Auto-negotiation restart */
+#define PHY_CTRL_ANENAB 0x1000 /* Auto-negotiation enable */
+#define PHY_CTRL_SPD100 0x2000 /* Select 100Mbps */
+#define PHY_CTRL_RST 0x8000 /* Reset PHY */
+
+#define PHY_STAT_LSTAT 0x0004 /* Link status */
+#define PHY_STAT_ANEGC 0x0020 /* Auto-negotiation complete */
+
+#define PHY_ADV_10HALF 0x0020
+#define PHY_ADV_10FULL 0x0040
+#define PHY_ADV_100HALF 0x0080
+#define PHY_ADV_100FULL 0x0100
+
+#define PHY_LPA_10HALF 0x0020
+#define PHY_LPA_10FULL 0x0040
+#define PHY_LPA_100HALF 0x0080
+#define PHY_LPA_100FULL 0x0100
+#define PHY_LPA_FAULT 0x2000
+
+/* When it can, GEM internally caches 4 aligned TX descriptors
+ * at a time, so that it can use full cacheline DMA reads.
+ *
+ * Note that unlike HME, there is no ownership bit in the descriptor
+ * control word. The same functionality is obtained via the TX-Kick
+ * and TX-Complete registers. As a result, GEM need not write back
+ * updated values to the TX descriptor ring, it only performs reads.
+ *
+ * Since TX descriptors are never modified by GEM, the driver can
+ * use the buffer DMA address as a place to keep track of allocated
+ * DMA mappings for a transmitted packet.
+ */
+struct gem_txd {
+ u64 control_word;
+ u64 buffer;
+};
+
+#define TXDCTRL_BUFSZ 0x0000000000007fff /* Buffer Size */
+#define TXDCTRL_CSTART 0x00000000001f8000 /* CSUM Start Offset */
+#define TXDCTRL_COFF 0x000000001fe00000 /* CSUM Stuff Offset */
+#define TXDCTRL_CENAB 0x0000000020000000 /* CSUM Enable */
+#define TXDCTRL_EOF 0x0000000040000000 /* End of Frame */
+#define TXDCTRL_SOF 0x0000000080000000 /* Start of Frame */
+#define TXDCTRL_INTME 0x0000000100000000 /* "Interrupt Me" */
+#define TXDCTRL_NOCRC 0x0000000200000000 /* No CRC Present */
+
+/* GEM requires that RX descriptors are provided four at a time,
+ * aligned. Also, the RX ring may not wrap around. This means that
+ * there will be at least 4 unused descriptor entries in the middle
+ * of the RX ring at all times.
+ *
+ * Similar to HME, GEM assumes that it can write garbage bytes before
+ * the beginning of the buffer and right after the end in order to DMA
+ * whole cachelines.
+ *
+ * Unlike for TX, GEM does update the status word in the RX descriptors
+ * when packets arrive. Therefore an ownership bit does exist in the
+ * RX descriptors. It is advisory, GEM clears it but does not check
+ * it in any way. So when buffers are posted to the RX ring (via the
+ * RX Kick register) by the driver it must make sure the buffers are
+ * truly ready and that the ownership bits are set properly.
+ *
+ * Even though GEM modifies the RX descriptors, it guarantees that the
+ * buffer DMA address field will stay the same when it performs these
+ * updates. Therefore it can be used to keep track of DMA mappings
+ * by the host driver just as in the TX descriptor case above.
+ */
+struct gem_rxd {
+ u64 status_word;
+ u64 buffer;
+};
+
+#define RXDCTRL_TCPCSUM 0x000000000000ffff /* TCP Pseudo-CSUM */
+#define RXDCTRL_BUFSZ 0x000000007fff0000 /* Buffer Size */
+#define RXDCTRL_OWN 0x0000000080000000 /* GEM owns this entry */
+#define RXDCTRL_HASHVAL 0x0ffff00000000000 /* Hash Value */
+#define RXDCTRL_HPASS 0x1000000000000000 /* Passed Hash Filter */
+#define RXDCTRL_ALTMAC 0x2000000000000000 /* Matched ALT MAC */
+#define RXDCTRL_BAD 0x4000000000000000 /* Frame has bad CRC */
+
+#define RXDCTRL_FRESH \
+ ((((RX_BUF_ALLOC_SIZE - RX_OFFSET) << 16) & RXDCTRL_BUFSZ) | \
+ RXDCTRL_OWN)
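A standalone version of the status-word packing done by RXDCTRL_FRESH (a sketch; the function name is hypothetical, buffer size in bytes):

```c
#include <assert.h>

/* Pack a fresh RX descriptor status word: buffer size into bits 30:16
 * plus the ownership bit, mirroring RXDCTRL_FRESH above. */
static unsigned long long rxd_fresh_word(unsigned int buf_bytes)
{
	return (((unsigned long long)buf_bytes << 16) & 0x7fff0000ULL)
		| 0x80000000ULL;	/* RXDCTRL_OWN */
}
```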
+
+#define TX_RING_SIZE 128
+#define RX_RING_SIZE 128
+
+#if TX_RING_SIZE == 32
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_32
+#elif TX_RING_SIZE == 64
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_64
+#elif TX_RING_SIZE == 128
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_128
+#elif TX_RING_SIZE == 256
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_256
+#elif TX_RING_SIZE == 512
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_512
+#elif TX_RING_SIZE == 1024
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_1K
+#elif TX_RING_SIZE == 2048
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_2K
+#elif TX_RING_SIZE == 4096
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_4K
+#elif TX_RING_SIZE == 8192
+#define TXDMA_CFG_BASE TXDMA_CFG_RINGSZ_8K
+#else
+#error TX_RING_SIZE value is illegal...
+#endif
+
+#if RX_RING_SIZE == 32
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_32
+#elif RX_RING_SIZE == 64
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_64
+#elif RX_RING_SIZE == 128
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_128
+#elif RX_RING_SIZE == 256
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_256
+#elif RX_RING_SIZE == 512
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_512
+#elif RX_RING_SIZE == 1024
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_1K
+#elif RX_RING_SIZE == 2048
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_2K
+#elif RX_RING_SIZE == 4096
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_4K
+#elif RX_RING_SIZE == 8192
+#define RXDMA_CFG_BASE RXDMA_CFG_RINGSZ_8K
+#else
+#error RX_RING_SIZE is illegal...
+#endif
+
+#define NEXT_TX(N) (((N) + 1) & (TX_RING_SIZE - 1))
+#define NEXT_RX(N) (((N) + 1) & (RX_RING_SIZE - 1))
+
+#define TX_BUFFS_AVAIL(GP) \
+ (((GP)->tx_old <= (GP)->tx_new) ? \
+ (GP)->tx_old + (TX_RING_SIZE - 1) - (GP)->tx_new : \
+ (GP)->tx_old - (GP)->tx_new - 1)
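The macro always leaves one slot unused so that a full ring can be distinguished from an empty one. The same computation as a standalone function:

```c
#include <assert.h>

/* Free TX slots between consumer (tx_old) and producer (tx_new);
 * one entry is always left unused to tell "full" from "empty". */
static int tx_buffs_avail(int tx_old, int tx_new, int ring_size)
{
	return (tx_old <= tx_new) ?
		tx_old + (ring_size - 1) - tx_new :
		tx_old - tx_new - 1;
}
```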
+
+#define RX_OFFSET 2
+#define RX_BUF_ALLOC_SIZE (1546 + RX_OFFSET + 64)
+
+#define RX_COPY_THRESHOLD 256
+
+struct gem_init_block {
+ struct gem_txd txd[TX_RING_SIZE];
+ struct gem_rxd rxd[RX_RING_SIZE];
+};
+
+enum gem_phy_type {
+ phy_mii_mdio0,
+ phy_mii_mdio1,
+ phy_serialink,
+ phy_serdes,
+};
+
+enum link_state {
+ aneg_wait,
+ force_wait,
+};
+
+struct gem {
+ spinlock_t lock;
+ unsigned long regs;
+ int rx_new, rx_old;
+ int tx_new, tx_old;
+
+ struct gem_init_block *init_block;
+
+ struct sk_buff *rx_skbs[RX_RING_SIZE];
+ struct sk_buff *tx_skbs[TX_RING_SIZE];
+
+ struct net_device_stats net_stats;
+
+ enum gem_phy_type phy_type;
+ int tx_fifo_sz;
+ int rx_fifo_sz;
+ int rx_pause_off;
+ int rx_pause_on;
+ int mii_phy_addr;
+
+ /* Diagnostic counters and state. */
+ u64 pause_entered;
+ u16 pause_last_time_recvd;
+
+ struct timer_list link_timer;
+ int timer_ticks;
+ enum link_state lstate;
+
+ dma_addr_t gblock_dvma;
+ struct pci_dev *pdev;
+ struct net_device *dev;
+};
+
+#define ALIGNED_RX_SKB_ADDR(addr) \
+ ((((unsigned long)(addr) + (64UL - 1UL)) & ~(64UL - 1UL)) - (unsigned long)(addr))
+static __inline__ struct sk_buff *gem_alloc_skb(int size, int gfp_flags)
+{
+ struct sk_buff *skb = alloc_skb(size + 64, gfp_flags);
+
+ if (skb) {
+ int offset = (int) ALIGNED_RX_SKB_ADDR(skb->data);
+ if (offset)
+ skb_reserve(skb, offset);
+ }
+
+ return skb;
+}
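The alignment arithmetic in ALIGNED_RX_SKB_ADDR can be checked in isolation; this sketch performs the same computation as a plain function:

```c
#include <assert.h>

/* Number of bytes to skip so that addr becomes 64-byte aligned --
 * the same computation as ALIGNED_RX_SKB_ADDR above. */
static unsigned long rx_align_offset(unsigned long addr)
{
	return ((addr + 63UL) & ~63UL) - addr;
}
```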
+
+#endif /* _SUNGEM_H */
static inline void
TLan_SetTimer( struct net_device *dev, u32 ticks, u32 type )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
unsigned long flags = 0;
if (!in_irq())
static void __devexit tlan_remove_one( struct pci_dev *pdev)
{
struct net_device *dev = pci_get_drvdata( pdev );
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
unregister_netdev( dev );
while( tlan_have_eisa ) {
dev = TLan_Eisa_Devices;
- priv = (TLanPrivateInfo *) dev->priv;
+ priv = dev->priv;
if (priv->dmaStorage) {
kfree(priv->dmaStorage);
}
int i;
TLanPrivateInfo *priv;
- priv = (TLanPrivateInfo *) dev->priv;
+ priv = dev->priv;
if (!priv->is_eisa) /* EISA devices have already requested IO */
if (!request_region( dev->base_addr, 0x10, TLanSignature )) {
static int TLan_Open( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int err;
priv->tlanRev = TLan_DioRead8( dev->base_addr, TLAN_DEF_REVISION );
static int TLan_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 *data = (u16 *)&rq->ifr_data;
u32 phy = priv->phy[priv->phyNum];
static int TLan_StartTx( struct sk_buff *skb, struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
TLanList *tail_list;
u8 *tail_buffer;
int pad;
int type;
TLanPrivateInfo *priv;
- dev = (struct net_device *) dev_id;
- priv = (TLanPrivateInfo *) dev->priv;
+ dev = dev_id;
+ priv = dev->priv;
spin_lock(&priv->lock);
static int TLan_Close(struct net_device *dev)
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
netif_stop_queue(dev);
priv->neg_be_verbose = 0;
static struct net_device_stats *TLan_GetStats( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int i;
/* Should only read stats if open ? */
u32 TLan_HandleTxEOF( struct net_device *dev, u16 host_int )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int eoc = 0;
TLanList *head_list;
u32 ack = 0;
u32 TLan_HandleRxEOF( struct net_device *dev, u16 host_int )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u32 ack = 0;
int eoc = 0;
u8 *head_buffer;
u32 TLan_HandleTxEOC( struct net_device *dev, u16 host_int )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
TLanList *head_list;
u32 ack = 1;
u32 TLan_HandleStatusCheck( struct net_device *dev, u16 host_int )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u32 ack;
u32 error;
u8 net_sts;
u32 TLan_HandleRxEOC( struct net_device *dev, u16 host_int )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
TLanList *head_list;
u32 ack = 1;
void TLan_Timer( unsigned long data )
{
struct net_device *dev = (struct net_device *) data;
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u32 elapsed;
unsigned long flags = 0;
void TLan_ResetLists( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int i;
TLanList *list;
struct sk_buff *skb;
void TLan_FreeLists( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int i;
TLanList *list;
struct sk_buff *skb;
void TLan_ReadAndClearStats( struct net_device *dev, int record )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u32 tx_good, tx_under;
u32 rx_good, rx_over;
u32 def_tx, crc, code;
void
TLan_ResetAdapter( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
int i;
u32 addr;
u32 data;
void
TLan_FinishReset( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u8 data;
u32 phy;
u8 sio;
void TLan_PhyPrint( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 i, data0, data1, data2, data3, phy;
phy = priv->phy[priv->phyNum];
void TLan_PhyDetect( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 control;
u16 hi;
u16 lo;
void TLan_PhyPowerDown( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 value;
TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Powering down PHY(s).\n", dev->name );
void TLan_PhyPowerUp( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 value;
TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Powering up PHY.\n", dev->name );
void TLan_PhyReset( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 phy;
u16 value;
void TLan_PhyStartLink( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 ability;
u16 control;
u16 data;
void TLan_PhyFinishAutoNeg( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 an_adv;
u16 an_lpa;
u16 data;
void TLan_PhyMonitor( struct net_device *dev )
{
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
u16 phy;
u16 phy_status;
u32 i;
int err;
int minten;
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
unsigned long flags = 0;
err = FALSE;
u16 sio;
int minten;
unsigned long flags = 0;
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
outw(TLAN_NET_SIO, dev->base_addr + TLAN_DIO_ADR);
sio = dev->base_addr + TLAN_DIO_DATA + TLAN_NET_SIO;
int TLan_EeReadByte( struct net_device *dev, u8 ee_addr, u8 *data )
{
int err;
- TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
+ TLanPrivateInfo *priv = dev->priv;
unsigned long flags = 0;
int ret=0;
}
}
- if(rpl->SkbStat == SKB_DATA_COPY
- || rpl->SkbStat == SKB_DMA_DIRECT)
+ if(skb && (rpl->SkbStat == SKB_DATA_COPY
+ || rpl->SkbStat == SKB_DMA_DIRECT))
{
if(rpl->SkbStat == SKB_DATA_COPY)
- memmove(skb->data, ReceiveDataPtr, Length);
+ memcpy(skb->data, ReceiveDataPtr, Length);
/* Deliver frame to system */
rpl->Skb = NULL;
if (pci_flags & PCI_USES_MASTER)
pci_set_master (pdev);
- dev = init_etherdev(NULL, sizeof(*np));
+ dev = alloc_etherdev(sizeof(*np));
if (dev == NULL) {
-		printk (KERN_ERR "init_ethernet failed for card #%d\n",
+		printk (KERN_ERR "alloc_etherdev failed for card #%d\n",
card_idx);
}
SET_MODULE_OWNER(dev);
- if (pci_request_regions(pdev, dev->name))
+ if (pci_request_regions(pdev, "via-rhine"))
goto err_out_free_netdev;
#ifndef USE_IO
ioaddr = (long) ioremap (ioaddr, io_size);
if (!ioaddr) {
printk (KERN_ERR "ioremap failed for device %s, region 0x%X @ 0x%X\n",
- dev->name, io_size,
+ pdev->slot_name, io_size,
pci_resource_start (pdev, 1));
goto err_out_free_res;
}
#endif
- printk(KERN_INFO "%s: %s at 0x%lx, ",
- dev->name, via_rhine_chip_info[chip_id].name, ioaddr);
-
/* Ideally we would read the EEPROM but access may be locked. */
for (i = 0; i < 6; i++)
dev->dev_addr[i] = readb(ioaddr + StationAddr + i);
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], pdev->irq);
/* Reset the chip to erase previous misconfiguration. */
writew(CmdReset, ioaddr + ChipCmd);
dev->tx_timeout = via_rhine_tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_unmap;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, via_rhine_chip_info[chip_id].name, ioaddr);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], pdev->irq);
+
pci_set_drvdata(pdev, dev);
if (np->drv_flags & CanHaveMII) {
return 0;
+err_out_unmap:
#ifndef USE_IO
-/* note this is ifdef'd because the ioremap is ifdef'd...
- * so additional exit conditions above this must move
- * pci_release_regions outside of the ifdef */
+ iounmap((void *)ioaddr);
err_out_free_res:
- pci_release_regions(pdev);
#endif
+ pci_release_regions(pdev);
err_out_free_netdev:
- unregister_netdev (dev);
kfree (dev);
err_out:
return -ENODEV;
after the Tx thread. */
static void via_rhine_interrupt(int irq, void *dev_instance, struct pt_regs *rgs)
{
- struct net_device *dev = (struct net_device *)dev_instance;
+ struct net_device *dev = dev_instance;
long ioaddr;
u32 intr_status;
int boguscnt = max_interrupt_work;
static void __devexit via_rhine_remove_one (struct pci_dev *pdev)
{
struct net_device *dev = pci_get_drvdata(pdev);
- struct netdev_private *np = (struct netdev_private *)(dev->priv);
+ struct netdev_private *np = dev->priv;
unregister_netdev(dev);
dev->last_rx = jiffies;
sc->stats.rx_packets++;
+ sc->stats.rx_bytes += len;
LMC_CONSOLE_LOG("recv", skb->data, len);
irq = pdev->irq;
- if(pci_set_dma_mask(pdev,0xFFFFffff)) {
+ if (pci_set_dma_mask(pdev,0xFFFFffff)) {
printk(KERN_WARNING "Winbond-840: Device %s disabled due to DMA limitations.\n",
- pdev->name);
+ pdev->slot_name);
return -EIO;
}
- dev = init_etherdev(NULL, sizeof(*np));
+ dev = alloc_etherdev(sizeof(*np));
if (!dev)
return -ENOMEM;
SET_MODULE_OWNER(dev);
- if (pci_request_regions(pdev, dev->name))
+ if (pci_request_regions(pdev, "winbond-840"))
goto err_out_netdev;
#ifdef USE_IO_OPS
goto err_out_free_res;
#endif
- printk(KERN_INFO "%s: %s at 0x%lx, ",
- dev->name, pci_id_tbl[chip_idx].name, ioaddr);
-
/* Warning: broken for big-endian machines. */
for (i = 0; i < 3; i++)
((u16 *)dev->dev_addr)[i] = le16_to_cpu(eeprom_read(ioaddr, i));
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
-
/* Reset the chip to erase previous misconfiguration.
No hold time required! */
writel(0x00000001, ioaddr + PCIBusCfg);
dev->tx_timeout = &tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_cleardev;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, pci_id_tbl[chip_idx].name, ioaddr);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
if (np->drv_flags & CanHaveMII) {
int phy, phy_idx = 0;
for (phy = 1; phy < 32 && phy_idx < MII_CNT; phy++) {
find_cnt++;
return 0;
+err_out_cleardev:
+ pci_set_drvdata(pdev, NULL);
#ifndef USE_IO_OPS
+ iounmap((void *)ioaddr);
err_out_free_res:
- pci_release_regions(pdev);
#endif
+ pci_release_regions(pdev);
err_out_netdev:
- unregister_netdev (dev);
kfree (dev);
return -ENODEV;
}
i = pci_enable_device(pdev);
if (i) return i;
- dev = init_etherdev(NULL, sizeof(*np));
+ dev = alloc_etherdev(sizeof(*np));
if (!dev) {
printk (KERN_ERR "yellowfin: cannot allocate ethernet device\n");
return -ENOMEM;
#endif
irq = pdev->irq;
- printk(KERN_INFO "%s: %s type %8x at 0x%lx, ",
- dev->name, pci_id_tbl[chip_idx].name, inl(ioaddr + ChipRev), ioaddr);
-
if (drv_flags & IsGigabit)
for (i = 0; i < 6; i++)
dev->dev_addr[i] = inb(ioaddr + StnAddr + i);
for (i = 0; i < 6; i++)
dev->dev_addr[i] = read_eeprom(ioaddr, ee_offset + i);
}
- for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
- printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
/* Reset the chip. */
outl(0x80000000, ioaddr + DMACtrl);
if (mtu)
dev->mtu = mtu;
+ i = register_netdev(dev);
+ if (i)
+ goto err_out_cleardev;
+
+ printk(KERN_INFO "%s: %s type %8x at 0x%lx, ",
+ dev->name, pci_id_tbl[chip_idx].name, inl(ioaddr + ChipRev), ioaddr);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
if (np->drv_flags & HasMII) {
int phy, phy_idx = 0;
for (phy = 0; phy < 32 && phy_idx < MII_CNT; phy++) {
return 0;
+err_out_cleardev:
+ pci_set_drvdata(pdev, NULL);
#ifndef USE_IO_OPS
+ iounmap((void *)ioaddr);
err_out_free_res:
- pci_release_regions(pdev);
#endif
+ pci_release_regions(pdev);
err_out_free_netdev:
- unregister_netdev (dev);
kfree (dev);
return -ENODEV;
}
\f
static int yellowfin_open(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
long ioaddr = dev->base_addr;
int i;
static void yellowfin_timer(unsigned long data)
{
struct net_device *dev = (struct net_device *)data;
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
long ioaddr = dev->base_addr;
int next_tick = 60*HZ;
static void yellowfin_tx_timeout(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
long ioaddr = dev->base_addr;
printk(KERN_WARNING "%s: Yellowfin transmit timed out at %d/%d Tx "
/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void yellowfin_init_ring(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
int i;
yp->tx_full = 0;
static int yellowfin_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
unsigned entry;
netif_stop_queue (dev);
after the Tx thread. */
static void yellowfin_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
{
- struct net_device *dev = (struct net_device *)dev_instance;
+ struct net_device *dev = dev_instance;
struct yellowfin_private *yp;
long ioaddr;
int boguscnt = max_interrupt_work;
#endif
ioaddr = dev->base_addr;
- yp = (struct yellowfin_private *)dev->priv;
+ yp = dev->priv;
spin_lock (&yp->lock);
for clarity and better register allocation. */
static int yellowfin_rx(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
int entry = yp->cur_rx % RX_RING_SIZE;
int boguscnt = yp->dirty_rx + RX_RING_SIZE - yp->cur_rx;
static void yellowfin_error(struct net_device *dev, int intr_status)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
dev->name, intr_status);
static int yellowfin_close(struct net_device *dev)
{
long ioaddr = dev->base_addr;
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
int i;
netif_stop_queue (dev);
static struct net_device_stats *yellowfin_get_stats(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
return &yp->stats;
}
static void set_rx_mode(struct net_device *dev)
{
- struct yellowfin_private *yp = (struct yellowfin_private *)dev->priv;
+ struct yellowfin_private *yp = dev->priv;
long ioaddr = dev->base_addr;
u16 cfg_value = inw(ioaddr + Cnfg);
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct yellowfin_private *np = (void *)dev->priv;
+ struct yellowfin_private *np = dev->priv;
long ioaddr = dev->base_addr;
u16 *data = (u16 *)&rq->ifr_data;
+2001-03-02 Tim Waugh <twaugh@redhat.com>
+
+ * ieee1284_ops.c (parport_ieee1284_read_nibble): Reset nAutoFd
+ on timeout. Matches 2.2.x behaviour.
+
+2001-03-02 Andrew Morton <andrewm@uow.edu.au>
+
+ * parport_pc.c (registered_parport): New static variable.
+ (parport_pc_find_ports): Set it when we register PCI driver.
+ (init_module): Unregister PCI driver if necessary when we
+ fail.
+
+2001-03-02 Tim Waugh <twaugh@redhat.com>
+
+ * ieee1284_ops.c (parport_ieee1284_write_compat): Don't use
+ down_trylock to reset the IRQ count. Don't even use sema_init,
+ because it's not even necessary to reset the count. I can't
+ remember why we ever did.
+
2001-01-04 Peter Osterlund <peter.osterlund@mailbox.swipnet.se>
* ieee1284.c (parport_negotiate): Fix missing printk argument.
0001 EBUS
1000 EBUS
1001 Happy Meal
+ 1100 RIO EBUS
+ 1101 RIO GEM
+ 1102 RIO 1394
+ 1103 RIO USB
+ 2bad GEM
5000 Simba Advanced PCI Bridge
5043 SunPCI Co-processor
- 8000 PCI Bus Module
+ 8000 Psycho PCI Bus Module
+ 8001 Schizo PCI Bus Module
a000 Ultra IIi
+ a001 Ultra IIe
108f Systemsoft
1090 Encore Computer Corporation
1091 Intergraph Corporation
subdir-y += audio
subdir-m += audio
-obj-$(CONFIG_SPARCAUDIO) += audio/sparcaudio.o
+
+# This is grotty but works around some problems with modules.
+ifeq ($(CONFIG_SPARCAUDIO),y)
+obj-y += audio/sparcaudio.o
+endif
include $(TOPDIR)/Rules.make
-/* $Id: cs4231.c,v 1.44 2001/02/13 01:16:59 davem Exp $
+/* $Id: cs4231.c,v 1.45 2001/03/23 08:16:13 davem Exp $
* drivers/sbus/audio/cs4231.c
*
* Copyright 1996, 1997, 1998, 1999 Derrick J Brashear (shadow@andrew.cmu.edu)
}
#endif
+#ifdef EB4231_SUPPORT
+static int __init ebus_cs4231_p(struct linux_ebus_device *edev)
+{
+ if (!strcmp(edev->prom_name, "SUNW,CS4231"))
+ return 1;
+ if (!strcmp(edev->prom_name, "audio")) {
+ char compat[16];
+
+ prom_getstring(edev->prom_node, "compatible",
+ compat, sizeof(compat));
+ compat[15] = '\0';
+ if (!strcmp(compat, "SUNW,CS4231"))
+ return 1;
+ }
+
+ return 0;
+}
+#endif
+
/* Probe for the cs4231 chip and then attach the driver. */
#ifdef MODULE
int init_module(void)
#ifdef EB4231_SUPPORT
for_each_ebus(ebus) {
for_each_ebusdev(edev, ebus) {
- if (!strcmp(edev->prom_name, "SUNW,CS4231")) {
+ if (ebus_cs4231_p(edev)) {
/* Don't go over the max number of drivers. */
if (num_drivers >= MAX_DRIVERS)
continue;
obj-$(CONFIG_ENVCTRL) += envctrl.o
obj-$(CONFIG_DISPLAY7SEG) += display7seg.o
obj-$(CONFIG_WATCHDOG_CP1XXX) += cpwatchdog.o
+obj-$(CONFIG_WATCHDOG_RIO) += riowatchdog.o
obj-$(CONFIG_OBP_FLASH) += flash.o
obj-$(CONFIG_SUN_OPENPROMIO) += openprom.o
obj-$(CONFIG_SUN_MOSTEK_RTC) += rtc.o
-/* $Id: aurora.c,v 1.10 2000/12/07 04:35:38 anton Exp $
+/* $Id: aurora.c,v 1.11 2001/03/08 01:43:30 davem Exp $
* linux/drivers/sbus/char/aurora.c -- Aurora multiport driver
*
* Copyright (c) 1999 by Oliver Aldulea (oli@bv.ro)
0, SPIN_LOCK_UNLOCKED, 0, 0, 0, 0,
};
-struct timer_list wd_timer;
+static struct timer_list wd_timer;
static int wd0_timeout = 0;
static int wd1_timeout = 0;
/* Forward declarations of internal methods
*/
-void wd_dumpregs(void);
-void wd_interrupt(int irq, void *dev_id, struct pt_regs *regs);
-void wd_toggleintr(struct wd_timer* pTimer, int enable);
-void wd_pingtimer(struct wd_timer* pTimer);
-void wd_starttimer(struct wd_timer* pTimer);
-void wd_resetbrokentimer(struct wd_timer* pTimer);
-void wd_stoptimer(struct wd_timer* pTimer);
-void wd_brokentimer(unsigned long data);
-int wd_getstatus(struct wd_timer* pTimer);
+static void wd_dumpregs(void);
+static void wd_interrupt(int irq, void *dev_id, struct pt_regs *regs);
+static void wd_toggleintr(struct wd_timer* pTimer, int enable);
+static void wd_pingtimer(struct wd_timer* pTimer);
+static void wd_starttimer(struct wd_timer* pTimer);
+static void wd_resetbrokentimer(struct wd_timer* pTimer);
+static void wd_stoptimer(struct wd_timer* pTimer);
+static void wd_brokentimer(unsigned long data);
+static int wd_getstatus(struct wd_timer* pTimer);
/* PLD expects words to be written in LSB format,
* so we must flip all words prior to writing them to regs
*/
-inline unsigned short flip_word(unsigned short word)
+static inline unsigned short flip_word(unsigned short word)
{
return ((word & 0xff) << 8) | ((word >> 8) & 0xff);
}
case WDIOC_GETSUPPORT:
if(copy_to_user((struct watchdog_info *)arg,
(struct watchdog_info *)&info,
- sizeof(struct watchdog_info *))) {
+ sizeof(struct watchdog_info))) {
return(-EFAULT);
}
break;
+ case WDIOC_GETSTATUS:
+ case WDIOC_GETBOOTSTATUS:
+ if (put_user(0, (int *) arg))
+ return -EFAULT;
+ break;
case WDIOC_KEEPALIVE:
wd_pingtimer(pTimer);
break;
return(-EINVAL);
}
- wd_pingtimer(pTimer);
- return(count);
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+
+ if (count) {
+ wd_pingtimer(pTimer);
+ return 1;
+ }
+ return 0;
}
static ssize_t wd_read(struct file * file, char * buffer,
#endif /* ifdef WD_DEBUG */
}
-void wd_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static void wd_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
/* Only WD0 will interrupt-- others are NMI and we won't
* see them here....
static struct miscdevice wd1_miscdev = { WD1_MINOR, WD1_DEVNAME, &wd_fops };
static struct miscdevice wd2_miscdev = { WD2_MINOR, WD2_DEVNAME, &wd_fops };
-void wd_dumpregs(void)
+static void wd_dumpregs(void)
{
/* Reading from downcounters initiates watchdog countdown--
* Example is included below for illustration purposes.
* pTimer - pointer to timer device, or NULL to indicate all timers
* enable - non-zero to enable interrupts, zero to disable
*/
-void wd_toggleintr(struct wd_timer* pTimer, int enable)
+static void wd_toggleintr(struct wd_timer* pTimer, int enable)
{
unsigned char curregs = wd_readb(&wd_dev.regs->pld_regs.intr_mask);
unsigned char setregs =
*
* pTimer - pointer to timer device
*/
-void wd_pingtimer(struct wd_timer* pTimer)
+static void wd_pingtimer(struct wd_timer* pTimer)
{
if(wd_readb(&pTimer->regs->status) & WD_S_RUNNING) {
wd_readb(&pTimer->regs->dcntr);
*
* pTimer - pointer to timer device
*/
-void wd_stoptimer(struct wd_timer* pTimer)
+static void wd_stoptimer(struct wd_timer* pTimer)
{
if(wd_readb(&pTimer->regs->status) & WD_S_RUNNING) {
wd_toggleintr(pTimer, WD_INTR_OFF);
* pTimer - pointer to timer device
* limit - limit (countdown) value in 1/10th seconds
*/
-void wd_starttimer(struct wd_timer* pTimer)
+static void wd_starttimer(struct wd_timer* pTimer)
{
if(wd_dev.isbaddoggie) {
pTimer->runstatus &= ~WD_STAT_BSTOP;
/* Restarts timer with maximum limit value and
* does not unset 'brokenstop' value.
*/
-void wd_resetbrokentimer(struct wd_timer* pTimer)
+static void wd_resetbrokentimer(struct wd_timer* pTimer)
{
wd_toggleintr(pTimer, WD_INTR_ON);
wd_writew(WD_BLIMIT, &pTimer->regs->limit);
/* Timer device initialization helper.
* Returns 0 on success, other on failure
*/
-int wd_inittimer(int whichdog)
+static int wd_inittimer(int whichdog)
{
struct miscdevice *whichmisc;
volatile struct wd_timer_regblk *whichregs;
- * interrupts within the PLD so me must continually
+ * interrupts within the PLD so we must continually
* reset the timers ad infinitum.
*/
-void wd_brokentimer(unsigned long data)
+static void wd_brokentimer(unsigned long data)
{
struct wd_device* pDev = (struct wd_device*)data;
int id, tripped = 0;
}
}
-int wd_getstatus(struct wd_timer* pTimer)
+static int wd_getstatus(struct wd_timer* pTimer)
{
unsigned char stat = wd_readb(&pTimer->regs->status);
unsigned char intr = wd_readb(&wd_dev.regs->pld_regs.intr_mask);
-/* $Id: pcikbd.c,v 1.51 2001/02/13 01:17:00 davem Exp $
+/* $Id: pcikbd.c,v 1.53 2001/03/21 00:28:33 davem Exp $
* pcikbd.c: Ultra/AX PC keyboard support.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
void pcikbd_leds(unsigned char leds)
{
- if(!send_data(KBD_CMD_SET_LEDS) || !send_data(leds))
+ if (!pcikbd_iobase)
+ return;
+ if (!send_data(KBD_CMD_SET_LEDS) || !send_data(leds))
send_data(KBD_CMD_ENABLE);
-
}
static int __init pcikbd_wait_for_input(void)
/* Timer routine to turn off the beep after the interval expires. */
static void pcikbd_kd_nosound(unsigned long __unused)
{
- outl(0, pcibeep_iobase);
+ if (pcibeep_iobase & 0x2UL)
+ outb(0, pcibeep_iobase);
+ else
+ outl(0, pcibeep_iobase);
}
/*
save_flags(flags); cli();
del_timer(&sound_timer);
if (hz) {
- outl(1, pcibeep_iobase);
+ if (pcibeep_iobase & 0x2UL)
+ outb(1, pcibeep_iobase);
+ else
+ outl(1, pcibeep_iobase);
if (ticks) {
sound_timer.expires = jiffies + ticks;
add_timer(&sound_timer);
}
- } else
- outl(0, pcibeep_iobase);
+ } else {
+ if (pcibeep_iobase & 0x2UL)
+ outb(0, pcibeep_iobase);
+ else
+ outl(0, pcibeep_iobase);
+ }
restore_flags(flags);
}
#endif
}
}
}
+#ifdef CONFIG_USB
+ /* We are being called for the sake of USB keyboard
+ * state initialization. So we should check for beeper
+ * device in this case.
+ */
+ edev = 0;
+ for_each_ebus(ebus) {
+ for_each_ebusdev(edev, ebus) {
+ if (!strcmp(edev->prom_name, "beep")) {
+ pcibeep_iobase = edev->resource[0].start;
+ kd_mksound = pcikbd_kd_mksound;
+ printk("8042(speaker): iobase[%016lx]\n", pcibeep_iobase);
+ return;
+ }
+ }
+ }
+
+	/* No beeper found, so complain. */
+#endif
printk("pcikbd_init_hw: no 8042 found\n");
return;
--- /dev/null
+/* $Id: riowatchdog.c,v 1.1 2001/03/24 06:04:24 davem Exp $
+ * riowatchdog.c - driver for hw watchdog inside Super I/O of RIO
+ *
+ * Copyright (C) 2001 David S. Miller (davem@redhat.com)
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/fs.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/miscdevice.h>
+
+#include <asm/io.h>
+#include <asm/ebus.h>
+#include <asm/bbc.h>
+#include <asm/oplib.h>
+#include <asm/uaccess.h>
+
+#include <asm/watchdog.h>
+
+/* RIO uses the NatSemi Super I/O power management logical device
+ * as its watchdog.
+ *
+ * When the watchdog triggers, it asserts a line to the BBC (Boot Bus
+ * Controller) of the machine. The BBC can be configured to treat the
+ * assertion of this signal in different ways. It can trigger an XIR
+ * (external CPU reset) to all the processors or it can trigger a true
+ * power-on reset which triggers the RST signal of all devices in the machine.
+ *
+ * The only Super I/O device register we care about is at index
+ * 0x05 (WDTO_INDEX) which is the watchdog time-out in minutes (1-255).
+ * If set to zero, this disables the watchdog. When set, the system
+ * must periodically (before the watchdog expires) clear (set to zero) and
+ * re-set the watchdog, or else it will trigger.
+ *
+ * There are two other indexed watchdog registers inside this Super I/O
+ * logical device, but they are unused. The first, at index 0x06 is
+ * the watchdog control and can be used to make the watchdog timer re-set
+ * when the PS/2 mouse or serial lines show activity. The second, at
+ * index 0x07 is merely a sampling of the line from the watchdog to the
+ * BBC.
+ *
+ * The watchdog device generates no interrupts.
+ */
+
+MODULE_AUTHOR("David S. Miller <davem@redhat.com>");
+MODULE_DESCRIPTION("Hardware watchdog driver for Sun RIO");
+MODULE_SUPPORTED_DEVICE("watchdog");
+
+#define RIOWD_NAME "pmc"
+#define RIOWD_MINOR 215
+
+static spinlock_t riowd_lock = SPIN_LOCK_UNLOCKED;
+
+static void *riowd_regs;
+#define WDTO_INDEX 0x05
+
+static int riowd_timeout = 1; /* in minutes */
+static int riowd_xir = 1; /* watchdog generates XIR? */
+MODULE_PARM(riowd_timeout,"i");
+MODULE_PARM_DESC(riowd_timeout, "Watchdog timeout in minutes");
+MODULE_PARM(riowd_xir,"i");
+MODULE_PARM_DESC(riowd_xir, "Watchdog generates XIR reset if non-zero");
+
+#if 0 /* Currently unused. */
+static u8 riowd_readreg(int index)
+{
+ unsigned long flags;
+ u8 ret;
+
+ spin_lock_irqsave(&riowd_lock, flags);
+ writeb(index, riowd_regs + 0);
+ ret = readb(riowd_regs + 1);
+ spin_unlock_irqrestore(&riowd_lock, flags);
+
+ return ret;
+}
+#endif
+
+static void riowd_writereg(u8 val, int index)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&riowd_lock, flags);
+ writeb(index, riowd_regs + 0);
+ writeb(val, riowd_regs + 1);
+ spin_unlock_irqrestore(&riowd_lock, flags);
+}
+
+static void riowd_pingtimer(void)
+{
+ riowd_writereg(riowd_timeout, WDTO_INDEX);
+}
+
+static void riowd_stoptimer(void)
+{
+ riowd_writereg(0, WDTO_INDEX);
+}
+
+static void riowd_starttimer(void)
+{
+ riowd_writereg(riowd_timeout, WDTO_INDEX);
+}
+
+static int riowd_open(struct inode *inode, struct file *filp)
+{
+ return 0;
+}
+
+static int riowd_release(struct inode *inode, struct file *filp)
+{
+ return 0;
+}
+
+static int riowd_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ static struct watchdog_info info = { 0, 0, "Natl. Semiconductor PC97317" };
+ unsigned int options;
+
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ if (copy_to_user((struct watchdog_info *) arg, &info, sizeof(info)))
+ return -EFAULT;
+ break;
+
+ case WDIOC_GETSTATUS:
+ case WDIOC_GETBOOTSTATUS:
+ if (put_user(0, (int *) arg))
+ return -EFAULT;
+ break;
+
+ case WDIOC_KEEPALIVE:
+ riowd_pingtimer();
+ break;
+
+ case WDIOC_SETOPTIONS:
+ if (copy_from_user(&options, (void *) arg, sizeof(options)))
+ return -EFAULT;
+
+ if (options & WDIOS_DISABLECARD)
+ riowd_stoptimer();
+ else if (options & WDIOS_ENABLECARD)
+ riowd_starttimer();
+ else
+ return -EINVAL;
+
+ break;
+
+ default:
+ return -EINVAL;
+	}
+
+ return 0;
+}
+
+static ssize_t riowd_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
+{
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+
+ if (count) {
+ riowd_pingtimer();
+ return 1;
+ }
+
+ return 0;
+}
+
+static ssize_t riowd_read(struct file *file, char *buffer, size_t count, loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+static struct file_operations riowd_fops = {
+ owner: THIS_MODULE,
+ ioctl: riowd_ioctl,
+ open: riowd_open,
+ write: riowd_write,
+ read: riowd_read,
+ release: riowd_release,
+};
+
+static struct miscdevice riowd_miscdev = { RIOWD_MINOR, RIOWD_NAME, &riowd_fops };
+
+static int __init riowd_bbc_init(void)
+{
+ struct linux_ebus *ebus = NULL;
+ struct linux_ebus_device *edev = NULL;
+ void *bbc_regs;
+ u8 val;
+
+ for_each_ebus(ebus) {
+ for_each_ebusdev(edev, ebus) {
+ if (!strcmp(edev->prom_name, "bbc"))
+ goto found_bbc;
+ }
+ }
+
+found_bbc:
+ if (!edev)
+ return -ENODEV;
+ bbc_regs = ioremap(edev->resource[0].start, BBC_REGS_SIZE);
+ if (!bbc_regs)
+ return -ENODEV;
+
+ val = readb(bbc_regs + BBC_WDACTION);
+ if (riowd_xir != 0)
+ val &= ~BBC_WDACTION_RST;
+ else
+ val |= BBC_WDACTION_RST;
+ writeb(val, bbc_regs + BBC_WDACTION);
+
+ iounmap(bbc_regs);
+ return 0;
+}
+
+static int __init riowd_init(void)
+{
+ struct linux_ebus *ebus = NULL;
+ struct linux_ebus_device *edev = NULL;
+
+ for_each_ebus(ebus) {
+ for_each_ebusdev(edev, ebus) {
+ if (!strcmp(edev->prom_name, RIOWD_NAME))
+ goto ebus_done;
+ }
+ }
+
+ebus_done:
+ if (!edev)
+ goto fail;
+
+ riowd_regs = ioremap(edev->resource[0].start, 2);
+ if (riowd_regs == NULL) {
+ printk(KERN_ERR "pmc: Cannot map registers.\n");
+ return -ENODEV;
+ }
+
+ if (riowd_bbc_init()) {
+ printk(KERN_ERR "pmc: Failure initializing BBC config.\n");
+ goto fail;
+ }
+
+ if (misc_register(&riowd_miscdev)) {
+ printk(KERN_ERR "pmc: Cannot register watchdog misc device.\n");
+ goto fail;
+ }
+
+ printk(KERN_INFO "pmc: Hardware watchdog [%i minutes, %s reset], "
+ "regs at %p\n",
+ riowd_timeout, (riowd_xir ? "XIR" : "POR"),
+ riowd_regs);
+
+ return 0;
+
+fail:
+ if (riowd_regs) {
+ iounmap(riowd_regs);
+ riowd_regs = NULL;
+ }
+ return -ENODEV;
+}
+
+static void __exit riowd_cleanup(void)
+{
+ misc_deregister(&riowd_miscdev);
+ iounmap(riowd_regs);
+ riowd_regs = NULL;
+}
+
+module_init(riowd_init);
+module_exit(riowd_cleanup);
-/* $Id: rtc.c,v 1.25 2001/02/13 01:17:00 davem Exp $
+/* $Id: rtc.c,v 1.26 2001/03/14 09:30:31 davem Exp $
*
* Linux/SPARC Real Time Clock Driver
* Copyright (C) 1996 Thomas K. Dyas (tdyas@eden.rutgers.edu)
{
int error;
- if (mstk48t02_regs == 0) {
- /* This diagnostic is a debugging aid... But a useful one. */
- printk(KERN_ERR "rtc: no Mostek in this computer\n");
+ /* It is possible we are being driven by some other RTC chip
+ * and thus another RTC driver is handling things.
+ */
+ if (mstk48t02_regs == 0)
return -ENODEV;
- }
error = misc_register(&rtc_dev);
if (error) {
-/* $Id: sab82532.c,v 1.55 2001/02/13 01:17:00 davem Exp $
+/* $Id: sab82532.c,v 1.56 2001/03/15 02:11:10 davem Exp $
* sab82532.c: ASYNC Driver for the SIEMENS SAB82532 DUSCC.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
for_each_ebusdev(edev, ebus) {
if (!strcmp(edev->prom_name, "se"))
goto ebus_done;
+
+ if (!strcmp(edev->prom_name, "serial")) {
+ char compat[32];
+ int clen;
+
+ /* On RIO this can be an SE, check it. We could
+ * just check ebus->is_rio, but this is more portable.
+ */
+ clen = prom_getproperty(edev->prom_node, "compatible",
+ compat, sizeof(compat));
+ if (clen > 0) {
+ if (strncmp(compat, "sab82532", 8) == 0) {
+ /* Yep. */
+ goto ebus_done;
+ }
+ }
+ }
}
}
ebus_done:
static inline void __init show_serial_version(void)
{
- char *revision = "$Revision: 1.55 $";
+ char *revision = "$Revision: 1.56 $";
char *version, *p;
version = strchr(revision, ' ');
* For each EBus on this PCI...
*/
while (enode) {
- snode = prom_getchild(enode);
- snode = prom_searchsiblings(snode, "se");
+ int child;
+
+ child = prom_getchild(enode);
+ snode = prom_searchsiblings(child, "se");
if (snode)
goto found;
+ snode = prom_searchsiblings(child, "serial");
+ if (snode) {
+ char compat[32];
+ int clen;
+
+ clen = prom_getproperty(snode, "compatible",
+ compat, sizeof(compat));
+ if (clen > 0) {
+ if (strncmp(compat, "sab82532", 8) == 0)
+ goto found;
+ }
+ }
+
enode = prom_getsibling(enode);
enode = prom_searchsiblings(enode, "ebus");
}
-/* $Id: su.c,v 1.44 2001/02/13 01:17:00 davem Exp $
+/* $Id: su.c,v 1.45 2001/03/15 02:11:10 davem Exp $
* su.c: Small serial driver for keyboard/mouse interface on sparc32/PCI
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
do {
ch = serial_inp(info, UART_RX);
if (info->port_type == SU_PORT_KBD) {
- if(ch == SUNKBD_RESET) {
+ if (ch == SUNKBD_RESET) {
l1a_state.kbd_id = 1;
l1a_state.l1_down = 0;
- } else if(l1a_state.kbd_id) {
+ } else if (l1a_state.kbd_id) {
l1a_state.kbd_id = 0;
- } else if(ch == SUNKBD_L1) {
+ } else if (ch == SUNKBD_L1) {
l1a_state.l1_down = 1;
- } else if(ch == (SUNKBD_L1|SUNKBD_UP)) {
+ } else if (ch == (SUNKBD_L1|SUNKBD_UP)) {
l1a_state.l1_down = 0;
- } else if(ch == SUNKBD_A && l1a_state.l1_down) {
+ } else if (ch == SUNKBD_A && l1a_state.l1_down) {
/* whee... */
batten_down_hatches();
/* Continue execution... */
return;
info->cflag &= ~(CBAUDEX | CBAUD);
- switch(baud) {
+ switch (baud) {
case 1200:
info->cflag |= B1200;
break;
*/
static __inline__ void __init show_su_version(void)
{
- char *revision = "$Revision: 1.44 $";
+ char *revision = "$Revision: 1.45 $";
char *version, *p;
version = strchr(revision, ' ');
return 0;
}
+static int su_node_ok(int node, char *name, int namelen)
+{
+ if (strncmp(name, "su", namelen) == 0 ||
+ strncmp(name, "su_pnp", namelen) == 0)
+ return 1;
+
+ if (strncmp(name, "serial", namelen) == 0) {
+ char compat[32];
+ int clen;
+
+ /* Is it _really_ a 'su' device? */
+ clen = prom_getproperty(node, "compatible", compat, sizeof(compat));
+ if (clen > 0) {
+ if (strncmp(compat, "sab82532", 8) == 0) {
+ /* Nope, Siemens serial, not for us. */
+ return 0;
+ }
+ }
+ return 1;
+ }
+
+ return 0;
+}
+
/*
* We got several platforms which present 'su' in different parts
* of device tree. 'su' may be found under obio, ebus, isa and pci.
for (; sunode != 0; sunode = prom_getsibling(sunode)) {
len = prom_getproperty(sunode, "name", t->prop, SU_PROPSIZE);
if (len <= 1) continue; /* Broken PROM node */
- if (strncmp(t->prop, "su", len) == 0 ||
- strncmp(t->prop, "serial", len) == 0 ||
- strncmp(t->prop, "su_pnp", len) == 0) {
+ if (su_node_ok(sunode, t->prop, len)) {
info = &su_table[t->devices];
if (t->kbnode != 0 && sunode == t->kbnode) {
t->kbx = t->devices;
if (options) {
baud = simple_strtoul(options, NULL, 10);
s = options;
- while(*s >= '0' && *s <= '9')
+ while (*s >= '0' && *s <= '9')
s++;
if (*s) parity = *s++;
if (*s) bits = *s - '0';
/*
* Now construct a cflag setting.
*/
- switch(baud) {
+ switch (baud) {
case 1200:
cflag |= B1200;
break;
cflag |= B9600;
break;
}
- switch(bits) {
+ switch (bits) {
case 7:
cflag |= CS7;
break;
cflag |= CS8;
break;
}
- switch(parity) {
+ switch (parity) {
case 'o': case 'O':
cflag |= PARODD;
break;
-/* $Id: sunserial.c,v 1.75 2000/03/22 02:45:36 davem Exp $
+/* $Id: sunserial.c,v 1.78 2001/03/21 22:43:11 davem Exp $
* serial.c: Serial port driver infrastructure for the Sparc.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
nop_rs_read_proc
};
-int rs_init(void)
+void rs_init(void)
{
- struct initfunc *init;
- int err = -ENODEV;
+ static int invoked = 0;
- init = rs_ops.rs_init;
- while (init) {
- err = init->init();
- init = init->next;
+ if (!invoked) {
+ struct initfunc *init;
+
+ invoked = 1;
+
+ init = rs_ops.rs_init;
+ while (init) {
+ (void) init->init();
+ init = init->next;
+ }
}
- return err;
}
-__initcall(rs_init);
-
void __init rs_kgdb_hook(int channel)
{
rs_ops.rs_kgdb_hook(channel);
nop_getkeycode
};
+#ifdef CONFIG_USB
+extern void pci_compute_shiftstate(void);
+extern int pci_setkeycode(unsigned int, unsigned int);
+extern int pci_getkeycode(unsigned int);
+extern void pci_setledstate(struct kbd_struct *, unsigned int);
+extern unsigned char pci_getledstate(void);
+extern int pcikbd_init(void);
+#endif
+
int kbd_init(void)
{
struct initfunc *init;
err = init->init();
init = init->next;
}
+#ifdef CONFIG_USB
+ if (!serial_console &&
+ kbd_ops.compute_shiftstate == nop_compute_shiftstate) {
+ printk("kbd_init: Assuming USB keyboard.\n");
+ kbd_ops.compute_shiftstate = pci_compute_shiftstate;
+ kbd_ops.setledstate = pci_setledstate;
+ kbd_ops.getledstate = pci_getledstate;
+ kbd_ops.setkeycode = pci_setkeycode;
+ kbd_ops.getkeycode = pci_getkeycode;
+ pcikbd_init();
+ }
+#endif
return err;
}
-/* $Id: sbus.c,v 1.94 2001/02/13 07:34:40 davem Exp $
+/* $Id: sbus.c,v 1.95 2001/03/15 02:11:10 davem Exp $
* sbus.c: SBus support routines.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
}
extern void register_proc_sparc_ioport(void);
+extern void firetruck_init(void);
+extern void rs_init(void);
void __init sbus_init(void)
{
prom_halt();
} else {
#ifdef __sparc_v9__
- extern void firetruck_init(void);
firetruck_init();
#endif
}
sun4d_init_sbi_irq();
}
+ rs_init();
+
#ifdef __sparc_v9__
if (sparc_cpu_model == sun4u) {
- extern void firetruck_init(void);
-
firetruck_init();
}
#endif
#include <linux/smp.h>
#define cpuid smp_processor_id()
-char kernel_version[] = UTS_RELEASE;
MODULE_AUTHOR ("American Megatrends Inc.");
MODULE_DESCRIPTION ("AMI MegaRAID driver");
#include <linux/smp.h>
#define cpuid smp_processor_id()
-char kernel_version[] = UTS_RELEASE;
MODULE_AUTHOR ("American Megatrends Inc.");
MODULE_DESCRIPTION ("AMI MegaRAID driver");
{
struct i810_card *card;
- if (!pci_dma_supported(pci_dev, I810_DMA_MASK)) {
+ if (pci_enable_device(pci_dev))
+ return -EIO;
+
+ if (pci_set_dma_mask(pci_dev, I810_DMA_MASK)) {
printk(KERN_ERR "intel810: architecture does not support"
" 32bit PCI busmaster DMA\n");
return -ENODEV;
}
- if (pci_enable_device(pci_dev))
- return -EIO;
if ((card = kmalloc(sizeof(struct i810_card), GFP_KERNEL)) == NULL) {
printk(KERN_ERR "i810_audio: out of memory\n");
return -ENOMEM;
return -ENODEV;
}
pci_dev->driver_data = card;
- pci_dev->dma_mask = I810_DMA_MASK;
return 0;
}
usb_rcvbulkpipe (camera->dev, camera->inEP),
camera->buf, len, &count, HZ*10);
- dbg ("read (%d) - 0x%x %d", len, retval, count);
+ dbg ("read (%Zd) - 0x%x %d", len, retval, count);
if (!retval) {
if (copy_to_user (buf, camera->buf, count))
break;
interruptible_sleep_on_timeout (&camera->wait, RETRY_TIMEOUT);
- dbg ("read (%d) - retry", len);
+ dbg ("read (%Zd) - retry", len);
}
up (&camera->sem);
return retval;
}
done:
up (&camera->sem);
- dbg ("wrote %d", bytes_written);
+ dbg ("wrote %Zd", bytes_written);
return bytes_written;
}
return -1;
}
+ le16_to_cpus(&hub->descriptor->wHubCharacteristics);
+
hub->nports = dev->maxchild = hub->descriptor->bNbrPorts;
info("%d port%s detected", hub->nports, (hub->nports == 1) ? "" : "s");
if (ret < 0)
err("unable to get device descriptor (error=%d)", ret);
else
- err("USB device descriptor short read (expected %i, got %i)", sizeof(dev->descriptor), ret);
+ err("USB device descriptor short read (expected %Zi, got %i)", sizeof(dev->descriptor), ret);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
MODULE_AUTHOR("Mark McClelland <mwm@i.am> & Bret Wallach & Orion Sky Lawlor <olawlor@acm.org> & Kevin Moore & Charl P. Botha <cpbotha@ieee.org> & Claudio Matsuoka <claudio@conectiva.com>");
MODULE_DESCRIPTION("OV511 USB Camera Driver");
-char kernel_version[] = UTS_RELEASE;
-
static struct usb_driver ov511_driver;
/* I know, I know, global variables suck. This is only a temporary hack */
/* --------------------------------------------------------------------- */
-static void *plusb_probe (struct usb_device *usbdev, unsigned int ifnum)
+static void *plusb_probe (struct usb_device *usbdev, unsigned int ifnum, const struct usb_device_id *id)
{
plusb_t *s;
if (result == -EPIPE) { /* No hope */
if(usb_clear_halt(dev, scn->bulk_in_ep)) {
- err("read_scanner(%d): Failure to clear endpoint halt condition (%d).", scn_minor, ret);
+ err("read_scanner(%d): Failure to clear endpoint halt condition (%Zd).", scn_minor, ret);
}
ret = result;
break;
priv = serial->port->private = kmalloc(sizeof(struct ftdi_private), GFP_KERNEL);
if (!priv){
- err(__FUNCTION__"- kmalloc(%d) failed.", sizeof(struct ftdi_private));
+ err(__FUNCTION__"- kmalloc(%Zd) failed.", sizeof(struct ftdi_private));
return -ENOMEM;
}
priv = serial->port->private = kmalloc(sizeof(struct ftdi_private), GFP_KERNEL);
if (!priv){
- err(__FUNCTION__"- kmalloc(%d) failed.", sizeof(struct ftdi_private));
+ err(__FUNCTION__"- kmalloc(%Zd) failed.", sizeof(struct ftdi_private));
return -ENOMEM;
}
#include <linux/init.h>
#include <linux/malloc.h>
#include <linux/fcntl.h>
+#include <linux/tty.h>
#include <linux/tty_driver.h>
#include <linux/tty_flip.h>
-#include <linux/tty.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/serial.h>
od = kmalloc( sizeof(struct omninet_data), GFP_KERNEL );
if( !od ) {
- err(__FUNCTION__"- kmalloc(%d) failed.", sizeof(struct omninet_data));
+ err(__FUNCTION__"- kmalloc(%Zd) failed.", sizeof(struct omninet_data));
--port->open_count;
port->active = 0;
spin_unlock_irqrestore (&port->port_lock, flags);
if (err < 0)
err("unable to get device descriptor (error=%d)", err);
else
- err("USB device descriptor short read (expected %i, got %i)",
+ err("USB device descriptor short read (expected %Zi, got %i)",
sizeof(dev->descriptor), err);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
return 0;
i = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 1), (void *)buf, length, &rlen, HZ*20);
if (i)
- printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %u rlen %u\n", buf, length, rlen);
+ printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %Zu rlen %u\n", buf, length, rlen);
change_mode(pp, ECR_PS2);
return rlen;
#endif
return 0;
i = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 1), (void *)buffer, len, &rlen, HZ*20);
if (i)
- printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %u rlen %u\n", buffer, len, rlen);
+ printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %Zu rlen %u\n", buffer, len, rlen);
change_mode(pp, ECR_PS2);
return rlen;
}
return 0;
i = usb_bulk_msg(usbdev, usb_rcvbulkpipe(usbdev, 2), buffer, len, &rlen, HZ*20);
if (i)
- printk(KERN_ERR "uss720: recvbulk ep 2 buf %p len %u rlen %u\n", buffer, len, rlen);
+ printk(KERN_ERR "uss720: recvbulk ep 2 buf %p len %Zu rlen %u\n", buffer, len, rlen);
change_mode(pp, ECR_PS2);
return rlen;
}
return 0;
i = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 1), (void *)buffer, len, &rlen, HZ*20);
if (i)
- printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %u rlen %u\n", buffer, len, rlen);
+ printk(KERN_ERR "uss720: sendbulk ep 1 buf %p len %Zu rlen %u\n", buffer, len, rlen);
change_mode(pp, ECR_PS2);
return rlen;
}
-/* $Id: creatorfb.c,v 1.33 2001/02/13 01:17:14 davem Exp $
+/* $Id: creatorfb.c,v 1.34 2001/03/16 10:22:02 davem Exp $
* creatorfb.c: Creator/Creator3D frame buffer driver
*
* Copyright (C) 1997,1998,1999 Jakub Jelinek (jj@ultra.linux.cz)
static char idstring[60] __initdata = { 0 };
+static int __init creator_apply_upa_parent_ranges(int parent, struct linux_prom64_registers *regs)
+{
+ struct linux_prom64_ranges ranges[PROMREG_MAX];
+ char name[128];
+ int len, i;
+
+ prom_getproperty(parent, "name", name, sizeof(name));
+ if (strcmp(name, "upa") != 0)
+ return 0;
+
+ len = prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges));
+ if (len <= 0)
+ return 1;
+
+ len /= sizeof(struct linux_prom64_ranges);
+ for (i = 0; i < len; i++) {
+ struct linux_prom64_ranges *rng = &ranges[i];
+ u64 phys_addr = regs->phys_addr;
+
+ if (phys_addr >= rng->ot_child_base &&
+ phys_addr < (rng->ot_child_base + rng->or_size)) {
+ regs->phys_addr -= rng->ot_child_base;
+ regs->phys_addr += rng->ot_parent_base;
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
char __init *creatorfb_init(struct fb_info_sbusfb *fb)
{
struct fb_fix_screeninfo *fix = &fb->fix;
if (prom_getproperty(fb->prom_node, "reg", (void *) regs, sizeof(regs)) <= 0)
return NULL;
+
+	if (creator_apply_upa_parent_ranges(fb->prom_parent, &regs[0]))
+ return NULL;
disp->dispsw_data = (void *)kmalloc(16 * sizeof(u32), GFP_KERNEL);
if (disp->dispsw_data == NULL)
return FBTYPE_NOTYPE;
}
+#ifdef CONFIG_FB_CREATOR
+static void creator_fb_scan_siblings(int root)
+{
+ int node, child;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
+ sbusfb_init_fb(node, root, FBTYPE_CREATOR, NULL);
+ for (node = prom_searchsiblings(child, "SUNW,afb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,afb"))
+ sbusfb_init_fb(node, root, FBTYPE_CREATOR, NULL);
+}
+
+static void creator_fb_scan(void)
+{
+ int root;
+
+ creator_fb_scan_siblings(prom_root_node);
+
+ root = prom_getchild(prom_root_node);
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ creator_fb_scan_siblings(root);
+}
+#endif
+
int __init sbusfb_init(void)
{
int type;
if (!con_is_present()) return -ENXIO;
#ifdef CONFIG_FB_CREATOR
- {
- int root, node;
- root = prom_getchild(prom_root_node);
- for (node = prom_searchsiblings(root, "SUNW,ffb"); node;
- node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
- sbusfb_init_fb(node, prom_root_node, FBTYPE_CREATOR, NULL);
- for (node = prom_searchsiblings(root, "SUNW,afb"); node;
- node = prom_searchsiblings(prom_getsibling(node), "SUNW,afb"))
- sbusfb_init_fb(node, prom_root_node, FBTYPE_CREATOR, NULL);
- }
+ creator_fb_scan();
#endif
#ifdef CONFIG_SUN4
sbusfb_init_fb(0, 0, FBTYPE_SUN2BW, NULL);
return retval;
fail:
+ if (fstype->fs_flags & FS_SINGLE)
+ put_filesystem(fstype);
if (list_empty(&sb->s_mounts))
kill_super(sb, 0);
goto unlock_out;
+++ /dev/null
-#ifndef _I386_PGALLOC_2LEVEL_H
-#define _I386_PGALLOC_2LEVEL_H
-
-/*
- * traditional i386 two-level paging, page table allocation routines:
- * We don't have any real pmd's, and this code never triggers because
- * the pgd will always be present..
- */
-#define pmd_alloc_one_fast() ({ BUG(); ((pmd_t *)1); })
-#define pmd_alloc_one() ({ BUG(); ((pmd_t *)2); })
-#define pmd_free_slow(x) do { } while (0)
-#define pmd_free_fast(x) do { } while (0)
-#define pmd_free(x) do { } while (0)
-#define pgd_populate(pmd, pte) BUG()
-
-#endif /* _I386_PGALLOC_2LEVEL_H */
+++ /dev/null
-#ifndef _I386_PGALLOC_3LEVEL_H
-#define _I386_PGALLOC_3LEVEL_H
-
-/*
- * Intel Physical Address Extension (PAE) Mode - three-level page
- * tables on PPro+ CPUs. Page-table allocation routines.
- *
- * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
- */
-
-extern __inline__ pmd_t *pmd_alloc_one(void)
-{
- pmd_t *ret = (pmd_t *)__get_free_page(GFP_KERNEL);
-
- if (ret)
- memset(ret, 0, PAGE_SIZE);
- return ret;
-}
-
-extern __inline__ pmd_t *pmd_alloc_one_fast(void)
-{
- unsigned long *ret;
-
- if ((ret = pmd_quicklist) != NULL) {
- pmd_quicklist = (unsigned long *)(*ret);
- ret[0] = 0;
- pgtable_cache_size--;
- } else
- ret = (unsigned long *)get_pmd_slow();
- return (pmd_t *)ret;
-}
-
-extern __inline__ void pmd_free_fast(pmd_t *pmd)
-{
- *(unsigned long *)pmd = (unsigned long) pmd_quicklist;
- pmd_quicklist = (unsigned long *) pmd;
- pgtable_cache_size++;
-}
-
-extern __inline__ void pmd_free_slow(pmd_t *pmd)
-{
- free_page((unsigned long)pmd);
-}
-
-#endif /* _I386_PGALLOC_3LEVEL_H */
#define pte_quicklist (current_cpu_data.pte_quick)
#define pgtable_cache_size (current_cpu_data.pgtable_cache_sz)
-#define pmd_populate(pmd, pte) set_pmd(pmd, __pmd(_PAGE_TABLE + __pa(pte)))
-
-#if CONFIG_X86_PAE
-# include <asm/pgalloc-3level.h>
-#else
-# include <asm/pgalloc-2level.h>
-#endif
+#define pmd_populate(pmd, pte) \
+ set_pmd(pmd, __pmd(_PAGE_TABLE + __pa(pte)))
/*
- * Allocate and free page tables. The xxx_kernel() versions are
- * used to allocate a kernel page table - this turns on ASN bits
- * if any.
+ * Allocate and free page tables.
*/
+#if CONFIG_X86_PAE
+
+extern void *kmalloc(size_t, int);
+extern void kfree(const void *);
+
extern __inline__ pgd_t *get_pgd_slow(void)
{
- pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
+ int i;
+ pgd_t *pgd = kmalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
+
+ if (pgd) {
+ for (i = 0; i < USER_PTRS_PER_PGD; i++) {
+ unsigned long pmd = __get_free_page(GFP_KERNEL);
+ if (!pmd)
+ goto out_oom;
+ clear_page(pmd);
+ set_pgd(pgd + i, __pgd(1 + __pa(pmd)));
+ }
+ memcpy(pgd + USER_PTRS_PER_PGD, swapper_pg_dir + USER_PTRS_PER_PGD, (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+ }
+ return pgd;
+out_oom:
+ for (i--; i >= 0; i--)
+ free_page((unsigned long)__va(pgd_val(pgd[i])-1));
+ kfree(pgd);
+ return NULL;
+}
- if (ret) {
-#if CONFIG_X86_PAE
- int i;
- for (i = 0; i < USER_PTRS_PER_PGD; i++)
- __pgd_clear(ret + i);
#else
- memset(ret, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
-#endif
- memcpy(ret + USER_PTRS_PER_PGD, swapper_pg_dir + USER_PTRS_PER_PGD, (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+
+extern __inline__ pgd_t *get_pgd_slow(void)
+{
+ pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+
+ if (pgd) {
+ memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+ memcpy(pgd + USER_PTRS_PER_PGD, swapper_pg_dir + USER_PTRS_PER_PGD, (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
}
- return ret;
+ return pgd;
}
+#endif
+
extern __inline__ pgd_t *get_pgd_fast(void)
{
unsigned long *ret;
extern __inline__ void free_pgd_slow(pgd_t *pgd)
{
+#if CONFIG_X86_PAE
+ int i;
+
+ for (i = 0; i < USER_PTRS_PER_PGD; i++)
+ free_page((unsigned long)__va(pgd_val(pgd[i])-1));
+ kfree(pgd);
+#else
free_page((unsigned long)pgd);
+#endif
}
-static inline pte_t *pte_alloc_one(void)
+static inline pte_t *pte_alloc_one(unsigned long address)
{
pte_t *pte;
pte = (pte_t *) __get_free_page(GFP_KERNEL);
- clear_page(pte);
+ if (pte)
+ clear_page(pte);
return pte;
}
-static inline pte_t *pte_alloc_one_fast(void)
+static inline pte_t *pte_alloc_one_fast(unsigned long address)
{
unsigned long *ret;
free_page((unsigned long)pte);
}
-#define pte_free_kernel(pte) pte_free_slow(pte)
#define pte_free(pte) pte_free_slow(pte)
#define pgd_free(pgd) free_pgd_slow(pgd)
#define pgd_alloc() get_pgd_fast()
/*
* allocating and freeing a pmd is trivial: the 1-entry pmd is
* inside the pgd, so has no extra memory associated with it.
- * (In the PAE case we free the page.)
+ * (In the PAE case we free the pmds as part of the pgd.)
*/
-#define pmd_free_one(pmd) free_pmd_slow(pmd)
-#define pmd_free_kernel pmd_free
-#define pmd_alloc_kernel pmd_alloc
+#define pmd_alloc_one_fast() ({ BUG(); ((pmd_t *)1); })
+#define pmd_alloc_one() ({ BUG(); ((pmd_t *)2); })
+#define pmd_free_slow(x) do { } while (0)
+#define pmd_free_fast(x) do { } while (0)
+#define pmd_free(x) do { } while (0)
+#define pgd_populate(pmd, pte) BUG()
extern int do_check_pgt_cache(int, int);
#define pgd_ERROR(e) \
printk("%s:%d: bad pgd %p(%016Lx).\n", __FILE__, __LINE__, &(e), pgd_val(e))
-/*
- * Subtle, in PAE mode we cannot have zeroes in the top level
- * page directory, the CPU enforces this. (ie. the PGD entry
- * always has to have the present bit set.) The CPU caches
- * the 4 pgd entries internally, so there is no extra memory
- * load on TLB miss, despite one more level of indirection.
- */
-#define EMPTY_PGD (__pa(empty_zero_page) + 1)
-#define pgd_none(x) (pgd_val(x) == EMPTY_PGD)
+extern inline int pgd_none(pgd_t pgd) { return 0; }
extern inline int pgd_bad(pgd_t pgd) { return 0; }
-extern inline int pgd_present(pgd_t pgd) { return !pgd_none(pgd); }
+extern inline int pgd_present(pgd_t pgd) { return 1; }
/* Rules for using set_pte: the pte being assigned *must* be
* either not present or in a state where the hardware will
set_64bit((unsigned long long *)(pgdptr),pgd_val(pgdval))
/*
- * Pentium-II errata A13: in PAE mode we explicitly have to flush
- * the TLB via cr3 if the top-level pgd is changed... This was one tough
- * thing to find out - guess i should first read all the documentation
- * next time around ;)
+ * Pentium-II erratum A13: in PAE mode we explicitly have to flush
+ * the TLB via cr3 if the top-level pgd is changed...
+ * We do not let the generic code free and clear pgd entries due to
+ * this erratum.
*/
-extern inline void __pgd_clear (pgd_t * pgd)
-{
- set_pgd(pgd, __pgd(EMPTY_PGD));
-}
-
-extern inline void pgd_clear (pgd_t * pgd)
-{
- __pgd_clear(pgd);
- __flush_tlb();
-}
+extern inline void pgd_clear (pgd_t * pgd) { }
#define pgd_page(pgd) \
((unsigned long) __va(pgd_val(pgd) & PAGE_MASK))
/* page table for 0-4MB for everybody */
extern unsigned long pg0[1024];
-/*
- * Handling allocation failures during page table setup.
- */
-extern void __handle_bad_pmd(pmd_t * pmd);
-extern void __handle_bad_pmd_kernel(pmd_t * pmd);
-
#define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
#define pte_clear(xp) do { set_pte(xp, __pte(0)); } while (0)
extern void __bad_pte(pmd_t *pmd);
-/* We don't use pmd cache, so this is a dummy routine */
-extern __inline__ pmd_t *get_pmd_fast(void)
-{
- return (pmd_t *)0;
-}
-
-extern __inline__ void free_pmd_fast(pmd_t *pmd)
-{
-}
-
-extern __inline__ void free_pmd_slow(pmd_t *pmd)
-{
-}
-
-/*
- * allocating and freeing a pmd is trivial: the 1-entry pmd is
- * inside the pgd, so has no extra memory associated with it.
- */
-extern inline void pmd_free(pmd_t * pmd)
-{
-}
-
-extern inline pmd_t * pmd_alloc(pgd_t * pgd, unsigned long address)
-{
- return (pmd_t *) pgd;
-}
-
-#define pmd_free_kernel pmd_free
-#define pmd_alloc_kernel pmd_alloc
-#define pte_alloc_kernel pte_alloc
-
extern __inline__ pgd_t *get_pgd_slow(void)
{
- pgd_t *ret, *init;
- /*if ( (ret = (pgd_t *)get_zero_page_fast()) == NULL )*/
- if ( (ret = (pgd_t *)__get_free_page(GFP_KERNEL)) != NULL )
- memset (ret, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
- if (ret) {
- init = pgd_offset(&init_mm, 0);
- memcpy (ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
- (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
- }
+ pgd_t *ret;
+
+ if ((ret = (pgd_t *)__get_free_page(GFP_KERNEL)) != NULL)
+ clear_page(ret);
return ret;
}
free_page((unsigned long)pgd);
}
-extern pte_t *get_pte_slow(pmd_t *pmd, unsigned long address_preadjusted);
+#define pgd_free(pgd) free_pgd_fast(pgd)
+#define pgd_alloc() get_pgd_fast()
+
+/*
+ * We don't have any real pmd's, and this code never triggers because
+ * the pgd will always be present..
+ */
+#define pmd_alloc_one_fast() ({ BUG(); ((pmd_t *)1); })
+#define pmd_alloc_one() ({ BUG(); ((pmd_t *)2); })
+#define pmd_free(x) do { } while (0)
+#define pgd_populate(pmd, pte) BUG()
-extern __inline__ pte_t *get_pte_fast(void)
+static inline pte_t *pte_alloc_one(unsigned long address)
+{
+ pte_t *pte;
+ extern int mem_init_done;
+ extern void *early_get_page(void);
+
+ if (mem_init_done)
+ pte = (pte_t *) __get_free_page(GFP_KERNEL);
+ else
+ pte = (pte_t *) early_get_page();
+ if (pte != NULL)
+ clear_page(pte);
+ return pte;
+}
+
+static inline pte_t *pte_alloc_one_fast(unsigned long address)
{
unsigned long *ret;
return (pte_t *)ret;
}
-extern __inline__ void free_pte_fast(pte_t *pte)
+extern __inline__ void pte_free_fast(pte_t *pte)
{
*(unsigned long **)pte = pte_quicklist;
pte_quicklist = (unsigned long *) pte;
pgtable_cache_size++;
}
-extern __inline__ void free_pte_slow(pte_t *pte)
+extern __inline__ void pte_free_slow(pte_t *pte)
{
free_page((unsigned long)pte);
}
-#define pte_free_kernel(pte) free_pte_fast(pte)
-#define pte_free(pte) free_pte_fast(pte)
-#define pgd_free(pgd) free_pgd_fast(pgd)
-#define pgd_alloc() get_pgd_fast()
+#define pte_free(pte) pte_free_slow(pte)
-extern inline pte_t * pte_alloc(pmd_t * pmd, unsigned long address)
-{
- address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
- if (pmd_none(*pmd)) {
- pte_t * page = (pte_t *) get_pte_fast();
-
- if (!page)
- return get_pte_slow(pmd, address);
- pmd_val(*pmd) = (unsigned long) page;
- return page + address;
- }
- if (pmd_bad(*pmd)) {
- __bad_pte(pmd);
- return NULL;
- }
- return (pte_t *) pmd_page(*pmd) + address;
-}
+#define pmd_populate(pmd, pte) (pmd_val(*(pmd)) = (unsigned long) (pte))
extern int do_check_pgt_cache(int, int);
#endif
};
-#define __RWSEM_INITIALIZER(name, rd, wr) \
+#define RW_LOCK_BIAS 2 /* XXX bogus */
+#define __RWSEM_INITIALIZER(name, count) \
{ \
SPIN_LOCK_UNLOCKED, \
- (rd), (wr), \
+ (count) == 1, (count) == 0, \
__WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
__SEM_DEBUG_INIT(name) \
}
-#define __DECLARE_RWSEM_GENERIC(name, rd, wr) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name, rd, wr)
+#define __DECLARE_RWSEM_GENERIC(name, count) \
+ struct rw_semaphore name = __RWSEM_INITIALIZER(name, count)
-#define DECLARE_RWSEM(name) __DECLARE_RWSEM_GENERIC(name, 0, 0)
-#define DECLARE_RWSEM_READ_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, 1, 0)
-#define DECLARE_RWSEM_WRITE_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, 0, 1)
+#define DECLARE_RWSEM(name) __DECLARE_RWSEM_GENERIC(name, RW_LOCK_BIAS)
+#define DECLARE_RWSEM_READ_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, RW_LOCK_BIAS-1)
+#define DECLARE_RWSEM_WRITE_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, 0)
extern inline void init_rwsem(struct rw_semaphore *sem)
{
/* atomic.h: These still suck, but the I-cache hit rate is higher.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
+ * Copyright (C) 2000 Anton Blanchard (anton@linuxcare.com.au)
*/
#ifndef __ARCH_SPARC_ATOMIC__
#define atomic_set(v, i) (((v)->counter) = ((i) << 8))
#endif
-/* Make sure gcc doesn't try to be clever and move things around
- * on us. We need to use _exactly_ the address the user gave us,
- * not some alias that contains the same information.
- */
-#define __atomic_fool_gcc(x) ((struct { int a[100]; } *)x)
-
-static __inline__ void atomic_add(int i, atomic_t *v)
-{
- register atomic_t *ptr asm("g1");
- register int increment asm("g2");
- ptr = (atomic_t *) __atomic_fool_gcc(v);
- increment = i;
-
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_add
- add %%o7, 8, %%o7
-" : "=&r" (increment)
- : "0" (increment), "r" (ptr)
- : "g3", "g4", "g7", "memory", "cc");
-}
-
-static __inline__ void atomic_sub(int i, atomic_t *v)
+static __inline__ int __atomic_add(int i, atomic_t *v)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
- ptr = (atomic_t *) __atomic_fool_gcc(v);
- increment = i;
-
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_sub
- add %%o7, 8, %%o7
-" : "=&r" (increment)
- : "0" (increment), "r" (ptr)
- : "g3", "g4", "g7", "memory", "cc");
-}
-
-static __inline__ int atomic_add_return(int i, atomic_t *v)
-{
- register atomic_t *ptr asm("g1");
- register int increment asm("g2");
-
- ptr = (atomic_t *) __atomic_fool_gcc(v);
+ ptr = &v->counter;
increment = i;
__asm__ __volatile__("
return increment;
}
-static __inline__ int atomic_sub_return(int i, atomic_t *v)
+static __inline__ int __atomic_sub(int i, atomic_t *v)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
- ptr = (atomic_t *) __atomic_fool_gcc(v);
+ ptr = &v->counter;
increment = i;
__asm__ __volatile__("
return increment;
}
-#define atomic_dec_return(v) atomic_sub_return(1,(v))
-#define atomic_inc_return(v) atomic_add_return(1,(v))
+#define atomic_add(i, v) ((void)__atomic_add((i), (v)))
+#define atomic_sub(i, v) ((void)__atomic_sub((i), (v)))
+
+#define atomic_dec_return(v) __atomic_sub(1, (v))
+#define atomic_inc_return(v) __atomic_add(1, (v))
-#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0)
-#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+#define atomic_sub_and_test(i, v) (__atomic_sub((i), (v)) == 0)
+#define atomic_dec_and_test(v) (__atomic_sub(1, (v)) == 0)
-#define atomic_inc(v) atomic_add(1,(v))
-#define atomic_dec(v) atomic_sub(1,(v))
+#define atomic_inc(v) ((void)__atomic_add(1, (v)))
+#define atomic_dec(v) ((void)__atomic_sub(1, (v)))
-#define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0)
+#define atomic_add_negative(i, v) (__atomic_add((i), (v)) < 0)
#endif /* !(__KERNEL__) */
static inline void down(struct semaphore * sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &(sem->count.counter);
increment = 1;
__asm__ __volatile__("
static inline int down_interruptible(struct semaphore * sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &(sem->count.counter);
increment = 1;
__asm__ __volatile__("
static inline int down_trylock(struct semaphore * sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &(sem->count.counter);
increment = 1;
__asm__ __volatile__("
static inline void up(struct semaphore * sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
register int increment asm("g2");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &(sem->count.counter);
increment = 1;
__asm__ __volatile__("
static inline void down_read(struct rw_semaphore *sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &sem->count;
__asm__ __volatile__("
mov %%o7, %%g4
static inline void down_write(struct rw_semaphore *sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
#if WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &sem->count;
__asm__ __volatile__("
mov %%o7, %%g4
*/
static inline void __up_read(struct rw_semaphore *sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &sem->count;
__asm__ __volatile__("
mov %%o7, %%g4
*/
static inline void __up_write(struct rw_semaphore *sem)
{
- register atomic_t *ptr asm("g1");
+ register volatile int *ptr asm("g1");
- ptr = (atomic_t *) __atomic_fool_gcc(sem);
+ ptr = &sem->count;
__asm__ __volatile__("
mov %%o7, %%g4
-/* $Id: asi.h,v 1.2 2001/03/01 21:28:37 davem Exp $ */
+/* $Id: asi.h,v 1.4 2001/03/15 02:08:46 davem Exp $ */
#ifndef _SPARC64_ASI_H
#define _SPARC64_ASI_H
* UltraSparc-III specific ASIs.
*/
#define ASI_PHYS_USE_EC 0x14 /* PADDR, E-cachable */
-#define ASI_PHYS_BYPASS_EC_E 0x15 /* PADDR, E-cachable, E-bit */
+#define ASI_PHYS_BYPASS_EC_E 0x15 /* PADDR, E-bit */
#define ASI_PHYS_USE_EC_L 0x1c /* PADDR, E-cachable, little endian */
-#define ASI_PHYS_BYPASS_EC_E_L 0x1d /* PADDR, E-cachable, E-bit, little endian */
+#define ASI_PHYS_BYPASS_EC_E_L 0x1d /* PADDR, E-bit, little endian */
#define ASI_NUCLEUS_QUAD_LDD 0x24 /* Cachable, qword load */
#define ASI_NUCLEUS_QUAD_LDD_L 0x2c /* Cachable, qword load, little endian */
#define ASI_PCACHE_DATA_STATUS 0x30 /* (III) PCache data status RAM diag */
#define ASI_EC_W 0x76 /* E-cache diag write access */
#define ASI_UDB_ERROR_W 0x77 /* External UDB error registers write */
#define ASI_UDB_CONTROL_W 0x77 /* External UDB control registers write */
-#define ASI_UDB_INTR_W 0x77 /* External UDB IRQ vector dispatch write */
+#define ASI_INTR_W 0x77 /* IRQ vector dispatch write */
#define ASI_INTR_DATAN_W 0x77 /* (III) Outgoing irq vector data reg N */
#define ASI_INTR_DISPATCH_W 0x77 /* (III) Interrupt vector dispatch */
#define ASI_BLK_AIUPL 0x78 /* Primary, user, little, blk ld/st */
#define ASI_UDBL_ERROR_R 0x7f /* External UDB error registers read low */
#define ASI_UDBH_CONTROL_R 0x7f /* External UDB control registers read hi */
#define ASI_UDBL_CONTROL_R 0x7f /* External UDB control registers read low */
-#define ASI_UDB_INTR_R 0x7f /* External UDB IRQ vector dispatch read */
+#define ASI_INTR_R 0x7f /* IRQ vector dispatch read */
#define ASI_INTR_DATAN_R 0x7f /* (III) Incoming irq vector data reg N */
#define ASI_PST8_P 0xc0 /* Primary, 8 8-bit, partial */
#define ASI_PST8_S 0xc1 /* Secondary, 8 8-bit, partial */
--- /dev/null
+/* $Id: bbc.h,v 1.1 2001/03/24 06:03:03 davem Exp $
+ * bbc.h: Defines for BootBus Controller found on UltraSPARC-III
+ * systems.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+#ifndef _SPARC64_BBC_H
+#define _SPARC64_BBC_H
+
+/* Register sizes are indicated by "B" (Byte, 1-byte),
+ * "H" (Half-word, 2 bytes), "W" (Word, 4 bytes) or
+ * "Q" (Quad, 8 bytes) inside brackets.
+ */
+
+#define BBC_AID 0x00 /* [B] Agent ID */
+#define BBC_DEVP 0x01 /* [B] Device Present */
+#define BBC_ARB 0x02 /* [B] Arbitration */
+#define BBC_QUIESCE 0x03 /* [B] Quiesce */
+#define BBC_WDACTION 0x04 /* [B] Watchdog Action */
+#define BBC_SPG 0x06 /* [B] Soft POR Gen */
+#define BBC_SXG 0x07 /* [B] Soft XIR Gen */
+#define BBC_PSRC 0x08 /* [W] POR Source */
+#define BBC_XSRC 0x0c /* [B] XIR Source */
+#define BBC_CSC 0x0d /* [B] Clock Synthesizers Control*/
+#define BBC_ES_CTRL 0x0e /* [H] Energy Star Control */
+#define BBC_ES_ACT 0x10 /* [W] E* Assert Change Time */
+#define BBC_ES_DACT 0x14 /* [B] E* De-Assert Change Time */
+#define BBC_ES_DABT 0x15 /* [B] E* De-Assert Bypass Time */
+#define BBC_ES_ABT 0x16 /* [H] E* Assert Bypass Time */
+#define BBC_ES_PST 0x18 /* [W] E* PLL Settle Time */
+#define BBC_ES_FSL 0x1c /* [W] E* Frequency Switch Latency*/
+#define BBC_EBUST 0x20 /* [Q] EBUS Timing */
+#define BBC_JTAG_CMD 0x28 /* [W] JTAG+ Command */
+#define BBC_JTAG_CTRL 0x2c /* [B] JTAG+ Control */
+#define BBC_I2C_SEL 0x2d /* [B] I2C Selection */
+#define BBC_I2C_0_S1 0x2e /* [B] I2C ctrlr-0 reg S1 */
+#define BBC_I2C_0_S0 0x2f /* [B] I2C ctrlr-0 regs S0,S0',S2,S3*/
+#define BBC_I2C_1_S1 0x30 /* [B] I2C ctrlr-1 reg S1 */
+#define BBC_I2C_1_S0 0x31 /* [B] I2C ctrlr-1 regs S0,S0',S2,S3*/
+#define BBC_KBD_BEEP 0x32 /* [B] Keyboard Beep */
+#define BBC_KBD_BCNT 0x34 /* [W] Keyboard Beep Counter */
+
+#define BBC_REGS_SIZE 0x40
+
+/* There is a 2K scratch ram area at offset 0x80000 but I doubt
+ * we will use it for anything.
+ */
+
+/* Agent ID register. This register shows the Safari Agent ID
+ * for the processors. The value returned depends upon which
+ * cpu is reading the register.
+ */
+#define BBC_AID_ID 0x07 /* Safari ID */
+#define BBC_AID_RESV 0xf8 /* Reserved */
+
+/* Device Present register. One can determine which cpus are actually
+ * present in the machine by interrogating this register.
+ */
+#define BBC_DEVP_CPU0 0x01 /* Processor 0 present */
+#define BBC_DEVP_CPU1 0x02 /* Processor 1 present */
+#define BBC_DEVP_CPU2 0x04 /* Processor 2 present */
+#define BBC_DEVP_CPU3 0x08 /* Processor 3 present */
+#define BBC_DEVP_RESV 0xf0 /* Reserved */
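For illustration, the Device Present bits above can be decoded with a tiny standalone helper (an editor's sketch against the bit values just defined, not kernel code; `devp` stands for a value read from the BBC_DEVP register):

```c
/* Count how many of the four possible processors are present, given
 * a value read from the BBC Device Present register.  Bits 0-3 map
 * to cpus 0-3 (BBC_DEVP_CPU0 .. BBC_DEVP_CPU3); bits 4-7 are reserved.
 */
static int bbc_count_cpus(unsigned char devp)
{
	int i, count = 0;

	for (i = 0; i < 4; i++)
		if (devp & (1 << i))
			count++;
	return count;
}
```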
+
+/* Arbitration register. This register is used to block access to
+ * the BBC from a particular cpu.
+ */
+#define BBC_ARB_CPU0	0x01	/* Enable cpu 0 BBC arbitration */
+#define BBC_ARB_CPU1	0x02	/* Enable cpu 1 BBC arbitration */
+#define BBC_ARB_CPU2	0x04	/* Enable cpu 2 BBC arbitration */
+#define BBC_ARB_CPU3	0x08	/* Enable cpu 3 BBC arbitration */
+#define BBC_ARB_RESV 0xf0 /* Reserved */
+
+/* Quiesce register. Bus and BBC segments for cpus can be disabled
+ * with this register, ie. for hot plugging.
+ */
+#define BBC_QUIESCE_S02 0x01 /* Quiesce Safari segment for cpu 0 and 2 */
+#define BBC_QUIESCE_S13 0x02 /* Quiesce Safari segment for cpu 1 and 3 */
+#define BBC_QUIESCE_B02 0x04 /* Quiesce BBC segment for cpu 0 and 2 */
+#define BBC_QUIESCE_B13 0x08 /* Quiesce BBC segment for cpu 1 and 3 */
+#define BBC_QUIESCE_FD0 0x10 /* Disable Fatal_Error[0] reporting */
+#define BBC_QUIESCE_FD1 0x20 /* Disable Fatal_Error[1] reporting */
+#define BBC_QUIESCE_FD2 0x40 /* Disable Fatal_Error[2] reporting */
+#define BBC_QUIESCE_FD3 0x80 /* Disable Fatal_Error[3] reporting */
+
+/* Watchdog Action register. When the watchdog device timer expires
+ * a line is enabled to the BBC. The action the BBC takes when this
+ * line is asserted can be controlled by this register.
+ */
+#define BBC_WDACTION_RST 0x01 /* When set, watchdog causes system reset.
+ * When clear, all cpus receive XIR reset.
+ */
+#define BBC_WDACTION_RESV 0xfe /* Reserved */
+
+/* Soft_POR_GEN register. The POR (Power On Reset) signal may be asserted
+ * for specific processors or all processors via this register.
+ */
+#define BBC_SPG_CPU0 0x01 /* Assert POR for processor 0 */
+#define BBC_SPG_CPU1 0x02 /* Assert POR for processor 1 */
+#define BBC_SPG_CPU2 0x04 /* Assert POR for processor 2 */
+#define BBC_SPG_CPU3 0x08 /* Assert POR for processor 3 */
+#define BBC_SPG_CPUALL 0x10 /* Reset all processors and reset
+ * the entire system.
+ */
+#define BBC_SPG_RESV 0xe0 /* Reserved */
+
+/* Soft_XIR_GEN register. The XIR (eXternally Initiated Reset) signal
+ * may be asserted to specific processors via this register.
+ */
+#define BBC_SXG_CPU0 0x01 /* Assert XIR for processor 0 */
+#define BBC_SXG_CPU1 0x02 /* Assert XIR for processor 1 */
+#define BBC_SXG_CPU2 0x04 /* Assert XIR for processor 2 */
+#define BBC_SXG_CPU3 0x08 /* Assert XIR for processor 3 */
+#define BBC_SXG_RESV 0xf0 /* Reserved */
+
+/* POR Source register. One may identify the cause of the most recent
+ * reset by reading this register.
+ */
+#define BBC_PSRC_SPG0 0x0001 /* CPU 0 reset via BBC_SPG register */
+#define BBC_PSRC_SPG1 0x0002 /* CPU 1 reset via BBC_SPG register */
+#define BBC_PSRC_SPG2 0x0004 /* CPU 2 reset via BBC_SPG register */
+#define BBC_PSRC_SPG3 0x0008 /* CPU 3 reset via BBC_SPG register */
+#define BBC_PSRC_SPGSYS 0x0010 /* System reset via BBC_SPG register */
+#define BBC_PSRC_JTAG 0x0020 /* System reset via JTAG+ */
+#define BBC_PSRC_BUTTON 0x0040 /* System reset via push-button dongle */
+#define BBC_PSRC_PWRUP 0x0080 /* System reset via power-up */
+#define BBC_PSRC_FE0 0x0100 /* CPU 0 reported Fatal_Error */
+#define BBC_PSRC_FE1 0x0200 /* CPU 1 reported Fatal_Error */
+#define BBC_PSRC_FE2 0x0400 /* CPU 2 reported Fatal_Error */
+#define BBC_PSRC_FE3 0x0800 /* CPU 3 reported Fatal_Error */
+#define BBC_PSRC_FE4 0x1000 /* Schizo reported Fatal_Error */
+#define BBC_PSRC_FE5 0x2000 /* Safari device 5 reported Fatal_Error */
+#define BBC_PSRC_FE6 0x4000 /* CPMS reported Fatal_Error */
+#define BBC_PSRC_SYNTH 0x8000 /* System reset when on-board clock synthesizers
+ * were updated.
+ */
+#define BBC_PSRC_WDT 0x10000 /* System reset via Super I/O watchdog */
+#define BBC_PSRC_RSC 0x20000 /* System reset via RSC remote monitoring
+ * device
+ */
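As a sketch of how a reset-cause report could be derived from the POR Source bits above (an illustration using the bit values just defined, not actual kernel code):

```c
#include <string.h>

/* Name the reset cause encoded in a POR Source register value,
 * checking the system-wide bits first; per-cpu soft-POR and
 * Fatal_Error bits are reported generically.
 */
static const char *bbc_psrc_system_cause(unsigned int psrc)
{
	if (psrc & 0x00010) return "Soft POR (BBC_SPG)";
	if (psrc & 0x00020) return "JTAG+";
	if (psrc & 0x00040) return "push-button dongle";
	if (psrc & 0x00080) return "power-up";
	if (psrc & 0x08000) return "clock synthesizer update";
	if (psrc & 0x10000) return "Super I/O watchdog";
	if (psrc & 0x20000) return "RSC remote monitoring";
	if (psrc & 0x0000f) return "per-cpu soft POR";
	if (psrc & 0x07f00) return "Fatal_Error";
	return "unknown";
}
```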
+
+/* XIR Source register. The source of an XIR event sent to a processor may
+ * be determined via this register.
+ */
+#define BBC_XSRC_SXG0 0x01 /* CPU 0 received XIR via Soft_XIR_GEN reg */
+#define BBC_XSRC_SXG1 0x02 /* CPU 1 received XIR via Soft_XIR_GEN reg */
+#define BBC_XSRC_SXG2 0x04 /* CPU 2 received XIR via Soft_XIR_GEN reg */
+#define BBC_XSRC_SXG3 0x08 /* CPU 3 received XIR via Soft_XIR_GEN reg */
+#define BBC_XSRC_JTAG 0x10 /* All CPUs received XIR via JTAG+ */
+#define BBC_XSRC_W_OR_B 0x20 /* All CPUs received XIR either because:
+ * a) Super I/O watchdog fired, or
+ * b) XIR push button was activated
+ */
+#define BBC_XSRC_RESV 0xc0 /* Reserved */
+
+/* Clock Synthesizers Control register. This register provides the bit-bang
+ * programming interface to the two clock synthesizers of the machine.
+ */
+#define BBC_CSC_SLOAD 0x01 /* Directly connected to S_LOAD pins */
+#define BBC_CSC_SDATA 0x02 /* Directly connected to S_DATA pins */
+#define BBC_CSC_SCLOCK 0x04 /* Directly connected to S_CLOCK pins */
+#define BBC_CSC_RESV 0x78 /* Reserved */
+#define BBC_CSC_RST 0x80 /* Generate system reset when S_LOAD==1 */
+
+/* Energy Star Control register. This register is used to generate the
+ * clock frequency change trigger to the main system devices (Schizo and
+ * the processors). The transition occurs when a bit in this register
+ * goes from 0 to 1; only one bit may be set at a time, else no action
+ * occurs. Basically the sequence of events is:
+ * a) Choose new frequency: full, 1/2 or 1/32
+ * b) Program this desired frequency into the cpus and Schizo.
+ * c) Set the same value in this register.
+ * d) 16 system clocks later, clear this register.
+ */
+#define BBC_ES_CTRL_1_1 0x01 /* Full frequency */
+#define BBC_ES_CTRL_1_2 0x02 /* 1/2 frequency */
+#define BBC_ES_CTRL_1_32 0x20 /* 1/32 frequency */
+#define BBC_ES_RESV 0xdc /* Reserved */
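The value written in step (c) above is one of the three BBC_ES_CTRL bits. A helper mapping a desired divisor to that value might look like this (an editor's sketch, not driver code; the actual register writes are omitted):

```c
/* Map a desired clock divisor (1, 2 or 32) to the BBC_ES_CTRL value
 * that triggers the frequency change.  Exactly one bit may be set at
 * a time; an unsupported divisor yields 0, which causes no action.
 */
static unsigned char bbc_es_ctrl_for_divisor(int div)
{
	switch (div) {
	case 1:		return 0x01;	/* BBC_ES_CTRL_1_1, full speed */
	case 2:		return 0x02;	/* BBC_ES_CTRL_1_2 */
	case 32:	return 0x20;	/* BBC_ES_CTRL_1_32 */
	default:	return 0x00;	/* no single bit -> no transition */
	}
}
```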
+
+/* Energy Star Assert Change Time register. This determines the number
+ * of BBC clock cycles (which is half the system frequency) between
+ * the detection of FREEZE_ACK being asserted and the assertion of
+ * the CLK_CHANGE_L[2:0] signals.
+ */
+#define BBC_ES_ACT_VAL 0xff
+
+/* Energy Star Assert Bypass Time register. This determines the number
+ * of BBC clock cycles (which is half the system frequency) between
+ * the assertion of the CLK_CHANGE_L[2:0] signals and the assertion of
+ * the ESTAR_PLL_BYPASS signal.
+ */
+#define BBC_ES_ABT_VAL 0xffff
+
+/* Energy Star PLL Settle Time register. This determines the number of
+ * BBC clock cycles (which is half the system frequency) between the
+ * de-assertion of CLK_CHANGE_L[2:0] and the de-assertion of the FREEZE_L
+ * signal.
+ */
+#define BBC_ES_PST_VAL 0xffffffff
+
+/* Energy Star Frequency Switch Latency register. This is the number of
+ * BBC clocks between the de-assertion of CLK_CHANGE_L[2:0] and the first
+ * edge of the Safari clock at the new frequency.
+ */
+#define BBC_ES_FSL_VAL 0xffffffff
+
+/* Keyboard Beep control register. This is a simple enabler for the audio
+ * beep sound.
+ */
+#define BBC_KBD_BEEP_ENABLE 0x01 /* Enable beep */
+#define BBC_KBD_BEEP_RESV 0xfe /* Reserved */
+
+/* Keyboard Beep Counter register. There is a free-running counter inside
+ * the BBC which runs at half the system clock. The bit set in this register
+ * determines when the audio sound is generated. So for example if bit
+ * 10 is set, the audio beep will oscillate at 1/(2**12) of the system
+ * clock. The keyboard beep
+ * generator automatically selects a different bit to use if the system clock
+ * is changed via Energy Star.
+ */
+#define BBC_KBD_BCNT_BITS 0x0007fc00
+#define BBC_KBD_BCNT_RESV	0xfff803ff
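Following the description above, a selected counter bit N yields a beep frequency of sysclk / 2^(N+2): the counter ticks at half the system clock, and bit N completes a full high/low cycle every 2^(N+1) ticks. A quick sanity-check helper (editor's illustration, not kernel code):

```c
/* Beep oscillation frequency in Hz for counter bit select 'bit'.
 * The counter runs at half the system clock and bit N toggles every
 * 2^N ticks, so one full oscillation takes 2^(N+1) ticks, i.e.
 * 2^(N+2) system clocks.
 */
static unsigned long bbc_beep_freq(unsigned long sysclk_hz, int bit)
{
	return sysclk_hz >> (bit + 2);
}
```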
+
+#endif /* _SPARC64_BBC_H */
+
-/* $Id: dcr.h,v 1.3 2001/03/01 23:23:33 davem Exp $ */
+/* $Id: dcr.h,v 1.4 2001/03/09 17:56:37 davem Exp $ */
#ifndef _SPARC64_DCR_H
#define _SPARC64_DCR_H
/* UltraSparc-III Dispatch Control Register, ASR 0x12 */
+#define DCR_OBS 0x0000000000000fc0 /* Observability Bus Controls */
#define DCR_BPE 0x0000000000000020 /* Branch Predict Enable */
-#define DCR_RPE 0x0000000000000010 /* Return Address Prediction Enable*/
+#define DCR_RPE 0x0000000000000010 /* Return Address Prediction Enable */
#define DCR_SI 0x0000000000000008 /* Single Instruction Disable */
+#define DCR_IFPOE 0x0000000000000002 /* IRQ FP Operation Enable */
#define DCR_MS 0x0000000000000001 /* Multi-Scalar dispatch */
#endif /* _SPARC64_DCR_H */
-/* $Id: ebus.h,v 1.9 1999/08/30 10:14:37 davem Exp $
+/* $Id: ebus.h,v 1.10 2001/03/14 05:00:55 davem Exp $
* ebus.h: PCI to Ebus pseudo driver software state.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
struct pci_pbm_info *parent;
struct pci_dev *self;
int index;
+ int is_rio;
int prom_node;
char prom_name[64];
struct linux_prom_ebus_ranges ebus_ranges[PROMREG_MAX];
-/* $Id: elf.h,v 1.25 2000/07/12 01:27:08 davem Exp $ */
+/* $Id: elf.h,v 1.28 2001/03/24 09:36:02 davem Exp $ */
#ifndef __ASM_SPARC64_ELF_H
#define __ASM_SPARC64_ELF_H
instruction set this cpu supports. */
/* On Ultra, we support all of the v8 capabilities. */
-#define ELF_HWCAP (HWCAP_SPARC_FLUSH | HWCAP_SPARC_STBAR | \
- HWCAP_SPARC_SWAP | HWCAP_SPARC_MULDIV | \
- HWCAP_SPARC_V9)
+#define ELF_HWCAP ((HWCAP_SPARC_FLUSH | HWCAP_SPARC_STBAR | \
+ HWCAP_SPARC_SWAP | HWCAP_SPARC_MULDIV | \
+ HWCAP_SPARC_V9) | \
+ ((tlb_type == cheetah) ? HWCAP_SPARC_ULTRA3 : 0))
/* This yields a string that ld.so will use to load implementation
specific libraries for optimization. This is more specific in
if (flags & SPARC_FLAG_32BIT) { \
pgd_t *pgd0 = &current->mm->pgd[0]; \
if (pgd_none (*pgd0)) { \
- pmd_t *page = get_pmd_fast(); \
+ pmd_t *page = pmd_alloc_one_fast(); \
if (!page) \
- (void) get_pmd_slow(pgd0, 0); \
- else \
- pgd_set(pgd0, page); \
+ page = pmd_alloc_one(); \
+ pgd_set(pgd0, page); \
} \
pgd_cache = pgd_val(*pgd0) << 11UL; \
} \
__asm__ __volatile__( \
- "stxa\t%0, [%1] %2" \
+ "stxa\t%0, [%1] %2\n\t" \
+ "membar #Sync" \
: /* no outputs */ \
: "r" (pgd_cache), \
"r" (TSB_REG), \
-/* $Id: floppy.h,v 1.28 2000/02/18 13:50:54 davem Exp $
+/* $Id: floppy.h,v 1.29 2001/03/24 00:07:23 davem Exp $
* asm-sparc64/floppy.h: Sparc specific parts of the Floppy driver.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
#endif /* CONFIG_PCI */
+#ifdef CONFIG_PCI
+static int __init ebus_fdthree_p(struct linux_ebus_device *edev)
+{
+ if (!strcmp(edev->prom_name, "fdthree"))
+ return 1;
+ if (!strcmp(edev->prom_name, "floppy")) {
+ char compat[16];
+ prom_getstring(edev->prom_node,
+ "compatible",
+ compat, sizeof(compat));
+ compat[15] = '\0';
+ if (!strcmp(compat, "fdthree"))
+ return 1;
+ }
+ return 0;
+}
+#endif
+
static unsigned long __init sun_floppy_init(void)
{
char state[128];
for_each_ebus(ebus) {
for_each_ebusdev(edev, ebus) {
- if (!strcmp(edev->prom_name, "fdthree"))
+ if (ebus_fdthree_p(edev))
goto ebus_done;
}
}
-/* $Id: iommu.h,v 1.9 1999/09/21 14:39:39 davem Exp $
+/* $Id: iommu.h,v 1.10 2001/03/08 09:55:56 davem Exp $
* iommu.h: Definitions for the sun5 IOMMU.
*
* Copyright (C) 1996, 1999 David S. Miller (davem@caip.rutgers.edu)
#define IOPTE_STBUF 0x1000000000000000 /* DVMA can use streaming buffer */
#define IOPTE_INTRA 0x0800000000000000 /* SBUS slot-->slot direct transfer */
#define IOPTE_CONTEXT 0x07ff800000000000 /* Context number */
-#define IOPTE_PAGE 0x00007fffffffe000 /* Physical page number (PA[40:13]) */
+#define IOPTE_PAGE 0x00007fffffffe000 /* Physical page number (PA[42:13]) */
#define IOPTE_CACHE 0x0000000000000010 /* Cached (in UPA E-cache) */
#define IOPTE_WRITE 0x0000000000000002 /* Writeable */
-/* $Id: irq.h,v 1.19 2000/06/26 19:40:27 davem Exp $
+/* $Id: irq.h,v 1.20 2001/03/09 01:31:40 davem Exp $
* irq.h: IRQ registers on the 64-bit Sparc.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
/* IMAP/ICLR register defines */
#define IMAP_VALID 0x80000000 /* IRQ Enabled */
-#define IMAP_TID 0x7c000000 /* UPA TargetID */
+#define IMAP_TID_UPA 0x7c000000 /* UPA TargetID */
+#define IMAP_AID_SAFARI 0x7c000000 /* Safari AgentID */
+#define IMAP_NID_SAFARI 0x03e00000 /* Safari NodeID */
#define IMAP_IGN 0x000007c0 /* IRQ Group Number */
#define IMAP_INO 0x0000003f /* IRQ Number */
#define IMAP_INR 0x000007ff /* Full interrupt number*/
#include <asm/io.h>
#ifndef RTC_PORT
-#define RTC_PORT(x) (0x70 + (x))
-#define RTC_ALWAYS_BCD 1 /* RTC operates in binary mode */
+#ifdef CONFIG_PCI
+extern unsigned long ds1287_regs;
+#else
+#define ds1287_regs (0UL)
+#endif
+#define RTC_PORT(x) (ds1287_regs + (x))
+#define RTC_ALWAYS_BCD 0
#endif
/*
-/* $Id: mmu_context.h,v 1.45 2000/08/12 13:25:52 davem Exp $ */
+/* $Id: mmu_context.h,v 1.47 2001/03/22 07:26:04 davem Exp $ */
#ifndef __SPARC64_MMU_CONTEXT_H
#define __SPARC64_MMU_CONTEXT_H
"mov %3, %%g4\n\t" \
"mov %0, %%g7\n\t" \
"stxa %1, [%%g4] %2\n\t" \
+ "membar #Sync\n\t" \
"wrpr %%g0, 0x096, %%pstate" \
: /* no outputs */ \
: "r" (paddr), "r" (pgd_cache),\
"flush %%g6" \
: /* No outputs */ \
: "r" (CTX_HWBITS((__mm)->context)), \
- "r" (0x10), "i" (0x58))
+ "r" (0x10), "i" (ASI_DMMU))
-/* Clean out potential stale TLB entries due to previous
- * users of this TLB context. We flush TLB contexts
- * lazily on sparc64.
- */
-#define clean_secondary_context() \
- __asm__ __volatile__("stxa %%g0, [%0] %1\n\t" \
- "stxa %%g0, [%0] %2\n\t" \
- "flush %%g6" \
- : /* No outputs */ \
- : "r" (0x50), "i" (0x5f), "i" (0x57))
+extern void __flush_tlb_mm(unsigned long, unsigned long);
/* Switch the current MM context. */
static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk, int cpu)
*/
if (!ctx_valid || !(mm->cpu_vm_mask & vm_mask)) {
mm->cpu_vm_mask |= vm_mask;
- clean_secondary_context();
+ __flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);
}
}
spin_unlock(&mm->page_table_lock);
spin_unlock(&mm->page_table_lock);
load_secondary_context(mm);
- clean_secondary_context();
+ __flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);
reload_tlbmiss_state(current, mm);
}
-/* $Id: openprom.h,v 1.8 2000/08/12 19:55:25 anton Exp $ */
+/* $Id: openprom.h,v 1.9 2001/03/16 10:22:02 davem Exp $ */
#ifndef __SPARC64_OPENPROM_H
#define __SPARC64_OPENPROM_H
unsigned int or_size;
};
+struct linux_prom64_ranges {
+ unsigned long ot_child_base; /* Bus feels this */
+ unsigned long ot_parent_base; /* CPU looks from here */
+ unsigned long or_size;
+};
+
/* Ranges and reg properties are a bit different for PCI. */
struct linux_prom_pci_registers {
unsigned int phys_hi;
-/* $Id: parport.h,v 1.9 2000/03/16 07:47:27 davem Exp $
+/* $Id: parport.h,v 1.10 2001/03/24 00:18:57 davem Exp $
* parport.h: sparc64 specific parport initialization and dma.
*
* Copyright (C) 1999 Eddie C. Dost (ecd@skynet.be)
return res;
}
+static int ebus_ecpp_p(struct linux_ebus_device *edev)
+{
+ if (!strcmp(edev->prom_name, "ecpp"))
+ return 1;
+ if (!strcmp(edev->prom_name, "parallel")) {
+ char compat[19];
+ prom_getstring(edev->prom_node,
+ "compatible",
+ compat, sizeof(compat));
+ compat[18] = '\0';
+ if (!strcmp(compat, "ecpp"))
+ return 1;
+ if (!strcmp(compat, "ns87317-ecpp") &&
+ !strcmp(compat + 13, "ecpp"))
+ return 1;
+ }
+ return 0;
+}
+
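The `compat + 13` check above works because an OpenPROM "compatible" property may hold several NUL-separated strings; "ns87317-ecpp" is 12 characters, so offset 13 is the start of the second string (e.g. "ns87317-ecpp\0ecpp"). A generic matcher over such a buffer might look like this (an editor's sketch, not the PROM library API):

```c
#include <string.h>

/* Search a NUL-separated "compatible" property buffer of total
 * length len for an exact match of name.  Returns 1 on a match,
 * 0 otherwise.
 */
static int compat_match(const char *buf, int len, const char *name)
{
	const char *p = buf;

	while (p < buf + len && *p) {
		if (!strcmp(p, name))
			return 1;
		p += strlen(p) + 1;	/* skip past this string's NUL */
	}
	return 0;
}
```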
static int parport_pc_find_nonpci_ports (int autoirq, int autodma)
{
struct linux_ebus *ebus;
for_each_ebus(ebus) {
for_each_ebusdev(edev, ebus) {
- if (!strcmp(edev->prom_name, "ecpp")) {
+ if (ebus_ecpp_p(edev)) {
unsigned long base = edev->resource[0].start;
unsigned long config = edev->resource[1].start;
-/* $Id: pgalloc.h,v 1.15 2001/03/04 18:31:00 davem Exp $ */
+/* $Id: pgalloc.h,v 1.18 2001/03/24 09:36:01 davem Exp $ */
#ifndef _SPARC64_PGALLOC_H
#define _SPARC64_PGALLOC_H
#define flush_page_to_ram(page) do { } while (0)
/*
- * icache doesnt snoop local stores and we don't use block commit stores
- * (which invalidate icache lines) during module load, so we need this.
+ * On spitfire, the icache doesn't snoop local stores and we don't
+ * use block commit stores (which invalidate icache lines) during
+ * module load, so we need this.
*/
extern void flush_icache_range(unsigned long start, unsigned long end);
extern void __flush_dcache_page(void *addr, int flush_icache);
#define flush_dcache_page(page) \
-do { if ((page)->mapping && !(page)->mapping->i_mmap && !(page)->mapping->i_mmap_shared) \
+do { if ((page)->mapping && \
+ !((page)->mapping->i_mmap) && \
+ !((page)->mapping->i_mmap_shared)) \
set_bit(PG_dcache_dirty, &(page)->flags); \
else \
__flush_dcache_page((page)->virtual, \
- (page)->mapping != NULL); \
+ ((tlb_type == spitfire) && \
+ (page)->mapping != NULL)); \
} while(0)
extern void __flush_dcache_range(unsigned long start, unsigned long end);
#endif /* ! CONFIG_SMP */
#define VPTE_BASE_SPITFIRE 0xfffffffe00000000
+#if 1
+#define VPTE_BASE_CHEETAH VPTE_BASE_SPITFIRE
+#else
#define VPTE_BASE_CHEETAH 0xffe0000000000000
+#endif
extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start,
unsigned long end)
vpte_base + (s >> (PAGE_SHIFT - 3)),
vpte_base + (e >> (PAGE_SHIFT - 3)));
}
+#undef VPTE_BASE_SPITFIRE
+#undef VPTE_BASE_CHEETAH
/* Page table allocation/freeing. */
#ifdef CONFIG_SMP
#endif /* CONFIG_SMP */
-extern pmd_t *get_pmd_slow(pgd_t *pgd, unsigned long address_premasked);
+#define pgd_populate(PGD, PMD) pgd_set(PGD, PMD)
+
+extern __inline__ pmd_t *pmd_alloc_one(void)
+{
+ pmd_t *pmd = (pmd_t *)__get_free_page(GFP_KERNEL);
+ if (pmd)
+ memset(pmd, 0, PAGE_SIZE);
+ return pmd;
+}
-extern __inline__ pmd_t *get_pmd_fast(void)
+extern __inline__ pmd_t *pmd_alloc_one_fast(void)
{
unsigned long *ret;
int color = 0;
free_page((unsigned long)pmd);
}
-extern pte_t *get_pte_slow(pmd_t *pmd, unsigned long address_preadjusted,
- unsigned long color);
+#define pmd_populate(PMD, PTE) pmd_set(PMD, PTE)
+
+extern pte_t *pte_alloc_one(unsigned long address);
-extern __inline__ pte_t *get_pte_fast(unsigned long color)
+extern __inline__ pte_t *pte_alloc_one_fast(unsigned long address)
{
+ unsigned long color = (address >> (PAGE_SHIFT + 10)) & 0x1UL;
unsigned long *ret;
if((ret = (unsigned long *)pte_quicklist[color]) != NULL) {
free_page((unsigned long)pte);
}
-#define pte_free_kernel(pte) free_pte_fast(pte)
#define pte_free(pte) free_pte_fast(pte)
-#define pmd_free_kernel(pmd) free_pmd_fast(pmd)
#define pmd_free(pmd) free_pmd_fast(pmd)
#define pgd_free(pgd) free_pgd_fast(pgd)
#define pgd_alloc() get_pgd_fast()
-extern inline pte_t * pte_alloc(pmd_t *pmd, unsigned long address)
-{
- address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
- if (pmd_none(*pmd)) {
- /* Be careful, address can be just about anything... */
- unsigned long color = (((unsigned long)pmd)>>2UL) & 0x1UL;
- pte_t *page = get_pte_fast(color);
-
- if (!page)
- return get_pte_slow(pmd, address, color);
- pmd_set(pmd, page);
- return page + address;
- }
- return (pte_t *) pmd_page(*pmd) + address;
-}
-
-extern inline pmd_t * pmd_alloc(pgd_t *pgd, unsigned long address)
-{
- address = (address >> PMD_SHIFT) & (REAL_PTRS_PER_PMD - 1);
- if (pgd_none(*pgd)) {
- pmd_t *page = get_pmd_fast();
-
- if (!page)
- return get_pmd_slow(pgd, address);
- pgd_set(pgd, page);
- return page + address;
- }
- return (pmd_t *) pgd_page(*pgd) + address;
-}
-
-#define pte_alloc_kernel(pmd, addr) pte_alloc(pmd, addr)
-#define pmd_alloc_kernel(pgd, addr) pmd_alloc(pgd, addr)
-
extern int do_check_pgt_cache(int, int);
#endif /* _SPARC64_PGALLOC_H */
-/* $Id: pgtable.h,v 1.137 2001/03/02 03:12:01 davem Exp $
+/* $Id: pgtable.h,v 1.138 2001/03/08 09:55:56 davem Exp $
* pgtable.h: SpitFire page table operations.
*
* Copyright 1996,1997 David S. Miller (davem@caip.rutgers.edu)
*/
#define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
+/* XXX All of this needs to be rethought so we can take advantage
+ * XXX of cheetah's full 64-bit virtual address space, ie. no more
+ * XXX hole in the middle like on spitfire. -DaveM
+ */
+
/* PMD_SHIFT determines the size of the area a second-level page table can map */
#define PMD_SHIFT (PAGE_SHIFT + (PAGE_SHIFT-3))
#define PMD_SIZE (1UL << PMD_SHIFT)
#define _PAGE_SZ8K 0x0000000000000000 /* 8K Page */
#define _PAGE_NFO 0x1000000000000000 /* No Fault Only */
#define _PAGE_IE 0x0800000000000000 /* Invert Endianness */
-#define _PAGE_SN 0x0000800000000000 /* Snoop */
+#define _PAGE_SN 0x0000800000000000 /* (Cheetah) Snoop */
#define _PAGE_PADDR_SF 0x000001FFFFFFE000 /* (Spitfire) Phys Address [40:13] */
#define _PAGE_PADDR 0x000007FFFFFFE000 /* (Cheetah) Phys Address [42:13] */
#define _PAGE_SOFT 0x0000000000001F80 /* Software bits */
-/* $Id: processor.h,v 1.68 2000/12/31 10:05:43 davem Exp $
+/* $Id: processor.h,v 1.69 2001/03/08 22:08:51 davem Exp $
* include/asm-sparc64/processor.h
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
/* D$ line 2, 3, 4 */
struct pt_regs *kregs;
unsigned long *utraps;
- unsigned char gsr[7];
- unsigned char __pad3;
+ unsigned long gsr[7];
unsigned long xfsr[7];
struct reg_window reg_window[NSWINS];
0, 0, 0, 0, \
/* fault_address, fpsaved, __pad2, kregs, */ \
0, { 0 }, 0, 0, \
-/* utraps, gsr, __pad3, xfsr, */ \
- 0, { 0 }, 0, { 0 }, \
+/* utraps, gsr, xfsr, */ \
+ 0, { 0 }, { 0 }, \
/* reg_window */ \
{ { { 0, }, { 0, } }, }, \
/* rwbuf_stkptrs */ \
#include <linux/threads.h>
#include <asm/asi.h>
#include <asm/starfire.h>
+#include <asm/spitfire.h>
#ifndef __ASSEMBLY__
/* PROM provided per-processor information we need
extern __inline__ int hard_smp_processor_id(void)
{
- if(this_is_starfire != 0) {
+ if (tlb_type == cheetah) {
+ unsigned long safari_config;
+ __asm__ __volatile__("ldxa [%%g0] %1, %0"
+ : "=r" (safari_config)
+ : "i" (ASI_SAFARI_CONFIG));
+ return ((safari_config >> 17) & 0x3ff);
+ } else if (this_is_starfire != 0) {
return starfire_hard_smp_processor_id();
} else {
unsigned long upaconfig;
-/* $Id: spitfire.h,v 1.11 2001/03/03 10:34:45 davem Exp $
+/* $Id: spitfire.h,v 1.14 2001/03/22 07:26:04 davem Exp $
* spitfire.h: SpitFire/BlackBird/Cheetah inline MMU operations.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
extern enum ultra_tlb_layout tlb_type;
+#define SPARC64_USE_STICK (tlb_type == cheetah)
+
#define SPITFIRE_HIGHEST_LOCKED_TLBENT (64 - 1)
#define CHEETAH_HIGHEST_LOCKED_TLBENT (16 - 1)
extern __inline__ void spitfire_put_isfsr(unsigned long sfsr)
{
- __asm__ __volatile__("stxa %0, [%1] %2" :
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
+ : /* no outputs */
: "r" (sfsr), "r" (TLB_SFSR), "i" (ASI_IMMU));
}
extern __inline__ void spitfire_put_dsfsr(unsigned long sfsr)
{
- __asm__ __volatile__("stxa %0, [%1] %2" :
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
+ : /* no outputs */
: "r" (sfsr), "r" (TLB_SFSR), "i" (ASI_DMMU));
}
extern __inline__ void spitfire_set_primary_context(unsigned long ctx)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (ctx & 0x3ff),
"r" (PRIMARY_CONTEXT), "i" (ASI_DMMU));
- membar("#Sync");
+ __asm__ __volatile__ ("membar #Sync" : : : "memory");
}
extern __inline__ unsigned long spitfire_get_secondary_context(void)
extern __inline__ void spitfire_set_secondary_context(unsigned long ctx)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (ctx & 0x3ff),
"r" (SECONDARY_CONTEXT), "i" (ASI_DMMU));
- membar("#Sync");
+ __asm__ __volatile__ ("membar #Sync" : : : "memory");
}
/* The data cache is write through, so this just invalidates the
*/
extern __inline__ void spitfire_put_dcache_tag(unsigned long addr, unsigned long tag)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (tag), "r" (addr), "i" (ASI_DCACHE_TAG));
- membar("#Sync");
+ __asm__ __volatile__ ("membar #Sync" : : : "memory");
}
/* The instruction cache lines are flushed with this, but note that
*/
extern __inline__ void spitfire_put_icache_tag(unsigned long addr, unsigned long tag)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (tag), "r" (addr), "i" (ASI_IC_TAG));
}
extern __inline__ void spitfire_put_dtlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data), "r" (entry << 3),
"i" (ASI_DTLB_DATA_ACCESS));
extern __inline__ void spitfire_put_itlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data), "r" (entry << 3),
"i" (ASI_ITLB_DATA_ACCESS));
/* Context level flushes. */
extern __inline__ void spitfire_flush_dtlb_primary_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x40), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_primary_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x40), "i" (ASI_IMMU_DEMAP));
}
extern __inline__ void spitfire_flush_dtlb_secondary_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x50), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_secondary_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x50), "i" (ASI_IMMU_DEMAP));
}
extern __inline__ void spitfire_flush_dtlb_nucleus_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x60), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_nucleus_context(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x60), "i" (ASI_IMMU_DEMAP));
}
/* Page level flushes. */
extern __inline__ void spitfire_flush_dtlb_primary_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_primary_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page), "i" (ASI_IMMU_DEMAP));
}
extern __inline__ void spitfire_flush_dtlb_secondary_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page | 0x10), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_secondary_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page | 0x10), "i" (ASI_IMMU_DEMAP));
}
extern __inline__ void spitfire_flush_dtlb_nucleus_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page | 0x20), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void spitfire_flush_itlb_nucleus_page(unsigned long page)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (page | 0x20), "i" (ASI_IMMU_DEMAP));
}
/* Cheetah has "all non-locked" tlb flushes. */
extern __inline__ void cheetah_flush_dtlb_all(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x80), "i" (ASI_DMMU_DEMAP));
}
extern __inline__ void cheetah_flush_itlb_all(void)
{
- __asm__ __volatile__("stxa %%g0, [%0] %1"
+ __asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (0x80), "i" (ASI_IMMU_DEMAP));
}
extern __inline__ void cheetah_put_ldtlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data),
"r" ((0 << 16) | (entry << 3)),
extern __inline__ void cheetah_put_litlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data),
"r" ((0 << 16) | (entry << 3)),
extern __inline__ void cheetah_put_dtlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data),
"r" ((2 << 16) | (entry << 3)),
extern __inline__ void cheetah_put_itlb_data(int entry, unsigned long data)
{
- __asm__ __volatile__("stxa %0, [%1] %2"
+ __asm__ __volatile__("stxa %0, [%1] %2\n\t"
+ "membar #Sync"
: /* No outputs */
: "r" (data), "r" ((2 << 16) | (entry << 3)),
"i" (ASI_ITLB_DATA_ACCESS));
#define HWCAP_SPARC_SWAP 4
#define HWCAP_SPARC_MULDIV 8
#define HWCAP_SPARC_V9 16
-
+#define HWCAP_SPARC_ULTRA3 32
/*
* 68k ELF relocation types
struct hh_cache *hh);
extern int eth_header_parse(struct sk_buff *skb,
unsigned char *haddr);
-extern struct net_device * init_etherdev(struct net_device *, int);
+extern struct net_device *init_etherdev(struct net_device *dev, int sizeof_priv);
+extern struct net_device *alloc_etherdev(int sizeof_priv);
-static __inline__ void eth_copy_and_sum (struct sk_buff *dest, unsigned char *src, int len, int base)
+static inline void eth_copy_and_sum (struct sk_buff *dest, unsigned char *src, int len, int base)
{
memcpy (dest->data, src, len);
}
-/* Check that the ethernet address (MAC) is not 00:00:00:00:00:00 and is not
- * a multicast address. Return true if the address is valid.
+/**
+ * is_valid_ether_addr - Determine if the given Ethernet address is valid
+ * @addr: Pointer to a six-byte array containing the Ethernet address
+ *
+ * Check that the Ethernet address (MAC) is not 00:00:00:00:00:00, is not
+ * a multicast address, and is not FF:FF:FF:FF:FF:FF.
+ *
+ * Return true if the address is valid.
*/
-static __inline__ int is_valid_ether_addr( u8 *addr )
+static inline int is_valid_ether_addr( u8 *addr )
{
- return !(addr[0]&1) && memcmp( addr, "\0\0\0\0\0\0", 6);
+ const char zaddr[6] = {0,};
+
+ return !(addr[0]&1) && memcmp( addr, zaddr, 6);
}
#endif
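The validity check added above can be exercised in isolation; here is a minimal user-space sketch of the same logic (the kernel version above takes a `u8 *`, this stand-alone copy is for illustration only):

```c
#include <string.h>

/* User-space sketch of the kernel's is_valid_ether_addr() check:
 * reject the all-zero address and any multicast address (low bit of
 * the first octet set, which also covers FF:FF:FF:FF:FF:FF). */
static int sketch_is_valid_ether_addr(const unsigned char *addr)
{
	const unsigned char zaddr[6] = {0};

	return !(addr[0] & 1) && memcmp(addr, zaddr, 6) != 0;
}
```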
u32 reserved[4];
};
+/* these strings are set to whatever the driver author decides... */
+struct ethtool_drvinfo {
+ u32 cmd;
+ char driver[32]; /* driver short name, "tulip", "eepro100" */
+ char version[32]; /* driver version string */
+ char fw_version[32]; /* firmware version string, if applicable */
+ char bus_info[32]; /* Bus info for this interface. For PCI
+ * devices, use pci_dev->slot_name. */
+ char reserved1[32];
+ char reserved2[32];
+};
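A driver's ETHTOOL_GDRVINFO handler typically just copies fixed strings into this structure. A hedged user-space sketch follows; the structure is redeclared locally for illustration and the `faketulip` driver name and bus address are made up:

```c
#include <string.h>

typedef unsigned int u32;

/* Local copy of the structure added in the patch above. */
struct ethtool_drvinfo {
	u32 cmd;
	char driver[32];
	char version[32];
	char fw_version[32];
	char bus_info[32];
	char reserved1[32];
	char reserved2[32];
};

#define ETHTOOL_GDRVINFO 0x00000003

/* Fill the structure the way a driver's ioctl handler might;
 * all identifying strings here are hypothetical. */
static void fill_drvinfo(struct ethtool_drvinfo *info)
{
	memset(info, 0, sizeof(*info));
	info->cmd = ETHTOOL_GDRVINFO;
	strncpy(info->driver, "faketulip", sizeof(info->driver) - 1);
	strncpy(info->version, "0.1", sizeof(info->version) - 1);
	strncpy(info->bus_info, "00:0a.0", sizeof(info->bus_info) - 1);
}
```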
/* CMDs currently supported */
-#define ETHTOOL_GSET 0x00000001 /* Get settings, non-privileged. */
+#define ETHTOOL_GSET 0x00000001 /* Get settings. */
#define ETHTOOL_SSET 0x00000002 /* Set settings, privileged. */
+#define ETHTOOL_GDRVINFO 0x00000003 /* Get driver info. */
/* compatibility with older code */
#define SPARC_ETH_GSET ETHTOOL_GSET
extern int fc_rebuild_header(struct sk_buff *skb);
//extern unsigned short fc_type_trans(struct sk_buff *skb, struct net_device *dev);
-extern struct net_device * init_fcdev(struct net_device *, int);
+extern struct net_device *init_fcdev(struct net_device *dev, int sizeof_priv);
+extern struct net_device *alloc_fcdev(int sizeof_priv);
+extern int register_fcdev(struct net_device *dev);
+extern void unregister_fcdev(struct net_device *dev);
#endif
extern int fddi_rebuild_header(struct sk_buff *skb);
extern unsigned short fddi_type_trans(struct sk_buff *skb,
struct net_device *dev);
-extern struct net_device * init_fddidev(struct net_device *, int);
+extern struct net_device *init_fddidev(struct net_device *dev, int sizeof_priv);
+extern struct net_device *alloc_fddidev(int sizeof_priv);
#endif
#endif /* _LINUX_FDDIDEVICE_H */
extern void hippi_net_init(void);
void hippi_setup(struct net_device *dev);
-extern struct net_device *init_hippi_dev(struct net_device *, int);
+extern struct net_device *init_hippi_dev(struct net_device *dev, int sizeof_priv);
+extern struct net_device *alloc_hippi_dev(int sizeof_priv);
+extern int register_hipdev(struct net_device *dev);
extern void unregister_hipdev(struct net_device *dev);
#endif
#define ARPHRD_X25 271 /* CCITT X.25 */
#define ARPHRD_HWX25 272 /* Boards with X.25 in firmware */
#define ARPHRD_PPP 512
-#define ARPHRD_HDLC 513 /* (Cisco) HDLC */
+#define ARPHRD_CISCO 513 /* Cisco HDLC */
+#define ARPHRD_HDLC ARPHRD_CISCO
#define ARPHRD_LAPB 516 /* LAPB */
#define ARPHRD_DDCMP 517 /* Digital's DDCMP protocol */
+#define ARPHRD_RAWHDLC 518 /* Raw HDLC */
#define ARPHRD_TUNNEL 768 /* IPIP tunnel */
#define ARPHRD_TUNNEL6 769 /* IPIP6 tunnel */
struct divert_blk;
+#define HAVE_ALLOC_NETDEV /* feature macro: alloc_xxxdev
+ functions are available. */
+
#define NET_XMIT_SUCCESS 0
#define NET_XMIT_DROP 1 /* skb dropped */
#define NET_XMIT_CN 2 /* congestion notification */
/* Support for loadable net-drivers */
extern int register_netdev(struct net_device *dev);
extern void unregister_netdev(struct net_device *dev);
-extern int register_trdev(struct net_device *dev);
-extern void unregister_trdev(struct net_device *dev);
-extern int register_fcdev(struct net_device *dev);
-extern void unregister_fcdev(struct net_device *dev);
/* Functions used for multicast support */
extern void dev_mc_upload(struct net_device *dev);
extern int dev_mc_delete(struct net_device *dev, void *addr, int alen, int all);
#define PCI_VENDOR_ID_SUN 0x108e
#define PCI_DEVICE_ID_SUN_EBUS 0x1000
#define PCI_DEVICE_ID_SUN_HAPPYMEAL 0x1001
+#define PCI_DEVICE_ID_SUN_RIO_EBUS 0x1100
+#define PCI_DEVICE_ID_SUN_RIO_GEM 0x1101
+#define PCI_DEVICE_ID_SUN_RIO_1394 0x1102
+#define PCI_DEVICE_ID_SUN_RIO_USB 0x1103
+#define PCI_DEVICE_ID_SUN_GEM 0x2bad
#define PCI_DEVICE_ID_SUN_SIMBA 0x5000
#define PCI_DEVICE_ID_SUN_PBM 0x8000
+#define PCI_DEVICE_ID_SUN_SCHIZO 0x8001
#define PCI_DEVICE_ID_SUN_SABRE 0xa000
+#define PCI_DEVICE_ID_SUN_HUMMINGBIRD 0xa001
#define PCI_VENDOR_ID_CMD 0x1095
#define PCI_DEVICE_ID_CMD_640 0x0640
};
/* /proc/sys/net/ipx */
+enum {
+ NET_IPX_PPROP_BROADCASTING=1,
+ NET_IPX_FORWARDING=2
+};
/* /proc/sys/net/appletalk */
void *saddr, unsigned len);
extern int tr_rebuild_header(struct sk_buff *skb);
extern unsigned short tr_type_trans(struct sk_buff *skb, struct net_device *dev);
-extern struct net_device * init_trdev(struct net_device *, int);
+extern struct net_device *init_trdev(struct net_device *dev, int sizeof_priv);
+extern struct net_device *alloc_trdev(int sizeof_priv);
+extern int register_trdev(struct net_device *dev);
+extern void unregister_trdev(struct net_device *dev);
#endif
#define ipx_broadcast_node "\377\377\377\377\377\377"
#define ipx_this_node "\0\0\0\0\0\0"
+#define IPX_MAX_PPROP_HOPS 8
+
struct ipxhdr
{
__u16 ipx_checksum __attribute__ ((packed));
#define IPX_TYPE_SAP 0x04 /* may also be 0 */
#define IPX_TYPE_SPX 0x05 /* SPX protocol */
#define IPX_TYPE_NCP 0x11 /* $lots for docs on this (SPIT) */
-#define IPX_TYPE_PPROP 0x14 /* complicated flood fill brdcast [Not supported] */
+#define IPX_TYPE_PPROP 0x14 /* complicated flood fill brdcast */
ipx_address ipx_dest __attribute__ ((packed));
ipx_address ipx_source __attribute__ ((packed));
};
unsigned char ir_routed;
unsigned char ir_router_node[IPX_NODE_LEN];
struct ipx_route *ir_next;
+ atomic_t refcnt;
} ipx_route;
#ifdef __KERNEL__
u8 ipx_tctrl;
u32 ipx_dest_net;
u32 ipx_source_net;
- int last_hop_index;
+ struct {
+ u32 netnum;
+ int index;
+ } last_hop;
};
#endif
#define IPX_MIN_EPHEMERAL_SOCKET 0x4000
} else {
/* One of our sibling threads was faster, back out. */
page_cache_release(new_page);
+ return 1;
}
/* no need to invalidate: a not-present page shouldn't be cached */
pte_t *new;
/* "fast" allocation can happen without dropping the lock.. */
- new = pte_alloc_one_fast();
+ new = pte_alloc_one_fast(address);
if (!new) {
spin_unlock(&mm->page_table_lock);
- new = pte_alloc_one();
+ new = pte_alloc_one(address);
spin_lock(&mm->page_table_lock);
if (!new)
return NULL;
return __kmem_cache_alloc(flags & GFP_DMA ?
csizep->cs_dmacachep : csizep->cs_cachep, flags);
}
- BUG(); // too big size
return NULL;
}
dev->proc_entry->proc_fops = &proc_dev_atm_operations;
dev->proc_entry->owner = THIS_MODULE;
return 0;
- kfree(dev->proc_entry);
fail0:
kfree(dev->proc_name);
fail1:
return -EINVAL;
/* user issuing the ioctl must be a super one :) */
- if (!suser())
+ if (!capable(CAP_SYS_ADMIN))
return -EPERM;
/* Device must have a divert_blk member NOT null */
struct sock *sk = sock->sk;
struct dn_scp *scp = DN_SK(sk);
struct linkinfo_dn link;
- int r_len = *optlen;
+ int r_len;
void *r_data = NULL;
- int val;
+ unsigned int val;
+	if (get_user(r_len, optlen))
+ return -EFAULT;
+
switch(optname) {
case DSO_CONDATA:
if (r_len > sizeof(struct optdata_dn))
/*
* NET3 IP device support routines.
*
- * Version: $Id: devinet.c,v 1.40 2001/02/05 06:03:47 davem Exp $
+ * Version: $Id: devinet.c,v 1.41 2001/02/18 09:26:26 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
return NOTIFY_DONE;
}
-struct notifier_block ip_netdev_notifier={
- inetdev_event,
- NULL,
- 0
+struct notifier_block ip_netdev_notifier = {
+ notifier_call: inetdev_event,
};
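The rewritten initializer above uses GCC's `field:` labeled-element syntax so that only `notifier_call` is set and the remaining members are zero-filled. The same idea in a small self-contained sketch, using the standard C99 spelling `.field =`; `struct nb` is a stand-in for `struct notifier_block`:

```c
#include <stddef.h>

/* Stand-in for struct notifier_block: with a designated initializer
 * only the named member is set; the rest are zero-filled. */
struct nb {
	int (*notifier_call)(void *ptr);
	struct nb *next;
	int priority;
};

static int dummy_event(void *ptr)
{
	(void)ptr;
	return 0;
}

static struct nb ip_netdev_notifier_sketch = {
	.notifier_call = dummy_event,	/* C99 form of GCC's "notifier_call:" */
};
```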
#ifdef CONFIG_RTNETLINK
if (ipq_hash[i] == NULL)
continue;
- write_lock(&ipfrag_lock);
+ read_lock(&ipfrag_lock);
if ((qp = ipq_hash[i]) != NULL) {
/* find the oldest queue for this hash bucket */
while (qp->next)
qp = qp->next;
- __ipq_unlink(qp);
- write_unlock(&ipfrag_lock);
+ atomic_inc(&qp->refcnt);
+ read_unlock(&ipfrag_lock);
spin_lock(&qp->lock);
- if (del_timer(&qp->timer))
- atomic_dec(&qp->refcnt);
- qp->last_in |= COMPLETE;
+ if (!(qp->last_in&COMPLETE))
+ ipq_kill(qp);
spin_unlock(&qp->lock);
ipq_put(qp);
progress = 1;
continue;
}
- write_unlock(&ipfrag_lock);
+ read_unlock(&ipfrag_lock);
}
} while (progress);
}
/*
* sysctl_net_ipv4.c: sysctl interface to net IPV4 subsystem.
*
- * $Id: sysctl_net_ipv4.c,v 1.47 2000/10/19 15:51:02 davem Exp $
+ * $Id: sysctl_net_ipv4.c,v 1.48 2001/02/23 01:39:05 davem Exp $
*
* Begun April 1, 1996, Mike Shaver.
* Added /proc/sys/net/ipv4 directory entry (empty =) ). [MS]
extern int inet_peer_gc_mintime;
extern int inet_peer_gc_maxtime;
+#ifdef CONFIG_SYSCTL
static int tcp_retr1_max = 255;
-
static int ip_local_port_range_min[] = { 1, 1 };
static int ip_local_port_range_max[] = { 65535, 65535 };
+#endif
struct ipv4_config ipv4_config;
* Authors:
* Pedro Roque <roque@di.fc.ul.pt>
*
- * $Id: ip6_fib.c,v 1.22 2000/09/12 00:38:34 davem Exp $
+ * $Id: ip6_fib.c,v 1.23 2001/03/19 20:31:17 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
#endif
static void fib6_prune_clones(struct fib6_node *fn, struct rt6_info *rt);
-static void fib6_repair_tree(struct fib6_node *fn);
+static struct fib6_node * fib6_repair_tree(struct fib6_node *fn);
/*
* A routing update causes an increase of the serial number on the
* is the node we want to try and remove.
*/
-static void fib6_repair_tree(struct fib6_node *fn)
+static struct fib6_node * fib6_repair_tree(struct fib6_node *fn)
{
int children;
int nstate;
}
#endif
atomic_inc(&fn->leaf->rt6i_ref);
- return;
+ return fn->parent;
}
pn = fn->parent;
node_free(fn);
if (pn->fn_flags&RTN_RTINFO || SUBTREE(pn))
- return;
+ return pn;
rt6_release(pn->leaf);
pn->leaf = NULL;
if (fn->leaf == NULL) {
fn->fn_flags &= ~RTN_RTINFO;
rt6_stats.fib_route_nodes--;
- fib6_repair_tree(fn);
+ fn = fib6_repair_tree(fn);
+ }
+
+ if (atomic_read(&rt->rt6i_ref) != 1) {
+ /* This route is used as dummy address holder in some split
+ * nodes. It is not leaked, but it still holds other resources,
+ * which must be released in time. So, scan ascendant nodes
+ * and replace dummy references to this route with references
+ * to still alive ones.
+ */
+ while (fn) {
+ if (!(fn->fn_flags&RTN_RTINFO) && fn->leaf == rt) {
+ fn->leaf = fib6_find_prefix(fn);
+ atomic_inc(&fn->leaf->rt6i_ref);
+ rt6_release(rt);
+ }
+ fn = fn->parent;
+ }
+		/* No more references are possible at this point. */
+ if (atomic_read(&rt->rt6i_ref) != 1) BUG();
}
#ifdef CONFIG_RTNETLINK
*
* Based on linux/net/ipv4/ip_sockglue.c
*
- * $Id: ipv6_sockglue.c,v 1.34 2000/11/28 13:44:28 davem Exp $
+ * $Id: ipv6_sockglue.c,v 1.36 2001/02/26 05:59:07 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
if (optlen == 0)
goto update;
+		/* 1K would surely not be enough: at 2K per standard
+		 * header this quickly reaches 16K, so cap at 64K.
+		 */
+ retv = -EINVAL;
+ if (optlen > 64*1024)
+ break;
+
opt = sock_kmalloc(sk, sizeof(*opt) + optlen, GFP_KERNEL);
retv = -ENOBUFS;
if (opt == NULL)
if (ip6_frag_hash[i] == NULL)
continue;
- write_lock(&ip6_frag_lock);
+ read_lock(&ip6_frag_lock);
if ((fq = ip6_frag_hash[i]) != NULL) {
/* find the oldest queue for this hash bucket */
while (fq->next)
fq = fq->next;
- __fq_unlink(fq);
- write_unlock(&ip6_frag_lock);
+ atomic_inc(&fq->refcnt);
+ read_unlock(&ip6_frag_lock);
spin_lock(&fq->lock);
- if (del_timer(&fq->timer))
- atomic_dec(&fq->refcnt);
- fq->last_in |= COMPLETE;
+ if (!(fq->last_in&COMPLETE))
+ fq_kill(fq);
spin_unlock(&fq->lock);
fq_put(fq);
progress = 1;
continue;
}
- write_unlock(&ip6_frag_lock);
+ read_unlock(&ip6_frag_lock);
}
} while (progress);
}
struct sock *sk2, **skp;
struct tcp_tw_bucket *tw;
- write_lock(&head->lock);
+ write_lock_bh(&head->lock);
for(skp = &(head + tcp_ehash_size)->chain; (sk2=*skp)!=NULL; skp = &sk2->next) {
tw = (struct tcp_tw_bucket*)sk2;
* Arnaldo Carvalho de Melo <acme@conectiva.com.br>,
* December, 2000
* Revision 044: Call ipxitf_hold on NETDEV_UP (acme)
+ * Revision 045: fix PPROP routing bug (acme)
+ * Revision 046: Further fixes to PPROP, ipxitf_create_internal was
+ * doing an unneeded MOD_INC_USE_COUNT, implement
+ *			sysctl for ipx_pprop_broadcasting, fix the ipx sysctl
+ * handling, making it dynamic, some cleanups, thanks to
+ * Petr Vandrovec for review and good suggestions. (acme)
*
* Protect the module by a MOD_INC_USE_COUNT/MOD_DEC_USE_COUNT
* pair. Also, now usage count is managed this way
*/
#include <linux/config.h>
-#if defined (CONFIG_IPX) || defined (CONFIG_IPX_MODULE)
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/if_arp.h>
-#ifdef MODULE
-static void ipx_proto_finito(void);
-#endif /* def MODULE */
+extern void ipx_register_sysctl(void);
+extern void ipx_unregister_sysctl(void);
/* Configuration Variables */
static unsigned char ipxcfg_max_hops = 16;
static char ipxcfg_auto_select_primary;
static char ipxcfg_auto_create_interfaces;
+static int sysctl_ipx_pprop_broadcasting = 1;
/* Global Variables */
static struct datalink_proto *p8022_datalink;
static ipx_interface *ipx_primary_net;
static ipx_interface *ipx_internal_net;
+#define IPX_SKB_CB(__skb) ((struct ipx_cb *)&((__skb)->cb[0]))
+
#undef IPX_REFCNT_DEBUG
#ifdef IPX_REFCNT_DEBUG
atomic_t ipx_sock_nr;
return skb2;
}
-/* caller must hold a reference to intrfc */
+/* caller must hold a reference to intrfc and the skb has to be unshared */
static int ipxitf_send(ipx_interface *intrfc, struct sk_buff *skb, char *node)
{
struct ipxhdr *ipx = skb->nh.ipxh;
- struct ipx_cb *cb = (struct ipx_cb *) skb->cb;
struct net_device *dev = intrfc->if_dev;
struct datalink_proto *dl = intrfc->if_dlink;
char dest_node[IPX_NODE_LEN];
int send_to_wire = 1;
int addr_len;
+
+ ipx->ipx_tctrl = IPX_SKB_CB(skb)->ipx_tctrl;
+ ipx->ipx_dest.net = IPX_SKB_CB(skb)->ipx_dest_net;
+ ipx->ipx_source.net = IPX_SKB_CB(skb)->ipx_source_net;
+
+ /* see if we need to include the netnum in the route list */
+ if (IPX_SKB_CB(skb)->last_hop.index >= 0) {
+ u32 *last_hop = (u32 *)(((u8 *) skb->data) +
+ sizeof(struct ipxhdr) +
+ IPX_SKB_CB(skb)->last_hop.index *
+ sizeof(u32));
+ *last_hop = IPX_SKB_CB(skb)->last_hop.netnum;
+ IPX_SKB_CB(skb)->last_hop.index = -1;
+ }
/*
* We need to know how many skbuffs it will take to send out this
* up clones.
*/
- if (cb->ipx_dest_net == intrfc->if_netnum) {
+ if (ipx->ipx_dest.net == intrfc->if_netnum) {
/*
* To our own node, loop and free the original.
* The internal net will receive on all node address.
* We are still charging the sender. Which is right - the driver
* free will handle this fairly.
*/
- if (cb->ipx_source_net != intrfc->if_netnum) {
+ if (ipx->ipx_source.net != intrfc->if_netnum) {
/*
* Unshare the buffer before modifying the count in
* case its a flood or tcpdump
skb = skb_unshare(skb, GFP_ATOMIC);
if (!skb)
return 0;
- if (++(cb->ipx_tctrl) > ipxcfg_max_hops)
+ if (++ipx->ipx_tctrl > ipxcfg_max_hops)
send_to_wire = 0;
}
if (!skb)
return 0;
- ipx->ipx_tctrl = cb->ipx_tctrl;
- ipx->ipx_dest.net = cb->ipx_dest_net;
- ipx->ipx_source.net = cb->ipx_source_net;
- /* see if we need to include the netnum in the route list */
- if (cb->last_hop_index >= 0) {
- u32 *last_hop = (u32 *)(((u8 *) skb->data) +
- sizeof(struct ipxhdr) + cb->last_hop_index *
- sizeof(u32));
- *last_hop = intrfc->if_netnum;
- }
-
/* set up data link and physical headers */
skb->dev = dev;
skb->protocol = htons(ETH_P_IPX);
static const char * ipx_frame_name(unsigned short);
static const char * ipx_device_name(ipx_interface *);
+static void ipxitf_discover_netnum(ipx_interface *intrfc, struct sk_buff *skb);
+static int ipxitf_pprop(ipx_interface *intrfc, struct sk_buff *skb);
static int ipxitf_rcv(ipx_interface *intrfc, struct sk_buff *skb)
{
struct ipxhdr *ipx = skb->nh.ipxh;
- struct ipx_cb *cb = (struct ipx_cb *) skb->cb;
int ret = 0;
ipxitf_hold(intrfc);
/* See if we should update our network number */
- if (!intrfc->if_netnum && /* net number of intrfc not known yet */
- cb->ipx_source_net == cb->ipx_dest_net && /* intra packet */
- cb->ipx_source_net) {
- ipx_interface *i = ipxitf_find_using_net(cb->ipx_source_net);
- /* NB: NetWare servers lie about their hop count so we
- * dropped the test based on it. This is the best way
- * to determine this is a 0 hop count packet.
- */
- if (!i) {
- intrfc->if_netnum = cb->ipx_source_net;
- ipxitf_add_local_route(intrfc);
- } else {
- printk(KERN_WARNING "IPX: Network number collision %lx\n %s %s and %s %s\n",
- (long unsigned int) htonl(cb->ipx_source_net),
- ipx_device_name(i),
- ipx_frame_name(i->if_dlink_type),
- ipx_device_name(intrfc),
- ipx_frame_name(intrfc->if_dlink_type));
- ipxitf_put(i);
- }
- }
+ if (!intrfc->if_netnum) /* net number of intrfc not known yet */
+ ipxitf_discover_netnum(intrfc, skb);
- cb->last_hop_index = -1;
-
- if (ipx->ipx_type == IPX_TYPE_PPROP && cb->ipx_tctrl < 8 &&
- skb->pkt_type != PACKET_OTHERHOST &&
- /* header + 8 network numbers */
- ntohs(ipx->ipx_pktsize) >= sizeof(struct ipxhdr) + 8 * 4) {
- int i;
- ipx_interface *ifcs;
- struct sk_buff *skb2;
- char *c = ((char *) skb->data) + sizeof(struct ipxhdr);
- u32 *l = (u32 *) c;
-
- /* Dump packet if already seen this net */
- for (i = 0; i < cb->ipx_tctrl; i++)
- if (*l++ == intrfc->if_netnum)
- break;
-
- if (i == cb->ipx_tctrl) {
- /* < 8 hops && input itfc not in list */
- /* insert recvd netnum into list */
- cb->last_hop_index = i;
- cb->ipx_tctrl++;
- /* xmit on all other interfaces... */
- spin_lock_bh(&ipx_interfaces_lock);
- for (ifcs = ipx_interfaces; ifcs;
- ifcs = ifcs->if_next) {
- /* Except unconfigured interfaces */
- if (!ifcs->if_netnum)
- continue;
-
- /* That aren't in the list */
- l = (__u32 *) c;
- for (i = 0; i <= cb->ipx_tctrl; i++)
- if (ifcs->if_netnum == *l++)
- break;
- if (i - 1 == cb->ipx_tctrl) {
- cb->ipx_dest_net = ifcs->if_netnum;
- skb2=skb_clone(skb, GFP_ATOMIC);
- if (skb2)
- ipxrtr_route_skb(skb2);
- }
- }
- spin_unlock_bh(&ipx_interfaces_lock);
- }
+ IPX_SKB_CB(skb)->last_hop.index = -1;
+ if (ipx->ipx_type == IPX_TYPE_PPROP) {
+ ret = ipxitf_pprop(intrfc, skb);
+ if (ret)
+ goto out_free_skb;
}
- if (!cb->ipx_dest_net)
- cb->ipx_dest_net = intrfc->if_netnum;
- if (!cb->ipx_source_net)
- cb->ipx_source_net = intrfc->if_netnum;
+ /* local processing follows */
+ if (!IPX_SKB_CB(skb)->ipx_dest_net)
+ IPX_SKB_CB(skb)->ipx_dest_net = intrfc->if_netnum;
+ if (!IPX_SKB_CB(skb)->ipx_source_net)
+ IPX_SKB_CB(skb)->ipx_source_net = intrfc->if_netnum;
- if (intrfc->if_netnum != cb->ipx_dest_net) {
+ /* it doesn't make sense to route a pprop packet, there's no meaning
+ * in the ipx_dest_net for such packets */
+ if (ipx->ipx_type != IPX_TYPE_PPROP &&
+ intrfc->if_netnum != IPX_SKB_CB(skb)->ipx_dest_net) {
/* We only route point-to-point packets. */
if (skb->pkt_type == PACKET_HOST) {
- skb=skb_unshare(skb, GFP_ATOMIC);
+ skb = skb_unshare(skb, GFP_ATOMIC);
if (skb)
ret = ipxrtr_route_skb(skb);
goto out_intrfc;
return ret;
}
+static void ipxitf_discover_netnum(ipx_interface *intrfc, struct sk_buff *skb)
+{
+ const struct ipx_cb *cb = IPX_SKB_CB(skb);
+
+ /* see if this is an intra packet: source_net == dest_net */
+ if (cb->ipx_source_net == cb->ipx_dest_net && cb->ipx_source_net) {
+ ipx_interface *i = ipxitf_find_using_net(cb->ipx_source_net);
+ /* NB: NetWare servers lie about their hop count so we
+ * dropped the test based on it. This is the best way
+ * to determine this is a 0 hop count packet. */
+ if (!i) {
+ intrfc->if_netnum = cb->ipx_source_net;
+ ipxitf_add_local_route(intrfc);
+ } else {
+ printk(KERN_WARNING "IPX: Network number collision "
+ "%lx\n %s %s and %s %s\n",
+ (unsigned long) htonl(cb->ipx_source_net),
+ ipx_device_name(i),
+ ipx_frame_name(i->if_dlink_type),
+ ipx_device_name(intrfc),
+ ipx_frame_name(intrfc->if_dlink_type));
+ ipxitf_put(i);
+ }
+ }
+}
+
+/**
+ * ipxitf_pprop - Process packet propagation IPX packet type 0x14, used for
+ * NetBIOS broadcasts
+ * @intrfc: IPX interface receiving this packet
+ * @skb: Received packet
+ *
+ * Checks if the packet is valid: if it is more than %IPX_MAX_PPROP_HOPS hops
+ * or smaller than an IPX header + the room for %IPX_MAX_PPROP_HOPS hops we
+ * drop it, not even processing it locally; if it has exactly
+ * %IPX_MAX_PPROP_HOPS hops we don't broadcast it, but still process it
+ * locally. See chapter 5 of Novell's "IPX RIP and SAP Router Specification",
+ * Part Number 107-000029-001.
+ *
+ * If it is valid, check if we have pprop broadcasting enabled by the user,
+ * if not, just return zero for local processing.
+ *
+ * If it is enabled check the packet and don't broadcast it if we have already
+ * seen this packet.
+ *
+ * Broadcast: send it to the interfaces that aren't on the packet visited nets
+ * array, just after the IPX header.
+ *
+ * Returns -EINVAL for invalid packets, so that the calling function drops
+ * the packet without local processing. 0 if packet is to be locally processed.
+ */
+static int ipxitf_pprop(ipx_interface *intrfc, struct sk_buff *skb)
+{
+ struct ipxhdr *ipx = skb->nh.ipxh;
+ int i, ret = -EINVAL;
+ ipx_interface *ifcs;
+ char *c;
+ u32 *l;
+
+ /* Illegal packet - too many hops or too short */
+ /* We decide to throw it away: no broadcasting, no local processing.
+ * NetBIOS unaware implementations route them as normal packets -
+ * tctrl <= 15, any data payload... */
+ if (IPX_SKB_CB(skb)->ipx_tctrl > IPX_MAX_PPROP_HOPS ||
+ ntohs(ipx->ipx_pktsize) < sizeof(struct ipxhdr) +
+ IPX_MAX_PPROP_HOPS * sizeof(u32))
+ goto out;
+ /* are we broadcasting this damn thing? */
+ ret = 0;
+ if (!sysctl_ipx_pprop_broadcasting)
+ goto out;
+	/* We don't broadcast the packet on the IPX_MAX_PPROP_HOPS hop, but
+	 * we do process it locally. All previous hops both broadcasted it
+	 * and processed it locally. */
+ if (IPX_SKB_CB(skb)->ipx_tctrl == IPX_MAX_PPROP_HOPS)
+ goto out;
+
+ c = ((u8 *) ipx) + sizeof(struct ipxhdr);
+ l = (u32 *) c;
+
+ /* Don't broadcast packet if already seen this net */
+ for (i = 0; i < IPX_SKB_CB(skb)->ipx_tctrl; i++)
+ if (*l++ == intrfc->if_netnum)
+ goto out;
+
+ /* < IPX_MAX_PPROP_HOPS hops && input interface not in list. Save the
+ * position where we will insert recvd netnum into list, later on,
+ * in ipxitf_send */
+ IPX_SKB_CB(skb)->last_hop.index = i;
+ IPX_SKB_CB(skb)->last_hop.netnum = intrfc->if_netnum;
+ /* xmit on all other interfaces... */
+ spin_lock_bh(&ipx_interfaces_lock);
+ for (ifcs = ipx_interfaces; ifcs; ifcs = ifcs->if_next) {
+ /* Except unconfigured interfaces */
+ if (!ifcs->if_netnum)
+ continue;
+
+ /* That aren't in the list */
+ if (ifcs == intrfc)
+ continue;
+ l = (__u32 *) c;
+ /* don't consider the last entry in the packet list,
+ * it is our netnum, and it is not there yet */
+ for (i = 0; i < IPX_SKB_CB(skb)->ipx_tctrl; i++)
+ if (ifcs->if_netnum == *l++)
+ break;
+ if (i == IPX_SKB_CB(skb)->ipx_tctrl) {
+ struct sk_buff *s = skb_copy(skb, GFP_ATOMIC);
+
+ if (s) {
+ IPX_SKB_CB(s)->ipx_dest_net = ifcs->if_netnum;
+ ipxrtr_route_skb(s);
+ }
+ }
+ }
+ spin_unlock_bh(&ipx_interfaces_lock);
+out: return ret;
+}
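The visited-nets scan inside ipxitf_pprop() is a linear search of the u32 network list that follows the IPX header. A stand-alone sketch of that check, with hypothetical names:

```c
#include <stdint.h>

/* Return nonzero if netnum already appears among the first n entries
 * of the visited-networks list carried just after the IPX header. */
static int pprop_net_seen(const uint32_t *visited, int n, uint32_t netnum)
{
	int i;

	for (i = 0; i < n; i++)
		if (visited[i] == netnum)
			return 1;
	return 0;
}
```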
+
static void ipxitf_insert(ipx_interface *intrfc)
{
ipx_interface *i;
ipx_primary_net = intrfc;
}
+static ipx_interface *ipxitf_alloc(struct net_device *dev, __u32 netnum,
+ unsigned short dlink_type,
+ struct datalink_proto *dlink,
+ unsigned char internal, int ipx_offset)
+{
+ ipx_interface *intrfc = kmalloc(sizeof(ipx_interface), GFP_ATOMIC);
+
+ if (intrfc) {
+ intrfc->if_dev = dev;
+ intrfc->if_netnum = netnum;
+ intrfc->if_dlink_type = dlink_type;
+ intrfc->if_dlink = dlink;
+ intrfc->if_internal = internal;
+ intrfc->if_ipx_offset = ipx_offset;
+ intrfc->if_sknum = IPX_MIN_EPHEMERAL_SOCKET;
+ intrfc->if_sklist = NULL;
+ atomic_set(&intrfc->refcnt, 1);
+ spin_lock_init(&intrfc->if_sklist_lock);
+ MOD_INC_USE_COUNT;
+ }
+
+ return intrfc;
+}
+
static int ipxitf_create_internal(ipx_interface_definition *idef)
{
ipx_interface *intrfc;
ipxitf_put(intrfc);
return -EADDRINUSE;
}
-
- intrfc = kmalloc(sizeof(ipx_interface),GFP_ATOMIC);
+ intrfc = ipxitf_alloc(NULL, idef->ipx_network, 0, NULL, 1, 0);
if (!intrfc)
return -EAGAIN;
- intrfc->if_dev = NULL;
- intrfc->if_netnum = idef->ipx_network;
- intrfc->if_dlink_type = 0;
- intrfc->if_dlink = NULL;
- intrfc->if_sklist = NULL;
- intrfc->if_internal = 1;
- intrfc->if_ipx_offset = 0;
- intrfc->if_sknum = IPX_MIN_EPHEMERAL_SOCKET;
memcpy((char *)&(intrfc->if_node), idef->ipx_node, IPX_NODE_LEN);
ipx_internal_net = ipx_primary_net = intrfc;
- spin_lock_init(&intrfc->if_sklist_lock);
- atomic_set(&intrfc->refcnt, 1);
- MOD_INC_USE_COUNT;
ipxitf_hold(intrfc);
ipxitf_insert(intrfc);
case IPX_FRAME_NONE:
default:
- break;
+ err = -EPROTONOSUPPORT;
+ goto out_dev;
}
err = -ENETDOWN;
if (dev->addr_len > IPX_NODE_LEN)
goto out_dev;
- err = -EPROTONOSUPPORT;
- if (!datalink)
- goto out_dev;
-
intrfc = ipxitf_find_using_phys(dev, dlink_type);
if (!intrfc) {
/* Ok now create */
- intrfc = kmalloc(sizeof(ipx_interface), GFP_ATOMIC);
+ intrfc = ipxitf_alloc(dev, idef->ipx_network, dlink_type,
+ datalink, 0, dev->hard_header_len +
+ datalink->header_length);
err = -EAGAIN;
if (!intrfc)
goto out_dev;
- intrfc->if_dev = dev;
- intrfc->if_netnum = idef->ipx_network;
- intrfc->if_dlink_type = dlink_type;
- intrfc->if_dlink = datalink;
- intrfc->if_sklist = NULL;
- intrfc->if_sknum = IPX_MIN_EPHEMERAL_SOCKET;
/* Setup primary if necessary */
if (idef->ipx_special == IPX_PRIMARY)
ipx_primary_net = intrfc;
- intrfc->if_internal = 0;
- intrfc->if_ipx_offset = dev->hard_header_len + datalink->header_length;
if (!memcmp(idef->ipx_node, "\000\000\000\000\000\000",
IPX_NODE_LEN)) {
memset(intrfc->if_node, 0, IPX_NODE_LEN);
dev->dev_addr, dev->addr_len);
} else
memcpy(intrfc->if_node, idef->ipx_node, IPX_NODE_LEN);
- spin_lock_init(&intrfc->if_sklist_lock);
- atomic_set(&intrfc->refcnt, 1);
- MOD_INC_USE_COUNT;
ipxitf_hold(intrfc);
ipxitf_insert(intrfc);
}
static ipx_interface *ipxitf_auto_create(struct net_device *dev,
unsigned short dlink_type)
{
- struct datalink_proto *datalink = NULL;
- ipx_interface *intrfc;
+ ipx_interface *intrfc = NULL;
+ struct datalink_proto *datalink;
+
+ if (!dev)
+ goto out;
+
+ /* Check addresses are suitable */
+ if (dev->addr_len > IPX_NODE_LEN)
+ goto out;
switch (htons(dlink_type)) {
case ETH_P_IPX:
break;
default:
- return NULL;
+ goto out;
}
- if (!dev)
- return NULL;
-
- /* Check addresses are suitable */
- if (dev->addr_len > IPX_NODE_LEN)
- return NULL;
+ intrfc = ipxitf_alloc(dev, 0, dlink_type, datalink, 0,
+ dev->hard_header_len + datalink->header_length);
- intrfc = kmalloc(sizeof(ipx_interface), GFP_ATOMIC);
if (intrfc) {
- intrfc->if_dev = dev;
- intrfc->if_netnum = 0;
- intrfc->if_dlink_type = dlink_type;
- intrfc->if_dlink = datalink;
- intrfc->if_sklist = NULL;
- intrfc->if_internal = 0;
- intrfc->if_sknum = IPX_MIN_EPHEMERAL_SOCKET;
- intrfc->if_ipx_offset = dev->hard_header_len +
- datalink->header_length;
memset(intrfc->if_node, 0, IPX_NODE_LEN);
memcpy((char *)&(intrfc->if_node[IPX_NODE_LEN-dev->addr_len]),
dev->dev_addr, dev->addr_len);
spin_lock_init(&intrfc->if_sklist_lock);
atomic_set(&intrfc->refcnt, 1);
- MOD_INC_USE_COUNT;
ipxitf_insert(intrfc);
dev_hold(dev);
}
- return intrfc;
+out: return intrfc;
}
static int ipxitf_ioctl(unsigned int cmd, void *arg)
* *
\**************************************************************************/
+static inline void ipxrtr_hold(ipx_route *rt)
+{
+ atomic_inc(&rt->refcnt);
+}
+
+static inline void ipxrtr_put(ipx_route *rt)
+{
+ if (atomic_dec_and_test(&rt->refcnt))
+ kfree(rt);
+}
+
static ipx_route *ipxrtr_lookup(__u32 net)
{
ipx_route *r;
read_lock_bh(&ipx_routes_lock);
for (r = ipx_routes; r && r->ir_net != net; r = r->ir_next)
;
+ if (r)
+ ipxrtr_hold(r);
read_unlock_bh(&ipx_routes_lock);
return r;
static int ipxrtr_add_route(__u32 network, ipx_interface *intrfc, unsigned char *node)
{
ipx_route *rt;
+ int ret;
/* Get a route structure; either existing or create */
rt = ipxrtr_lookup(network);
if (!rt) {
rt = kmalloc(sizeof(ipx_route),GFP_ATOMIC);
+ ret = -EAGAIN;
if (!rt)
- return -EAGAIN;
+ goto out;
+ atomic_set(&rt->refcnt, 1);
+ ipxrtr_hold(rt);
write_lock_bh(&ipx_routes_lock);
rt->ir_next = ipx_routes;
ipx_routes = rt;
write_unlock_bh(&ipx_routes_lock);
+ } else {
+ ret = -EEXIST;
+ if (intrfc == ipx_internal_net)
+ goto out_put;
}
- else if (intrfc == ipx_internal_net)
- return -EEXIST;
rt->ir_net = network;
rt->ir_intrfc = intrfc;
rt->ir_routed = 1;
}
- return 0;
+ ret = 0;
+out_put:
+ ipxrtr_put(rt);
+out: return ret;
}
static void ipxrtr_del_routes(ipx_interface *intrfc)
for (r = &ipx_routes; (tmp = *r) != NULL;) {
if (tmp->ir_intrfc == intrfc) {
*r = tmp->ir_next;
- kfree(tmp);
+ ipxrtr_put(tmp);
} else
r = &(tmp->ir_next);
}
goto out;
*r = tmp->ir_next;
- kfree(tmp);
+ ipxrtr_put(tmp);
err = 0;
goto out;
}
struct sk_buff *skb;
ipx_interface *intrfc;
struct ipxhdr *ipx;
- struct ipx_cb *cb;
int size;
int ipx_offset;
ipx_route *rt = NULL;
intrfc = ipx_primary_net;
} else {
rt = ipxrtr_lookup(usipx->sipx_network);
+ err = -ENETUNREACH;
if (!rt)
- return -ENETUNREACH;
-
+ goto out;
intrfc = rt->ir_intrfc;
}
ipx_offset = intrfc->if_ipx_offset;
size = sizeof(struct ipxhdr) + len + ipx_offset;
- skb = sock_alloc_send_skb(sk, size, 0, noblock, &err);
+ skb = sock_alloc_send_skb(sk, size, noblock, &err);
if (!skb)
- goto out;
+ goto out_put;
skb_reserve(skb,ipx_offset);
skb->sk = sk;
- cb = (struct ipx_cb *) skb->cb;
/* Fill in IPX header */
ipx = (struct ipxhdr *)skb_put(skb, sizeof(struct ipxhdr));
ipx->ipx_pktsize= htons(len + sizeof(struct ipxhdr));
- cb->ipx_tctrl = 0;
+ IPX_SKB_CB(skb)->ipx_tctrl = 0;
ipx->ipx_type = usipx->sipx_type;
skb->h.raw = (void *)skb->nh.ipxh = ipx;
- cb->last_hop_index = -1;
-
+ IPX_SKB_CB(skb)->last_hop.index = -1;
#ifdef CONFIG_IPX_INTERN
- cb->ipx_source_net = sk->protinfo.af_ipx.intrfc->if_netnum;
+ IPX_SKB_CB(skb)->ipx_source_net = sk->protinfo.af_ipx.intrfc->if_netnum;
memcpy(ipx->ipx_source.node, sk->protinfo.af_ipx.node, IPX_NODE_LEN);
#else
err = ntohs(sk->protinfo.af_ipx.port);
if (err == 0x453 || err == 0x452) {
/* RIP/SAP special handling for mars_nwe */
- cb->ipx_source_net = intrfc->if_netnum;
+ IPX_SKB_CB(skb)->ipx_source_net = intrfc->if_netnum;
memcpy(ipx->ipx_source.node, intrfc->if_node, IPX_NODE_LEN);
} else {
- cb->ipx_source_net = sk->protinfo.af_ipx.intrfc->if_netnum;
+ IPX_SKB_CB(skb)->ipx_source_net =
+ sk->protinfo.af_ipx.intrfc->if_netnum;
memcpy(ipx->ipx_source.node, sk->protinfo.af_ipx.intrfc->if_node, IPX_NODE_LEN);
}
#endif /* CONFIG_IPX_INTERN */
ipx->ipx_source.sock = sk->protinfo.af_ipx.port;
- cb->ipx_dest_net = usipx->sipx_network;
+ IPX_SKB_CB(skb)->ipx_dest_net = usipx->sipx_network;
memcpy(ipx->ipx_dest.node,usipx->sipx_node,IPX_NODE_LEN);
ipx->ipx_dest.sock = usipx->sipx_port;
err = memcpy_fromiovec(skb_put(skb,len),iov,len);
if (err) {
kfree_skb(skb);
- goto out;
+ goto out_put;
}
/* Apply checksum. Not allowed on 802.3 links. */
err = ipxitf_send(intrfc, skb, (rt && rt->ir_routed) ?
rt->ir_router_node : ipx->ipx_dest.node);
-out: ipxitf_put(intrfc);
- return err;
+out_put:
+ ipxitf_put(intrfc);
+ if (rt)
+ ipxrtr_put(rt);
+out: return err;
}
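The hunk above converts early `return`s into the kernel's goto-based unwind so the interface and route references taken by the lookups are dropped on every exit path. A minimal userspace sketch of that pattern, with mock refcounted objects standing in for `ipx_interface` and `ipx_route` (names and error values are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Mock reference-counted object standing in for ipx_interface / ipx_route. */
struct obj { int refcnt; };

static void obj_put(struct obj *o) { if (o) o->refcnt--; }

/* Every exit after the lookups goes through out_put, so both references
 * are dropped exactly once; exits before the lookups use the plain out
 * label.  fail_alloc stands in for a sock_alloc_send_skb failure. */
static int do_send(struct obj *intrfc, struct obj *rt, int fail_alloc)
{
	int err = -1;			/* stands in for -ENETUNREACH */

	if (!intrfc)
		goto out;		/* nothing held yet */

	intrfc->refcnt++;		/* reference taken by the lookup */
	if (rt)
		rt->refcnt++;

	if (fail_alloc) {
		err = -2;		/* allocation failed */
		goto out_put;
	}
	err = 0;			/* successful send */
out_put:
	obj_put(intrfc);
	if (rt)
		obj_put(rt);
out:
	return err;
}
```

Whichever branch is taken, the refcounts end where they started, which is exactly the leak the `out_put`/`ipxrtr_put(rt)` additions fix.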
-
+
+/* the skb has to be unshared, we'll end up calling ipxitf_send, that'll
+ * modify the packet */
int ipxrtr_route_skb(struct sk_buff *skb)
{
struct ipxhdr *ipx = skb->nh.ipxh;
- struct ipx_cb *cb = (struct ipx_cb *) skb->cb;
- ipx_route *r = ipxrtr_lookup(cb->ipx_dest_net);
+ ipx_route *r = ipxrtr_lookup(IPX_SKB_CB(skb)->ipx_dest_net);
if (!r) { /* no known route */
kfree_skb(skb);
ipxitf_send(r->ir_intrfc, skb, (r->ir_routed) ?
r->ir_router_node : ipx->ipx_dest.node);
ipxitf_put(r->ir_intrfc);
+ ipxrtr_put(r);
return 0;
}
{
struct sock *sk = sock->sk;
struct sockaddr_ipx *addr;
+ ipx_route *rt;
sk->state = TCP_CLOSE;
sock->state = SS_UNCONNECTED;
/* We can either connect to primary network or somewhere
* we can route to */
- if (!(!addr->sipx_network && ipx_primary_net) &&
- !ipxrtr_lookup(addr->sipx_network))
+ rt = ipxrtr_lookup(addr->sipx_network);
+ if (!rt && !(!addr->sipx_network && ipx_primary_net))
return -ENETUNREACH;
sk->protinfo.af_ipx.dest_addr.net = addr->sipx_network;
sk->state = TCP_ESTABLISHED;
}
+ if (rt)
+ ipxrtr_put(rt);
return 0;
}
/* NULL here for pt means the packet was looped back */
ipx_interface *intrfc;
struct ipxhdr *ipx;
- struct ipx_cb *cb;
u16 ipx_pktsize;
int ret;
ipx->ipx_checksum != ipx_cksum(ipx, ipx_pktsize))
goto drop;
- cb = (struct ipx_cb *) skb->cb;
- cb->ipx_tctrl = ipx->ipx_tctrl;
- cb->ipx_dest_net = ipx->ipx_dest.net;
- cb->ipx_source_net = ipx->ipx_source.net;
+ IPX_SKB_CB(skb)->ipx_tctrl = ipx->ipx_tctrl;
+ IPX_SKB_CB(skb)->ipx_dest_net = ipx->ipx_dest.net;
+ IPX_SKB_CB(skb)->ipx_source_net = ipx->ipx_source.net;
/* Determine what local ipx endpoint this is */
intrfc = ipxitf_find_using_phys(dev, pt->type);
if (!intrfc) {
if (ipxcfg_auto_create_interfaces &&
- ntohl(cb->ipx_dest_net)) {
+ ntohl(IPX_SKB_CB(skb)->ipx_dest_net)) {
intrfc = ipxitf_auto_create(dev, pt->type);
- ipxitf_hold(intrfc);
+ if (intrfc)
+ ipxitf_hold(intrfc);
}
if (!intrfc) /* Not one of ours */
+ /* or invalid packet for auto creation */
goto drop;
}
msg->msg_namelen = sizeof(*sipx);
if (sipx) {
- struct ipx_cb *cb = (struct ipx_cb *) skb->cb;
sipx->sipx_family = AF_IPX;
sipx->sipx_port = ipx->ipx_source.sock;
memcpy(sipx->sipx_node,ipx->ipx_source.node,IPX_NODE_LEN);
- sipx->sipx_network = cb->ipx_source_net;
+ sipx->sipx_network = IPX_SKB_CB(skb)->ipx_source_net;
sipx->sipx_type = ipx->ipx_type;
}
err = copied;
sendmsg: ipx_sendmsg,
recvmsg: ipx_recvmsg,
mmap: sock_no_mmap,
+ sendpage: sock_no_sendpage,
};
#include <linux/smp_lock.h>
static unsigned char ipx_8022_type = 0xE0;
static unsigned char ipx_snap_id[5] = { 0x0, 0x0, 0x0, 0x81, 0x37 };
+static const char banner[] __initdata =
+ KERN_INFO "NET4: Linux IPX 0.46 for NET4.0\n"
+ KERN_INFO "IPX Portions Copyright (c) 1995 Caldera, Inc.\n"
+ KERN_INFO "IPX Portions Copyright (c) 2000, 2001 Conectiva, Inc.\n";
static int __init ipx_init(void)
{
printk(KERN_CRIT "IPX: Unable to register with SNAP\n");
register_netdevice_notifier(&ipx_dev_notifier);
+ ipx_register_sysctl();
#ifdef CONFIG_PROC_FS
proc_net_create("ipx", 0, ipx_get_info);
proc_net_create("ipx_interface", 0, ipx_interface_get_info);
proc_net_create("ipx_route", 0, ipx_rt_get_info);
#endif
- printk(KERN_INFO "NET4: Linux IPX 0.44 for NET4.0\n");
- printk(KERN_INFO "IPX Portions Copyright (c) 1995 Caldera, Inc.\n");
- printk(KERN_INFO "IPX Portions Copyright (c) 2000 Conectiva, Inc.\n");
+ printk(banner);
return 0;
}
int ipx_if_offset(unsigned long ipx_net_number)
{
ipx_route *rt = ipxrtr_lookup(ipx_net_number);
+ int ret = -ENETUNREACH;
- return rt ? rt->ir_intrfc->if_ipx_offset : -ENETUNREACH;
+ if (!rt)
+ goto out;
+ ret = rt->ir_intrfc->if_ipx_offset;
+ ipxrtr_put(rt);
+out: return ret;
}
/* Export symbols for higher layers */
* sockets be closed from user space.
*/
-#ifdef MODULE
-static void ipx_proto_finito(void)
+static void __exit ipx_proto_finito(void)
{
/* no need to worry about having anything on the ipx_interfaces
* list, when a interface is created we increment the module
proc_net_remove("ipx_route");
proc_net_remove("ipx_interface");
proc_net_remove("ipx");
+ ipx_unregister_sysctl();
unregister_netdevice_notifier(&ipx_dev_notifier);
}
module_exit(ipx_proto_finito);
-#endif /* MODULE */
-#endif /* CONFIG_IPX || CONFIG_IPX_MODULE */
*
* Begun April 1, 1996, Mike Shaver.
* Added /proc/sys/net/ipx directory entry (empty =) ). [MS]
+ * Added /proc/sys/net/ipx/ipx_pprop_broadcasting - acme March 4, 2001
*/
#include <linux/mm.h>
#include <linux/sysctl.h>
+/* From af_ipx.c */
+extern int sysctl_ipx_pprop_broadcasting;
+
+#ifdef CONFIG_SYSCTL
ctl_table ipx_table[] = {
- {0}
+ { NET_IPX_PPROP_BROADCASTING, "ipx_pprop_broadcasting",
+ &sysctl_ipx_pprop_broadcasting, sizeof(int), 0644, NULL,
+ &proc_dointvec },
+ { 0 }
+};
+
+static ctl_table ipx_dir_table[] = {
+ { NET_IPX, "ipx", NULL, 0, 0555, ipx_table },
+ { 0 }
+};
+
+static ctl_table ipx_root_table[] = {
+ { CTL_NET, "net", NULL, 0, 0555, ipx_dir_table },
+ { 0 }
};
+
+static struct ctl_table_header *ipx_table_header;
+
+void ipx_register_sysctl(void)
+{
+ ipx_table_header = register_sysctl_table(ipx_root_table, 1);
+}
+
+void ipx_unregister_sysctl(void)
+{
+ unregister_sysctl_table(ipx_table_header);
+}
+
+#else
+void ipx_register_sysctl(void)
+{
+}
+
+void ipx_unregister_sysctl(void)
+{
+}
+#endif
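The registration above chains three tables: a root entry for `net`, a directory entry for `ipx`, and the leaf holding `ipx_pprop_broadcasting`. A userspace model of that nested layout and its lookup, with simplified struct fields (this is an illustration of the hierarchy, not the kernel's sysctl parser):

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for ctl_table: directory entries carry a child
 * table, leaf entries carry a data pointer; a NULL name terminates. */
struct tbl {
	const char *name;
	struct tbl *child;	/* non-NULL for directory entries */
	int *data;		/* non-NULL for leaf entries */
};

/* Resolve a path like net -> ipx -> ipx_pprop_broadcasting. */
static int *tbl_lookup(struct tbl *t, const char **path, int depth)
{
	int i;

	for (i = 0; i < depth; i++) {
		for (; t->name; t++)
			if (strcmp(t->name, path[i]) == 0)
				break;
		if (!t->name)
			return NULL;	/* component not found */
		if (i == depth - 1)
			return t->data;
		t = t->child;		/* descend into the directory */
		if (!t)
			return NULL;
	}
	return NULL;
}
```

In the patch the same shape appears as `ipx_root_table` -> `ipx_dir_table` -> `ipx_table`, registered once in `ipx_register_sysctl()`.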
changes +=DataSending(CPUNR);
changes +=Userspace(CPUNR);
changes +=Logging(CPUNR);
- /* Test for incomming connections _again_, because it is possible
+ /* Test for incoming connections _again_, because it is possible
one came in during the other steps, and the wakeup doesn't happen
then.
*/
/*
The returned string is for READ ONLY, ownership of the memory is NOT
- transfered.
+ transferred.
*/
{
#endif /* CONFIG_INET */
#ifdef CONFIG_TR
-EXPORT_SYMBOL(tr_setup);
EXPORT_SYMBOL(tr_type_trans);
-EXPORT_SYMBOL(register_trdev);
-EXPORT_SYMBOL(unregister_trdev);
-EXPORT_SYMBOL(init_trdev);
-#endif
-
-#ifdef CONFIG_NET_FC
-EXPORT_SYMBOL(register_fcdev);
-EXPORT_SYMBOL(unregister_fcdev);
-EXPORT_SYMBOL(init_fcdev);
#endif
/* Device callback registration */
/* support for loadable net drivers */
#ifdef CONFIG_NET
-EXPORT_SYMBOL(init_etherdev);
EXPORT_SYMBOL(loopback_dev);
EXPORT_SYMBOL(register_netdevice);
EXPORT_SYMBOL(unregister_netdevice);
-EXPORT_SYMBOL(register_netdev);
-EXPORT_SYMBOL(unregister_netdev);
EXPORT_SYMBOL(netdev_state_change);
-EXPORT_SYMBOL(ether_setup);
EXPORT_SYMBOL(dev_new_index);
EXPORT_SYMBOL(dev_get_by_index);
EXPORT_SYMBOL(__dev_get_by_index);
EXPORT_SYMBOL(eth_type_trans);
#ifdef CONFIG_FDDI
EXPORT_SYMBOL(fddi_type_trans);
-EXPORT_SYMBOL(fddi_setup);
-EXPORT_SYMBOL(init_fddidev);
#endif /* CONFIG_FDDI */
#if 0
EXPORT_SYMBOL(eth_copy_and_sum);
#ifdef CONFIG_HIPPI
EXPORT_SYMBOL(hippi_type_trans);
-EXPORT_SYMBOL(init_hippi_dev);
-EXPORT_SYMBOL(unregister_hipdev);
#endif
#ifdef CONFIG_SYSCTL
#endif
#endif
-#if defined(CONFIG_ATALK) || defined(CONFIG_ATALK_MODULE)
-#include<linux/if_ltalk.h>
-EXPORT_SYMBOL(ltalk_setup);
-#endif
-
-
/* Packet scheduler modules want these. */
EXPORT_SYMBOL(qdisc_destroy);
EXPORT_SYMBOL(qdisc_reset);
struct tcindex_filter *f;
if (p->perfect)
- return p->perfect[key].res.classid ? p->perfect+key : NULL;
+ return p->perfect[key].res.class ? p->perfect+key : NULL;
if (!p->h)
return NULL;
for (f = p->h[key % p->hash]; f; f = f->next) {
static unsigned long tcindex_get(struct tcf_proto *tp, u32 handle)
{
+ struct tcindex_data *p = PRIV(tp);
+ struct tcindex_filter_result *r;
+
DPRINTK("tcindex_get(tp %p,handle 0x%08x)\n",tp,handle);
- return (unsigned long) lookup(PRIV(tp),handle);
+ if (p->perfect && handle >= p->alloc_hash)
+ return 0;
+ r = lookup(PRIV(tp),handle);
+ return r && r->res.class ? (unsigned long) r : 0;
}
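The `tcindex_get` change adds a bounds check before indexing the perfect-hash array and only returns entries whose class is bound. A simplified model of that lookup (field names mirror the patch loosely; a sketch, not the kernel code):

```c
#include <stddef.h>

/* A slot only counts as a real filter once its class is bound (non-zero). */
struct result { unsigned long class; };

struct tcindex {
	struct result *perfect;	/* array of alloc_hash entries, or NULL */
	unsigned int alloc_hash;
};

static struct result *tcindex_lookup(struct tcindex *p, unsigned int handle)
{
	struct result *r;

	/* The bug the patch guards against: without this bounds check, a
	 * handle past the end of the perfect array indexes out of bounds. */
	if (p->perfect && handle >= p->alloc_hash)
		return NULL;
	r = p->perfect ? &p->perfect[handle] : NULL;
	return r && r->class ? r : NULL;
}
```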
DPRINTK("tcindex_delete(tp %p,arg 0x%lx),p %p,f %p\n",tp,arg,p,f);
if (p->perfect) {
- if (!r->res.classid)
+ if (!r->res.class)
return -ENOENT;
} else {
int i;
struct tcindex_filter *f;
struct tcindex_filter_result *r = (struct tcindex_filter_result *) *arg;
struct tcindex_filter **walk;
- int hash;
+ int hash,shift;
__u16 mask;
DPRINTK("tcindex_change(tp %p,handle 0x%08x,tca %p,arg %p),opt %p,"
return -EINVAL;
mask = *(__u16 *) RTA_DATA(tb[TCA_TCINDEX_MASK-1]);
}
- if (p->perfect && hash <= mask)
+ if (!tb[TCA_TCINDEX_SHIFT-1])
+ shift = p->shift;
+ else {
+ if (RTA_PAYLOAD(tb[TCA_TCINDEX_SHIFT-1]) < sizeof(__u16))
+ return -EINVAL;
+ shift = *(int *) RTA_DATA(tb[TCA_TCINDEX_SHIFT-1]);
+ }
+ if (p->perfect && hash <= (mask >> shift))
+ return -EBUSY;
+ if (p->perfect && hash > p->alloc_hash)
return -EBUSY;
- if ((p->perfect || p->h) && hash > p->alloc_hash)
+ if (p->h && hash != p->alloc_hash)
return -EBUSY;
p->hash = hash;
p->mask = mask;
- if (tb[TCA_TCINDEX_SHIFT-1]) {
- if (RTA_PAYLOAD(tb[TCA_TCINDEX_SHIFT-1]) < sizeof(__u16))
- return -EINVAL;
- p->shift = *(int *) RTA_DATA(tb[TCA_TCINDEX_SHIFT-1]);
- }
+ p->shift = shift;
if (tb[TCA_TCINDEX_FALL_THROUGH-1]) {
if (RTA_PAYLOAD(tb[TCA_TCINDEX_FALL_THROUGH-1]) < sizeof(int))
return -EINVAL;
tb[TCA_TCINDEX_POLICE-1]);
if (!tb[TCA_TCINDEX_CLASSID-1] && !tb[TCA_TCINDEX_POLICE-1])
return 0;
- if (!p->hash) {
- if (p->mask < PERFECT_HASH_THRESHOLD) {
- p->hash = p->mask+1;
+ if (!hash) {
+ if ((mask >> shift) < PERFECT_HASH_THRESHOLD) {
+ p->hash = (mask >> shift)+1;
} else {
p->hash = DEFAULT_HASH_SIZE;
}
if (!p->perfect && !p->h) {
p->alloc_hash = p->hash;
DPRINTK("hash %d mask %d\n",p->hash,p->mask);
- if (p->hash > p->mask) {
+ if (p->hash > (mask >> shift)) {
p->perfect = kmalloc(p->hash*
sizeof(struct tcindex_filter_result),GFP_KERNEL);
if (!p->perfect)
memset(p->h, 0, p->hash*sizeof(struct tcindex_filter *));
}
}
- if (handle > p->mask)
+ /*
+ * Note: this could be as restrictive as
+ * if (handle & ~(mask >> shift))
+ * but then, we'd fail handles that may become valid after some
+ * future mask change. While this is extremely unlikely to ever
+ * matter, the check below is safer (and also more
+ * backwards-compatible).
+ */
+ if (p->perfect && handle >= p->alloc_hash)
return -EINVAL;
if (p->perfect) {
r = p->perfect+handle;
}
}
#ifdef CONFIG_NET_CLS_POLICE
- if (!tb[TCA_TCINDEX_POLICE-1]) {
- r->police = NULL;
- } else {
- struct tcf_police *police =
- tcf_police_locate(tb[TCA_TCINDEX_POLICE-1],NULL);
+ {
+ struct tcf_police *police;
- tcf_tree_lock(tp);
+ police = tb[TCA_TCINDEX_POLICE-1] ?
+ tcf_police_locate(tb[TCA_TCINDEX_POLICE-1],NULL) : NULL;
+ tcf_tree_lock(tp);
police = xchg(&r->police,police);
- tcf_tree_unlock(tp);
+ tcf_tree_unlock(tp);
tcf_police_release(police);
- }
+ }
#endif
if (r != &new_filter_result)
return 0;
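The police hunk swaps the new pointer in with `xchg` while the tree lock is held, then releases the old object after unlocking. A userspace model of that swap-then-release discipline, with a mock refcount and the lock points marked in comments (names are illustrative):

```c
#include <stddef.h>

struct police { int refcnt; };

/* Publish the new pointer and capture the old one atomically with
 * respect to the tree lock; the caller drops the old reference after
 * the critical section, matching the patch's xchg + tcf_police_release. */
static struct police *swap_police(struct police **slot, struct police *new)
{
	struct police *old;

	/* tcf_tree_lock(tp) would be taken here */
	old = *slot;
	*slot = new;
	/* tcf_tree_unlock(tp) */
	return old;
}

static void police_release(struct police *p)
{
	if (p)
		p->refcnt--;
}
```

Releasing outside the lock keeps the critical section short and avoids calling a potentially heavier destructor while holding the tree lock.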
DPRINTK("tcindex_walk(tp %p,walker %p),p %p\n",tp,walker,p);
if (p->perfect) {
for (i = 0; i < p->hash; i++) {
- if (!p->perfect[i].res.classid)
+ if (!p->perfect[i].res.class)
continue;
if (walker->count >= walker->skip) {
if (walker->fn(tp,
}
for (h = 0; h < 16; h++) {
- for (cl = q->classes[h]; cl; cl = cl->next)
+ struct cbq_class *next;
+
+ for (cl = q->classes[h]; cl; cl = next) {
+ next = cl->next;
if (cl != &q->link)
cbq_destroy_class(cl);
+ }
}
qdisc_put_rtab(q->link.R_tab);
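The cbq fix above saves `cl->next` before calling `cbq_destroy_class`, because the destructor frees the node and the old loop then read a dangling pointer. The same safe-traversal pattern on a plain userspace list:

```c
#include <stdlib.h>

struct node { struct node *next; };

/* Free every node: read the link before freeing, exactly as the
 * patched cbq_destroy loop does with next = cl->next. */
static int destroy_list(struct node *head)
{
	struct node *cl, *next;
	int freed = 0;

	for (cl = head; cl; cl = next) {
		next = cl->next;	/* read the link before freeing */
		free(cl);
		freed++;
	}
	return freed;
}
```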
static struct Qdisc *dsmark_leaf(struct Qdisc *sch, unsigned long arg)
{
- return NULL;
+ struct dsmark_qdisc_data *p = PRIV(sch);
+
+ return p->q;
}
struct dsmark_qdisc_data *p = PRIV(sch);
struct tcf_result res;
int result;
- int ret;
+ int ret = NET_XMIT_POLICED;
D2PRINTK("dsmark_enqueue(skb %p,sch %p,[qdisc %p])\n",skb,sch,p);
if (p->set_tc_index) {
((ret = p->q->enqueue(skb,p->q)) != 0)) {
sch->stats.drops++;
- return 0;
+ return ret;
}
sch->stats.bytes += skb->len;
sch->stats.packets++;
N(t+delta) = min{B/R, N(t) + delta}
If the first packet in queue has length S, it may be
- transmited only at the time t_* when S/R <= N(t_*),
+ transmitted only at the time t_* when S/R <= N(t_*),
and in this case N(t) jumps:
N(t_* + 0) = N(t_* - 0) - S/R.
struct tc_tbf_qopt *qopt;
struct qdisc_rate_table *rtab = NULL;
struct qdisc_rate_table *ptab = NULL;
- int max_size;
+ int max_size,n;
if (rtattr_parse(tb, TCA_TBF_PTAB, RTA_DATA(opt), RTA_PAYLOAD(opt)) ||
tb[TCA_TBF_PARMS-1] == NULL ||
goto done;
}
- max_size = psched_mtu(sch->dev);
+ for (n = 0; n < 256; n++)
+ if (rtab->data[n] > qopt->buffer) break;
+ max_size = (n << qopt->rate.cell_log)-1;
if (ptab) {
- int n = max_size>>qopt->peakrate.cell_log;
- while (n>0 && ptab->data[n-1] > qopt->mtu) {
- max_size -= (1<<qopt->peakrate.cell_log);
- n--;
- }
+ int size;
+
+ for (n = 0; n < 256; n++)
+ if (ptab->data[n] > qopt->mtu) break;
+ size = (n << qopt->peakrate.cell_log)-1;
+ if (size < max_size) max_size = size;
}
- if (rtab->data[max_size>>qopt->rate.cell_log] > qopt->buffer)
+ if (max_size < 0)
goto done;
sch_tree_lock(sch);
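The rewritten TBF setup scans the rate table for the first cell whose transmit cost exceeds the bucket, so the largest admissible packet is one byte below that cell boundary. A sketch of the computation with a synthetic table (the real `rtab->data` is built from the configured rate; names and the table here are illustrative):

```c
/* data[n] models the time cost to send a packet of (n+1) cells, where
 * a cell is (1 << cell_log) bytes; budget stands in for qopt->buffer. */
#define CELLS 256

static int tbf_max_size(const unsigned int *data, int cell_log,
			unsigned int budget)
{
	int n;

	for (n = 0; n < CELLS; n++)
		if (data[n] > budget)
			break;
	return (n << cell_log) - 1;	/* -1 if even one cell is too big */
}
```

As in the patch, the peak-rate table would be scanned the same way against `qopt->mtu`, and the smaller of the two results wins; a negative `max_size` means the configuration admits no packet at all and setup fails.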
extern ctl_table ipv4_table[];
#endif
-#ifdef CONFIG_IPX
-extern ctl_table ipx_table[];
-#endif
-
extern ctl_table core_table[];
#ifdef CONFIG_NET
#ifdef CONFIG_INET
{NET_IPV4, "ipv4", NULL, 0, 0555, ipv4_table},
#endif
-#ifdef CONFIG_IPX
- {NET_IPX, "ipx", NULL, 0, 0555, ipx_table},
-#endif
#ifdef CONFIG_IPV6
{NET_IPV6, "ipv6", NULL, 0, 0555, ipv6_table},
#endif
* the following common services for the WAN Link Drivers:
* o WAN device management (registering, unregistering)
* o Network interface management
-* o Physical connection management (dial-up, incomming calls)
+* o Physical connection management (dial-up, incoming calls)
* o Logical connection management (switched virtual circuits)
* o Protocol encapsulation/decapsulation
*