as root before you can use this. You'll probably also want to
get the user-space microcode_ctl utility to use with this.
-If you have compiled the driver as a module you may need to add
-the following line:
-
-alias char-major-10-184 microcode
-
-to your /etc/modules.conf file.
-
Powertweak
----------
as root.
-If you build ppp support as modules, you will need the following in
-your /etc/modules.conf file:
-
-alias char-major-108 ppp_generic
-alias /dev/ppp ppp_generic
-alias tty-ldisc-3 ppp_async
-alias tty-ldisc-14 ppp_synctty
-alias ppp-compress-21 bsd_comp
-alias ppp-compress-24 ppp_deflate
-alias ppp-compress-26 ppp_deflate
-
If you use devfsd and build ppp support as modules, you will need
the following in your /etc/devfsd.conf file:
- Linux kernel coding style
+ Linux kernel coding style
This is a short document describing the preferred coding style for the
linux kernel. Coding style is very personal, and I won't _force_ my
views on anybody, but this is what goes for anything that I have to be
able to maintain, and I'd prefer it for most other things too. Please
-at least consider the points made here.
+at least consider the points made here.
First off, I'd suggest printing out a copy of the GNU coding standards,
-and NOT read it. Burn them, it's a great symbolic gesture.
+and NOT read it. Burn them, it's a great symbolic gesture.
Anyway, here goes:
Chapter 1: Indentation
-Tabs are 8 characters, and thus indentations are also 8 characters.
+Tabs are 8 characters, and thus indentations are also 8 characters.
There are heretic movements that try to make indentations 4 (or even 2!)
characters deep, and that is akin to trying to define the value of PI to
-be 3.
+be 3.
Rationale: The whole idea behind indentation is to clearly define where
a block of control starts and ends. Especially when you've been looking
at your screen for 20 straight hours, you'll find it a lot easier to see
-how the indentation works if you have large indentations.
+how the indentation works if you have large indentations.
Now, some people will claim that having 8-character indentations makes
the code move too far to the right, and makes it hard to read on a
80-character terminal screen. The answer to that is that if you need
more than 3 levels of indentation, you're screwed anyway, and should fix
-your program.
+your program.
In short, 8-char indents make things easier to read, and have the added
-benefit of warning you when you're nesting your functions too deep.
-Heed that warning.
+benefit of warning you when you're nesting your functions too deep.
+Heed that warning.
+Don't put multiple statements on a single line unless you have
+something to hide:
- Chapter 2: Placing Braces
+ if (condition) do_this;
+ do_something_everytime;
+
+Outside of comments, documentation and except in Kconfig, spaces are never
+used for indentation, and the above example is deliberately broken.
+
+Get a decent editor and don't leave whitespace at the end of lines.
+
+
+ Chapter 2: Breaking long lines and strings
+
+Coding style is all about readability and maintainability using commonly
+available tools.
+
+The limit on the length of lines is 80 columns and this is a hard limit.
+
+Statements longer than 80 columns will be broken into sensible chunks.
+Descendants are always substantially shorter than the parent and are placed
+substantially to the right. The same applies to function headers with a long
+argument list. Likewise, long strings are broken into shorter strings.
+
+void fun(int a, int b, int c)
+{
+ if (condition)
+ printk(KERN_WARNING "Warning this is a long printk with "
+ "3 parameters a: %u b: %u "
+ "c: %u \n", a, b, c);
+ else
+ next_statement;
+}
+
+ Chapter 3: Placing Braces
The other issue that always comes up in C styling is the placement of
braces. Unlike the indent size, there are few technical reasons to
choose one placement strategy over the other, but the preferred way, as
shown to us by the prophets Kernighan and Ritchie, is to put the opening
brace last on the line, and put the closing brace first. However, there
is one special case, namely functions: they have the opening brace at
the beginning of the next line.
Heretic people all over the world have claimed that this inconsistency
is ... well ... inconsistent, but all right-thinking people know that
(a) K&R are _right_ and (b) K&R are right. Besides, functions are
-special anyway (you can't nest them in C).
+special anyway (you can't nest them in C).
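A small illustrative function (invented here, not from the kernel tree)
showing both placements: the brace opens last on the line for
statements, but on a line of its own for the function body:

```c
/* Illustrative only: statement braces open on the same line,
 * the function brace opens on a line of its own. */
static int abs_val(int x)
{
	if (x < 0) {
		x = -x;
	}
	return x;
}
```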
Note that the closing brace is empty on a line of its own, _except_ in
the cases where it is followed by a continuation of the same statement,
ie an "else" in an if-statement, like this:

	if (x == y) {
		..
	} else {
		....
	}
-
-Rationale: K&R.
+
+Rationale: K&R.
Also, note that this brace-placement also minimizes the number of empty
(or almost empty) lines, without any loss of readability. Thus, as the
supply of new-lines on your screen is not a renewable resource (think
25-line terminal screens here), you have more empty lines to put
-comments on.
+comments on.
- Chapter 3: Naming
+ Chapter 4: Naming
C is a Spartan language, and so should your naming be. Unlike Modula-2
and Pascal programmers, C programmers do not use cute names like
ThisVariableIsATemporaryCounter. A C programmer would call that
variable "tmp", which is much easier to write, and not the least more
-difficult to understand.
+difficult to understand.
HOWEVER, while mixed-case names are frowned upon, descriptive names for
global variables are a must. To call a global function "foo" is a
-shooting offense.
+shooting offense.
GLOBAL variables (to be used only if you _really_ need them) need to
have descriptive names, as do global functions. If you have a function
that counts the number of active users, you should call that
-"count_active_users()" or similar, you should _not_ call it "cntusr()".
+"count_active_users()" or similar, you should _not_ call it "cntusr()".
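As a sketch (the data structure and its contents are invented here
purely for illustration):

```c
#define MAX_USERS 4

/* Invented sample data, for illustration only. */
static struct { int active; } users[MAX_USERS] = {
	{ 1 }, { 0 }, { 1 }, { 1 },
};

/* Descriptive name for the global function, terse names inside it. */
int count_active_users(void)
{
	int i, n = 0;

	for (i = 0; i < MAX_USERS; i++)
		if (users[i].active)
			n++;
	return n;
}
```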
Encoding the type of a function into the name (so-called Hungarian
notation) is brain damaged - the compiler knows the types anyway and can
check those, and it only confuses the programmer. No wonder MicroSoft
-makes buggy programs.
+makes buggy programs.
LOCAL variable names should be short, and to the point. If you have
-some random integer loop counter, it should probably be called "i".
+some random integer loop counter, it should probably be called "i".
Calling it "loop_counter" is non-productive, if there is no chance of it
being mis-understood. Similarly, "tmp" can be just about any type of
-variable that is used to hold a temporary value.
+variable that is used to hold a temporary value.
If you are afraid to mix up your local variable names, you have another
-problem, which is called the function-growth-hormone-imbalance syndrome.
-See next chapter.
+problem, which is called the function-growth-hormone-imbalance syndrome.
+See next chapter.
-
- Chapter 4: Functions
+
+ Chapter 5: Functions
Functions should be short and sweet, and do just one thing. They should
fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24,
-as we all know), and do one thing and do that well.
+as we all know), and do one thing and do that well.
The maximum length of a function is inversely proportional to the
complexity and indentation level of that function. So, if you have a
conceptually simple function that is just one long (but simple)
case-statement, where you have to do lots of small things for a lot of
-different cases, it's OK to have a longer function.
+different cases, it's OK to have a longer function.
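A sketch of such a function (the helper below is invented, not from
the kernel): one long, flat switch doing one trivial thing per case
stays easy to follow no matter how many cases it grows.

```c
#include <errno.h>

/* Illustrative: length comes from the number of cases, not from
 * complexity, so a longer function is fine here. */
static const char *errno_name(int err)
{
	switch (err) {
	case 0:      return "OK";
	case ENOMEM: return "ENOMEM";
	case EINVAL: return "EINVAL";
	case EAGAIN: return "EAGAIN";
	default:     return "unknown";
	}
}
```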
However, if you have a complex function, and you suspect that a
less-than-gifted first-year high-school student might not even
understand what the function is all about, you should adhere to the
maximum limits all the more closely. Use helper functions with
descriptive names (you can ask the compiler to in-line them if you think
it's performance-critical, and it will probably do a better job of it
-than you would have done).
+than you would have done).
Another measure of the function is the number of local variables. They
shouldn't exceed 5-10, or you're doing something wrong. Re-think the
function, and split it into smaller pieces. A human brain can
generally easily keep track of about 7 different things, anything more
and it gets confused. You know you're brilliant, but maybe you'd like
-to understand what you did 2 weeks from now.
+to understand what you did 2 weeks from now.
+
+
+ Chapter 6: Centralized exiting of functions
+Albeit deprecated by some people, the equivalent of the goto statement is
+used frequently by compilers in form of the unconditional jump instruction.
- Chapter 5: Commenting
+The goto statement comes in handy when a function exits from multiple
+locations and some common work such as cleanup has to be done.
+
+The rationale is:
+
+- unconditional statements are easier to understand and follow
+- nesting is reduced
+- errors by not updating individual exit points when making
+ modifications are prevented
+- saves the compiler work to optimize redundant code away ;)
+
+int fun(int a)
+{
+	int result = 0;
+	char *buffer = kmalloc(SIZE, GFP_KERNEL);
+
+ if (buffer == NULL)
+ return -ENOMEM;
+
+ if (condition1) {
+ while (loop1) {
+ ...
+ }
+ result = 1;
+ goto out;
+ }
+ ...
+out:
+ kfree(buffer);
+ return result;
+}
+
+ Chapter 7: Commenting
Comments are good, but there is also a danger of over-commenting. NEVER
try to explain HOW your code works in a comment: it's much better to
write the code so that the _working_ is obvious, and it's a waste of
-time to explain badly written code.
+time to explain badly written code.
-Generally, you want your comments to tell WHAT your code does, not HOW.
+Generally, you want your comments to tell WHAT your code does, not HOW.
Also, try to avoid putting comments inside a function body: if the
function is so complex that you need to separately comment parts of it,
-you should probably go back to chapter 4 for a while. You can make
+you should probably go back to chapter 5 for a while. You can make
small comments to note or warn about something particularly clever (or
ugly), but try to avoid excess. Instead, put the comments at the head
of the function, telling people what it does, and possibly WHY it does
-it.
+it.
- Chapter 6: You've made a mess of it
+ Chapter 8: You've made a mess of it
That's OK, we all do. You've probably been told by your long-time Unix
user helper that "GNU emacs" automatically formats the C sources for
you, and you've noticed that yes, it does do that, but the defaults it
uses are less than desirable (in fact, they are worse than random
typing - an infinite number of monkeys typing into GNU emacs would never
-make a good program).
+make a good program).
So, you can either get rid of GNU emacs, or change it to use saner
values. To do the latter, you can stick the following in your .emacs file:
to add
(setq auto-mode-alist (cons '("/usr/src/linux.*/.*\\.[ch]$" . linux-c-mode)
- auto-mode-alist))
+ auto-mode-alist))
to your .emacs file if you want to have linux-c-mode switched on
automagically when you edit source files under /usr/src/linux.
everything is lost: use "indent".
Now, again, GNU indent has the same brain-dead settings that GNU emacs
-has, which is why you need to give it a few command line options.
+has, which is why you need to give it a few command line options.
However, that's not too bad, because even the makers of GNU indent
recognize the authority of K&R (the GNU people aren't evil, they are
just severely misguided in this matter), so you just give indent the
-options "-kr -i8" (stands for "K&R, 8 character indents").
+options "-kr -i8" (stands for "K&R, 8 character indents"), or use
+"scripts/Lindent", which indents in the latest style.
"indent" has a lot of options, and especially when it comes to comment
-re-formatting you may want to take a look at the manual page. But
-remember: "indent" is not a fix for bad programming.
+re-formatting you may want to take a look at the man page. But
+remember: "indent" is not a fix for bad programming.
- Chapter 7: Configuration-files
+ Chapter 9: Configuration-files
-For configuration options (arch/xxx/config.in, and all the Config.in files),
+For configuration options (arch/xxx/Kconfig, and all the Kconfig files),
somewhat different indentation is used.
-An indention level of 3 is used in the code, while the text in the config-
-options should have an indention-level of 2 to indicate dependencies. The
-latter only applies to bool/tristate options. For other options, just use
-common sense. An example:
+Help text is indented with 2 spaces.
-if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- tristate 'Apply nitroglycerine inside the keyboard (DANGEROUS)' CONFIG_BOOM
- if [ "$CONFIG_BOOM" != "n" ]; then
- bool ' Output nice messages when you explode' CONFIG_CHEER
- fi
-fi
+if EXPERIMENTAL
+
+config BOOM
+	tristate "Apply nitroglycerine inside the keyboard (DANGEROUS)"
+	default n
+	help
+	  Apply nitroglycerine inside the keyboard.
+
+config CHEER
+	bool "Output nice messages when you explode"
+	depends on BOOM
+	default y
+	help
+	  Output nice messages when you explode.
+
+endif
Generally, CONFIG_EXPERIMENTAL should surround all options not considered
stable. All options that are known to trash data (experimental write-
experimental options should be denoted (EXPERIMENTAL).
- Chapter 8: Data structures
+ Chapter 10: Data structures
Data structures that have visibility outside the single-threaded
environment they are created and destroyed in should always have
reference counts. In the kernel, garbage collection doesn't exist (and
outside the kernel garbage collection is slow and inefficient), which
-means that you absolutely _have_ to reference count all your uses.
+means that you absolutely _have_ to reference count all your uses.
Reference counting means that you can avoid locking, and allows multiple
users to have access to the data structure in parallel - and not having
to worry about the structure suddenly going away from under them just
-because they slept or did something else for a while.
+because they slept or did something else for a while.
-Note that locking is _not_ a replacement for reference counting.
+Note that locking is _not_ a replacement for reference counting.
Locking is used to keep data structures coherent, while reference
counting is a memory management technique. Usually both are needed, and
they are not to be confused with each other.
Remember: if another thread can find your data structure, and you don't
have a reference count on it, you almost certainly have a bug.
+
+
+ Chapter 11: Macros, Enums, Inline functions and RTL
+
+Names of macros defining constants and labels in enums are capitalized.
+
+#define CONSTANT 0x12345
+
+Enums are preferred when defining several related constants.
+
+CAPITALIZED macro names are appreciated but macros resembling functions
+may be named in lower case.
+
+Generally, inline functions are preferable to macros resembling functions.
+
+Macros with multiple statements should be enclosed in a do - while block:
+
+#define macrofun(a,b,c) \
+ do { \
+ if (a == 5) \
+ do_this(b,c); \
+ } while (0)
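A sketch (macro and function names invented here) of why the
do - while wrapper matters: with a bare { } block, the ';' the caller
writes after the macro would terminate the if and orphan any
following else.

```c
/* Illustrative macro: the do - while (0) lets the caller's
 * trailing ';' close the statement cleanly inside if/else. */
#define set_min(v, lim)			\
	do {				\
		if ((v) > (lim))	\
			(v) = (lim);	\
	} while (0)

static int clamp_demo(int v)
{
	if (v > 0)
		set_min(v, 5);	/* expands safely, else still binds */
	else
		v = 0;
	return v;
}
```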
+
+Things to avoid when using macros:
+
+1) macros that affect control flow:
+
+#define FOO(x) \
+ do { \
+ if (blah(x) < 0) \
+ return -EBUGGERED; \
+ } while(0)
+
+is a _very_ bad idea. It looks like a function call but exits the "calling"
+function; don't break the internal parsers of those who will read the code.
+
+2) macros that depend on having a local variable with a magic name:
+
+#define FOO(val) bar(index, val)
+
+might look like a good thing, but it's confusing as hell when one reads the
+code and it's prone to breakage from seemingly innocent changes.
+
+3) macros with arguments that are used as l-values: FOO(x) = y; will
+bite you if somebody e.g. turns FOO into an inline function.
+
+4) forgetting about precedence: macros defining constants using expressions
+must enclose the expression in parentheses. Beware of similar issues with
+macros using parameters.
+
+#define CONSTANT 0x4000
+#define CONSTEXP (CONSTANT | 3)
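The same care applies to macro parameters; an illustrative pair of
macros (not from the kernel) showing the pitfall:

```c
/* Illustrative only: the unparenthesized version mis-binds when
 * the argument is an expression. */
#define SQUARE_BAD(x) x * x	/* broken */
#define SQUARE(x) ((x) * (x))	/* correct */

/* SQUARE_BAD(1 + 1) expands to 1 + 1 * 1 + 1, which is 3, not 4. */
```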
+
+The cpp manual deals with macros exhaustively. The gcc internals manual also
+covers RTL which is used frequently with assembly language in the kernel.
+
+
+ Chapter 12: Printing kernel messages
+
+Kernel developers like to be seen as literate. Do mind the spelling
+of kernel messages to make a good impression. Do not use crippled
+words like "dont" and use "do not" or "don't" instead.
+
+Kernel messages do not have to be terminated with a period.
+
+Printing numbers in parentheses (%d) adds no value and should be avoided.
+
+
+ Chapter 13: References
+
+The C Programming Language, Second Edition
+by Brian W. Kernighan and Dennis M. Ritchie.
+Prentice Hall, Inc., 1988.
+ISBN 0-13-110362-8 (paperback), 0-13-110370-9 (hardback).
+
+The Practice of Programming
+Brian W. Kernighan, Rob Pike
+Addison-Wesley, 1999, ISBN 0-201-61586-X
+
+GNU manuals - where in compliance with K&R and this text - for cpp, gcc,
+gcc internals and indent, all available from www.gnu.org.
+
+--
+Last updated on 16 February 2004 by a community effort on LKML.
Note the hardware address from the Computone ISA cards installed into
the system. These are required for editing ip2.c or editing
- /etc/modules.conf, or for specification on the modprobe
+ /etc/modprobe.conf, or for specification on the modprobe
command line.
- Note that the /etc/modules.conf file is named /etc/conf.modules
- with older versions of the module utilities.
+ Note that the /etc/modules.conf should be used for older (pre-2.6)
+ kernels.
Software -
c) Set address on ISA cards then:
edit /usr/src/linux/drivers/char/ip2.c if needed
or
- edit /etc/modules.conf if needed (module).
+ edit /etc/modprobe.conf if needed (module).
or both to match this setting.
d) Run "make modules"
e) Run "make modules_install"
selects polled mode). If no base addresses are specified the defaults in
ip2.c are used. If you are autoloading the driver module with kerneld or
kmod the base addresses and interrupt number must also be set in ip2.c
-and recompile or just insert and options line in /etc/modules.conf or both.
+and recompile or just insert an options line in /etc/modprobe.conf or both.
The options line is equivalent to the command line and takes precedence over
what is in ip2.c.
-/etc/modules.conf sample:
+/etc/modprobe.conf sample:
options ip2 io=1,0x328 irq=1,10
alias char-major-71 ip2
alias char-major-72 ip2
CONFIGURATION NOTES
As Triple DES is part of the DES module, for those using modular builds,
-add the following line to /etc/modules.conf:
+add the following line to /etc/modprobe.conf:
alias des3_ede des
--- /dev/null
+Debugging Modules after 2.6.3
+-----------------------------
+
+In almost all distributions, the kernel asks for modules which don't
+exist, such as "net-pf-10" or whatever. Changing "modprobe -q" to
+"succeed" in this case is hacky and breaks some setups, and also we
+want to know if it failed for the fallback code for old aliases in
+fs/char_dev.c, for example.
+
+In the past a debugging message which would fill people's logs was
+emitted. This debugging message has been removed. The correct way
+of debugging module problems is something like this:
+
+echo '#! /bin/sh' > /tmp/modprobe
+echo 'echo "$@" >> /tmp/modprobe.log' >> /tmp/modprobe
+echo 'exec /sbin/modprobe "$@"' >> /tmp/modprobe
+chmod a+x /tmp/modprobe
+echo /tmp/modprobe > /proc/sys/kernel/modprobe
The pcxx driver can be configured using the command line feature while
loading the kernel with LILO or LOADLIN or, if built as a module,
with arguments to insmod and modprobe or with parameters in
-/etc/modules.conf for modprobe and kerneld.
+/etc/modprobe.conf for modprobe and kerneld.
After configuring the driver you need to create the device special files
as described in "Device file creation:" below and set the appropriate
The remaining board still uses ttyD8-ttyD15 and cud8-cud15.
-Example line for /etc/modules.conf for use with kerneld and as default
+Example line for /etc/modprobe.conf for use with kerneld and as default
parameters for modprobe:
options pcxx io=0x200 numports=8
-For kerneld to work you will likely need to add these two lines to your
-/etc/modules.conf:
+For kmod to work you will likely need to add these two lines to your
+/etc/modprobe.conf:
alias char-major-22 pcxx
alias char-major-23 pcxx
modprobe i810fb vram=2 xres=1024 bpp=8 hsync1=30 hsync2=55 vsync1=50 \
vsync2=85 accel=1 mtrr=1
-Or just add the following to /etc/modules.conf
+Or just add the following to /etc/modprobe.conf
options i810fb vram=2 xres=1024 bpp=16 hsync1=30 hsync2=55 vsync1=50 \
vsync2=85 accel=1 mtrr=1
The following mount options are supported:
iocharset=name Character set to use for converting from Unicode to
- ASCII. The default is compiled into the kernel as
- CONFIG_NLS_DEFAULT. Use iocharset=utf8 for UTF8
- translations. This requires CONFIG_NLS_UTF8 to be set
- in the kernel .config file.
+ ASCII. The default is to do no conversion. Use
+ iocharset=utf8 for UTF8 translations. This requires
+ CONFIG_NLS_UTF8 to be set in the kernel .config file.
resize=value Resize the volume to <value> blocks. JFS only supports
growing a volume, not shrinking it. This option is only
errors=remount-ro Default. Remount the filesystem read-only on an error.
errors=panic Panic and halt the machine if an error occurs.
-JFS TODO list:
-
-Plans for our near term development items
-
- - enhance support for logfile on dedicated partition
-
-Longer term work items
-
- - implement defrag utility, for online defragmenting
- - add quota support
- - add support for block sizes (512,1024,2048)
-
Please send bugs, comments, cards and letters to shaggy@austin.ibm.com.
The JFS mailing list can be subscribed to by using the link labeled
Every mounted file system needs a super block, so if you plan to mount lots of
file systems, you may want to increase these numbers.
+aio-nr and aio-max-nr
+---------------------
+
+aio-nr is the running total of the number of events specified on the
+io_setup system call for all currently active aio contexts. If aio-nr
+reaches aio-max-nr then io_setup will fail with EAGAIN. Note that
+raising aio-max-nr does not result in the pre-allocation or re-sizing
+of any kernel data structures.
+
2.2 /proc/sys/fs/binfmt_misc - Miscellaneous binary formats
-----------------------------------------------------------
Module parameters can be specified either directly when invoking
the program 'insmod' at the shell prompt:
- insmod ftape.o ft_tracing=4
+ modprobe ftape ft_tracing=4
- or by editing the file `/etc/modules.conf' in which case they take
+ or by editing the file `/etc/modprobe.conf' in which case they take
effect each time when the module is loaded with `modprobe' (please
refer to the respective manual pages). Thus, you should add a line
options ftape ft_tracing=4
- to `/etc/modules.conf` if you intend to increase the debugging
+ to `/etc/modprobe.conf` if you intend to increase the debugging
output of the driver.
5. Example module parameter setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To do the same, but with ftape compiled as a loadable kernel
- module, add the following line to `/etc/modules.conf':
+ module, add the following line to `/etc/modprobe.conf':
options ftape ft_probe_fc10=1 ft_tracing=4
insmod esp dma=3 trigger=512
The esp module can be automatically loaded when needed. To cause this to
-happen, add the following lines to /etc/modules.conf (replacing the last line
+happen, add the following lines to /etc/modprobe.conf (replacing the last line
with options for your configuration):
alias char-major-57 esp
can only be compiled into the kernel, and the core code (ide.c) can be
compiled as a loadable module provided no chipset support is needed.
-When using ide.c/ide-tape.c as modules in combination with kerneld, add:
+When using ide.c as a module in combination with kmod, add:
alias block-major-3 ide-probe
- alias char-major-37 ide-tape
-respectively to /etc/modules.conf.
+to /etc/modprobe.conf.
When ide.c is used as a module, you can pass command line parameters to the
driver using the "options=" keyword to insmod, while replacing any ',' with
"hdx=cyl,head,sect" : disk drive is present, with specified geometry
- "hdx=remap" : remap access of sector 0 to sector 1 (for EZD)
+ "hdx=remap" : remap access of sector 0 to sector 1 (for EZDrive)
- "hdx=remap63" : remap the drive: shift all by 63 sectors (for DM)
+ "hdx=remap63" : remap the drive: add 63 to all sector numbers
+ (for DM OnTrack)
"hdx=autotune" : driver will attempt to tune interface speed
to the fastest PIO mode supported,
and quite likely to cause trouble with
older/odd IDE drives.
- "hdx=slow" : insert a huge pause after each access to the data
- port. Should be used only as a last resort.
-
"hdx=swapdata" : when the drive is a disk, byte swap all data
"hdx=bswap" : same as above..........
- "hdx=flash" : allows for more than one ata_flash disk to be
- registered. In most cases, only one device
- will be present.
-
"hdx=scsi" : the return of the ide-scsi flag, this is useful for
allowing ide-floppy, ide-tape, and ide-cdrom|writers
to use ide-scsi emulation on a device specific option.
Compiling modules outside the official kernel
---------------------------------------------
-Often modules are developed outside the official kernel.
-To keep up with changes in the build system the most portable way
-to compile a module outside the kernel is to use the following command-line:
+
+Often modules are developed outside the official kernel. To keep up
+with changes in the build system the most portable way to compile a
+module outside the kernel is to use the kernel build system,
+kbuild. Use the following command-line:
make -C path/to/kernel/src SUBDIRS=$PWD modules
This requires that a makefile exists, made in accordance with
-Documentation/kbuild/makefiles.txt.
+Documentation/kbuild/makefiles.txt. Read that file for more details on
+the build system.
+
+The following is a short summary of how to write your Makefile to get
+you up and running fast. Assuming your module will be called
+yourmodule.ko, your code should be in yourmodule.c and your Makefile
+should include
+
+obj-m := yourmodule.o
+
+If the code for your module is in multiple files that need to be
+linked, you need to tell the build system which files to compile. In
+the case of multiple files, none of these files can be named
+yourmodule.c because doing so would cause a problem with the linking
+step. Assuming your code exists in file1.c, file2.c, and file3.c and
+you want to build yourmodule.ko from them, your Makefile should
+include
+
+obj-m := yourmodule.o
+yourmodule-objs := file1.o file2.o file3.o
+
+Now for a final example to put it all together. Assuming the
+KERNEL_SOURCE environment variable is set to the directory where you
+compiled the kernel, a simple Makefile that builds yourmodule.ko as
+described above would look like
+
+# Tells the build system to build yourmodule.ko.
+obj-m := yourmodule.o
+
+# Tells the build system to build these object files and link them as
+# yourmodule.o, before building yourmodule.ko. This line can be left
+# out if all the code for your module is in one file, yourmodule.c. If
+# you are using multiple files, none of these files can be named
+# yourmodule.c.
+yourmodule-objs := file1.o file2.o file3.o
+# Invokes the kernel build system to come back to the current
+# directory and build yourmodule.ko.
+default:
+ make -C ${KERNEL_SOURCE} SUBDIRS=`pwd` modules
Forces specified timesource (if available) to be used
when calculating gettimeofday(). If specified timesource
is not available, it defaults to PIT.
- Format: { pit | tsc | cyclone | ... }
+ Format: { pit | tsc | cyclone | pmtmr }
hpet= [IA-32,HPET] option to disable HPET and use PIT.
Format: disable
devfs= [DEVFS]
See Documentation/filesystems/devfs/boot-options.
+
+ dhash_entries= [KNL]
+ Set number of hash buckets for dentry cache.
digi= [HW,SERIAL]
IO parameters + enable/disable command.
dtc3181e= [HW,SCSI]
+ earlyprintk= [x86, x86_64]
+ early_printk=vga
+ early_printk=serial[,ttySn[,baudrate]]
+
+ Append ,keep to not disable it when the real console
+ takes over.
+
+ Only vga or serial at a time, not both.
+
+ Currently only ttyS0 and ttyS1 are supported.
+
+ Interaction with the standard serial driver is not
+ very good.
+
+ The VGA output is eventually overwritten by the real
+ console.
+
eata= [HW,SCSI]
eda= [HW,PS2]
idle= [HW]
Format: idle=poll or idle=halt
+ ihash_entries= [KNL]
+ Set number of hash buckets for inode cache.
+
in2000= [HW,SCSI]
See header of drivers/scsi/in2000.c.
resume= [SWSUSP] Specify the partition device for software suspension
+ rhash_entries= [KNL,NET]
+ Set number of hash buckets for route cache
+
riscom8= [HW,SERIAL]
Format: <io_board1>[,<io_board2>[,...<io_boardN>]]
tgfx_2= See Documentation/input/joystick-parport.txt.
tgfx_3=
+ thash_entries= [KNL,NET]
+ Set number of hash buckets for TCP connection
+
tipar= [HW]
See header of drivers/char/tipar.c.
modems it should access at which ports. This can be done with the setbaycom
utility. If you are only using one modem, you can also configure the
driver from the insmod command line (or by means of an option line in
-/etc/modules.conf).
+/etc/modprobe.conf).
Examples:
- insmod baycom_ser_fdx mode="ser12*" iobase=0x3f8 irq=4
+ modprobe baycom_ser_fdx mode="ser12*" iobase=0x3f8 irq=4
sethdlc -i bcsf0 -p mode "ser12*" io 0x3f8 irq 4
Both lines configure the first port to drive a ser12 modem at the first
Frequently Asked Questions
High Availability
Promiscuous Sniffing notes
+8021q VLAN support
Limitations
Resources and Links
Bond Configuration
==================
-You will need to add at least the following line to /etc/modules.conf
+You will need to add at least the following line to /etc/modprobe.conf
so the bonding driver will automatically load when the bond0 interface is
-configured. Refer to the modules.conf manual page for specific modules.conf
+configured. Refer to the modprobe.conf manual page for specific modprobe.conf
syntax details. The Module Parameters section of this document describes each
bonding driver parameter.
appropriate rc directory.
If you specifically need all network drivers loaded before the bonding driver,
-adding the following line to modules.conf will cause the network driver for
+adding the following line to modprobe.conf will cause the network driver for
eth0 and eth1 to be loaded before the bonding driver.
-probeall bond0 eth0 eth1 bonding
+install bond0 /sbin/modprobe -a eth0 eth1 && /sbin/modprobe bonding
Be careful not to reference bond0 itself at the end of the line, or modprobe
will die in an endless recursive loop.
-To have device characteristics (such as MTU size) propagate to slave devices,
-set the bond characteristics before enslaving the device. The characteristics
-are propagated during the enslave process.
-
If running SNMP agents, the bonding driver should be loaded before any network
drivers participating in a bond. This requirement is due to the interface
index (ipAdEntIfIndex) being associated to the first interface found with a
Optional parameters for the bonding driver can be supplied as command line
arguments to the insmod command. Typically, these parameters are specified in
-the file /etc/modules.conf (see the manual page for modules.conf). The
+the file /etc/modprobe.conf (see the manual page for modprobe.conf). The
available bonding driver parameters are listed below. If a parameter is not
specified the default value is used. When initially configuring a bond, it
is recommended "tail -f /var/log/messages" be run in a separate window to
For ethernet cards not supporting MII status, the arp_interval and
arp_ip_target parameters must be specified for bonding to work
correctly. If packets have not been sent or received during the
- specified arp_interval durration, an ARP request is sent to the
+ specified arp_interval duration, an ARP request is sent to the
targets to generate send and receive traffic. If after this
interval, either the successful send and/or receive count has not
incremented, the next slave in the sequence will become the active
that will be added.
To restore your slaves' MAC addresses, you need to detach them
- from the bond (`ifenslave -d bond0 eth0'), set them down
- (`ifconfig eth0 down'), unload the drivers (`rmmod 3c59x', for
- example) and reload them to get the MAC addresses from their
- eeproms. If the driver is shared by several devices, you need
- to turn them all down. Another solution is to look for the MAC
- address at boot time (dmesg or tail /var/log/messages) and to
- reset it by hand with ifconfig :
-
- # ifconfig eth0 down
- # ifconfig eth0 hw ether 00:20:40:60:80:A0
+ from the bond (`ifenslave -d bond0 eth0'). The bonding driver will then
+ restore the MAC addresses that the slaves had before they were enslaved.
9. Which transmit policies can be used?
# modprobe bonding miimon=100
-Or, put the following lines in /etc/modules.conf:
+Or, put the following line in /etc/modprobe.conf:
- alias bond0 bonding
options bond0 miimon=100
There are currently two policies for high availability. They are dependent on
# modprobe bonding miimon=100 mode=1
-Or, put in your /etc/modules.conf :
+Or, put in your /etc/modprobe.conf :
- alias bond0 bonding
options bond0 miimon=100 mode=active-backup
Example 1: Using multiple host and multiple switches to build a "no single
In this configuration, there is an ISL - Inter Switch Link (could be a trunk),
several servers (host1, host2 ...) attached to both switches each, and one or
-more ports to the outside world (port3...). One an only one slave on each host
+more ports to the outside world (port3...). One and only one slave on each host
is active at a time, while all links are still monitored (the system can
detect a failure of active and backup links).
must add the promisc flag there; it will be propagated down to the
slave interfaces at ifenslave time; a full example might look like:
- grep bond0 /etc/modules.conf || echo alias bond0 bonding >/etc/modules.conf
ifconfig bond0 promisc up
for if in eth1 eth2 ...;do
ifconfig $if up
just ignore all the warnings it emits.
+8021q VLAN support
+==================
+
+It is possible to configure VLAN devices over a bond interface using the 8021q
+driver. However, only packets coming from the 8021q driver and passing through
+bonding will be tagged by default. Self generated packets, like bonding's
+learning packets or ARP packets generated by either ALB mode or the ARP
+monitor mechanism, are tagged internally by bonding itself. As a result,
+bonding has to "learn" what VLAN IDs are configured on top of it, and it uses
+those IDs to tag self generated packets.
+
+For simplicity, and to support the use of adapters that can do VLAN
+hardware acceleration offloading, the bonding interface declares itself as
+fully hardware offloading capable: it gets the add_vid/kill_vid notifications
+to gather the necessary information, and it propagates those actions to the
+slaves.
+In case of mixed adapter types, hardware accelerated tagged packets that should
+go through an adapter that is not offloading capable are "un-accelerated" by the
+bonding driver so the VLAN tag sits in the regular location.
+
+VLAN interfaces *must* be added on top of a bonding interface only after
+enslaving at least one slave. This is because until the first slave is added the
+bonding interface has a HW address of 00:00:00:00:00:00, which will be copied by
+the VLAN interface when it is created.
+
+Notice that a problem would occur if all slaves are released from a bond that
+still has VLAN interfaces on top of it. When later coming to add new slaves, the
+bonding interface would get a HW address from the first slave, which might not
+match that of the VLAN interfaces. It is recommended either to remove and
+re-add all VLAN interfaces, or to set the bonding interface's HW address
+manually so it matches the VLAN's. (Note: changing a VLAN interface's HW address
+would set the underlying device -- i.e. the bonding interface -- to promiscuous
+mode, which might not be what you want).
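The required ordering can be sketched as follows (a rough sketch: interface
names, the VLAN ID and the address are placeholders, the commands need root,
and the resulting interface name depends on vconfig's name-type setting --
bond0.10 is assumed here):

```shell
# Enslave at least one interface first, so bond0 acquires a real HW address
ifconfig bond0 up
ifenslave bond0 eth0

# Only then create a VLAN interface on top of the bond
vconfig add bond0 10
ifconfig bond0.10 192.168.10.1 netmask 255.255.255.0 up
```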
+
+
Limitations
===========
The main limitations are :
Install the Linux driver with the following commands:
1. make all
-2. insmod dl2k.o
+2. insmod dl2k.ko
3. ifconfig eth0 up 10.xxx.xxx.xxx netmask 255.0.0.0
^^^^^^^^^^^^^^^\ ^^^^^^^^\
IP NETMASK
Now eth0 should be active. You can test it with "ping" or get more information with
"ifconfig". If the test is OK, continue with the next step.
-4. cp dl2k.o /lib/modules/`uname -r`/kernel/drivers/net
-5. Add the following lines to /etc/modules.conf:
+4. cp dl2k.ko /lib/modules/`uname -r`/kernel/drivers/net
+5. Add the following line to /etc/modprobe.conf:
alias eth0 dl2k
6. Run "netconfig" or "netconf" to create configuration script ifcfg-eth0
located at /etc/sysconfig/network-scripts or create it manually.
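The steps above can be condensed into one shell session (run as root; the IP
addresses below are placeholders standing in for the 10.xxx.xxx.xxx values of
step 3, and the peer address is hypothetical):

```shell
make all
insmod dl2k.ko
ifconfig eth0 up 10.0.0.1 netmask 255.0.0.0   # placeholder address
ping -c 3 10.0.0.2                            # placeholder peer, verifies the link
cp dl2k.ko /lib/modules/`uname -r`/kernel/drivers/net
echo "alias eth0 dl2k" >> /etc/modprobe.conf
```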
-----------------
1. Copy dl2k.o to the network modules directory, typically
/lib/modules/2.x.x-xx/net or /lib/modules/2.x.x/kernel/drivers/net.
- 2. Locate the boot module configuration file, most commonly modules.conf
- or conf.modules in the /etc directory. Add the following lines:
+ 2. Locate the boot module configuration file, most commonly modprobe.conf
+ or modules.conf (for 2.4) in the /etc directory. Add the following lines:
alias ethx dl2k
options dl2k <optional parameters>
* while it is running. It was already set during enslave. To
 * simplify things, it is now handled separately.
*
- * - 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * - 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
* set version to 1.1.0
*/
#define APP_VERSION "1.1.0"
-#define APP_RELDATE "Septemer 24, 2003"
+#define APP_RELDATE "December 1, 2003"
#define APP_NAME "ifenslave"
static char *version =
conf/{all,interface}/arp_filter is set to TRUE,
it will be disabled otherwise
+arp_announce - INTEGER
+ Define different restriction levels for announcing the local
+ source IP address from IP packets in ARP requests sent on
+ interface:
+ 0 - (default) Use any local address, configured on any interface
+ 1 - Try to avoid local addresses that are not in the target's
+ subnet for this interface. This mode is useful when target
+ hosts reachable via this interface require the source IP
+ address in ARP requests to be part of their logical network
+ configured on the receiving interface. When we generate the
+ request we will check all our subnets that include the
+ target IP and will preserve the source address if it is from
+	such a subnet. If there is no such subnet we select the
+	source address according to the rules for level 2.
+ 2 - Always use the best local address for this target.
+ In this mode we ignore the source address in the IP packet
+	and try to select the local address that we prefer for talks with
+	the target host. Such a local address is selected by looking
+ for primary IP addresses on all our subnets on the outgoing
+ interface that include the target IP address. If no suitable
+ local address is found we select the first local address
+ we have on the outgoing interface or on all other interfaces,
+	in the hope that we will receive a reply to our request,
+	sometimes regardless of the source IP address we announce.
+
+ The max value from conf/{all,interface}/arp_announce is used.
+
+	Increasing the restriction level improves the chance of
+	receiving an answer from the resolved target, while lowering
+	the level announces more of the sender's valid address information.
+
+arp_ignore - INTEGER
+ Define different modes for sending replies in response to
+ received ARP requests that resolve local target IP addresses:
+ 0 - (default): reply for any local target IP address, configured
+ on any interface
+	1 - reply only if the target IP address is a local address
+	configured on the incoming interface
+	2 - reply only if the target IP address is a local address
+	configured on the incoming interface and both it and the
+	sender's IP address are part of the same subnet on this interface
+	3 - do not reply for local addresses configured with scope host;
+	only resolutions for global and link addresses are answered
+	4-7 - reserved
+	8 - do not reply for any local address
+
+ The max value from conf/{all,interface}/arp_ignore is used
+	when an ARP request is received on the {interface}
+
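As a usage sketch, the levels above map onto the conf/{all,interface} sysctl
paths; for example, a host that should announce only its best local address
and answer only for addresses on the receiving interface might set (run as
root; per-interface variants of the same sysctls also exist):

```shell
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
```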
tag - INTEGER
Allows you to write a number, which can be used as required.
Default value is 0.
If you load the driver as a module, you can pass the parameters "io=",
"irq=", and "dma=" on the command line with insmod or modprobe, or add
-them as options in /etc/modules.conf:
+them as options in /etc/modprobe.conf:
alias lt0 ltpc # autoload the module when the interface is configured
options ltpc io=0x240 irq=9 dma=1
-(C)Copyright 1999-2003 Marvell(R).
+(C)Copyright 1999-2004 Marvell(R).
All rights reserved
===========================================================================
-sk98lin.txt created 15-Dec-2003
+sk98lin.txt created 13-Feb-2004
-Readme File for sk98lin v6.21
+Readme File for sk98lin v6.23
Marvell Yukon/SysKonnect SK-98xx Gigabit Ethernet Adapter family driver for LINUX
This file contains
to the driver module.
If you use the kernel module loader, you can set driver parameters
-in the file /etc/modules.conf (or old name: /etc/conf.modules).
+in the file /etc/modprobe.conf (or /etc/modules.conf in 2.4 or earlier).
To set the driver parameters in this file, proceed as follows:
1. Insert a line of the form :
bogus network interfaces to trick firewalls or administrators.
Driver module autoloading
- Make sure that "Kernel module loader" - module auto-loading support is enabled
- in your kernel.
- Add the following line to the /etc/modules.conf:
- alias char-major-10-200 tun
- and run
- depmod -a
+ Make sure that "Kernel module loader" - module auto-loading
+ support is enabled in your kernel. The kernel should load it on
+ first access.
Manual loading
insert the module by hand:
=================
There are several parameters which may be provided to the driver when
-its module is loaded. These are usually placed in /etc/modules.conf
-(used to be conf.modules). Example:
+its module is loaded. These are usually placed in /etc/modprobe.conf
+(/etc/modules.conf in 2.4). Example:
options 3c59x debug=3 rx_copybreak=300
to increase this value on LANs which have very high collision rates.
The default value is 5000 (5.0 seconds).
+enable_wol=N1,N2,N3,...
+
+ Enable Wake-on-LAN support for the relevant interface. Donald
+ Becker's `ether-wake' application may be used to wake suspended
+ machines.
+
+ Also enables the NIC's power management support.
+
+global_enable_wol=N
+
+ Sets enable_wol mode for all 3c59x NICs in the machine. Entries in
+ the `enable_wol' array above will override any setting of this.
+
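For instance, the two parameters could be combined in a single
/etc/modprobe.conf line; in this sketch (a two-NIC machine is assumed) the
global default enables Wake-on-LAN, and the second NIC opts out via the
enable_wol array, which overrides the global setting as noted above:

```
options 3c59x global_enable_wol=1 enable_wol=1,0
```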
Media selection
---------------
1) Increase the debug level. Usually this is done via:
- a) modprobe driver.o debug=7
- b) In /etc/conf.modules (or modules.conf):
- options driver_name debug=7
+ a) modprobe driver debug=7
+ b) In /etc/modprobe.conf (or /etc/modules.conf for 2.4):
+ options driver debug=7
2) Recreate the problem with the higher debug level,
send all logs to the maintainer.
KMod
----
-If you use kmod, you will find it useful to edit /etc/modules.conf.
+If you use kmod, you will find it useful to edit /etc/modprobe.conf.
Here is an example of the lines that need to be added:
alias parport_lowlevel parport_pc
If installed as a module, the module must be loaded. This can be done
manually by entering "modprobe rocket". To have the module loaded automatically
-upon system boot, edit the /etc/modules.conf file and add the line
+upon system boot, edit the /etc/modprobe.conf file and add the line
"alias char-major-46 rocket".
In order to use the ports, their device names (nodes) must be created with mknod.
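A sketch of that node creation, assuming the driver's char major of 46 (shown
in the alias line above) and the conventional ttyR device names; the minor
numbers here are illustrative:

```shell
mknod /dev/ttyR0 c 46 0
mknod /dev/ttyR1 c 46 1
```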
script and the resulting /tmp/mkdev3270.
If you have chosen to make tub3270 a module, you add a line to
-/etc/modules.conf. If you are working on a VM virtual machine, you
+/etc/modprobe.conf. If you are working on a VM virtual machine, you
can use DEF GRAF to define virtual 3270 devices.
You may generate both 3270 and 3215 console support, or one or the
In brief, these are the steps:
1. Install the tub3270 patch
- 2. (If a module) add a line to /etc/modules.conf
+ 2. (If a module) add a line to /etc/modprobe.conf
3. (If VM) define devices with DEF GRAF
4. Reboot
5. Configure
make modules_install
2. (Perform this step only if you have configured tub3270 as a
- module.) Add a line to /etc/modules.conf to automatically
+ module.) Add a line to /etc/modprobe.conf to automatically
load the driver when it's needed. With this line added,
you will see login prompts appear on your 3270s as soon as
boot is complete (or with emulated 3270s, as soon as you dial
into your vm guest using the command "DIAL <vmguestname>").
Since the line-mode major number is 227, the line to add to
- /etc/modules.conf should be:
+ /etc/modprobe.conf should be:
alias char-major-227 tub3270
3. Define graphic devices to your vm guest machine, if you
INCORRECTLY CAN RENDER YOUR SYSTEM INOPERABLE.
USE THEM WITH CAUTION.
- Edit the file "modules.conf" in the directory /etc and add/edit a
+ Edit the file "modprobe.conf" in the directory /etc and add/edit a
line containing 'options aic79xx aic79xx=[command[,command...]]' where
'command' is one or more of the following:
-----------------------------------------------------------------
INCORRECTLY CAN RENDER YOUR SYSTEM INOPERABLE.
USE THEM WITH CAUTION.
- Edit the file "modules.conf" in the directory /etc and add/edit a
+ Edit the file "modprobe.conf" in the directory /etc and add/edit a
line containing 'options aic7xxx aic7xxx=[command[,command...]]' where
'command' is one or more of the following:
-----------------------------------------------------------------
If you want to have the module autoloaded on access to /dev/osst, you may
add something like
alias char-major-206 osst
-to your /etc/modules.conf (old name: conf.modules).
+to your /etc/modprobe.conf (before 2.6: modules.conf).
You may find it convenient to create a symbolic link
ln -s nosst0 /dev/tape
---------------
Several options can be passed to the sonypi driver, either by adding them
-to /etc/modules.conf file, when the driver is compiled as a module or by
+to /etc/modprobe.conf file, when the driver is compiled as a module or by
adding the following to the kernel command line (in your bootloader):
sonypi=minor[,verbose[,fnkeyinit[,camera[,compat[,mask[,useinput]]]]]]
-----------
In order to automatically load the sonypi module on use, you can put those
-lines in your /etc/modules.conf file:
+lines in your /etc/modprobe.conf file:
alias char-major-10-250 sonypi
options sonypi minor=250
Copy it to a directory of your choice, and unpack it there.
-4) Edit /etc/modules.conf, and insert the following lines at the end of the
+4) Edit /etc/modprobe.conf, and insert the following lines at the end of the
file:
alias sound-slot-0 sb
alias sound-service-0-1 awe_wave
- post-install awe_wave /usr/local/bin/sfxload PATH_TO_SOUND_BANK_FILE
+ install awe_wave /sbin/modprobe --first-time -i awe_wave && /usr/local/bin/sfxload PATH_TO_SOUND_BANK_FILE
You will of course have to change "PATH_TO_SOUND_BANK_FILE" to the full
path of the sound bank file. That will enable the Sound Blaster and AWE
(0x300, 0x310, 0x320 or 0x330)
mpu_irq MPU-401 irq line (5, 7, 9, 10 or 0)
-The /etc/modules.conf will have lines like this:
+The /etc/modprobe.conf will have lines like this:
options opl3 io=0x388
options ad1848 io=0x530 irq=11 dma=3
ad1848 are the corresponding options for the MSS and OPL3 modules.
Loading MSS and OPL3 requires pre-loading the aedsp16 module to correctly set up
-the sound card. Installation dependencies must be written in the modules.conf
+the sound card. Installation dependencies must be written in the modprobe.conf
file:
-pre-install ad1848 modprobe aedsp16
-pre-install opl3 modprobe aedsp16
+install ad1848 /sbin/modprobe aedsp16 && /sbin/modprobe -i ad1848
+install opl3 /sbin/modprobe aedsp16 && /sbin/modprobe -i opl3
Then you must load the sound modules stack in this order:
sound -> aedsp16 -> [ ad1848, opl3 ]
-Alma Chao <elysian@ethereal.torsion.org> suggests the following /etc/modules.conf:
+Alma Chao <elysian@ethereal.torsion.org> suggests the following /etc/modprobe.conf:
alias sound ad1848
alias synth0 opl3
=========
If loading via modprobe, these common files are automatically loaded
-when requested by modprobe. For example, my /etc/modules.conf contains:
+when requested by modprobe. For example, my /etc/modprobe.conf contains:
alias sound sb
options sb io=0x240 irq=9 dma=3 dma16=5 mpu_io=0x300
driver, you should do the following:
1. remove sound modules (detailed above)
-2. remove the sound modules from /etc/modules.conf
+2. remove the sound modules from /etc/modprobe.conf
3. move the sound modules from /lib/modules/<kernel>/misc
(for example, I make a /lib/modules/<kernel>/misc/tmp
directory and copy the sound module files to that
sb.o could be copied (or symlinked) to sb1.o for the
second SoundBlaster.
-2. Make a second entry in /etc/modules.conf, for example,
+2. Make a second entry in /etc/modprobe.conf, for example,
   sound1 or sb1. This second entry should refer to the
   new module names, for example sb1, and should include
the I/O, etc. for the second sound card.
2) On the command line when using insmod or in a bash script
using command line calls to load sound.
-3) In /etc/modules.conf when using modprobe.
+3) In /etc/modprobe.conf when using modprobe.
4) Via Red Hat's GPL'd /usr/sbin/sndconfig program (text based).
-(This recipe has been edited to update the configuration symbols.)
+(This recipe has been edited to update the configuration symbols,
+ and change over to modprobe.conf for 2.6)
From: Shaw Carruthers <shaw@shawc.demon.co.uk>
CONFIG_SOUND_MAD16=m
CONFIG_SOUND_YM3812=m
-modules.conf has:
+modprobe.conf has:
-alias char-major-14 mad16
+alias char-major-14-* mad16
options sb mad16=1
options mad16 io=0x530 irq=7 dma=0 dma16=1 && /usr/local/bin/aumix -w 15 -p 20 -m 0 -1 0 -2 0 -3 0 -i 0
installed with the rest of the modules for the kernel on the system.
Typically this will be in /lib/modules/ somewhere. 'alias sound-slot-0
maestro3' should also be added to your module configs (typically
-/etc/modules.conf) if you're using modular OSS/Lite sound and want to
+/etc/modprobe.conf) if you're using modular OSS/Lite sound and want to
default to using a maestro3 chip.
There are very few options to the driver. One is 'debug' which will
modprobe opl3 io=0x388
See the section "Automatic Module Loading" below for how to set up
-/etc/modules.conf to automate this.
+/etc/modprobe.conf to automate this.
An important thing to remember is that the opl3sa2 module's io argument is
for its own control port, which handles the card's master mixer for
Lastly, if you're using modules and want to set up automatic module
loading with kmod, the kernel module loader, here is the section I
-currently use in my modules.conf file:
+currently use in my modprobe.conf file:
# Sound
alias sound-slot-0 opl3sa2
If you have another OS installed on your computer it is recommended
that Linux and the other OS use the same resources.
-Also, it is recommended that resources specified in /etc/modules.conf
+Also, it is recommended that resources specified in /etc/modprobe.conf
and resources specified in /etc/isapnp.conf agree.
Compiling the sound driver
Using kmod and autoloading the sound driver
-------------------------------------------
Comment: as of linux-2.1.90 kmod is replacing kerneld.
-The config file '/etc/modules.conf' is used as before.
+The config file '/etc/modprobe.conf' is used as before.
-This is the sound part of my /etc/modules.conf file.
+This is the sound part of my /etc/modprobe.conf file.
Following that I will explain each line.
alias mixer0 mad16
options sb mad16=1
options mad16 irq=10 dma=0 dma16=1 io=0x530 joystick=1 cdtype=0
options opl3 io=0x388
-post-install mad16 /sbin/ad1848_mixer_reroute 14 8 15 3 16 6
+install mad16 /sbin/modprobe -i mad16 && /sbin/ad1848_mixer_reroute 14 8 15 3 16 6
If you have an MPU daughtercard or onboard MPU you will want to add to the
"options mad16" line - eg
You can then get OPL3 functionality by issuing the command:
insmod opl3
In addition, you must either add the following line to
- /etc/modules.conf:
+ /etc/modprobe.conf:
options opl3 io=0x388
or else add the following line to /etc/lilo.conf:
opl3=0x388
append="pas2=0x388,10,3,-1,0,-1,-1,-1 opl3=0x388"
If sound is built totally modular, the above options may be
-specified in /etc/modules.conf for pas2.o, sb.o and opl3.o
+specified in /etc/modprobe.conf for pas2, sb and opl3
respectively.
drivers/sound dir. Now one simply configures and makes one's kernel and
modules in the usual way.
- Then, add to your /etc/modules.conf something like:
+ Then, add to your /etc/modprobe.conf something like:
-alias char-major-14 sb
-post-install sb /sbin/modprobe "-k" "adlib_card"
+alias char-major-14-* sb
+install sb /sbin/modprobe -i sb && /sbin/modprobe adlib_card
options sb io=0x220 irq=7 dma=1 dma16=5 mpu_io=0x330
options adlib_card io=0x388 # FM synthesizer
Note that at present there is no way to configure the io, irq and other
parameters for the modular drivers as one does for the wired drivers. One
needs to pass the modules the necessary parameters as arguments, either
-with /etc/modules.conf or with command-line args to modprobe, e.g.
+with /etc/modprobe.conf or with command-line args to modprobe, e.g.
-modprobe -k sb io=0x220 irq=7 dma=1 dma16=5 mpu_io=0x330
-modprobe -k adlib_card io=0x388
+modprobe sb io=0x220 irq=7 dma=1 dma16=5 mpu_io=0x330
+modprobe adlib_card io=0x388
- recommend using /etc/modules.conf.
+ recommend using /etc/modprobe.conf.
Persistent DMA Buffers:
To make the sound driver use persistent DMA buffers we need to pass the
sound.o module a "dmabuf=1" command-line argument. This is normally done
-in /etc/modules.conf like so:
+in /etc/modprobe.conf like so:
options sound dmabuf=1
6) How do I configure my card ?
************************************************************
-You need to edit /etc/modules.conf. Here's mine (edited to show the
+You need to edit /etc/modprobe.conf. Here's mine (edited to show the
relevant details):
# Sound system
- alias char-major-14 wavefront
+ alias char-major-14-* wavefront
alias synth0 wavefront
alias mixer0 cs4232
alias audio0 cs4232
- pre-install wavefront modprobe "-k" "cs4232"
- post-install wavefront modprobe "-k" "opl3"
+ install wavefront /sbin/modprobe cs4232 && /sbin/modprobe -i wavefront && /sbin/modprobe opl3
options wavefront io=0x200 irq=9
options cs4232 synthirq=9 synthio=0x200 io=0x530 irq=5 dma=1 dma2=0
options opl3 io=0x388
can have. You only need to increase super-max if you need to
mount more filesystems than the current value in super-max
allows you to.
+
+==============================================================
+
+aio-nr & aio-max-nr:
+
+aio-nr shows the current system-wide number of asynchronous io
+requests. aio-max-nr allows you to change the maximum value
+aio-nr can grow to.
+
+==============================================================
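For example, the current count and the limit can be read from procfs, and the
limit raised by writing to it (root required; the new limit below is
illustrative):

```shell
cat /proc/sys/fs/aio-nr                  # current number of in-flight aio requests
cat /proc/sys/fs/aio-max-nr              # current system-wide maximum
echo 1048576 > /proc/sys/fs/aio-max-nr   # raise the maximum
```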
options scanner vendor=0x#### product=0x****
-to the /etc/modules.conf file replacing the #'s and the *'s with the
+to the /etc/modprobe.conf file replacing the #'s and the *'s with the
correct IDs. The IDs can be retrieved from the messages file or
using "cat /proc/bus/usb/devices".
If the default timeout is too low, i.e. there are frequent "timeout" messages,
you may want to increase the timeout manually by using the parameter
"read_timeout". The time is given in seconds. This is an example for
-modules.conf with a timeout of 60 seconds:
+modprobe.conf with a timeout of 60 seconds:
options scanner read_timeout=60
The configuration requires module configuration and device
configuration. I like to use kmod or kerneld with the
-/etc/modules.conf file so the modules can automatically load/unload as
+/etc/modprobe.conf file so the modules can automatically load/unload as
they are used. The video devices could already exist, be generated
using MAKEDEV, or need to be created. The following sections detail
these procedures.
2.1 Module Configuration
Using modules requires a bit of work to install and pass the
-parameters. Understand that entries in /etc/modules.conf of:
+parameters. Understand that entries in /etc/modprobe.conf of:
alias parport_lowlevel parport_pc
options parport_pc io=0x378 irq=none
alias char-major-81 videodev
alias char-major-81-0 c-qcam
-will cause the kmod/kerneld/modprobe to do certain things. If you are
-using kmod or kerneld, then a request for a 'char-major-81-0' will cause
+will cause the kmod/modprobe to do certain things. If you are
+using kmod, then a request for a 'char-major-81-0' will cause
the 'c-qcam' module to load. If you have other video sources with
modules, you might want to assign the different minor numbers to
different modules.
option with X being the card number as given in the previous section.
To have more than one card, use card=X1[,X2[,X3,[X4[..]]]]
-To automate this, add the following to your /etc/modules.conf:
+To automate this, add the following to your /etc/modprobe.conf:
options zr36067 card=X1[,X2[,X3[,X4[..]]]]
alias char-major-81-0 zr36067
--- /dev/null
+# i2c
+alias char-major-89 i2c-dev
+options i2c-core i2c_debug=1
+options i2c-algo-bit bit_test=1
+
+# bttv
+alias char-major-81 videodev
+alias char-major-81-0 bttv
+options bttv card=2 radio=1
+options tuner debug=1
+
+# For modern kernels (2.6 or above), this belongs in /etc/modprobe.conf
+# For 2.4 kernels or earlier, this belongs in /etc/modules.conf.
+
# i2c
alias char-major-89 i2c-dev
options i2c-core i2c_debug=1
cards is in CARDLIST.bttv
If bttv takes very long to load (happens sometimes with the cheap
-cards which have no tuner), try adding this to your modules.conf:
+cards which have no tuner), try adding this to your modprobe.conf:
options i2c-algo-bit bit_test=1
For the WinTV/PVR you need one firmware file from the driver CD:
---------------
Several options can be passed to the meye driver, either by adding them
-to /etc/modules.conf file, when the driver is compiled as a module, or
+to /etc/modprobe.conf file, when the driver is compiled as a module, or
by adding the following to the kernel command line (in your bootloader):
meye=gbuffers[,gbufsize[,video_nr]]
-----------
In order to automatically load the meye module on use, you can put those lines
-in your /etc/modules.conf file:
+in your /etc/modprobe.conf file:
alias char-major-81 videodev
alias char-major-81-0 meye
P: Dave Jones
M: davej@codemonkey.org.uk
L: cpufreq@www.linux.org.uk
-W: http://www.codemonkey.org.uk/cpufreq/
+W: http://www.codemonkey.org.uk/projects/cpufreq/
S: Maintained
CPUID/MSR DRIVER
irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
prof_cpu_mask_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
source "fs/Kconfig"
+source "arch/arm/oprofile/Kconfig"
+
source "drivers/video/Kconfig"
if ARCH_ACORN || ARCH_CLPS7500 || ARCH_TBOX || ARCH_SHARK || ARCH_SA1100 || PCI
core-$(CONFIG_FPE_NWFPE) += arch/arm/nwfpe/
core-$(CONFIG_FPE_FASTFPE) += $(FASTFPE_OBJ)
+drivers-$(CONFIG_OPROFILE) += arch/arm/oprofile/
drivers-$(CONFIG_ARCH_CLPS7500) += drivers/acorn/char/
drivers-$(CONFIG_ARCH_L7200) += drivers/acorn/char/
*/
static inline void do_profile(struct pt_regs *regs)
{
+
+ profile_hook(regs);
+
if (!user_mode(regs) &&
prof_buffer &&
current->pid) {
--- /dev/null
+
+menu "Profiling support"
+ depends on EXPERIMENTAL
+
+config PROFILING
+ bool "Profiling support (EXPERIMENTAL)"
+ help
+ Say Y here to enable the extended profiling support mechanisms used
+ by profilers such as OProfile.
+
+
+config OPROFILE
+ tristate "OProfile system profiling (EXPERIMENTAL)"
+ depends on PROFILING
+ help
+ OProfile is a profiling system capable of profiling the
+	  whole system, including the kernel, kernel modules, libraries,
+ and applications.
+
+ If unsure, say N.
+
+endmenu
+
--- /dev/null
+obj-$(CONFIG_OPROFILE) += oprofile.o
+
+DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
+ oprof.o cpu_buffer.o buffer_sync.o \
+ event_buffer.o oprofile_files.o \
+ oprofilefs.o oprofile_stats.o \
+ timer_int.o )
+
+oprofile-y := $(DRIVER_OBJS) init.o
--- /dev/null
+/**
+ * @file init.c
+ *
+ * @remark Copyright 2004 Oprofile Authors
+ *
+ * @author Zwane Mwaikambo
+ */
+
+#include <linux/oprofile.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+
+int oprofile_arch_init(struct oprofile_operations **ops)
+{
+ int ret = -ENODEV;
+
+ return ret;
+}
+
+void oprofile_arch_exit(void)
+{
+}
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
/* Register device */
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
local_irq_restore(flags);
+ clock_was_set();
return 0;
}
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
help
Choose this option if your computer is a standard PC or compatible.
+config X86_ELAN
+ bool "AMD Elan"
+ help
+ Select this for an AMD Elan processor.
+
+ Do not use this option for K6/Athlon/Opteron processors!
+
+ If unsure, choose "PC-compatible" instead.
+
config X86_VOYAGER
bool "Voyager (NCR)"
help
default y
depends on SMP && X86_ES7000 && MPENTIUMIII
+if !X86_ELAN
+
choice
prompt "Processor family"
default M686
extended prefetch instructions in addition to the Pentium II
extensions.
+config MPENTIUMM
+ bool "Pentium M"
+ help
+ Select this for Intel Pentium M (not Pentium-4 M)
+ notebook chips.
+
config MPENTIUM4
- bool "Pentium-4/Celeron(P4-based)/Xeon"
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/Xeon"
help
- Select this for Intel Pentium 4 chips. This includes both
- the Pentium 4 and P4-based Celeron chips. This option
- enables compile flags optimized for the chip, uses the
- correct cache shift, and applies any applicable Pentium III
- optimizations.
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, P4-based Celeron and Xeon, and Pentium-4 M
+ (not Pentium M) chips. This option enables compile flags
+ optimized for the chip, uses the correct cache shift, and
+ applies any applicable Pentium III optimizations.
config MK6
bool "K6/K6-II/K6-III"
when it has moderate overhead. This is intended for generic
	  distribution kernels.
+endif
+
#
# Define implied options from the CPU selection here
#
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || X86_GENERIC
- default "4" if MELAN || M486 || M386
+ default "4" if X86_ELAN || M486 || M386
default "5" if MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCRUSOE || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2
- default "6" if MK7 || MK8
+ default "6" if MK7 || MK8 || MPENTIUMM
config RWSEM_GENERIC_SPINLOCK
bool
config X86_ALIGNMENT_16
bool
- depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2
+ depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || X86_ELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2
default y
config X86_GOOD_APIC
bool
- depends on MK7 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8
+ depends on MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8
default y
config X86_INTEL_USERCOPY
bool
- depends on MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7
+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7
default y
config X86_USE_PPRO_CHECKSUM
bool
- depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2
+ depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2
default y
config X86_USE_3DNOW
config X86_TSC
bool
- depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2) && !X86_NUMAQ
+ depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2) && !X86_NUMAQ
default y
config X86_MCE
To compile this driver as a module, choose M here: the
module will be called microcode.
- If you use modprobe or kmod you may also want to add the line
- 'alias char-major-10-184 microcode' to your /etc/modules.conf file.
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
# Common NUMA Features
config NUMA
bool "Numa Memory Allocation Support"
- depends on SMP && HIGHMEM64G && (X86_PC || X86_NUMAQ || X86_GENERICARCH || (X86_SUMMIT && ACPI))
+ depends on SMP && HIGHMEM64G && (X86_NUMAQ || X86_GENERICARCH || (X86_SUMMIT && ACPI))
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)
anything about EFI). However, even with this option, the resultant
kernel should continue to boot on existing non-EFI platforms.
+config IRQBALANCE
+ bool "Enable kernel irq balancing"
+ depends on SMP
+ default y
+ help
+ The default yes will allow the kernel to do irq load balancing.
+ Saying no will keep the kernel from doing irq load balancing.
+
config HAVE_DEC_LOCK
bool
depends on (SMP || PREEMPT) && X86_CMPXCHG
config REGPARM
bool "Use register arguments (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
default n
help
Compile the kernel with -mregparm=3. This uses a different ABI
and passes the first three arguments of a function call in registers.
- This will probably break binary only modules.
-
+ This will probably break binary only modules.
+
+ This feature is only enabled for gcc-3.0 and later - earlier compilers
+ generate incorrect output with certain kernel constructs when
+ -mregparm=3 is used.
+
endmenu
menu "Special options"
identify kernel problems.
config EARLY_PRINTK
- bool
- default y
+ bool "Early printk" if EMBEDDED
+ default y
+ help
+ Write kernel log output directly into the VGA buffer or to a serial
+ port.
+
+ This is useful for kernel debugging when your machine crashes very
+ early before the console code is initialized. For normal operation
+ it is not recommended because it looks ugly and doesn't cooperate
+ with klogd/syslogd or the X server. You should normally say N here,
+ unless you want to debug such a crash.
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
+config DEBUG_STACK_USAGE
+ bool "Stack utilization instrumentation"
+ depends on DEBUG_KERNEL
+ help
+ Enables the display of the minimum amount of free stack which each
+ task has ever had available in the sysrq-T and sysrq-P debug output.
+
+ This option will slow down process creation somewhat.
+
config DEBUG_SLAB
bool "Debug memory allocations"
depends on DEBUG_KERNEL
depends on X86_LOCAL_APIC && !X86_VISWS
default y
+config REGPARM
+ bool "Use register arguments (EXPERIMENTAL)"
+ default n
+ help
+ Compile the kernel with -mregparm=3. This uses a different ABI
+ and passes the first three arguments of a function call in registers.
+ This will probably break binary only modules.
+
endmenu
source "security/Kconfig"
cflags-$(CONFIG_M686) += -march=i686
cflags-$(CONFIG_MPENTIUMII) += $(call check_gcc,-march=pentium2,-march=i686)
cflags-$(CONFIG_MPENTIUMIII) += $(call check_gcc,-march=pentium3,-march=i686)
+cflags-$(CONFIG_MPENTIUMM) += $(call check_gcc,-march=pentium3,-march=i686)
cflags-$(CONFIG_MPENTIUM4) += $(call check_gcc,-march=pentium4,-march=i686)
-cflags-$(CONFIG_MK6) += $(call check_gcc,-march=k6,-march=i586)
+cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsoever to performance at this time.
cflags-$(CONFIG_MK7) += $(call check_gcc,-march=athlon,-march=i686 $(align)-functions=4)
cflags-$(CONFIG_MCYRIXIII) += $(call check_gcc,-march=c3,-march=i486) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MVIAC3_2) += $(call check_gcc,-march=c3-2,-march=i686)
+# AMD Elan support
+cflags-$(CONFIG_X86_ELAN) += -march=i486
+
+# -mregparm=3 works ok on gcc-3.0 and later
+#
+GCC_VERSION := $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-version.sh $(CC))
+cflags-$(CONFIG_REGPARM) += $(shell if [ $(GCC_VERSION) -ge 0300 ] ; then echo "-mregparm=3"; fi ;)
+
+# Enable unit-at-a-time mode when possible. It shrinks the
+# kernel considerably.
+CFLAGS += $(call check_gcc,-funit-at-a-time,)
+
# m586 and genericarch is likely a distribution kernel. optimize for the
# most common CPU (686)
ifeq ($(CONFIG_X86_GENERICARCH),y)
host-progs := tools/build
+HOSTCFLAGS_build.o := -Iinclude
+
# ---------------------------------------------------------------------------
$(obj)/zImage: IMAGE_OFFSET := 0x1000
# AMD Elan bug fix by Robert Schwebel.
#
-#if defined(CONFIG_MELAN)
+#if defined(CONFIG_X86_ELAN)
movb $0x02, %al # alternate A20 gate
outb %al, $0x92 # this works on SC410/SC520
a20_elan_wait:
obj-$(CONFIG_EFI) += efi.o efi_stub.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
-early_printk-y := ../../x86_64/kernel/early_printk.o
-
EXTRA_AFLAGS := -traditional
obj-$(CONFIG_SCx200) += scx200.o
}
#endif
+/* detect the location of the ACPI PM Timer */
+#ifdef CONFIG_X86_PM_TIMER
+extern u32 pmtmr_ioport;
+
+static int __init acpi_parse_fadt(unsigned long phys, unsigned long size)
+{
+ struct fadt_descriptor_rev2 *fadt = NULL;
+
+ fadt = (struct fadt_descriptor_rev2 *) __acpi_map_table(phys, size);
+ if (!fadt) {
+ printk(KERN_WARNING PREFIX "Unable to map FADT\n");
+ return 0;
+ }
+
+ if (fadt->revision >= FADT2_REVISION_ID) {
+ /* FADT rev. 2 */
+ if (fadt->xpm_tmr_blk.address_space_id != ACPI_ADR_SPACE_SYSTEM_IO)
+ return 0;
+
+ pmtmr_ioport = fadt->xpm_tmr_blk.address;
+ } else {
+ /* FADT rev. 1 */
+ pmtmr_ioport = fadt->V1_pm_tmr_blk;
+ }
+ if (pmtmr_ioport)
+ printk(KERN_INFO PREFIX "PM-Timer IO Port: %#x\n", pmtmr_ioport);
+ return 0;
+}
+#endif
+
+
unsigned long __init
acpi_find_rsdp (void)
{
return result;
}
+#ifdef CONFIG_X86_PM_TIMER
+ acpi_table_parse(ACPI_FADT, acpi_parse_fadt);
+#endif
+
#ifdef CONFIG_X86_LOCAL_APIC
/*
config ELAN_CPUFREQ
tristate "AMD Elan"
- depends on CPU_FREQ_TABLE && MELAN
+ depends on CPU_FREQ_TABLE && X86_ELAN
---help---
This adds the CPUFreq driver for AMD Elan SC400 and SC410
processors.
/*
- * (C) 2001-2003 Dave Jones. <davej@codemonkey.org.uk>
+ * (C) 2001-2004 Dave Jones. <davej@codemonkey.org.uk>
* (C) 2002 Padraig Brady. <padraig@antefacto.com>
*
* Licensed under the terms of the GNU GPL License version 2.
return target;
}
+
static int guess_fsb(int maxmult)
{
int speed = (cpu_khz/1000);
}
-
static int __init longhaul_get_ranges (void)
{
struct cpuinfo_x86 *c = cpu_data;
return 0;
}
-static int longhaul_cpu_init (struct cpufreq_policy *policy)
+static int __init longhaul_cpu_init (struct cpufreq_policy *policy)
{
struct cpuinfo_x86 *c = cpu_data;
char *cpuname=NULL;
}
-static int longrun_cpu_init(struct cpufreq_policy *policy)
+static int __init longrun_cpu_init(struct cpufreq_policy *policy)
{
int result = 0;
/*
* AMD K7 Powernow driver.
* (C) 2003 Dave Jones <davej@codemonkey.org.uk> on behalf of SuSE Labs.
- * (C) 2003 Dave Jones <davej@redhat.com>
+ * (C) 2003-2004 Dave Jones <davej@redhat.com>
*
* Licensed under the terms of the GNU GPL License version 2.
* Based upon datasheets & sample CPUs kindly provided by AMD.
#include "mce.h"
-static struct timer_list mce_timer;
-static int timerset;
static int firstbank;
#define MCE_RATE 15*HZ /* timer rate is 15s */
u32 low, high;
int i;
- preempt_disable();
for (i=firstbank; i<nr_mce_banks; i++) {
rdmsr (MSR_IA32_MC0_STATUS+i*4, low, high);
if (high & (1<<31)) {
- printk (KERN_EMERG "MCE: The hardware reports a non fatal, correctable incident occurred on CPU %d.\n",
+ printk(KERN_INFO "MCE: The hardware reports a non "
+ "fatal, correctable incident occurred on "
+ "CPU %d.\n",
smp_processor_id());
- printk (KERN_EMERG "Bank %d: %08x%08x\n", i, high, low);
+ printk (KERN_INFO "Bank %d: %08x%08x\n", i, high, low);
/* Scrub the error so we don't pick it up in MCE_RATE seconds time. */
wrmsr (MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
wmb();
}
}
- preempt_enable();
}
-static void do_mce_timer(void *data)
+static void mce_work_fn(void *data);
+static DECLARE_WORK(mce_work, mce_work_fn, NULL);
+
+static void mce_work_fn(void *data)
{
- smp_call_function (mce_checkregs, NULL, 1, 1);
+ on_each_cpu(mce_checkregs, NULL, 1, 1);
+ schedule_delayed_work(&mce_work, MCE_RATE);
}
-static DECLARE_WORK(mce_work, do_mce_timer, NULL);
-
-static void mce_timerfunc (unsigned long data)
-{
- mce_checkregs (NULL);
-#ifdef CONFIG_SMP
- if (num_online_cpus() > 1)
- schedule_work (&mce_work);
-#endif
- mce_timer.expires = jiffies + MCE_RATE;
- add_timer (&mce_timer);
-}
-
static int __init init_nonfatal_mce_checker(void)
{
struct cpuinfo_x86 *c = &boot_cpu_data;
else
firstbank = 0;
- if (timerset == 0) {
- /* Set the timer to check for non-fatal
- errors every MCE_RATE seconds */
- init_timer (&mce_timer);
- mce_timer.expires = jiffies + MCE_RATE;
- mce_timer.data = 0;
- mce_timer.function = &mce_timerfunc;
- add_timer (&mce_timer);
- timerset = 1;
- printk(KERN_INFO "Machine check exception polling timer started.\n");
- }
+ /*
+ * Check for non-fatal errors every MCE_RATE s
+ */
+ schedule_delayed_work(&mce_work, MCE_RATE);
+ printk(KERN_INFO "Machine check exception polling timer started.\n");
return 0;
}
module_init(init_nonfatal_mce_checker);
--- /dev/null
+
+#include "../../x86_64/kernel/early_printk.c"
for (i = 0; i < 4; i++) {
if (isprint(info->params.host_bus_type[i])) {
- p += snprintf(p, left, "%c", info->params.host_bus_type[i]);
+ p += scnprintf(p, left, "%c", info->params.host_bus_type[i]);
} else {
- p += snprintf(p, left, " ");
+ p += scnprintf(p, left, " ");
}
}
if (!strncmp(info->params.host_bus_type, "ISA", 3)) {
- p += snprintf(p, left, "\tbase_address: %x\n",
+ p += scnprintf(p, left, "\tbase_address: %x\n",
info->params.interface_path.isa.base_address);
} else if (!strncmp(info->params.host_bus_type, "PCIX", 4) ||
!strncmp(info->params.host_bus_type, "PCI", 3)) {
- p += snprintf(p, left,
+ p += scnprintf(p, left,
"\t%02x:%02x.%d channel: %u\n",
info->params.interface_path.pci.bus,
info->params.interface_path.pci.slot,
} else if (!strncmp(info->params.host_bus_type, "IBND", 4) ||
!strncmp(info->params.host_bus_type, "XPRS", 4) ||
!strncmp(info->params.host_bus_type, "HTPT", 4)) {
- p += snprintf(p, left,
+ p += scnprintf(p, left,
"\tTBD: %llx\n",
info->params.interface_path.ibnd.reserved);
} else {
- p += snprintf(p, left, "\tunknown: %llx\n",
+ p += scnprintf(p, left, "\tunknown: %llx\n",
info->params.interface_path.unknown.reserved);
}
return (p - buf);
for (i = 0; i < 8; i++) {
if (isprint(info->params.interface_type[i])) {
- p += snprintf(p, left, "%c", info->params.interface_type[i]);
+ p += scnprintf(p, left, "%c", info->params.interface_type[i]);
} else {
- p += snprintf(p, left, " ");
+ p += scnprintf(p, left, " ");
}
}
if (!strncmp(info->params.interface_type, "ATAPI", 5)) {
- p += snprintf(p, left, "\tdevice: %u lun: %u\n",
+ p += scnprintf(p, left, "\tdevice: %u lun: %u\n",
info->params.device_path.atapi.device,
info->params.device_path.atapi.lun);
} else if (!strncmp(info->params.interface_type, "ATA", 3)) {
- p += snprintf(p, left, "\tdevice: %u\n",
+ p += scnprintf(p, left, "\tdevice: %u\n",
info->params.device_path.ata.device);
} else if (!strncmp(info->params.interface_type, "SCSI", 4)) {
- p += snprintf(p, left, "\tid: %u lun: %llu\n",
+ p += scnprintf(p, left, "\tid: %u lun: %llu\n",
info->params.device_path.scsi.id,
info->params.device_path.scsi.lun);
} else if (!strncmp(info->params.interface_type, "USB", 3)) {
- p += snprintf(p, left, "\tserial_number: %llx\n",
+ p += scnprintf(p, left, "\tserial_number: %llx\n",
info->params.device_path.usb.serial_number);
} else if (!strncmp(info->params.interface_type, "1394", 4)) {
- p += snprintf(p, left, "\teui: %llx\n",
+ p += scnprintf(p, left, "\teui: %llx\n",
info->params.device_path.i1394.eui);
} else if (!strncmp(info->params.interface_type, "FIBRE", 5)) {
- p += snprintf(p, left, "\twwid: %llx lun: %llx\n",
+ p += scnprintf(p, left, "\twwid: %llx lun: %llx\n",
info->params.device_path.fibre.wwid,
info->params.device_path.fibre.lun);
} else if (!strncmp(info->params.interface_type, "I2O", 3)) {
- p += snprintf(p, left, "\tidentity_tag: %llx\n",
+ p += scnprintf(p, left, "\tidentity_tag: %llx\n",
info->params.device_path.i2o.identity_tag);
} else if (!strncmp(info->params.interface_type, "RAID", 4)) {
- p += snprintf(p, left, "\tidentity_tag: %x\n",
+ p += scnprintf(p, left, "\tidentity_tag: %x\n",
info->params.device_path.raid.array_number);
} else if (!strncmp(info->params.interface_type, "SATA", 4)) {
- p += snprintf(p, left, "\tdevice: %u\n",
+ p += scnprintf(p, left, "\tdevice: %u\n",
info->params.device_path.sata.device);
} else {
- p += snprintf(p, left, "\tunknown: %llx %llx\n",
+ p += scnprintf(p, left, "\tunknown: %llx %llx\n",
info->params.device_path.unknown.reserved1,
info->params.device_path.unknown.reserved2);
}
return -EINVAL;
}
- p += snprintf(p, left, "0x%02x\n", info->version);
+ p += scnprintf(p, left, "0x%02x\n", info->version);
return (p - buf);
}
edd_show_disk80_sig(struct edd_device *edev, char *buf)
{
char *p = buf;
- p += snprintf(p, left, "0x%08x\n", edd_disk80_sig);
+ p += scnprintf(p, left, "0x%08x\n", edd_disk80_sig);
return (p - buf);
}
}
if (info->interface_support & EDD_EXT_FIXED_DISK_ACCESS) {
- p += snprintf(p, left, "Fixed disk access\n");
+ p += scnprintf(p, left, "Fixed disk access\n");
}
if (info->interface_support & EDD_EXT_DEVICE_LOCKING_AND_EJECTING) {
- p += snprintf(p, left, "Device locking and ejecting\n");
+ p += scnprintf(p, left, "Device locking and ejecting\n");
}
if (info->interface_support & EDD_EXT_ENHANCED_DISK_DRIVE_SUPPORT) {
- p += snprintf(p, left, "Enhanced Disk Drive support\n");
+ p += scnprintf(p, left, "Enhanced Disk Drive support\n");
}
if (info->interface_support & EDD_EXT_64BIT_EXTENSIONS) {
- p += snprintf(p, left, "64-bit extensions\n");
+ p += scnprintf(p, left, "64-bit extensions\n");
}
return (p - buf);
}
}
if (info->params.info_flags & EDD_INFO_DMA_BOUNDARY_ERROR_TRANSPARENT)
- p += snprintf(p, left, "DMA boundary error transparent\n");
+ p += scnprintf(p, left, "DMA boundary error transparent\n");
if (info->params.info_flags & EDD_INFO_GEOMETRY_VALID)
- p += snprintf(p, left, "geometry valid\n");
+ p += scnprintf(p, left, "geometry valid\n");
if (info->params.info_flags & EDD_INFO_REMOVABLE)
- p += snprintf(p, left, "removable\n");
+ p += scnprintf(p, left, "removable\n");
if (info->params.info_flags & EDD_INFO_WRITE_VERIFY)
- p += snprintf(p, left, "write verify\n");
+ p += scnprintf(p, left, "write verify\n");
if (info->params.info_flags & EDD_INFO_MEDIA_CHANGE_NOTIFICATION)
- p += snprintf(p, left, "media change notification\n");
+ p += scnprintf(p, left, "media change notification\n");
if (info->params.info_flags & EDD_INFO_LOCKABLE)
- p += snprintf(p, left, "lockable\n");
+ p += scnprintf(p, left, "lockable\n");
if (info->params.info_flags & EDD_INFO_NO_MEDIA_PRESENT)
- p += snprintf(p, left, "no media present\n");
+ p += scnprintf(p, left, "no media present\n");
if (info->params.info_flags & EDD_INFO_USE_INT13_FN50)
- p += snprintf(p, left, "use int13 fn50\n");
+ p += scnprintf(p, left, "use int13 fn50\n");
return (p - buf);
}
return -EINVAL;
}
- p += snprintf(p, left, "0x%x\n", info->params.num_default_cylinders);
+ p += scnprintf(p, left, "0x%x\n", info->params.num_default_cylinders);
return (p - buf);
}
return -EINVAL;
}
- p += snprintf(p, left, "0x%x\n", info->params.num_default_heads);
+ p += scnprintf(p, left, "0x%x\n", info->params.num_default_heads);
return (p - buf);
}
return -EINVAL;
}
- p += snprintf(p, left, "0x%x\n", info->params.sectors_per_track);
+ p += scnprintf(p, left, "0x%x\n", info->params.sectors_per_track);
return (p - buf);
}
return -EINVAL;
}
- p += snprintf(p, left, "0x%llx\n", info->params.number_of_sectors);
+ p += scnprintf(p, left, "0x%llx\n", info->params.number_of_sectors);
return (p - buf);
}
/* Do not access memory above the end of our stack page,
* it might not exist.
*/
- andl $0x1fff,%eax
- cmpl $0x1fec,%eax
+ andl $(THREAD_SIZE-1),%eax
+ cmpl $(THREAD_SIZE-20),%eax
popl %eax
jae nmi_stack_correct
cmpl $sysenter_entry,12(%esp)
#include <asm/pgtable.h>
#include <asm/desc.h>
#include <asm/cache.h>
+#include <asm/thread_info.h>
+
#define OLD_CL_MAGIC_ADDR 0x90020
#define OLD_CL_MAGIC 0xA33F
ret
ENTRY(stack_start)
- .long init_thread_union+8192
+ .long init_thread_union+THREAD_SIZE
.long __BOOT_DS
/* This is the default interrupt "handler" :-) */
spin_unlock_irqrestore(&ioapic_lock, flags);
}
-#if defined(CONFIG_SMP)
+#if defined(CONFIG_IRQBALANCE)
# include <asm/processor.h> /* kernel_thread() */
# include <linux/kernel_stat.h> /* kstat */
# include <linux/slab.h> /* kmalloc() */
__initcall(balanced_irq_init);
-#else /* !SMP */
+#else /* !CONFIG_IRQBALANCE */
static inline void move_irq(int irq) { }
+#endif /* CONFIG_IRQBALANCE */
+#ifndef CONFIG_SMP
void send_IPI_self(int vector)
{
unsigned int cfg;
*/
apic_write_around(APIC_ICR, cfg);
}
-#endif /* defined(CONFIG_SMP) */
+#endif /* !CONFIG_SMP */
/*
{
int pin1, pin2;
int vector;
+ unsigned int ver;
+
+ ver = apic_read(APIC_LVR);
+ ver = GET_APIC_VERSION(ver);
/*
* get/set the timer IRQ vector:
* mode for the 8259A whenever interrupts are routed
* through I/O APICs. Also IRQ0 has to be enabled in
* the 8259A which implies the virtual wire has to be
- * disabled in the local APIC.
+ * disabled in the local APIC. Finally timer interrupts
+ * need to be acknowledged manually in the 8259A for
+ * do_slow_gettimeoffset() and for the i82489DX when using
+ * the NMI watchdog.
*/
apic_write_around(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_EXTINT);
init_8259A(1);
- timer_ack = 1;
+ if (nmi_watchdog == NMI_IO_APIC && !APIC_INTEGRATED(ver))
+ timer_ack = 1;
+ else
+ timer_ack = !cpu_has_tsc;
enable_8259A_irq(0);
pin1 = find_isa_irq_pin(0, mp_INT);
disable_8259A_irq(0);
setup_nmi();
enable_8259A_irq(0);
- check_nmi_watchdog();
+ if (check_nmi_watchdog() < 0)
+ timer_ack = !cpu_has_tsc;
}
return;
}
add_pin_to_irq(0, 0, pin2);
if (nmi_watchdog == NMI_IO_APIC) {
setup_nmi();
- check_nmi_watchdog();
+ if (check_nmi_watchdog() < 0)
+ timer_ack = !cpu_has_tsc;
}
return;
}
long esp;
__asm__ __volatile__("andl %%esp,%0" :
- "=r" (esp) : "0" (8191));
+ "=r" (esp) : "0" (THREAD_SIZE - 1));
if (unlikely(esp < (sizeof(struct thread_info) + 1024))) {
printk("do_IRQ: stack overflow: %ld\n",
esp - sizeof(struct thread_info));
static int irq_affinity_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
/* write microcode via MSR 0x79 */
wrmsr(MSR_IA32_UCODE_WRITE,
- (unsigned long) uci->mc->bits,
- (unsigned long) uci->mc->bits >> 16 >> 16);
+ (unsigned long) uci->mc->bits,
+ (unsigned long) uci->mc->bits >> 16 >> 16);
wrmsr(MSR_IA32_UCODE_REV, 0, 0);
__asm__ __volatile__ ("cpuid" : : : "ax", "bx", "cx", "dx");
module_init(microcode_init)
module_exit(microcode_exit)
+MODULE_ALIAS_MISCDEV(MICROCODE_MINOR);
* be enabled
* -1: the lapic NMI watchdog is disabled, but can be enabled
*/
-static int nmi_active;
+int nmi_active;
#define K7_EVNTSEL_ENABLE (1 << 22)
#define K7_EVNTSEL_INT (1 << 20)
}
}
+EXPORT_SYMBOL(nmi_active);
EXPORT_SYMBOL(nmi_watchdog);
EXPORT_SYMBOL(disable_lapic_nmi_watchdog);
EXPORT_SYMBOL(enable_lapic_nmi_watchdog);
extern void scheduling_functions_end_here(void);
#define first_sched ((unsigned long) scheduling_functions_start_here)
#define last_sched ((unsigned long) scheduling_functions_end_here)
+#define top_esp (THREAD_SIZE - sizeof(unsigned long))
+#define top_ebp (THREAD_SIZE - 2*sizeof(unsigned long))
unsigned long get_wchan(struct task_struct *p)
{
return 0;
stack_page = (unsigned long)p->thread_info;
esp = p->thread.esp;
- if (!stack_page || esp < stack_page || esp > 8188+stack_page)
+ if (!stack_page || esp < stack_page || esp > top_esp+stack_page)
return 0;
/* include/asm-i386/system.h:switch_to() pushes ebp last. */
ebp = *(unsigned long *) esp;
do {
- if (ebp < stack_page || ebp > 8184+stack_page)
+ if (ebp < stack_page || ebp > top_ebp+stack_page)
return 0;
eip = *(unsigned long *) (ebp+4);
if (eip < first_sched || eip >= last_sched)
#ifdef CONFIG_EARLY_PRINTK
{
- char *s = strstr(*cmdline_p, "earlyprintk=");
- if (s) {
- extern void setup_early_printk(char *);
- setup_early_printk(s+12);
- printk("early console should work ....\n");
- }
+ char *s = strstr(*cmdline_p, "earlyprintk=");
+ if (s) {
+ extern void setup_early_printk(char *);
+
+ setup_early_printk(s);
+ printk("early console enabled\n");
+ }
}
#endif
printk("CPU%d: ", 0);
print_cpu_info(&cpu_data[0]);
+ boot_cpu_physical_apicid = GET_APIC_ID(apic_read(APIC_ID));
boot_cpu_logical_apicid = logical_smp_processor_id();
current_thread_info()->cpu = 0;
setup_local_APIC();
map_cpu_to_logical_apicid();
- if (GET_APIC_ID(apic_read(APIC_ID)) != boot_cpu_physical_apicid)
- BUG();
setup_portio_remap();
obj-$(CONFIG_X86_CYCLONE_TIMER) += timer_cyclone.o
obj-$(CONFIG_HPET_TIMER) += timer_hpet.o
+obj-$(CONFIG_X86_PM_TIMER) += timer_pm.o
}
#endif
+/* calculate cpu_khz */
+void __init init_cpu_khz(void)
+{
+ if (cpu_has_tsc) {
+ unsigned long tsc_quotient = calibrate_tsc();
+ if (tsc_quotient) {
+ /* report CPU clock rate in Hz.
+ * The formula is (10^6 * 2^32) / (2^32 * 1 / (clocks/us)) =
+ * clock/second. Our precision is about 100 ppm.
+ */
+ { unsigned long eax=0, edx=1000;
+ __asm__("divl %2"
+ :"=a" (cpu_khz), "=d" (edx)
+ :"r" (tsc_quotient),
+ "0" (eax), "1" (edx));
+ printk("Detected %lu.%03lu MHz processor.\n", cpu_khz / 1000, cpu_khz % 1000);
+ }
+ }
+ }
+}
#ifdef CONFIG_HPET_TIMER
&timer_hpet,
#endif
+#ifdef CONFIG_X86_PM_TIMER
+ &timer_pmtmr,
+#endif
&timer_tsc,
&timer_pit,
NULL,
}
}
- /* init cpu_khz.
- * XXX - This should really be done elsewhere,
- * and in a more generic fashion. -johnstul@us.ibm.com
- */
- if (cpu_has_tsc) {
- unsigned long tsc_quotient = calibrate_tsc();
- if (tsc_quotient) {
- /* report CPU clock rate in Hz.
- * The formula is (10^6 * 2^32) / (2^32 * 1 / (clocks/us)) =
- * clock/second. Our precision is about 100 ppm.
- */
- { unsigned long eax=0, edx=1000;
- __asm__("divl %2"
- :"=a" (cpu_khz), "=d" (edx)
- :"r" (tsc_quotient),
- "0" (eax), "1" (edx));
- printk("Detected %lu.%03lu MHz processor.\n", cpu_khz / 1000, cpu_khz % 1000);
- }
- }
- }
+ init_cpu_khz();
/* Everything looks good! */
return 0;
--- /dev/null
+/*
+ * (C) Dominik Brodowski <linux@brodo.de> 2003
+ *
+ * Driver to use the Power Management Timer (PMTMR) available in some
+ * southbridges as primary timing source for the Linux kernel.
+ *
+ * Based on parts of linux/drivers/acpi/hardware/hwtimer.c, timer_pit.c,
+ * timer_hpet.c, and on Arjan van de Ven's implementation for 2.4.
+ *
+ * This file is licensed under the GPL v2.
+ */
+
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/init.h>
+#include <asm/types.h>
+#include <asm/timer.h>
+#include <asm/smp.h>
+#include <asm/io.h>
+#include <asm/arch_hooks.h>
+
+
+/* The I/O port the PMTMR resides at.
+ * The location is detected during setup_arch(),
+ * in arch/i386/acpi/boot.c */
+u32 pmtmr_ioport = 0;
+
+
+/* value of the Power timer at last timer interrupt */
+static u32 offset_tick;
+static u32 offset_delay;
+
+static unsigned long long monotonic_base;
+static seqlock_t monotonic_lock = SEQLOCK_UNLOCKED;
+
+#define ACPI_PM_MASK 0xFFFFFF /* limit it to 24 bits */
+
+/* helper function to safely read the ACPI PM time source */
+static inline u32 read_pmtmr(void)
+{
+ u32 v1 = 0, v2 = 0, v3 = 0;
+ /* It has been reported that on various broken chipsets
+ * (ICH4, PIIX4 and PIIX4E) the ACPI PM time source is not
+ * latched, so it must be read multiple times to ensure a
+ * safe value is read.
+ */
+ do {
+ v1 = inl(pmtmr_ioport);
+ v2 = inl(pmtmr_ioport);
+ v3 = inl(pmtmr_ioport);
+ } while ((v1 > v2 && v1 < v3) || (v2 > v3 && v2 < v1)
+ || (v3 > v1 && v3 < v2));
+
+ /* mask the output to 24 bits */
+ return v2 & ACPI_PM_MASK;
+}
+
+static int init_pmtmr(char* override)
+{
+ u32 value1, value2;
+ unsigned int i;
+
+ if (override[0] && strncmp(override,"pmtmr",5))
+ return -ENODEV;
+
+ if (!pmtmr_ioport)
+ return -ENODEV;
+
+ /* we use the TSC for delay_pmtmr, so make sure it exists */
+ if (!cpu_has_tsc)
+ return -ENODEV;
+
+ /* "verify" this timing source */
+ value1 = read_pmtmr();
+ for (i = 0; i < 10000; i++) {
+ value2 = read_pmtmr();
+ if (value2 == value1)
+ continue;
+ if (value2 > value1)
+ goto pm_good;
+ if ((value2 < value1) && ((value2) < 0xFFF))
+ goto pm_good;
+ printk(KERN_INFO "PM-Timer had inconsistent results: %#x, %#x - aborting.\n", value1, value2);
+ return -EINVAL;
+ }
+ printk(KERN_INFO "PM-Timer had no reasonable result: %#x - aborting.\n", value1);
+ return -ENODEV;
+
+pm_good:
+ init_cpu_khz();
+ return 0;
+}
+
+static inline u32 cyc2us(u32 cycles)
+{
+ /* The Power Management Timer ticks at 3.579545 ticks per microsecond.
+ * 1 / PM_TIMER_FREQUENCY == 0.27936511 =~ 286/1024 [error: 0.024%]
+ *
+ * Even with HZ = 100, delta is at maximum 35796 ticks, so it can
+ * easily be multiplied with 286 (=0x11E) without having to fear
+ * u32 overflows.
+ */
+ cycles *= 286;
+ return (cycles >> 10);
+}
+
+/*
+ * this gets called during each timer interrupt
+ * - Called while holding the writer xtime_lock
+ */
+static void mark_offset_pmtmr(void)
+{
+ u32 lost, delta, last_offset;
+ static int first_run = 1;
+ last_offset = offset_tick;
+
+ write_seqlock(&monotonic_lock);
+
+ offset_tick = read_pmtmr();
+
+ /* calculate tick interval */
+ delta = (offset_tick - last_offset) & ACPI_PM_MASK;
+
+ /* convert to usecs */
+ delta = cyc2us(delta);
+
+ /* update the monotonic base value */
+ monotonic_base += delta * NSEC_PER_USEC;
+ write_sequnlock(&monotonic_lock);
+
+ /* convert to ticks */
+ delta += offset_delay;
+ lost = delta / (USEC_PER_SEC / HZ);
+ offset_delay = delta % (USEC_PER_SEC / HZ);
+
+
+ /* compensate for lost ticks */
+ if (lost >= 2)
+ jiffies_64 += lost - 1;
+
+ /* don't calculate delay for first run,
+ or if we've got less than a tick */
+ if (first_run || (lost < 1)) {
+ first_run = 0;
+ offset_delay = 0;
+ }
+}
+
+
+static unsigned long long monotonic_clock_pmtmr(void)
+{
+ u32 last_offset, this_offset;
+ unsigned long long base, ret;
+ unsigned seq;
+
+
+ /* atomically read monotonic base & last_offset */
+ do {
+ seq = read_seqbegin(&monotonic_lock);
+ last_offset = offset_tick;
+ base = monotonic_base;
+ } while (read_seqretry(&monotonic_lock, seq));
+
+ /* Read the pmtmr */
+ this_offset = read_pmtmr();
+
+ /* convert to nanoseconds */
+ ret = (this_offset - last_offset) & ACPI_PM_MASK;
+ ret = base + (cyc2us(ret) * NSEC_PER_USEC);
+ return ret;
+}
+
+static void delay_pmtmr(unsigned long loops)
+{
+ unsigned long bclock, now;
+
+ rdtscl(bclock);
+ do {
+ rep_nop();
+ rdtscl(now);
+ } while ((now - bclock) < loops);
+}
+
+
+/*
+ * get the offset (in microseconds) from the last call to mark_offset()
+ * - Called holding a reader xtime_lock
+ */
+static unsigned long get_offset_pmtmr(void)
+{
+ u32 now, offset, delta = 0;
+
+ offset = offset_tick;
+ now = read_pmtmr();
+ delta = (now - offset) & ACPI_PM_MASK;
+
+ return (unsigned long) offset_delay + cyc2us(delta);
+}
+
+
+/* acpi timer_opts struct */
+struct timer_opts timer_pmtmr = {
+ .name = "pmtmr",
+ .init = init_pmtmr,
+ .mark_offset = mark_offset_pmtmr,
+ .get_offset = get_offset_pmtmr,
+ .monotonic_clock = monotonic_clock_pmtmr,
+ .delay = delay_pmtmr,
+};
+
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
+MODULE_DESCRIPTION("Power Management Timer (PMTMR) as primary timing source for x86");
unsigned long esp = tsk->thread.esp;
/* User space on another CPU? */
- if ((esp ^ (unsigned long)tsk->thread_info) & (PAGE_MASK<<1))
+ if ((esp ^ (unsigned long)tsk->thread_info) & ~(THREAD_SIZE - 1))
return;
show_trace(tsk, (unsigned long *)esp);
}
void die(const char * str, struct pt_regs * regs, long err)
{
static int die_counter;
+ int nl = 0;
console_verbose();
spin_lock_irq(&die_lock);
bust_spinlocks(1);
handle_BUG(regs);
printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
+#ifdef CONFIG_PREEMPT
+ printk("PREEMPT ");
+ nl = 1;
+#endif
+#ifdef CONFIG_SMP
+ printk("SMP ");
+ nl = 1;
+#endif
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ printk("DEBUG_PAGEALLOC");
+ nl = 1;
+#endif
+ if (nl)
+ printk("\n");
show_registers(regs);
bust_spinlocks(0);
spin_unlock_irq(&die_lock);
if (cpu_model > 0xd)
return 0;
- if (cpu_model > 5) {
+ if (cpu_model == 9) {
+ nmi_ops.cpu_type = "i386/p6_mobile";
+ } else if (cpu_model > 5) {
nmi_ops.cpu_type = "i386/piii";
} else if (cpu_model > 2) {
nmi_ops.cpu_type = "i386/pii";
.cpu_type = "timer"
};
-
int __init nmi_timer_init(struct oprofile_operations ** ops)
{
+ extern int nmi_active;
+
+ if (nmi_active <= 0)
+ return -ENODEV;
+
*ops = &nmi_timer_ops;
printk(KERN_INFO "oprofile: using NMI timer interrupt.\n");
return 0;
#include <linux/oprofile.h>
#include <asm/ptrace.h>
#include <asm/msr.h>
+#include <asm/apic.h>
#include "op_x86_model.h"
#include "op_counter.h"
}
}
+ /* Only the P6-based Pentium M needs to re-unmask the APIC vector, but
+ * it doesn't hurt other P6 variants. */
+ apic_write(APIC_LVTPC, apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED);
+
/* We can't work out if we really handled an interrupt. We
* might have caught a *second* counter just after overflowing
* the interrupt for this counter then arrives
}
struct pci_fixup pcibios_fixups[] = {
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82451NX, pci_fixup_i450nx },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454GX, pci_fixup_i450gx },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886BF, pci_fixup_umc_ide },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5513, pci_fixup_ide_trash },
- { PCI_FIXUP_HEADER, PCI_ANY_ID, PCI_ANY_ID, pci_fixup_ide_bases },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, pci_fixup_latency },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5598, pci_fixup_latency },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3, pci_fixup_piix4_acpi },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_10, pci_fixup_ide_trash },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_11, pci_fixup_ide_trash },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_9, pci_fixup_ide_trash },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, pci_fixup_via_northbridge_bug },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8622, pci_fixup_via_northbridge_bug },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, pci_fixup_via_northbridge_bug },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8367_0, pci_fixup_via_northbridge_bug },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_NCR, PCI_DEVICE_ID_NCR_53C810, pci_fixup_ncr53c810 },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_fixup_transparent_bridge },
- { 0 }
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82451NX,
+ .hook = pci_fixup_i450nx
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82454GX,
+ .hook = pci_fixup_i450gx
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_UMC,
+ .device = PCI_DEVICE_ID_UMC_UM8886BF,
+ .hook = pci_fixup_umc_ide
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_SI,
+ .device = PCI_DEVICE_ID_SI_5513,
+ .hook = pci_fixup_ide_trash
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_ANY_ID,
+ .device = PCI_ANY_ID,
+ .hook = pci_fixup_ide_bases
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_SI,
+ .device = PCI_DEVICE_ID_SI_5597,
+ .hook = pci_fixup_latency
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_SI,
+ .device = PCI_DEVICE_ID_SI_5598,
+ .hook = pci_fixup_latency
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82371AB_3,
+ .hook = pci_fixup_piix4_acpi
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82801CA_10,
+ .hook = pci_fixup_ide_trash
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82801CA_11,
+ .hook = pci_fixup_ide_trash
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_82801DB_9,
+ .hook = pci_fixup_ide_trash
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_VIA,
+ .device = PCI_DEVICE_ID_VIA_8363_0,
+ .hook = pci_fixup_via_northbridge_bug
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_VIA,
+ .device = PCI_DEVICE_ID_VIA_8622,
+ .hook = pci_fixup_via_northbridge_bug
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_VIA,
+ .device = PCI_DEVICE_ID_VIA_8361,
+ .hook = pci_fixup_via_northbridge_bug
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_VIA,
+ .device = PCI_DEVICE_ID_VIA_8367_0,
+ .hook = pci_fixup_via_northbridge_bug
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_NCR,
+ .device = PCI_DEVICE_ID_NCR_53C810,
+ .hook = pci_fixup_ncr53c810
+ },
+ {
+ .pass = PCI_FIXUP_HEADER,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_ANY_ID,
+ .hook = pci_fixup_transparent_bridge
+ },
+ { .pass = 0 }
};
Say Y here to enable machine check support for IA-64. If you're
unsure, answer Y.
+config IA64_CYCLONE
+ bool "Support Cyclone(EXA) Time Source"
+ help
+	  Say Y here to enable support for the IBM EXA Cyclone time source.
+ If you're unsure, answer N.
+
config PM
bool "Power Management support"
depends on IA64_GENERIC || IA64_DIG || IA64_HP_ZX1
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
return sys_lseek(fd, offset, whence);
}
-extern asmlinkage long sys_getgroups (int gidsetsize, gid_t *grouplist);
+static int
+groups16_to_user(short *grouplist, struct group_info *group_info)
+{
+ int i;
+ short group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ group = (short)GROUP_AT(group_info, i);
+ if (put_user(group, grouplist+i))
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+groups16_from_user(struct group_info *group_info, short *grouplist)
+{
+ int i;
+ short group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ if (get_user(group, grouplist+i))
+ return -EFAULT;
+ GROUP_AT(group_info, i) = (gid_t)group;
+ }
+
+ return 0;
+}
asmlinkage long
sys32_getgroups16 (int gidsetsize, short *grouplist)
{
- mm_segment_t old_fs = get_fs();
- gid_t gl[NGROUPS];
- int ret, i;
+ int i;
- set_fs(KERNEL_DS);
- ret = sys_getgroups(gidsetsize, gl);
- set_fs(old_fs);
+ if (gidsetsize < 0)
+ return -EINVAL;
- if (gidsetsize && ret > 0 && ret <= NGROUPS)
- for (i = 0; i < ret; i++, grouplist++)
- if (put_user(gl[i], grouplist))
- return -EFAULT;
- return ret;
+ get_group_info(current->group_info);
+ i = current->group_info->ngroups;
+ if (gidsetsize) {
+ if (i > gidsetsize) {
+ i = -EINVAL;
+ goto out;
+ }
+ if (groups16_to_user(grouplist, current->group_info)) {
+ i = -EFAULT;
+ goto out;
+ }
+ }
+out:
+ put_group_info(current->group_info);
+ return i;
}
-extern asmlinkage long sys_setgroups (int gidsetsize, gid_t *grouplist);
-
asmlinkage long
sys32_setgroups16 (int gidsetsize, short *grouplist)
{
- mm_segment_t old_fs = get_fs();
- gid_t gl[NGROUPS];
- int ret, i;
+ struct group_info *group_info;
+ int retval;
- if ((unsigned) gidsetsize > NGROUPS)
+ if (!capable(CAP_SETGID))
+ return -EPERM;
+ if ((unsigned)gidsetsize > NGROUPS_MAX)
return -EINVAL;
- for (i = 0; i < gidsetsize; i++, grouplist++)
- if (get_user(gl[i], grouplist))
- return -EFAULT;
- set_fs(KERNEL_DS);
- ret = sys_setgroups(gidsetsize, gl);
- set_fs(old_fs);
- return ret;
+
+ group_info = groups_alloc(gidsetsize);
+ if (!group_info)
+ return -ENOMEM;
+ retval = groups16_from_user(group_info, grouplist);
+ if (retval) {
+ put_group_info(group_info);
+ return retval;
+ }
+
+ retval = set_current_groups(group_info);
+ put_group_info(group_info);
+
+ return retval;
}
asmlinkage long
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
+obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
# The gate DSO image is built using a special linker script.
targets += gate.so gate-syms.o
#include <asm/page.h>
#include <asm/system.h>
#include <asm/numa.h>
+#include <asm/sal.h>
+#include <asm/cyclone.h>
#define PREFIX "ACPI: "
return 0;
}
+/* Hook from generic ACPI tables.c */
+void __init acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+{
+ if (!strncmp(oem_id, "IBM", 3) &&
+	    (!strncmp(oem_table_id, "SERMOW", 6))) {
+
+		/* Unfortunately, ITC_DRIFT is not yet part of the
+ * official SAL spec, so the ITC_DRIFT bit is not
+ * set by the BIOS on this hardware.
+ */
+ sal_platform_features |= IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT;
+
+		/* Start cyclone clock */
+ cyclone_setup(0);
+ }
+}
static int __init
acpi_parse_madt (unsigned long phys_addr, unsigned long size)
ipi_base_addr = (unsigned long) ioremap(acpi_madt->lapic_address, 0);
printk(KERN_INFO PREFIX "Local APIC address 0x%lx\n", ipi_base_addr);
+
+ acpi_madt_oem_check(acpi_madt->header.oem_id,
+ acpi_madt->header.oem_table_id);
+
return 0;
}
--- /dev/null
+#include <linux/smp.h>
+#include <linux/time.h>
+#include <linux/errno.h>
+
+/* IBM Summit (EXA) Cyclone counter code */
+#define CYCLONE_CBAR_ADDR 0xFEB00CD0
+#define CYCLONE_PMCC_OFFSET 0x51A0
+#define CYCLONE_MPMC_OFFSET 0x51D0
+#define CYCLONE_MPCS_OFFSET 0x51A8
+#define CYCLONE_TIMER_FREQ 100000000
+
+int use_cyclone;
+int __init cyclone_setup(char *str)
+{
+ use_cyclone = 1;
+ return 1;
+}
+
+static u32* volatile cyclone_timer; /* Cyclone MPMC0 register */
+static u32 last_update_cyclone;
+
+static unsigned long offset_base;
+
+static unsigned long get_offset_cyclone(void)
+{
+ u32 now;
+ unsigned long offset;
+
+ /* Read the cyclone timer */
+ now = readl(cyclone_timer);
+ /* .. relative to previous update*/
+ offset = now - last_update_cyclone;
+
+ /* convert cyclone ticks to nanoseconds */
+ offset = (offset*NSEC_PER_SEC)/CYCLONE_TIMER_FREQ;
+
+ /* our adjusted time in nanoseconds */
+ return offset_base + offset;
+}
+
+static void update_cyclone(long delta_nsec)
+{
+ u32 now;
+ unsigned long offset;
+
+ /* Read the cyclone timer */
+ now = readl(cyclone_timer);
+ /* .. relative to previous update*/
+ offset = now - last_update_cyclone;
+
+ /* convert cyclone ticks to nanoseconds */
+ offset = (offset*NSEC_PER_SEC)/CYCLONE_TIMER_FREQ;
+
+ offset += offset_base;
+
+ /* Be careful about signed/unsigned comparisons here: */
+ if (delta_nsec < 0 || (unsigned long) delta_nsec < offset)
+ offset_base = offset - delta_nsec;
+ else
+ offset_base = 0;
+
+ last_update_cyclone = now;
+}
+
+static void reset_cyclone(void)
+{
+ offset_base = 0;
+ last_update_cyclone = readl(cyclone_timer);
+}
+
+struct time_interpolator cyclone_interpolator = {
+ .get_offset = get_offset_cyclone,
+ .update = update_cyclone,
+ .reset = reset_cyclone,
+ .frequency = CYCLONE_TIMER_FREQ,
+ .drift = -100,
+};
+
+int __init init_cyclone_clock(void)
+{
+ u64* reg;
+ u64 base; /* saved cyclone base address */
+ u64 offset; /* offset from pageaddr to cyclone_timer register */
+ int i;
+
+ if (!use_cyclone)
+ return -ENODEV;
+
+ printk(KERN_INFO "Summit chipset: Starting Cyclone Counter.\n");
+
+ /* find base address */
+ offset = (CYCLONE_CBAR_ADDR);
+ reg = (u64*)ioremap_nocache(offset, sizeof(u64));
+ if(!reg){
+ printk(KERN_ERR "Summit chipset: Could not find valid CBAR register.\n");
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+ base = readq(reg);
+ if(!base){
+ printk(KERN_ERR "Summit chipset: Could not find valid CBAR value.\n");
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+ iounmap(reg);
+
+ /* setup PMCC */
+ offset = (base + CYCLONE_PMCC_OFFSET);
+ reg = (u64*)ioremap_nocache(offset, sizeof(u64));
+ if(!reg){
+ printk(KERN_ERR "Summit chipset: Could not find valid PMCC register.\n");
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+ writel(0x00000001,reg);
+ iounmap(reg);
+
+ /* setup MPCS */
+ offset = (base + CYCLONE_MPCS_OFFSET);
+ reg = (u64*)ioremap_nocache(offset, sizeof(u64));
+ if(!reg){
+ printk(KERN_ERR "Summit chipset: Could not find valid MPCS register.\n");
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+ writel(0x00000001,reg);
+ iounmap(reg);
+
+ /* map in cyclone_timer */
+ offset = (base + CYCLONE_MPMC_OFFSET);
+ cyclone_timer = (u32*)ioremap_nocache(offset, sizeof(u32));
+ if(!cyclone_timer){
+ printk(KERN_ERR "Summit chipset: Could not find valid MPMC register.\n");
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+
+	/* quick test to make sure it's ticking */
+ for(i=0; i<3; i++){
+ u32 old = readl(cyclone_timer);
+ int stall = 100;
+ while(stall--) barrier();
+ if(readl(cyclone_timer) == old){
+ printk(KERN_ERR "Summit chipset: Counter not counting! DISABLED\n");
+ iounmap(cyclone_timer);
+			cyclone_timer = NULL;
+ use_cyclone = 0;
+ return -ENODEV;
+ }
+ }
+ /* initialize last tick */
+ last_update_cyclone = readl(cyclone_timer);
+ register_time_interpolator(&cyclone_interpolator);
+
+ return 0;
+}
+
+__initcall(init_cyclone_clock);
static int irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
case UNW_WHERE_FR:
if (rval <= 5)
- val = unw.preg_index[UNW_REG_F2 + (rval - 1)];
+ val = unw.preg_index[UNW_REG_F2 + (rval - 2)];
else if (rval >= 16 && rval <= 31)
val = unw.preg_index[UNW_REG_F16 + (rval - 16)];
else {
res->end = end;
res->flags = flags;
- if (insert_resource(root, res))
+ if (insert_resource(root, res)) {
+ kfree(res);
return -EBUSY;
+ }
return 0;
}
#include <linux/pci.h>
+#include <asm/uaccess.h>
#include <asm/sn/sgi.h>
#include <asm/io.h>
#include <asm/sn/iograph.h>
#include <asm/sn/types.h>
#include <asm/sn/sgi.h>
#include <asm/sn/driver.h>
-#include <asm/sn/iograph.h>
#include <asm/param.h>
#include <asm/sn/pio.h>
#include <asm/sn/xtalk/xwidget.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/pci/pci_bus_cvlink.h>
#include <asm/sn/sn_cpuid.h>
#include <asm/sn/simulator.h>
extern void register_pcibr_intr(int irq, pcibr_intr_t intr);
-static void sn_dma_flush_init(unsigned long start,
+static struct sn_flush_device_list *sn_dma_flush_init(unsigned long start,
unsigned long end,
int idx, int pin, int slot);
extern int cbrick_type_get_nasid(nasid_t);
}
/*
- * pci_bus_cvlink_init() - To be called once during initialization before
+ * pci_bus_cvlink_init() - To be called once during initialization before
* SGI IO Infrastructure init is called.
*/
int
}
/*
- * pci_bus_to_vertex() - Given a logical Linux Bus Number returns the associated
+ * pci_bus_to_vertex() - Given a logical Linux Bus Number returns the associated
* pci bus vertex from the SGI IO Infrastructure.
*/
static inline vertex_hdl_t
}
/*
- * devfn_to_vertex() - returns the vertex of the device given the bus, slot,
+ * devfn_to_vertex() - returns the vertex of the device given the bus, slot,
* and function numbers.
*/
vertex_hdl_t
* ../pci/1, ../pci/2 ..
*/
if (func == 0) {
- sprintf(name, "%d", slot);
- if (hwgraph_traverse(pci_bus, name, &device_vertex) ==
+ sprintf(name, "%d", slot);
+ if (hwgraph_traverse(pci_bus, name, &device_vertex) ==
GRAPH_SUCCESS) {
if (device_vertex) {
return(device_vertex);
return(device_vertex);
}
+/*
+ * sn_alloc_pci_sysdata() - This routine allocates a pci controller
+ * structure, which the Linux PCI infrastructure expects as the
+ * pci_dev and pci_bus sysdata.
+ */
+static struct pci_controller *
+sn_alloc_pci_sysdata(void)
+{
+ struct pci_controller *pci_sysdata;
+
+ pci_sysdata = kmalloc(sizeof(*pci_sysdata), GFP_KERNEL);
+ if (!pci_sysdata)
+ return NULL;
+
+ memset(pci_sysdata, 0, sizeof(*pci_sysdata));
+ return pci_sysdata;
+}
+
+/*
+ * sn_pci_fixup_bus() - This routine sets up a bus's resources
+ * consistent with the Linux PCI abstraction layer.
+ */
+static int __init
+sn_pci_fixup_bus(struct pci_bus *bus)
+{
+ struct pci_controller *pci_sysdata;
+ struct sn_widget_sysdata *widget_sysdata;
+
+ pci_sysdata = sn_alloc_pci_sysdata();
+ if (!pci_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_bus(): Unable to "
+ "allocate memory for pci_sysdata\n");
+ return -ENOMEM;
+ }
+ widget_sysdata = kmalloc(sizeof(struct sn_widget_sysdata),
+ GFP_KERNEL);
+ if (!widget_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_bus(): Unable to "
+ "allocate memory for widget_sysdata\n");
+ kfree(pci_sysdata);
+ return -ENOMEM;
+ }
+
+ widget_sysdata->vhdl = pci_bus_to_vertex(bus->number);
+ pci_sysdata->platform_data = (void *)widget_sysdata;
+ bus->sysdata = pci_sysdata;
+ return 0;
+}
+
+
+/*
+ * sn_pci_fixup_slot() - This routine sets up a slot's resources
+ * consistent with the Linux PCI abstraction layer. Resources acquired
+ * from our PCI provider include PIO maps to BAR space and interrupt
+ * objects.
+ */
+static int
+sn_pci_fixup_slot(struct pci_dev *dev)
+{
+ extern int bit_pos_to_irq(int);
+ unsigned int irq;
+ int idx;
+ u16 cmd;
+ vertex_hdl_t vhdl;
+ unsigned long size;
+ struct pci_controller *pci_sysdata;
+ struct sn_device_sysdata *device_sysdata;
+ pciio_intr_line_t lines = 0;
+ vertex_hdl_t device_vertex;
+ pciio_provider_t *pci_provider;
+ pciio_intr_t intr_handle;
+
+ /* Allocate a controller structure */
+ pci_sysdata = sn_alloc_pci_sysdata();
+ if (!pci_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_slot: Unable to "
+ "allocate memory for pci_sysdata\n");
+ return -ENOMEM;
+ }
+
+ /* Set the device vertex */
+ device_sysdata = kmalloc(sizeof(struct sn_device_sysdata), GFP_KERNEL);
+ if (!device_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_slot: Unable to "
+ "allocate memory for device_sysdata\n");
+ kfree(pci_sysdata);
+ return -ENOMEM;
+ }
+
+ device_sysdata->vhdl = devfn_to_vertex(dev->bus->number, dev->devfn);
+ pci_sysdata->platform_data = (void *) device_sysdata;
+ dev->sysdata = pci_sysdata;
+ set_pci_provider(device_sysdata);
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+
+ /*
+	 * Set the resource addresses correctly. The assumption here
+	 * is that the addresses in the resource structure have been
+	 * read from the card and were set in the card by our
+	 * Infrastructure.  NOTE: PIC and TIOCP don't have big-window
+	 * support for PCI I/O space.  So by mapping the I/O space
+ * first we will attempt to use Device(x) registers for I/O
+ * BARs (which can't use big windows like MEM BARs can).
+ */
+ vhdl = device_sysdata->vhdl;
+
+ /* Allocate the IORESOURCE_IO space first */
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ unsigned long start, end, addr;
+
+ device_sysdata->pio_map[idx] = NULL;
+
+ if (!(dev->resource[idx].flags & IORESOURCE_IO))
+ continue;
+
+ start = dev->resource[idx].start;
+ end = dev->resource[idx].end;
+ size = end - start;
+ if (!size)
+ continue;
+
+ addr = (unsigned long)pciio_pio_addr(vhdl, 0,
+ PCIIO_SPACE_WIN(idx), 0, size,
+ &device_sysdata->pio_map[idx], 0);
+
+ if (!addr) {
+ dev->resource[idx].start = 0;
+ dev->resource[idx].end = 0;
+ printk("sn_pci_fixup(): pio map failure for "
+ "%s bar%d\n", dev->slot_name, idx);
+ } else {
+ addr |= __IA64_UNCACHED_OFFSET;
+ dev->resource[idx].start = addr;
+ dev->resource[idx].end = addr + size;
+ }
+
+ if (dev->resource[idx].flags & IORESOURCE_IO)
+ cmd |= PCI_COMMAND_IO;
+ }
+
+ /* Allocate the IORESOURCE_MEM space next */
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ unsigned long start, end, addr;
+
+ if ((dev->resource[idx].flags & IORESOURCE_IO))
+ continue;
+
+ start = dev->resource[idx].start;
+ end = dev->resource[idx].end;
+ size = end - start;
+ if (!size)
+ continue;
+
+ addr = (unsigned long)pciio_pio_addr(vhdl, 0,
+ PCIIO_SPACE_WIN(idx), 0, size,
+ &device_sysdata->pio_map[idx], 0);
+
+ if (!addr) {
+ dev->resource[idx].start = 0;
+ dev->resource[idx].end = 0;
+ printk("sn_pci_fixup(): pio map failure for "
+ "%s bar%d\n", dev->slot_name, idx);
+ } else {
+ addr |= __IA64_UNCACHED_OFFSET;
+ dev->resource[idx].start = addr;
+ dev->resource[idx].end = addr + size;
+ }
+
+ if (dev->resource[idx].flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+
+ /*
+ * Update the Command Word on the Card.
+ */
+	/* If the device doesn't support bus mastering, the bit simply
+	 * gets dropped: no harm. */
+	cmd |= PCI_COMMAND_MASTER;
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+
+ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, (unsigned char *)&lines);
+ device_vertex = device_sysdata->vhdl;
+ pci_provider = device_sysdata->pci_provider;
+ device_sysdata->intr_handle = NULL;
+
+ if (!lines)
+ return 0;
+
+ irqpdaindr->curr = dev;
+
+ intr_handle = (pci_provider->intr_alloc)(device_vertex, NULL, lines, device_vertex);
+ if (intr_handle == NULL) {
+ printk(KERN_WARNING "sn_pci_fixup: pcibr_intr_alloc() failed\n");
+ kfree(pci_sysdata);
+ kfree(device_sysdata);
+ return -ENOMEM;
+ }
+
+ device_sysdata->intr_handle = intr_handle;
+ irq = intr_handle->pi_irq;
+ irqpdaindr->device_dev[irq] = dev;
+ (pci_provider->intr_connect)(intr_handle, (intr_func_t)0, (intr_arg_t)0);
+ dev->irq = irq;
+
+ register_pcibr_intr(irq, (pcibr_intr_t)intr_handle);
+
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ int ibits = ((pcibr_intr_t)intr_handle)->bi_ibits;
+ int i;
+
+ size = dev->resource[idx].end -
+ dev->resource[idx].start;
+ if (size == 0) continue;
+
+ for (i=0; i<8; i++) {
+ if (ibits & (1 << i) ) {
+ extern pcibr_info_t pcibr_info_get(vertex_hdl_t);
+ device_sysdata->dma_flush_list =
+ sn_dma_flush_init(dev->resource[idx].start,
+ dev->resource[idx].end,
+ idx,
+ i,
+ PCIBR_INFO_SLOT_GET_EXT(pcibr_info_get(device_sysdata->vhdl)));
+ }
+ }
+ }
+ return 0;
+}
+
struct sn_flush_nasid_entry flush_nasid_list[MAX_NASIDS];
/* Initialize the data structures for flushing write buffers after a PIO read.
- * The theory is:
+ * The theory is:
* Take an unused int. pin and associate it with a pin that is in use.
* After a PIO read, force an interrupt on the unused pin, forcing a write buffer flush
- * on the in use pin. This will prevent the race condition between PIO read responses and
+ * on the in use pin. This will prevent the race condition between PIO read responses and
* DMA writes.
*/
-static void
+static struct sn_flush_device_list *
sn_dma_flush_init(unsigned long start, unsigned long end, int idx, int pin, int slot)
{
- nasid_t nasid;
+ nasid_t nasid;
unsigned long dnasid;
int wid_num;
int bus;
sizeof(struct sn_flush_device_list *), GFP_KERNEL);
if (!flush_nasid_list[nasid].widget_p) {
printk(KERN_WARNING "sn_dma_flush_init: Cannot allocate memory for nasid list\n");
- return;
+ return NULL;
}
memset(flush_nasid_list[nasid].widget_p, 0, (HUB_WIDGET_ID_MAX+1) * sizeof(struct sn_flush_device_list *));
}
itte = HUB_L(IIO_ITTE_GET(nasid, itte_index));
flush_nasid_list[nasid].iio_itte[bwin] = itte;
- wid_num = (itte >> IIO_ITTE_WIDGET_SHIFT) &
- IIO_ITTE_WIDGET_MASK;
+ wid_num = (itte >> IIO_ITTE_WIDGET_SHIFT)
+ & IIO_ITTE_WIDGET_MASK;
bus = itte & IIO_ITTE_OFFSET_MASK;
if (bus == 0x4 || bus == 0x8) {
bus = 0;
* because these are the IOC4 slots and we don't flush them.
*/
if (isIO9(nasid) && bus == 0 && (slot == 1 || slot == 4)) {
- return;
+ return NULL;
}
if (flush_nasid_list[nasid].widget_p[wid_num] == NULL) {
flush_nasid_list[nasid].widget_p[wid_num] = (struct sn_flush_device_list *)kmalloc(
DEV_PER_WIDGET * sizeof (struct sn_flush_device_list), GFP_KERNEL);
if (!flush_nasid_list[nasid].widget_p[wid_num]) {
printk(KERN_WARNING "sn_dma_flush_init: Cannot allocate memory for nasid sub-list\n");
- return;
+ return NULL;
}
- memset(flush_nasid_list[nasid].widget_p[wid_num], 0,
+ memset(flush_nasid_list[nasid].widget_p[wid_num], 0,
DEV_PER_WIDGET * sizeof (struct sn_flush_device_list));
p = &flush_nasid_list[nasid].widget_p[wid_num][0];
for (i=0; i<DEV_PER_WIDGET;i++) {
* about the case when there is a card in slot 2. A multifunction card will appear
* to be in slot 6 (from an interrupt point of view) also. That's the most we'll
* have to worry about. A four function card will overload the interrupt lines in
- * slot 2 and 6.
+ * slot 2 and 6.
* We also need to special case the 12160 device in slot 3. Fortunately, we have
* a spare intr. line for pin 4, so we'll use that for the 12160.
* All other buses have slot 3 and 4 and slots 7 and 8 unused. Since we can only
pcireg_bridge_intr_device_bit_set(b, (1<<18));
dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
pcireg_bridge_intr_addr_set(b, 6, ((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
- (dnasid << 36) | (0xfUL << 48)));
+ (dnasid << 36) | (0xfUL << 48)));
} else if (pin == 2) { /* 12160 SCSI device in IO9 */
p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, 4);
pcireg_bridge_intr_device_bit_set(b, (2<<12));
dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
pcireg_bridge_intr_addr_set(b, 4,
((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
- (dnasid << 36) | (0xfUL << 48)));
+ (dnasid << 36) | (0xfUL << 48)));
} else { /* slot == 6 */
p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, 7);
pcireg_bridge_intr_device_bit_set(b, (5<<21));
dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
pcireg_bridge_intr_addr_set(b, 7,
((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
- (dnasid << 36) | (0xfUL << 48)));
+ (dnasid << 36) | (0xfUL << 48)));
}
} else {
p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, (pin +2));
dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
pcireg_bridge_intr_addr_set(b, (pin + 2),
((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
- (dnasid << 36) | (0xfUL << 48)));
- }
-}
-
-/*
- * sn_pci_fixup() - This routine is called when platform_pci_fixup() is
- * invoked at the end of pcibios_init() to link the Linux pci
- * infrastructure to SGI IO Infrasturcture - ia64/kernel/pci.c
- *
- * Other platform specific fixup can also be done here.
- */
-static void __init
-sn_pci_fixup(int arg)
-{
- struct list_head *ln;
- struct pci_bus *pci_bus = NULL;
- struct pci_dev *device_dev = NULL;
- struct sn_widget_sysdata *widget_sysdata;
- struct sn_device_sysdata *device_sysdata;
- pcibr_intr_t intr_handle;
- pciio_provider_t *pci_provider;
- vertex_hdl_t device_vertex;
- pciio_intr_line_t lines = 0;
- extern int numnodes;
- int cnode;
-
- if (arg == 0) {
-#ifdef CONFIG_PROC_FS
- extern void register_sn_procfs(void);
-#endif
- extern void sgi_master_io_infr_init(void);
- extern void sn_init_cpei_timer(void);
-
- sgi_master_io_infr_init();
-
- for (cnode = 0; cnode < numnodes; cnode++) {
- extern void intr_init_vecblk(cnodeid_t);
- intr_init_vecblk(cnode);
- }
-
- sn_init_cpei_timer();
-
-#ifdef CONFIG_PROC_FS
- register_sn_procfs();
-#endif
- return;
- }
-
-
- done_probing = 1;
-
- /*
- * Initialize the pci bus vertex in the pci_bus struct.
- */
- for( ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) {
- pci_bus = pci_bus_b(ln);
- widget_sysdata = kmalloc(sizeof(struct sn_widget_sysdata),
- GFP_KERNEL);
- if (!widget_sysdata) {
- printk(KERN_WARNING "sn_pci_fixup(): Unable to "
- "allocate memory for widget_sysdata\n");
- return;
- }
- widget_sysdata->vhdl = pci_bus_to_vertex(pci_bus->number);
- pci_bus->sysdata = (void *)widget_sysdata;
- }
-
- /*
- * set the root start and end so that drivers calling check_region()
- * won't see a conflict
- */
-
-#ifdef CONFIG_IA64_SGI_SN_SIM
- if (! IS_RUNNING_ON_SIMULATOR()) {
- ioport_resource.start = 0xc000000000000000;
- ioport_resource.end = 0xcfffffffffffffff;
- }
-#endif
-
- /*
- * Set the root start and end for Mem Resource.
- */
- iomem_resource.start = 0;
- iomem_resource.end = 0xffffffffffffffff;
-
- /*
- * Initialize the device vertex in the pci_dev struct.
- */
- while ((device_dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, device_dev)) != NULL) {
- unsigned int irq;
- int idx;
- u16 cmd;
- vertex_hdl_t vhdl;
- unsigned long size;
- extern int bit_pos_to_irq(int);
-
- /* Set the device vertex */
-
- device_sysdata = kmalloc(sizeof(struct sn_device_sysdata),
- GFP_KERNEL);
- if (!device_sysdata) {
- printk(KERN_WARNING "sn_pci_fixup: Cannot allocate memory for device sysdata\n");
- return;
- }
-
- device_sysdata->vhdl = devfn_to_vertex(device_dev->bus->number, device_dev->devfn);
- device_dev->sysdata = (void *) device_sysdata;
- set_pci_provider(device_sysdata);
-
- pci_read_config_word(device_dev, PCI_COMMAND, &cmd);
-
- /*
- * Set the resources address correctly. The assumption here
- * is that the addresses in the resource structure has been
- * read from the card and it was set in the card by our
- * Infrastructure ..
- */
- vhdl = device_sysdata->vhdl;
- /* Allocate the IORESOURCE_IO space first */
- for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
- unsigned long start, end, addr;
-
- if (!(device_dev->resource[idx].flags & IORESOURCE_IO))
- continue;
-
- start = device_dev->resource[idx].start;
- end = device_dev->resource[idx].end;
- size = end - start;
- if (!size)
- continue;
-
- addr = (unsigned long)pciio_pio_addr(vhdl, 0,
- PCIIO_SPACE_WIN(idx), 0, size, 0, 0);
- if (!addr) {
- device_dev->resource[idx].start = 0;
- device_dev->resource[idx].end = 0;
- printk("sn_pci_fixup(): pio map failure for "
- "%s bar%d\n", device_dev->slot_name, idx);
- } else {
- addr |= __IA64_UNCACHED_OFFSET;
- device_dev->resource[idx].start = addr;
- device_dev->resource[idx].end = addr + size;
- }
-
- if (device_dev->resource[idx].flags & IORESOURCE_IO)
- cmd |= PCI_COMMAND_IO;
- }
-
- /* Allocate the IORESOURCE_MEM space next */
- for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
- unsigned long start, end, addr;
-
- if ((device_dev->resource[idx].flags & IORESOURCE_IO))
- continue;
-
- start = device_dev->resource[idx].start;
- end = device_dev->resource[idx].end;
- size = end - start;
- if (!size)
- continue;
-
- addr = (unsigned long)pciio_pio_addr(vhdl, 0,
- PCIIO_SPACE_WIN(idx), 0, size, 0, 0);
- if (!addr) {
- device_dev->resource[idx].start = 0;
- device_dev->resource[idx].end = 0;
- printk("sn_pci_fixup(): pio map failure for "
- "%s bar%d\n", device_dev->slot_name, idx);
- } else {
- addr |= __IA64_UNCACHED_OFFSET;
- device_dev->resource[idx].start = addr;
- device_dev->resource[idx].end = addr + size;
- }
-
- if (device_dev->resource[idx].flags & IORESOURCE_MEM)
- cmd |= PCI_COMMAND_MEMORY;
- }
-
- /*
- * Update the Command Word on the Card.
- */
- cmd |= PCI_COMMAND_MASTER; /* If the device doesn't support */
- /* bit gets dropped .. no harm */
- pci_write_config_word(device_dev, PCI_COMMAND, cmd);
-
- pci_read_config_byte(device_dev, PCI_INTERRUPT_PIN,
- (unsigned char *)&lines);
- device_sysdata = (struct sn_device_sysdata *)device_dev->sysdata;
- device_vertex = device_sysdata->vhdl;
- pci_provider = device_sysdata->pci_provider;
-
- if (!lines) {
- continue;
- }
-
- irqpdaindr->curr = device_dev;
- intr_handle = (pci_provider->intr_alloc)(device_vertex, NULL, lines, device_vertex);
-
- if (intr_handle == NULL) {
- printk("sn_pci_fixup: pcibr_intr_alloc() failed\n");
- continue;
- }
- irq = intr_handle->bi_irq;
- irqpdaindr->device_dev[irq] = device_dev;
- (pci_provider->intr_connect)(intr_handle, (intr_func_t)0, (intr_arg_t)0);
- device_dev->irq = irq;
- register_pcibr_intr(irq, (pcibr_intr_t)intr_handle);
-
- for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
- int ibits = intr_handle->bi_ibits;
- int i;
-
- size = device_dev->resource[idx].end -
- device_dev->resource[idx].start;
- if (size == 0)
- continue;
-
- for (i=0; i<8; i++) {
- if (ibits & (1 << i) ) {
- sn_dma_flush_init(device_dev->resource[idx].start,
- device_dev->resource[idx].end,
- idx,
- i,
- PCIBR_INFO_SLOT_GET_EXT(pcibr_info_get(device_sysdata->vhdl)));
- }
- }
- }
-
+ (dnasid << 36) | (0xfUL << 48)));
}
+ return p;
}
/*
- * linux_bus_cvlink() Creates a link between the Linux PCI Bus number
+ * linux_bus_cvlink() Creates a link between the Linux PCI Bus number
* to the actual hardware component that it represents:
* /dev/hw/linux/busnum/0 -> ../../../hw/module/001c01/slab/0/Ibrick/xtalk/15/pci
*
continue;
sprintf(name, "%x", index);
- (void) hwgraph_edge_add(linux_busnum, busnum_to_pcibr_vhdl[index],
+ (void) hwgraph_edge_add(linux_busnum, busnum_to_pcibr_vhdl[index],
name);
}
}
* Linux PCI Bus numbers are assigned from lowest module_id numbers
* (rack/slot etc.)
*/
-static int
+static int
pci_bus_map_create(struct pcibr_list_s *softlistp, moduleid_t moduleid)
{
memset(moduleid_str, 0, 16);
format_module_id(moduleid_str, moduleid, MODULE_FORMAT_BRIEF);
- (void) ioconfig_get_busnum((char *)moduleid_str, &basebus_num);
+ (void) ioconfig_get_busnum((char *)moduleid_str, &basebus_num);
/*
- * Assign the correct bus number and also the nasid of this
+ * Assign the correct bus number and also the nasid of this
* pci Xwidget.
*/
bus_number = basebus_num + pcibr_widget_to_bus(pci_bus);
printk("pci_bus_map_create: Cannot allocate memory for ate maps\n");
return -1;
}
- memset(busnum_to_atedmamaps[bus_number], 0x0,
+ memset(busnum_to_atedmamaps[bus_number], 0x0,
sizeof(struct pcibr_dmamap_s) * MAX_ATE_MAPS);
return(0);
}
/*
- * pci_bus_to_hcl_cvlink() - This routine is called after SGI IO Infrastructure
+ * pci_bus_to_hcl_cvlink() - This routine is called after SGI IO Infrastructure
* initialization has completed to set up the mappings between PCI BRIDGE
- * ASIC and logical pci bus numbers.
+ * ASIC and logical pci bus numbers.
*
* Must be called before pci_init() is invoked.
*/
int
-pci_bus_to_hcl_cvlink(void)
+pci_bus_to_hcl_cvlink(void)
{
int i;
extern pcibr_list_p pcibr_list;
/* Is this PCI bus associated with this moduleid? */
moduleid = NODE_MODULEID(
- NASID_TO_COMPACT_NODEID(pcibr_soft->bs_nasid));
+ NASID_TO_COMPACT_NODEID(pcibr_soft->bs_nasid));
if (modules[i]->id == moduleid) {
struct pcibr_list_s *new_element;
continue;
}
- /*
- * BASEIO IObricks attached to a module have
- * a higher priority than non BASEIO IOBricks
+ /*
+ * BASEIO IObricks attached to a module have
+ * a higher priority than non BASEIO IOBricks
* when it comes to persistant pci bus
* numbering, so put them on the front of the
* list.
softlistp = softlistp->bl_next;
}
- /*
+ /*
* We now have a list of all the pci bridges associated with
* the module_id, modules[i]. Call pci_bus_map_create() for
* each pci bridge
/*
* Ugly hack to get PCI setup until we have a proper ACPI namespace.
*/
+
+#define PCI_BUSES_TO_SCAN 256
+
extern struct pci_ops sn_pci_ops;
int __init
sn_pci_init (void)
{
-# define PCI_BUSES_TO_SCAN 256
int i = 0;
struct pci_controller *controller;
+ struct list_head *ln;
+ struct pci_bus *pci_bus = NULL;
+ struct pci_dev *pci_dev = NULL;
+ extern int numnodes;
+ int cnode, ret;
+#ifdef CONFIG_PROC_FS
+ extern void register_sn_procfs(void);
+#endif
+ extern void sgi_master_io_infr_init(void);
+ extern void sn_init_cpei_timer(void);
+
if (!ia64_platform_is("sn2") || IS_RUNNING_ON_SIMULATOR())
return 0;
/*
* set pci_raw_ops, etc.
*/
- sn_pci_fixup(0);
+
+ sgi_master_io_infr_init();
+
+ for (cnode = 0; cnode < numnodes; cnode++) {
+ extern void intr_init_vecblk(cnodeid_t);
+ intr_init_vecblk(cnode);
+ }
+
+ sn_init_cpei_timer();
+
+#ifdef CONFIG_PROC_FS
+ register_sn_procfs();
+#endif
controller = kmalloc(sizeof(struct pci_controller), GFP_KERNEL);
if (controller) {
/*
* actually find devices and fill in hwgraph structs
*/
- sn_pci_fixup(1);
+
+ done_probing = 1;
+
+ /*
+ * Initialize the pci bus vertex in the pci_bus struct.
+ */
+ for( ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) {
+ pci_bus = pci_bus_b(ln);
+ ret = sn_pci_fixup_bus(pci_bus);
+ if ( ret ) {
+ printk(KERN_WARNING
+ "sn_pci_fixup: sn_pci_fixup_bus fails : error %d\n",
+ ret);
+				return ret;
+ }
+ }
+
+ /*
+ * set the root start and end so that drivers calling check_region()
+ * won't see a conflict
+ */
+
+#ifdef CONFIG_IA64_SGI_SN_SIM
+ if (! IS_RUNNING_ON_SIMULATOR()) {
+ ioport_resource.start = 0xc000000000000000;
+ ioport_resource.end = 0xcfffffffffffffff;
+ }
+#endif
+
+ /*
+ * Set the root start and end for Mem Resource.
+ */
+ iomem_resource.start = 0;
+ iomem_resource.end = 0xffffffffffffffff;
+
+ /*
+ * Initialize the device vertex in the pci_dev struct.
+ */
+ while ((pci_dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, pci_dev)) != NULL) {
+ ret = sn_pci_fixup_slot(pci_dev);
+ if ( ret ) {
+ printk(KERN_WARNING
+ "sn_pci_fixup: sn_pci_fixup_slot fails : error %d\n",
+ ret);
+			return ret;
+ }
+ }
return 0;
}
/*
* Get hwgraph vertex for the device
*/
- device_sysdata = (struct sn_device_sysdata *) hwdev->sysdata;
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
vhdl = device_sysdata->vhdl;
/*
/*
* Get the hwgraph vertex for the device
*/
- device_sysdata = (struct sn_device_sysdata *) hwdev->sysdata;
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
vhdl = device_sysdata->vhdl;
/*
/*
* find vertex for the device
*/
- device_sysdata = (struct sn_device_sysdata *)hwdev->sysdata;
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
vhdl = device_sysdata->vhdl;
/*
#include <asm/smp.h>
#include <asm/sn/sgi.h>
#include <asm/sn/io.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/sn_private.h>
#include <asm/hw_irq.h>
#include <asm/sn/types.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/io.h>
return(0);
}
-#include "asm/sn/sn_private.h"
-
/*
* Format a module id for printing.
*
#include <linux/bootmem.h>
#include <asm/sn/sgi.h>
#include <asm/sn/io.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/sn_private.h>
#include <asm/sn/klconfig.h>
#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/simulator.h>
int maxcpus;
}
void
-init_platform_hubinfo(nodepda_t **nodepdaindr) {
+init_platform_hubinfo(nodepda_t **nodepdaindr)
+{
cnodeid_t cnode;
hubinfo_t hubinfo;
nodepda_t *npda;
extern int numionodes;
+ if (IS_RUNNING_ON_SIMULATOR())
+ return;
for (cnode = 0; cnode < numionodes; cnode++) {
npda = nodepdaindr[cnode];
hubinfo = (hubinfo_t)npda->pdinfo;
#include <asm/sal.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/sn2/shub_mmr.h>
+#include <asm/sn/pda.h>
extern irqpda_t *irqpdaindr;
extern cnodeid_t master_node_get(vertex_hdl_t vhdl);
{
cpuid_t cpu, best_cpu = CPU_NONE;
int slice, min_count = 1000;
- irqpda_t *irqs;
for (slice = CPUS_PER_NODE - 1; slice >= 0; slice--) {
int intrs;
if (!cpu_online(cpu))
continue;
- irqs = irqpdaindr;
- intrs = irqs->num_irq_used;
+ intrs = pdacpu(cpu)->sn_num_irqs;
if (min_count > intrs) {
min_count = intrs;
}
}
}
+ pdacpu(best_cpu)->sn_num_irqs++;
return best_cpu;
}
#include <linux/types.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/pci/pciio.h>
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
#include <linux/types.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/pci/pciio.h>
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
#include <linux/module.h>
#include <asm/sn/sgi.h>
#include <asm/sn/arch.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/pci/pciio.h>
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
#include <linux/types.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/addrs.h>
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
#include <linux/types.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/pci/pciio.h>
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
peer_widget_info->w_efunc = 0;
peer_widget_info->w_einfo = 0;
peer_widget_info->w_name = kmalloc(strlen(peer_path) + 1, GFP_KERNEL);
+ if (!peer_widget_info->w_name) {
+ kfree(peer_widget_info);
+ return -ENOMEM;
+ }
strcpy(peer_widget_info->w_name, peer_path);
if (hwgraph_info_add_LBL(peer_conn_v, INFO_LBL_XWIDGET,
(arbitrary_info_t)peer_widget_info) != GRAPH_SUCCESS) {
+ kfree(peer_widget_info->w_name);
kfree(peer_widget_info);
return 0;
}
s = dev_to_name(pcibr_vhdl, devnm, MAXDEVNAME);
pcibr_soft->bs_name = kmalloc(strlen(s) + 1, GFP_KERNEL);
+ if (!pcibr_soft->bs_name)
+ return -ENOMEM;
+
strcpy(pcibr_soft->bs_name, s);
pcibr_soft->bs_conn = xconn_vhdl;
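The two hunks above add the missing kmalloc failure checks and, on the second allocation failing, unwind the first one before returning. A minimal userspace sketch of that allocate-then-unwind pattern (the `widget` type and names are illustrative, not the kernel's):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct widget {
	char *name;
};

/* Allocate a widget and a copy of its name; if the second
 * allocation fails, free the first before reporting -ENOMEM,
 * exactly as the pcibr hunks above do with kfree(). */
int widget_create(const char *name, struct widget **out)
{
	struct widget *w = malloc(sizeof(*w));

	if (!w)
		return -ENOMEM;
	w->name = malloc(strlen(name) + 1);
	if (!w->name) {
		free(w);		/* unwind the earlier allocation */
		return -ENOMEM;
	}
	strcpy(w->name, name);
	*out = w;
	return 0;
}
```

The same discipline scales to longer init paths: each failure point frees everything allocated before it, usually via a ladder of goto labels in kernel style.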
#include <asm/system.h>
#include <asm/sn/sgi.h>
#include <asm/uaccess.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/io.h>
#include <asm/sn/types.h>
#include <asm/sn/sgi.h>
#include <asm/sn/driver.h>
-#include <asm/sn/iograph.h>
#include <asm/param.h>
#include <asm/sn/pio.h>
#include <asm/sn/xtalk/xwidget.h>
#include <asm/delay.h>
#include <asm/sn/sgi.h>
#include <asm/sn/io.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/sn_private.h>
#include <asm/errno.h>
#include <asm/sn/sgi.h>
#include <asm/sn/driver.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/labelcl.h>
#include <asm/sn/xtalk/xtalk.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/sn/sgi.h>
-#include <asm/sn/iograph.h>
#include <asm/sn/hcl.h>
#include <asm/sn/types.h>
#include <asm/sn/pci/pciio.h>
static void
sn_set_affinity_irq(unsigned int irq, unsigned long cpu)
{
-#if CONFIG_SMP
+#ifdef CONFIG_SMP
int redir = 0;
struct sn_intr_list_t *p = sn_intr_list[irq];
pcibr_intr_t intr;
u64 master_node_bedrock_address;
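The `#if CONFIG_SMP` to `#ifdef CONFIG_SMP` change matters because `#if` evaluates the macro's value (warning under `-Wundef` when it is undefined, and failing outright if it is defined empty), while `#ifdef` only tests whether it is defined. A minimal illustration with a hypothetical macro:

```c
/* "#if FEATURE_EMPTY" would be a preprocessing error here,
 * because the macro is defined but expands to nothing;
 * "#ifdef" only asks whether it is defined at all. */
#define FEATURE_EMPTY		/* defined, but with no value */

#ifdef FEATURE_EMPTY
int feature_compiled_in = 1;	/* this branch is taken */
#else
int feature_compiled_in = 0;
#endif
```

Kconfig defines bool options as `1`, so `#if` happens to work in-tree, but `#ifdef` is the robust spelling and the one the kernel uses.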
static void sn_init_pdas(char **);
+static void scan_for_ionodes(void);
static nodepda_t *nodepdaindr[MAX_COMPACT_NODES];
* may not be initialized yet.
*/
-static int
+static int __init
pxm_to_nasid(int pxm)
{
int i;
*
* One time setup for Node Data Area. Called by sn_setup().
*/
-void
+void __init
sn_init_pdas(char **cmdline_p)
{
cnodeid_t cnode;
- void scan_for_ionodes(void);
/*
* Make sure that the PDA fits entirely in the same page as the
* physical_node_map and the pda and increment numionodes.
*/
-void
+static void __init
scan_for_ionodes(void)
{
int nasid = 0;
precision in some cases.
To compile this driver as a module, choose M here: the
- module will be called genrtc. To load the module automatically
- add 'alias char-major-10-135 genrtc' to your /etc/modules.conf
+ module will be called genrtc.
config GEN_RTC_X
bool "Extended RTC operation"
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
movec %d0, %ACR3
/* Enable cache */
- move.l #0xa4098400, %d0 /* Write buffer, dflt precise */
+ move.l #0xb6088400, %d0 /* Enable caches */
movec %d0,%CACR
nop
+
+#ifdef CONFIG_ROMFS_FS
/*
* Move ROM filesystem above bss :-)
*/
cmp.l %a0, %a2 /* Check if at end */
bne _copy_romfs
+#else /* CONFIG_ROMFS_FS */
+ lea.l _ebss, %a1
+ move.l %a1, _ramstart
+#endif /* CONFIG_ROMFS_FS */
+
+
/*
* Zero out the bss region.
*/
static int irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
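The snprintf-to-scnprintf conversions throughout these hunks fix a subtle accounting bug: snprintf returns the length the output *would* have had, which can exceed the buffer, so chains like `n += snprintf(buf+n, size-n, ...)` can walk `n` past the end. scnprintf instead returns the number of characters actually stored. A userspace sketch of the scnprintf semantics (`my_scnprintf` is an illustrative stand-in, not the kernel function):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Return the number of characters actually written into buf
 * (excluding the NUL), never more than size-1 -- the property
 * that makes "n += scnprintf(...)" safe to chain. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list ap;
	int ret;

	va_start(ap, fmt);
	ret = vsnprintf(buf, size, fmt, ap);
	va_end(ap);

	if (ret < 0)
		return 0;
	if ((size_t)ret >= size)	/* output was truncated */
		return size ? (int)(size - 1) : 0;
	return ret;
}
```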
retval = HZ;
goto out;
case 4:
- retval = NGROUPS;
+ retval = NGROUPS_MAX;
goto out;
case 5:
retval = NR_OPEN;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
-
+ clock_was_set();
return 0;
}
time_esterror = NTP_PHASE_LIMIT;
}
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
EXPORT_SYMBOL(do_settimeofday);
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
err = register_netdev(dev);
if (err) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
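The kfree-to-free_netdev fixes above reflect that a net_device is not a plain allocation: alloc_netdev attaches private state (and, in later kernels, reference counting), so only the matching destructor may tear it down. A hardware-free sketch of the pattern, with illustrative names:

```c
#include <stdlib.h>

/* An object whose allocator attaches extra state; releasing it
 * with plain free() would leak the private area, which is why a
 * paired destructor (free_netdev in the hunks above) must be
 * used on the register_netdev() error path. */
struct net_device_like {
	int refcnt;
	void *priv;
};

struct net_device_like *alloc_dev(size_t priv_size)
{
	struct net_device_like *dev = malloc(sizeof(*dev));

	if (!dev)
		return NULL;
	dev->priv = malloc(priv_size);
	if (!dev->priv) {
		free(dev);
		return NULL;
	}
	dev->refcnt = 1;
	return dev;
}

void free_dev(struct net_device_like *dev)
{
	if (!dev)
		return;
	free(dev->priv);	/* plain free(dev) alone would leak this */
	free(dev);
}
```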
static int irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
extern void power4_idle(void);
extern boot_infos_t *boot_infos;
-char saved_command_line[256];
+char saved_command_line[COMMAND_LINE_SIZE];
unsigned char aux_device_present;
struct ide_machdep_calls ppc_ide_md;
char *sysmap;
ulong *data = rec->data;
switch (rec->tag) {
case BI_CMD_LINE:
- memcpy(cmd_line, (void *)data, rec->size);
+ strlcpy(cmd_line, (void *)data, sizeof(cmd_line));
break;
case BI_SYSMAP:
sysmap = (char *)((data[0] >= (KERNELBASE)) ? data[0] :
init_mm.brk = (unsigned long) klimit;
/* Save unparsed command line copy for /proc/cmdline */
- strcpy(saved_command_line, cmd_line);
+ strlcpy(saved_command_line, cmd_line, sizeof(saved_command_line));
*cmdline_p = cmd_line;
/* set up the bootmem stuff with available memory */
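The strlcpy conversions above close two holes: the BI_CMD_LINE memcpy copied `rec->size` bytes regardless of the destination's capacity, and the strcpy into `saved_command_line` trusted the source to fit. strlcpy bounds the copy by the *destination* size, always NUL-terminates, and returns the source length so truncation is detectable. A sketch of those semantics (`my_strlcpy` is an illustrative reimplementation, since strlcpy is not in ISO C):

```c
#include <string.h>

/* Copy at most size-1 bytes of src into dst, NUL-terminate, and
 * return strlen(src); truncation occurred iff ret >= size. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = (len >= size) ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}
```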
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irqrestore(&xtime_lock, flags);
+ clock_was_set();
return 0;
}
__setup_cpu_ppc970,
COMMON_PPC64_FW
},
+ { /* PPC970FX */
+ 0xffff0000, 0x003c0000, "PPC970FX",
+ CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP | CPU_FTR_CAN_NAP,
+ COMMON_USER_PPC64 | PPC_FEATURE_HAS_ALTIVEC_COMP,
+ 128, 128,
+ __setup_cpu_ppc970,
+ COMMON_PPC64_FW
+ },
{ /* Power5 */
0xffff0000, 0x003a0000, "Power5",
CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
#include <asm/prom.h>
#include <asm/hvconsole.h>
-/* map console index (e.g. 0) to vterm number (e.g. 0x30000000) */
-static int vtermnos[MAX_NR_HVC_CONSOLES];
+struct vtty_struct {
+ u32 vtermno;
+ int (*get_chars)(struct vtty_struct *vtty, char *buf, int count);
+ int (*put_chars)(struct vtty_struct *vtty, const char *buf, int count);
+ int (*ioctl)(struct vtty_struct *vtty, unsigned int cmd, unsigned long val);
+};
+static struct vtty_struct vttys[MAX_NR_HVC_CONSOLES];
-int hvc_get_chars(int index, char *buf, int count)
+int hvterm_get_chars(struct vtty_struct *vtty, char *buf, int count)
{
unsigned long got;
- if (index > MAX_NR_HVC_CONSOLES)
- return -1;
-
- if (plpar_hcall(H_GET_TERM_CHAR, vtermnos[index], 0, 0, 0, &got,
+ if (plpar_hcall(H_GET_TERM_CHAR, vtty->vtermno, 0, 0, 0, &got,
(unsigned long *)buf, (unsigned long *)buf+1) == H_Success) {
/*
* Work around a HV bug where it gives us a null
return 0;
}
-int hvc_put_chars(int index, const char *buf, int count)
+int hvterm_put_chars(struct vtty_struct *vtty, const char *buf, int count)
{
unsigned long *lbuf = (unsigned long *) buf;
long ret;
- if (index > MAX_NR_HVC_CONSOLES)
- return -1;
-
- ret = plpar_hcall_norets(H_PUT_TERM_CHAR, vtermnos[index], count, lbuf[0],
+ ret = plpar_hcall_norets(H_PUT_TERM_CHAR, vtty->vtermno, count, lbuf[0],
lbuf[1]);
if (ret == H_Success)
return count;
return -1;
}
+int hvc_get_chars(int index, char *buf, int count)
+{
+ struct vtty_struct *vtty = &vttys[index];
+
+ if (index >= MAX_NR_HVC_CONSOLES)
+ return -1;
+
+ return vtty->get_chars(vtty, buf, count);
+}
+
+int hvc_put_chars(int index, const char *buf, int count)
+{
+ struct vtty_struct *vtty = &vttys[index];
+
+ if (index >= MAX_NR_HVC_CONSOLES)
+ return -1;
+
+ return vtty->put_chars(vtty, buf, count);
+}
+
int hvc_find_vterms(void)
{
struct device_node *vty;
for (vty = of_find_node_by_name(NULL, "vty"); vty != NULL;
vty = of_find_node_by_name(vty, "vty")) {
+ struct vtty_struct *vtty;
u32 *vtermno;
vtermno = (u32 *)get_property(vty, "reg", NULL);
if (count >= MAX_NR_HVC_CONSOLES)
break;
+ vtty = &vttys[count];
if (device_is_compatible(vty, "hvterm1")) {
- vtermnos[count] = *vtermno;
+ vtty->vtermno = *vtermno;
+ vtty->get_chars = hvterm_get_chars;
+ vtty->put_chars = hvterm_put_chars;
+ vtty->ioctl = NULL;
hvc_instantiate();
count++;
}
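The hvc rework above replaces a bare array of vterm numbers with a per-console `vtty_struct` carrying `get_chars`/`put_chars` function pointers, so backends other than the hvterm1 hcalls can plug in later. A minimal sketch of that operations-table dispatch (names are illustrative, not the kernel's):

```c
#include <stddef.h>
#include <string.h>

struct console_ops {
	int (*put_chars)(void *cookie, const char *buf, int count);
};

struct console {
	void *cookie;			/* backend-private state */
	const struct console_ops *ops;	/* bound at discovery time */
};

/* A trivial backend that stores output in memory. */
static char sink[64];
static int sink_len;

static int mem_put_chars(void *cookie, const char *buf, int count)
{
	(void)cookie;
	memcpy(sink + sink_len, buf, count);
	sink_len += count;
	return count;
}

static const struct console_ops mem_ops = { .put_chars = mem_put_chars };

/* Callers never see the backend; they dispatch through ops,
 * just as hvc_put_chars() now calls vtty->put_chars(). */
int console_write(struct console *con, const char *buf, int count)
{
	return con->ops->put_chars(con->cookie, buf, count);
}
```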
#define IRQ_TO_IDSEL(irq) (((((irq) - 1) >> 3) & 7) + 1)
#define IRQ_TO_FUNC(irq) (((irq) - 1) & 7)
-
/* This is called by iSeries_activate_IRQs */
static unsigned int iSeries_startup_IRQ(unsigned int irq)
{
}
/*
+ * Temporary hack
+ */
+#define get_irq_desc(irq) &irq_desc[(irq)]
+
+/*
* This is called out of iSeries_fixup to activate interrupt
* generation for usable slots
*/
ppc_md.setup_residual = iSeries_setup_residual;
ppc_md.get_cpuinfo = iSeries_get_cpuinfo;
ppc_md.init_IRQ = iSeries_init_IRQ;
- ppc_md.init_irq_desc = iSeries_init_irq_desc;
ppc_md.get_irq = iSeries_get_irq;
ppc_md.init = NULL;
unsigned int irq = (long)data;
irq_desc_t *desc = get_irq_desc(irq);
struct hw_irq_stat *hwstat = get_irq_stat(desc);
- int len = cpumask_snprintf(page, count, hwstat->irq_affinity);
+ int len = cpumask_scnprintf(page, count, hwstat->irq_affinity);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
memset(buf, 0, size);
shared = (int)(lpaca->xLpPacaPtr->xSharedProc);
- n += snprintf(buf, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf, LPARCFG_BUFF_SIZE - n,
"serial_number=%c%c%c%c%c%c%c\n",
e2a(xItExtVpdPanel.mfgID[2]),
e2a(xItExtVpdPanel.mfgID[3]),
e2a(xItExtVpdPanel.systemSerial[4]),
e2a(xItExtVpdPanel.systemSerial[5]));
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_type=%c%c%c%c\n",
e2a(xItExtVpdPanel.machineType[0]),
e2a(xItExtVpdPanel.machineType[1]),
e2a(xItExtVpdPanel.machineType[3]));
lp_index = HvLpConfig_getLpIndex();
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_id=%d\n", (int)lp_index);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_active_processors=%d\n",
(int)HvLpConfig_getSystemPhysicalProcessors());
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_potential_processors=%d\n",
(int)HvLpConfig_getSystemPhysicalProcessors());
processors = (int)HvLpConfig_getPhysicalProcessors();
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_active_processors=%d\n", processors);
max_processors = (int)HvLpConfig_getMaxPhysicalProcessors();
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_potential_processors=%d\n", max_processors);
if(shared) {
entitled_capacity = processors * 100;
max_entitled_capacity = max_processors * 100;
}
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_entitled_capacity=%d\n", entitled_capacity);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_max_entitled_capacity=%d\n",
max_entitled_capacity);
if(shared) {
pool_id = HvLpConfig_getSharedPoolIndex();
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n, "pool=%d\n",
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n, "pool=%d\n",
(int)pool_id);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"pool_capacity=%d\n", (int)(HvLpConfig_getNumProcsInSharedPool(pool_id)*100));
}
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"shared_processor_mode=%d\n", shared);
return 0;
if(lp_index_ptr) lp_index = *lp_index_ptr;
}
- n = snprintf(buf, LPARCFG_BUFF_SIZE - n,
+ n = scnprintf(buf, LPARCFG_BUFF_SIZE - n,
"serial_number=%s\n", system_id);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_type=%s\n", model);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_id=%d\n", (int)lp_index);
rtas_node = find_path_device("/rtas");
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
h_get_ppp(&h_entitled,&h_unallocated,&h_aggregation,&h_resource);
#ifdef DEBUG
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"R4=0x%lx\n", h_entitled);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"R5=0x%lx\n", h_unallocated);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"R6=0x%lx\n", h_aggregation);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"R7=0x%lx\n", h_resource);
#endif /* DEBUG */
}
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
system_potential_processors = get_splpar_potential_characteristics();
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_active_processors=%ld\n",
(h_resource >> 2*8) & 0xffff);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_potential_processors=%d\n",
system_potential_processors);
} else {
system_potential_processors = system_active_processors;
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_active_processors=%d\n",
system_active_processors);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"system_potential_processors=%d\n",
system_potential_processors);
}
processors = systemcfg->processorCount;
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_active_processors=%d\n", processors);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_potential_processors=%d\n",
system_active_processors);
/* max_entitled_capacity will come out of get_splpar_potential_characteristics() when that function is complete */
max_entitled_capacity = system_active_processors * 100;
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_entitled_capacity=%ld\n", h_entitled);
} else {
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_entitled_capacity=%d\n", system_active_processors*100);
}
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"partition_max_entitled_capacity=%d\n",
max_entitled_capacity);
shared = 0;
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"shared_processor_mode=%d\n", shared);
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"pool=%ld\n", (h_aggregation >> 0*8)&0xffff);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"pool_capacity=%ld\n", (h_resource >> 3*8) &0xffff);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"group=%ld\n", (h_aggregation >> 2*8)&0xffff);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"capped=%ld\n", (h_resource >> 6*8)&0x40);
- n += snprintf(buf+n, LPARCFG_BUFF_SIZE - n,
+ n += scnprintf(buf+n, LPARCFG_BUFF_SIZE - n,
"capacity_weight=%d\n", (int)(h_resource>>5*8)&0xFF);
}
return 0;
unsigned long size, unsigned long offset);
unsigned long dev_tree_size;
+unsigned long _get_PIR(void);
#ifdef CONFIG_HMT
struct {
unsigned int pir;
unsigned int threadid;
-} hmt_thread_data[NR_CPUS] = {0};
+} hmt_thread_data[NR_CPUS];
#endif /* CONFIG_HMT */
char testString[] = "LINUX\n";
char stkbuf[40]; /* it's small, it's on the stack */
int n, sn;
if (power_on_time == 0)
- n = snprintf(stkbuf, 40, "Power on time not set\n");
+ n = scnprintf(stkbuf,sizeof(stkbuf),"Power on time not set\n");
else
- n = snprintf(stkbuf, 40, "%lu\n", power_on_time);
+ n = scnprintf(stkbuf,sizeof(stkbuf),"%lu\n",power_on_time);
sn = strlen (stkbuf) +1;
if (*ppos >= sn)
if (error != 0){
printk(KERN_WARNING "error: reading the clock returned: %s\n",
ppc_rtas_process_error(error));
- n = snprintf (stkbuf, 40, "0");
+ n = scnprintf (stkbuf, sizeof(stkbuf), "0");
} else {
- n = snprintf (stkbuf, 40, "%lu\n", mktime(year, mon, day, hour, min, sec));
+ n = scnprintf (stkbuf, sizeof(stkbuf), "%lu\n",
+ mktime(year, mon, day, hour, min, sec));
}
kfree(ret);
n += check_location_string(ret, buffer + n);
n += sprintf ( buffer+n, " ");
/* see how many characters we have printed */
- snprintf ( t, 50, "%s ", ret);
+ scnprintf(t, sizeof(t), "%s ", ret);
pos += strlen(t);
if (pos >= llen) pos=0;
int n, sn;
char stkbuf[40]; /* it's small, it's on the stack */
- n = snprintf(stkbuf, 40, "%lu\n", rtas_tone_frequency);
+ n = scnprintf(stkbuf, 40, "%lu\n", rtas_tone_frequency);
sn = strlen (stkbuf) +1;
if (*ppos >= sn)
int n, sn;
char stkbuf[40]; /* its small, its on stack */
- n = snprintf(stkbuf, 40, "%lu\n", rtas_tone_volume);
+ n = scnprintf(stkbuf, 40, "%lu\n", rtas_tone_volume);
sn = strlen (stkbuf) +1;
if (*ppos >= sn)
printk(KERN_ERR "rtasd: truncated error log from %d to %d bytes\n", rtas_error_log_max, RTAS_ERROR_LOG_MAX);
rtas_error_log_max = RTAS_ERROR_LOG_MAX;
}
+
+ /* Make room for the sequence number */
+ rtas_error_log_buffer_max = rtas_error_log_max + sizeof(int);
+
of_node_put(node);
return 0;
if (kernel_thread(rtasd, 0, CLONE_FS) < 0)
printk(KERN_ERR "Failed to start RTAS daemon\n");
- /* Make room for the sequence number */
- rtas_error_log_buffer_max = rtas_error_log_max + sizeof(int);
-
return 0;
}
}
write_sequnlock_irqrestore(&xtime_lock, flags);
+ clock_was_set();
return 0;
}
return sys_setfsgid((gid_t)gid);
}
+static int groups16_to_user(u16 *grouplist, struct group_info *group_info)
+{
+ int i;
+ u16 group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ group = (u16)GROUP_AT(group_info, i);
+ if (put_user(group, grouplist+i))
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int groups16_from_user(struct group_info *group_info, u16 *grouplist)
+{
+ int i;
+ u16 group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ if (get_user(group, grouplist+i))
+ return -EFAULT;
+ GROUP_AT(group_info, i) = (gid_t)group;
+ }
+
+ return 0;
+}
+
asmlinkage long sys32_getgroups16(int gidsetsize, u16 *grouplist)
{
- u16 groups[NGROUPS];
- int i,j;
+ int i;
if (gidsetsize < 0)
return -EINVAL;
- i = current->ngroups;
+
+ get_group_info(current->group_info);
+ i = current->group_info->ngroups;
if (gidsetsize) {
- if (i > gidsetsize)
- return -EINVAL;
- for(j=0;j<i;j++)
- groups[j] = current->groups[j];
- if (copy_to_user(grouplist, groups, sizeof(u16)*i))
- return -EFAULT;
+ if (i > gidsetsize) {
+ i = -EINVAL;
+ goto out;
+ }
+ if (groups16_to_user(grouplist, current->group_info)) {
+ i = -EFAULT;
+ goto out;
+ }
}
+out:
+ put_group_info(current->group_info);
return i;
}
asmlinkage long sys32_setgroups16(int gidsetsize, u16 *grouplist)
{
- u16 groups[NGROUPS];
- int i;
+ struct group_info *group_info;
+ int retval;
if (!capable(CAP_SETGID))
return -EPERM;
- if ((unsigned) gidsetsize > NGROUPS)
+ if ((unsigned)gidsetsize > NGROUPS_MAX)
return -EINVAL;
- if (copy_from_user(groups, grouplist, gidsetsize * sizeof(u16)))
- return -EFAULT;
- for (i = 0 ; i < gidsetsize ; i++)
- current->groups[i] = (gid_t)groups[i];
- current->ngroups = gidsetsize;
- return 0;
+
+ group_info = groups_alloc(gidsetsize);
+ if (!group_info)
+ return -ENOMEM;
+ retval = groups16_from_user(group_info, grouplist);
+ if (retval) {
+ put_group_info(group_info);
+ return retval;
+ }
+
+ retval = set_current_groups(group_info);
+ put_group_info(group_info);
+
+ return retval;
}
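The groups16 conversions above move from a fixed `current->groups[NGROUPS]` array to a shared, reference-counted `group_info`, bracketing every access with get_group_info()/put_group_info() so the list is freed only when its last user drops it. A minimal userspace sketch of that refcounting discipline (types and names are illustrative):

```c
#include <stdlib.h>

struct shared {
	int refcount;
	int *data;
};

struct shared *shared_alloc(int n)
{
	struct shared *s = malloc(sizeof(*s));

	if (!s)
		return NULL;
	s->data = calloc(n, sizeof(int));
	if (!s->data) {
		free(s);
		return NULL;
	}
	s->refcount = 1;	/* creator holds the first reference */
	return s;
}

void shared_get(struct shared *s)
{
	s->refcount++;
}

/* Drop a reference; free on the last put.  Returns 1 when the
 * object was actually freed, 0 otherwise. */
int shared_put(struct shared *s)
{
	if (--s->refcount == 0) {
		free(s->data);
		free(s);
		return 1;
	}
	return 0;
}
```

The error paths in sys32_getgroups16 above follow the same rule: every `get_group_info` has a matching `put_group_info`, even on the -EINVAL and -EFAULT exits.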
asmlinkage long sys32_getuid16(void)
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
write_seqlock_irq(&xtime_lock);
ret = bus_do_settimeofday(tv);
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return ret;
}
int i;
va_start(args, fmt);
- i = vsnprintf(ppbuf, sizeof(ppbuf), fmt, args);
+ i = vscnprintf(ppbuf, sizeof(ppbuf), fmt, args);
va_end(args);
prom_write(ppbuf, i);
if (cpus_empty(mask))
mask = cpu_online_map;
- len = cpumask_snprintf(page, count, mask);
+ len = cpumask_scnprintf(page, count, mask);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
return sys_setfsgid((gid_t)gid);
}
+static int groups16_to_user(u16 *grouplist, struct group_info *group_info)
+{
+ int i;
+ u16 group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ group = (u16)GROUP_AT(group_info, i);
+ if (put_user(group, grouplist+i))
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int groups16_from_user(struct group_info *group_info, u16 *grouplist)
+{
+ int i;
+ u16 group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ if (get_user(group, grouplist+i))
+ return -EFAULT;
+ GROUP_AT(group_info, i) = (gid_t)group;
+ }
+
+ return 0;
+}
+
asmlinkage long sys32_getgroups16(int gidsetsize, u16 *grouplist)
{
- u16 groups[NGROUPS];
- int i,j;
+ int i;
if (gidsetsize < 0)
return -EINVAL;
- i = current->ngroups;
+
+ get_group_info(current->group_info);
+ i = current->group_info->ngroups;
if (gidsetsize) {
- if (i > gidsetsize)
- return -EINVAL;
- for(j=0;j<i;j++)
- groups[j] = current->groups[j];
- if (copy_to_user(grouplist, groups, sizeof(u16)*i))
- return -EFAULT;
+ if (i > gidsetsize) {
+ i = -EINVAL;
+ goto out;
+ }
+ if (groups16_to_user(grouplist, current->group_info)) {
+ i = -EFAULT;
+ goto out;
+ }
}
+out:
+ put_group_info(current->group_info);
return i;
}
asmlinkage long sys32_setgroups16(int gidsetsize, u16 *grouplist)
{
- u16 groups[NGROUPS];
- int i;
+ struct group_info *group_info;
+ int retval;
if (!capable(CAP_SETGID))
return -EPERM;
- if ((unsigned) gidsetsize > NGROUPS)
+ if ((unsigned)gidsetsize > NGROUPS_MAX)
return -EINVAL;
- if (copy_from_user(groups, grouplist, gidsetsize * sizeof(u16)))
- return -EFAULT;
- for (i = 0 ; i < gidsetsize ; i++)
- current->groups[i] = (gid_t)groups[i];
- current->ngroups = gidsetsize;
- return 0;
+
+ group_info = groups_alloc(gidsetsize);
+ if (!group_info)
+ return -ENOMEM;
+ retval = groups16_from_user(group_info, grouplist);
+ if (retval) {
+ put_group_info(group_info);
+ return retval;
+ }
+
+ retval = set_current_groups(group_info);
+ put_group_info(group_info);
+
+ return retval;
}
asmlinkage long sys32_getuid16(void)
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
return 0;
}
int i;
va_start(args, fmt);
- i = vsnprintf(ppbuf, sizeof(ppbuf), fmt, args);
+ i = vscnprintf(ppbuf, sizeof(ppbuf), fmt, args);
va_end(args);
prom_write(ppbuf, i);
rtnl_lock();
err = register_netdevice(dev);
rtnl_unlock();
- if (err)
+ if (err) {
+ device->dev = NULL;
+ /* XXX: should we call ->remove() here? */
+ free_netdev(dev);
return 1;
+ }
lp = dev->priv;
/* lp.user is the first four bytes of the transport data, which
static int irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
-
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
gettimeofday(tv, NULL);
timeradd(tv, &local_offset, tv);
time_unlock(flags);
+ clock_was_set();
}
int do_settimeofday(struct timespec *tv)
rval = -EPERM;
if (pid == 1) /* you may not mess with init */
- goto out;
+ goto out_tsk;
if (request == PTRACE_ATTACH) {
rval = ptrace_attach(child);
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq (&xtime_lock);
+ clock_was_set();
return 0;
}
bool
default y
help
- Write kernel log output directly into the VGA buffer. This is useful
- for kernel debugging when your machine crashes very early before
- the console code is initialized. For normal operation it is not
- recommended because it looks ugly and doesn't cooperate with
- klogd/syslogd or the X server. You should normally N here, unless
- you want to debug such a crash.
+ Write kernel log output directly into the VGA buffer or to a serial
+ port.
+
+ This is useful for kernel debugging when your machine crashes very
+ early before the console code is initialized. For normal operation
+ it is not recommended because it looks ugly and doesn't cooperate
+ with klogd/syslogd or the X server. You should normally say N
+ here, unless you want to debug such a crash.
config HPET_TIMER
bool
ifneq ($(CONFIG_DEBUG_INFO),y)
CFLAGS += -fno-asynchronous-unwind-tables
endif
-#CFLAGS += $(call check_gcc,-funit-at-a-time,)
+
+# Enable unit-at-a-time mode when possible. It shrinks the
+# kernel considerably.
+CFLAGS += $(call check_gcc,-funit-at-a-time,)
head-y := arch/x86_64/kernel/head.o arch/x86_64/kernel/head64.o arch/x86_64/kernel/init_task.o
/* Simple VGA output */
#ifdef __i386__
-#define VGABASE (__PAGE_OFFSET + 0xb8000UL)
+#define VGABASE __pa(__PAGE_OFFSET + 0xb8000UL)
#else
#define VGABASE 0xffffffff800b8000UL
#endif
while ((c = *str++) != '\0' && n-- > 0) {
if (current_ypos >= MAX_YPOS) {
/* scroll 1 line up */
- for(k = 1, j = 0; k < MAX_YPOS; k++, j++) {
- for(i = 0; i < MAX_XPOS; i++) {
+ for (k = 1, j = 0; k < MAX_YPOS; k++, j++) {
+ for (i = 0; i < MAX_XPOS; i++) {
writew(readw(VGABASE + 2*(MAX_XPOS*k + i)),
VGABASE + 2*(MAX_XPOS*j + i));
}
}
- for(i = 0; i < MAX_XPOS; i++) {
+ for (i = 0; i < MAX_XPOS; i++)
writew(0x720, VGABASE + 2*(MAX_XPOS*j + i));
- }
current_ypos = MAX_YPOS-1;
}
if (c == '\n') {
current_ypos++;
} else if (c != '\r') {
writew(((0x7 << 8) | (unsigned short) c),
- VGABASE + 2*(MAX_XPOS*current_ypos + current_xpos++));
+ VGABASE + 2*(MAX_XPOS*current_ypos +
+ current_xpos++));
if (current_xpos >= MAX_XPOS) {
current_xpos = 0;
current_ypos++;
{
unsigned timeout = 0xffff;
while ((inb(early_serial_base + LSR) & XMTRDY) == 0 && --timeout)
- rep_nop();
+ cpu_relax();
outb(ch, early_serial_base + TXR);
return timeout ? 0 : -1;
}
}
}
+#define DEFAULT_BAUD 9600
+
static __init void early_serial_init(char *opt)
{
unsigned char c;
- unsigned divisor, baud = 38400;
+ unsigned divisor;
+ unsigned baud = DEFAULT_BAUD;
char *s, *e;
if (*opt == ',')
early_serial_base = simple_strtoul(s, &e, 16);
} else {
static int bases[] = { 0x3f8, 0x2f8 };
- if (!strncmp(s,"ttyS",4))
- s+=4;
- port = simple_strtoul(s, &e, 10);
- if (port > 1 || s == e)
- port = 0;
- early_serial_base = bases[port];
- }
+
+ if (!strncmp(s,"ttyS",4))
+ s += 4;
+ port = simple_strtoul(s, &e, 10);
+ if (port > 1 || s == e)
+ port = 0;
+ early_serial_base = bases[port];
+ }
}
- outb(0x3, early_serial_base + LCR); /* 8n1 */
- outb(0, early_serial_base + IER); /* no interrupt */
- outb(0, early_serial_base + FCR); /* no fifo */
- outb(0x3, early_serial_base + MCR); /* DTR + RTS */
+ outb(0x3, early_serial_base + LCR); /* 8n1 */
+ outb(0, early_serial_base + IER); /* no interrupt */
+ outb(0, early_serial_base + FCR); /* no fifo */
+ outb(0x3, early_serial_base + MCR); /* DTR + RTS */
s = strsep(&opt, ",");
if (s != NULL) {
baud = simple_strtoul(s, &e, 0);
if (baud == 0 || s == e)
- baud = 38400;
+ baud = DEFAULT_BAUD;
}
divisor = 115200 / baud;
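The divisor computation above follows the standard 16550 UART scheme: the divisor latch value is the 115200 baud base rate divided by the requested rate. A minimal sketch of that arithmetic (the `serial_divisor` helper is hypothetical, not part of this patch):

```c
#include <assert.h>

/* Hypothetical helper (not in the patch): 16550 divisor latch value
 * for a given baud rate, mirroring "divisor = 115200 / baud" above. */
static unsigned int serial_divisor(unsigned int baud)
{
	return 115200 / baud;
}
```

With the patch's DEFAULT_BAUD of 9600 this yields a divisor of 12; the old 38400 default corresponds to 3.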
char buf[512];
int n;
va_list ap;
+
va_start(ap,fmt);
- n = vsnprintf(buf,512,fmt,ap);
+ n = vscnprintf(buf,512,fmt,ap);
early_console->write(early_console,buf,n);
va_end(ap);
}
if (early_console_initialized)
return -1;
+ opt = strchr(opt, '=') + 1;
+
strlcpy(buf,opt,sizeof(buf));
space = strchr(buf, ' ');
if (space)
if (!early_console_initialized || !early_console)
return;
if (!keep_early) {
- printk("disabling early console...\n");
+ printk("disabling early console\n");
unregister_console(early_console);
early_console_initialized = 0;
} else {
- printk("keeping early console.\n");
+ printk("keeping early console\n");
}
}
-/* syntax: earlyprintk=vga
- earlyprintk=serial[,ttySn[,baudrate]]
- Append ,keep to not disable it when the real console takes over.
- Only vga or serial at a time, not both.
- Currently only ttyS0 and ttyS1 are supported.
- Interaction with the standard serial driver is not very good.
- The VGA output is eventually overwritten by the real console. */
-__setup("earlyprintk=", setup_early_printk);
+__setup("earlyprintk=", setup_early_printk);
/* default console: */
if (!strstr(saved_command_line, "console="))
strcat(saved_command_line, " console=tty0");
- s = strstr(saved_command_line, "earlyprintk=");
+ s = strstr(saved_command_line, "earlyprintk=");
if (s != NULL)
- setup_early_printk(s+12);
+ setup_early_printk(s);
#ifdef CONFIG_DISCONTIGMEM
s = strstr(saved_command_line, "numa=");
if (s != NULL)
static int irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, irq_affinity[(long)data]);
+ int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
- int len = cpumask_snprintf(page, count, *(cpumask_t *)data);
+ int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
* be enabled
* -1: the lapic NMI watchdog is disabled, but can be enabled
*/
-static int nmi_active;
+int nmi_active; /* oprofile uses this */
static int panic_on_timeout;
unsigned int nmi_watchdog = NMI_LOCAL_APIC;
nmi_callback = dummy_nmi_callback;
}
+EXPORT_SYMBOL(nmi_active);
EXPORT_SYMBOL(nmi_watchdog);
EXPORT_SYMBOL(disable_lapic_nmi_watchdog);
EXPORT_SYMBOL(enable_lapic_nmi_watchdog);
return;
}
#endif
+
if (dma_addr < iommu_bus_base + EMERGENCY_PAGES*PAGE_SIZE ||
dma_addr >= iommu_bus_base + iommu_size)
return;
EXPORT_SYMBOL(die_chain);
-#ifdef CONFIG_SMP_
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(cpu_sibling_map);
#endif
particular, many Toshiba laptops require this for correct operation
of the AC module.
+config X86_PM_TIMER
+ bool "Power Management Timer Support"
+ depends on X86 && ACPI
+ depends on ACPI_BOOT && EXPERIMENTAL
+ default n
+ help
+ The Power Management Timer is available on all ACPI-capable
+ chipsets, in most cases even if ACPI is unusable or blacklisted.
+
+ This timing source is not affected by power management features
+ like aggressive processor idling, throttling, frequency and/or
+ voltage scaling, unlike the commonly used Time Stamp Counter
+ (TSC) timing source.
+
+ So, if you see messages like 'Losing too many ticks!' in the
+ kernel logs, and/or you are using this on a notebook which
+ does not yet have an HPET, you should say "Y" here.
+
config ACPI_INITRD
bool "Read DSDT from initrd"
depends on ACPI && BLK_DEV_INITRD
next = buf;
size = PAGE_SIZE;
- temp = snprintf (next, size, "poolinfo - 0.1\n");
+ temp = scnprintf(next, size, "poolinfo - 0.1\n");
size -= temp;
next += temp;
}
/* per-pool info, no real statistics yet */
- temp = snprintf (next, size, "%-16s %4u %4Zu %4Zu %2u\n",
+ temp = scnprintf(next, size, "%-16s %4u %4Zu %4Zu %2u\n",
pool->name,
blocks, pages * pool->blocks_per_page,
pool->size, pages);
/* FIXME - someone should pass us a buffer size (count) or
* use seq_file or something to avoid buffer overrun risk. */
- len = cpumask_snprintf(buf, 99 /* XXX FIXME */, mask);
+ len = cpumask_scnprintf(buf, 99 /* XXX FIXME */, mask);
len += sprintf(buf + len, "\n");
return len;
}
static int
-cryptoloop_transfer_ecb(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t IV)
+cryptoloop_transfer_ecb(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t IV)
{
struct crypto_tfm *tfm = (struct crypto_tfm *) lo->key_data;
struct scatterlist sg_out = { 0, };
struct scatterlist sg_in = { 0, };
encdec_ecb_t encdecfunc;
- char const *in;
- char *out;
+ struct page *in_page, *out_page;
+ unsigned in_offs, out_offs;
if (cmd == READ) {
- in = raw_buf;
- out = loop_buf;
+ in_page = raw_page;
+ in_offs = raw_off;
+ out_page = loop_page;
+ out_offs = loop_off;
encdecfunc = tfm->crt_u.cipher.cit_decrypt;
} else {
- in = loop_buf;
- out = raw_buf;
+ in_page = loop_page;
+ in_offs = loop_off;
+ out_page = raw_page;
+ out_offs = raw_off;
encdecfunc = tfm->crt_u.cipher.cit_encrypt;
}
while (size > 0) {
const int sz = min(size, LOOP_IV_SECTOR_SIZE);
- sg_in.page = virt_to_page(in);
- sg_in.offset = (unsigned long)in & ~PAGE_MASK;
+ sg_in.page = in_page;
+ sg_in.offset = in_offs;
sg_in.length = sz;
- sg_out.page = virt_to_page(out);
- sg_out.offset = (unsigned long)out & ~PAGE_MASK;
+ sg_out.page = out_page;
+ sg_out.offset = out_offs;
sg_out.length = sz;
encdecfunc(tfm, &sg_out, &sg_in, sz);
size -= sz;
- in += sz;
- out += sz;
+ in_offs += sz;
+ out_offs += sz;
}
return 0;
unsigned int nsg, u8 *iv);
static int
-cryptoloop_transfer_cbc(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t IV)
+cryptoloop_transfer_cbc(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t IV)
{
struct crypto_tfm *tfm = (struct crypto_tfm *) lo->key_data;
struct scatterlist sg_out = { 0, };
struct scatterlist sg_in = { 0, };
encdec_cbc_t encdecfunc;
- char const *in;
- char *out;
+ struct page *in_page, *out_page;
+ unsigned in_offs, out_offs;
if (cmd == READ) {
- in = raw_buf;
- out = loop_buf;
+ in_page = raw_page;
+ in_offs = raw_off;
+ out_page = loop_page;
+ out_offs = loop_off;
encdecfunc = tfm->crt_u.cipher.cit_decrypt_iv;
} else {
- in = loop_buf;
- out = raw_buf;
+ in_page = loop_page;
+ in_offs = loop_off;
+ out_page = raw_page;
+ out_offs = raw_off;
encdecfunc = tfm->crt_u.cipher.cit_encrypt_iv;
}
u32 iv[4] = { 0, };
iv[0] = cpu_to_le32(IV & 0xffffffff);
- sg_in.page = virt_to_page(in);
- sg_in.offset = offset_in_page(in);
+ sg_in.page = in_page;
+ sg_in.offset = in_offs;
sg_in.length = sz;
- sg_out.page = virt_to_page(out);
- sg_out.offset = offset_in_page(out);
+ sg_out.page = out_page;
+ sg_out.offset = out_offs;
sg_out.length = sz;
encdecfunc(tfm, &sg_out, &sg_in, sz, (u8 *)iv);
IV++;
size -= sz;
- in += sz;
- out += sz;
+ in_offs += sz;
+ out_offs += sz;
}
return 0;
}
static int
-cryptoloop_transfer(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t IV)
+cryptoloop_transfer(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t IV)
{
struct crypto_tfm *tfm = (struct crypto_tfm *) lo->key_data;
if(tfm->crt_cipher.cit_mode == CRYPTO_TFM_MODE_ECB)
{
lo->transfer = cryptoloop_transfer_ecb;
- return cryptoloop_transfer_ecb(lo, cmd, raw_buf, loop_buf, size, IV);
+ return cryptoloop_transfer_ecb(lo, cmd, raw_page, raw_off,
+ loop_page, loop_off, size, IV);
}
if(tfm->crt_cipher.cit_mode == CRYPTO_TFM_MODE_CBC)
{
lo->transfer = cryptoloop_transfer_cbc;
- return cryptoloop_transfer_cbc(lo, cmd, raw_buf, loop_buf, size, IV);
+ return cryptoloop_transfer_cbc(lo, cmd, raw_page, raw_off,
+ loop_page, loop_off, size, IV);
}
/* This is not supposed to happen */
/* Don't show non-partitionable removable devices or empty devices */
if (!get_capacity(sgp) ||
- (sgp->minors == 1 && (sgp->flags & GENHD_FL_REMOVABLE))
- )
+ (sgp->minors == 1 && (sgp->flags & GENHD_FL_REMOVABLE)))
+ return 0;
+ if (sgp->flags & GENHD_FL_SUPPRESS_PARTITION_INFO)
return 0;
/* show the full disk and all non-0 size partitions of it */
/*
* Transfer functions
*/
-static int transfer_none(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t real_block)
+static int transfer_none(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t real_block)
{
- if (raw_buf != loop_buf) {
- if (cmd == READ)
- memcpy(loop_buf, raw_buf, size);
- else
- memcpy(raw_buf, loop_buf, size);
- }
+ char *raw_buf = kmap_atomic(raw_page, KM_USER0) + raw_off;
+ char *loop_buf = kmap_atomic(loop_page, KM_USER1) + loop_off;
+
+ if (cmd == READ)
+ memcpy(loop_buf, raw_buf, size);
+ else
+ memcpy(raw_buf, loop_buf, size);
+ kunmap_atomic(raw_buf, KM_USER0);
+ kunmap_atomic(loop_buf, KM_USER1);
+ cond_resched();
return 0;
}
-static int transfer_xor(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t real_block)
+static int transfer_xor(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t real_block)
{
- char *in, *out, *key;
- int i, keysize;
+ char *raw_buf = kmap_atomic(raw_page, KM_USER0) + raw_off;
+ char *loop_buf = kmap_atomic(loop_page, KM_USER1) + loop_off;
+ char *in, *out, *key;
+ int i, keysize;
if (cmd == READ) {
in = raw_buf;
keysize = lo->lo_encrypt_key_size;
for (i = 0; i < size; i++)
*out++ = *in++ ^ key[(i & 511) % keysize];
+
+ kunmap_atomic(raw_buf, KM_USER0);
+ kunmap_atomic(loop_buf, KM_USER1);
+ cond_resched();
return 0;
}
}
static inline int
-lo_do_transfer(struct loop_device *lo, int cmd, char *rbuf,
- char *lbuf, int size, sector_t rblock)
+lo_do_transfer(struct loop_device *lo, int cmd,
+ struct page *rpage, unsigned roffs,
+ struct page *lpage, unsigned loffs,
+ int size, sector_t rblock)
{
if (!lo->transfer)
return 0;
- return lo->transfer(lo, cmd, rbuf, lbuf, size, rblock);
+ return lo->transfer(lo, cmd, rpage, roffs, lpage, loffs, size, rblock);
}
static int
struct address_space *mapping = file->f_mapping;
struct address_space_operations *aops = mapping->a_ops;
struct page *page;
- char *kaddr, *data;
pgoff_t index;
- unsigned size, offset;
+ unsigned size, offset, bv_offs;
int len;
int ret = 0;
down(&mapping->host->i_sem);
index = pos >> PAGE_CACHE_SHIFT;
offset = pos & ((pgoff_t)PAGE_CACHE_SIZE - 1);
- data = kmap(bvec->bv_page) + bvec->bv_offset;
+ bv_offs = bvec->bv_offset;
len = bvec->bv_len;
while (len > 0) {
sector_t IV;
goto fail;
if (aops->prepare_write(file, page, offset, offset+size))
goto unlock;
- kaddr = kmap(page);
- transfer_result = lo_do_transfer(lo, WRITE, kaddr + offset,
- data, size, IV);
+ transfer_result = lo_do_transfer(lo, WRITE, page, offset,
+ bvec->bv_page, bv_offs,
+ size, IV);
if (transfer_result) {
+ char *kaddr;
+
/*
* The transfer failed, but we still write the data to
* keep prepare/commit calls balanced.
*/
printk(KERN_ERR "loop: transfer error block %llu\n",
(unsigned long long)index);
+ kaddr = kmap_atomic(page, KM_USER0);
memset(kaddr + offset, 0, size);
+ kunmap_atomic(kaddr, KM_USER0);
}
flush_dcache_page(page);
- kunmap(page);
if (aops->commit_write(file, page, offset, offset+size))
goto unlock;
if (transfer_result)
goto unlock;
- data += size;
+ bv_offs += size;
len -= size;
offset = 0;
index++;
}
up(&mapping->host->i_sem);
out:
- kunmap(bvec->bv_page);
return ret;
unlock:
static int
lo_send(struct loop_device *lo, struct bio *bio, int bsize, loff_t pos)
{
- unsigned vecnr;
- int ret = 0;
-
- for (vecnr = 0; vecnr < bio->bi_vcnt; vecnr++) {
- struct bio_vec *bvec = &bio->bi_io_vec[vecnr];
+ struct bio_vec *bvec;
+ int i, ret = 0;
+ bio_for_each_segment(bvec, bio, i) {
ret = do_lo_send(lo, bvec, bsize, pos);
if (ret < 0)
break;
struct lo_read_data {
struct loop_device *lo;
- char *data;
+ struct page *page;
+ unsigned offset;
int bsize;
};
lo_read_actor(read_descriptor_t *desc, struct page *page,
unsigned long offset, unsigned long size)
{
- char *kaddr;
unsigned long count = desc->count;
struct lo_read_data *p = (struct lo_read_data*)desc->buf;
struct loop_device *lo = p->lo;
if (size > count)
size = count;
- kaddr = kmap(page);
- if (lo_do_transfer(lo, READ, kaddr + offset, p->data, size, IV)) {
+ if (lo_do_transfer(lo, READ, page, offset, p->page, p->offset, size, IV)) {
size = 0;
printk(KERN_ERR "loop: transfer error block %ld\n",
page->index);
desc->error = -EINVAL;
}
- kunmap(page);
desc->count = count - size;
desc->written += size;
- p->data += size;
+ p->offset += size;
return size;
}
int retval;
cookie.lo = lo;
- cookie.data = kmap(bvec->bv_page) + bvec->bv_offset;
+ cookie.page = bvec->bv_page;
+ cookie.offset = bvec->bv_offset;
cookie.bsize = bsize;
file = lo->lo_backing_file;
retval = file->f_op->sendfile(file, &pos, bvec->bv_len,
lo_read_actor, &cookie);
- kunmap(bvec->bv_page);
return (retval < 0)? retval: 0;
}
static int
lo_receive(struct loop_device *lo, struct bio *bio, int bsize, loff_t pos)
{
- unsigned vecnr;
- int ret = 0;
-
- for (vecnr = 0; vecnr < bio->bi_vcnt; vecnr++) {
- struct bio_vec *bvec = &bio->bi_io_vec[vecnr];
+ struct bio_vec *bvec;
+ int i, ret = 0;
+ bio_for_each_segment(bvec, bio, i) {
ret = do_lo_receive(lo, bvec, bsize, pos);
if (ret < 0)
break;
return ret;
}
-static int loop_end_io_transfer(struct bio *, unsigned int, int);
-
-static void loop_put_buffer(struct bio *bio)
-{
- /*
- * check bi_end_io, may just be a remapped bio
- */
- if (bio && bio->bi_end_io == loop_end_io_transfer) {
- int i;
-
- for (i = 0; i < bio->bi_vcnt; i++)
- __free_page(bio->bi_io_vec[i].bv_page);
-
- bio_put(bio);
- }
-}
-
/*
* Add bio to back of pending list
*/
return bio;
}
-/*
- * if this was a WRITE lo->transfer stuff has already been done. for READs,
- * queue it for the loop thread and let it do the transfer out of
- * bi_end_io context (we don't want to do decrypt of a page with irqs
- * disabled)
- */
-static int loop_end_io_transfer(struct bio *bio, unsigned int bytes_done, int err)
-{
- struct bio *rbh = bio->bi_private;
- struct loop_device *lo = rbh->bi_bdev->bd_disk->private_data;
-
- if (bio->bi_size)
- return 1;
-
- if (err || bio_rw(bio) == WRITE) {
- bio_endio(rbh, rbh->bi_size, err);
- if (atomic_dec_and_test(&lo->lo_pending))
- up(&lo->lo_bh_mutex);
- loop_put_buffer(bio);
- } else
- loop_add_bio(lo, bio);
-
- return 0;
-}
-
-static struct bio *loop_copy_bio(struct bio *rbh)
-{
- struct bio *bio;
- struct bio_vec *bv;
- int i;
-
- bio = bio_alloc(__GFP_NOWARN, rbh->bi_vcnt);
- if (!bio)
- return NULL;
-
- /*
- * iterate iovec list and alloc pages
- */
- __bio_for_each_segment(bv, rbh, i, 0) {
- struct bio_vec *bbv = &bio->bi_io_vec[i];
-
- bbv->bv_page = alloc_page(__GFP_NOWARN|__GFP_HIGHMEM);
- if (bbv->bv_page == NULL)
- goto oom;
-
- bbv->bv_len = bv->bv_len;
- bbv->bv_offset = bv->bv_offset;
- }
-
- bio->bi_vcnt = rbh->bi_vcnt;
- bio->bi_size = rbh->bi_size;
-
- return bio;
-
-oom:
- while (--i >= 0)
- __free_page(bio->bi_io_vec[i].bv_page);
-
- bio_put(bio);
- return NULL;
-}
-
-static struct bio *loop_get_buffer(struct loop_device *lo, struct bio *rbh)
-{
- struct bio *bio;
-
- /*
- * When called on the page reclaim -> writepage path, this code can
- * trivially consume all memory. So we drop PF_MEMALLOC to avoid
- * stealing all the page reserves and throttle to the writeout rate.
- * pdflush will have been woken by page reclaim. Let it do its work.
- */
- do {
- int flags = current->flags;
-
- current->flags &= ~PF_MEMALLOC;
- bio = loop_copy_bio(rbh);
- if (flags & PF_MEMALLOC)
- current->flags |= PF_MEMALLOC;
-
- if (bio == NULL)
- blk_congestion_wait(WRITE, HZ/10);
- } while (bio == NULL);
-
- bio->bi_end_io = loop_end_io_transfer;
- bio->bi_private = rbh;
- bio->bi_sector = rbh->bi_sector + (lo->lo_offset >> 9);
- bio->bi_rw = rbh->bi_rw;
- bio->bi_bdev = lo->lo_device;
-
- return bio;
-}
-
-static int loop_transfer_bio(struct loop_device *lo,
- struct bio *to_bio, struct bio *from_bio)
-{
- sector_t IV;
- struct bio_vec *from_bvec, *to_bvec;
- char *vto, *vfrom;
- int ret = 0, i;
-
- IV = from_bio->bi_sector + (lo->lo_offset >> 9);
-
- __bio_for_each_segment(from_bvec, from_bio, i, 0) {
- to_bvec = &to_bio->bi_io_vec[i];
-
- kmap(from_bvec->bv_page);
- kmap(to_bvec->bv_page);
- vfrom = page_address(from_bvec->bv_page) + from_bvec->bv_offset;
- vto = page_address(to_bvec->bv_page) + to_bvec->bv_offset;
- ret |= lo_do_transfer(lo, bio_data_dir(to_bio), vto, vfrom,
- from_bvec->bv_len, IV);
- kunmap(from_bvec->bv_page);
- kunmap(to_bvec->bv_page);
- IV += from_bvec->bv_len >> 9;
- }
-
- return ret;
-}
-
static int loop_make_request(request_queue_t *q, struct bio *old_bio)
{
- struct bio *new_bio = NULL;
struct loop_device *lo = q->queuedata;
int rw = bio_rw(old_bio);
printk(KERN_ERR "loop: unknown command (%x)\n", rw);
goto err;
}
-
- /*
- * file backed, queue for loop_thread to handle
- */
- if (lo->lo_flags & LO_FLAGS_DO_BMAP) {
- loop_add_bio(lo, old_bio);
- return 0;
- }
-
- /*
- * piggy old buffer on original, and submit for I/O
- */
- new_bio = loop_get_buffer(lo, old_bio);
- if (rw == WRITE) {
- if (loop_transfer_bio(lo, new_bio, old_bio))
- goto err;
- }
-
- generic_make_request(new_bio);
+ loop_add_bio(lo, old_bio);
return 0;
-
err:
if (atomic_dec_and_test(&lo->lo_pending))
up(&lo->lo_bh_mutex);
- loop_put_buffer(new_bio);
out:
bio_io_error(old_bio, old_bio->bi_size);
return 0;
{
int ret;
- /*
- * For block backed loop, we know this is a READ
- */
- if (lo->lo_flags & LO_FLAGS_DO_BMAP) {
- ret = do_bio_filebacked(lo, bio);
- bio_endio(bio, bio->bi_size, ret);
- } else {
- struct bio *rbh = bio->bi_private;
-
- ret = loop_transfer_bio(lo, bio, rbh);
-
- bio_endio(rbh, rbh->bi_size, ret);
- loop_put_buffer(bio);
- }
+ ret = do_bio_filebacked(lo, bio);
+ bio_endio(bio, bio->bi_size, ret);
}
/*
lo_flags |= LO_FLAGS_READ_ONLY;
error = -EINVAL;
- if (S_ISBLK(inode->i_mode)) {
- lo_device = I_BDEV(inode);
- if (lo_device == bdev) {
- error = -EBUSY;
- goto out_putf;
- }
- lo_blocksize = block_size(lo_device);
- if (bdev_read_only(lo_device))
- lo_flags |= LO_FLAGS_READ_ONLY;
- } else if (S_ISREG(inode->i_mode)) {
+ if (S_ISREG(inode->i_mode) || S_ISBLK(inode->i_mode)) {
struct address_space_operations *aops = mapping->a_ops;
/*
* If we can't read - sorry. If we only can't write - well,
* it's going to be read-only.
*/
- if (!inode->i_fop->sendfile)
+ if (!lo_file->f_op->sendfile)
goto out_putf;
if (!aops->prepare_write || !aops->commit_write)
lo_flags |= LO_FLAGS_READ_ONLY;
lo_blocksize = inode->i_blksize;
- lo_flags |= LO_FLAGS_DO_BMAP;
- } else
+ error = 0;
+ } else {
goto out_putf;
+ }
if (!(lo_file->f_mode & FMODE_WRITE))
lo_flags |= LO_FLAGS_READ_ONLY;
blk_queue_make_request(lo->lo_queue, loop_make_request);
lo->lo_queue->queuedata = lo;
- /*
- * we remap to a block device, make sure we correctly stack limits
- */
- if (S_ISBLK(inode->i_mode)) {
- request_queue_t *q = bdev_get_queue(lo_device);
-
- blk_queue_max_sectors(lo->lo_queue, q->max_sectors);
- blk_queue_max_phys_segments(lo->lo_queue,q->max_phys_segments);
- blk_queue_max_hw_segments(lo->lo_queue, q->max_hw_segments);
- blk_queue_hardsect_size(lo->lo_queue, queue_hardsect_size(q));
- blk_queue_max_segment_size(lo->lo_queue, q->max_segment_size);
- blk_queue_segment_boundary(lo->lo_queue, q->seg_boundary_mask);
- blk_queue_merge_bvec(lo->lo_queue, q->merge_bvec_fn);
- }
-
set_blocksize(bdev, lo_blocksize);
kernel_thread(loop_thread, lo, CLONE_KERNEL);
lo->lo_queue = blk_alloc_queue(GFP_KERNEL);
if (!lo->lo_queue)
goto out_mem4;
- disks[i]->queue = lo->lo_queue;
init_MUTEX(&lo->lo_ctl_mutex);
init_MUTEX_LOCKED(&lo->lo_sem);
init_MUTEX_LOCKED(&lo->lo_bh_mutex);
sprintf(disk->devfs_name, "loop/%d", i);
disk->private_data = lo;
disk->queue = lo->lo_queue;
- add_disk(disk);
}
+
+ /* We cannot fail after we call this, so another loop! */
+ for (i = 0; i < max_loop; i++)
+ add_disk(disks[i]);
printk(KERN_INFO "loop: loaded (max %d devices)\n", max_loop);
return 0;
out_mem4:
while (i--)
blk_put_queue(loop_dev[i].lo_queue);
+ devfs_remove("loop");
i = max_loop;
out_mem3:
while (i--)
disk->first_minor = i;
disk->fops = &nbd_fops;
disk->private_data = &nbd_dev[i];
+ disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
sprintf(disk->disk_name, "nbd%d", i);
sprintf(disk->devfs_name, "nbd/%d", i);
set_capacity(disk, 0x3ffffe);
/*
* ramdisk.c - Multiple RAM disk driver - gzip-loading version - v. 0.8 beta.
- *
- * (C) Chad Page, Theodore Ts'o, et. al, 1995.
+ *
+ * (C) Chad Page, Theodore Ts'o, et. al, 1995.
*
* This RAM disk is designed to have filesystems created on it and mounted
- * just like a regular floppy disk.
- *
+ * just like a regular floppy disk.
+ *
* It also does something suggested by Linus: use the buffer cache as the
* RAM disk data. This makes it possible to dynamically allocate the RAM disk
- * buffer - with some consequences I have to deal with as I write this.
- *
+ * buffer - with some consequences I have to deal with as I write this.
+ *
* This code is based on the original ramdisk.c, written mostly by
* Theodore Ts'o (TYT) in 1991. The code was largely rewritten by
* Chad Page to use the buffer cache to store the RAM disk data in
*
* Added initrd: Werner Almesberger & Hans Lermen, Feb '96
*
- * 4/25/96 : Made RAM disk size a parameter (default is now 4 MB)
+ * 4/25/96 : Made RAM disk size a parameter (default is now 4 MB)
* - Chad Page
*
* Add support for fs images split across >1 disk, Paul Gortmaker, Mar '98
#include <asm/uaccess.h>
/* The RAM disk size is now a parameter */
-#define NUM_RAMDISKS 16 /* This cannot be overridden (yet) */
+#define NUM_RAMDISKS 16 /* This cannot be overridden (yet) */
/* Various static variables go here. Most are used only in the RAM disk code.
*/
* Parameters for the boot-loading of the RAM disk. These are set by
* init/main.c (from arguments to the kernel command line) or from the
* architecture-specific setup routine (from the stored boot sector
- * information).
+ * information).
*/
int rd_size = CONFIG_BLK_DEV_RAM_SIZE; /* Size of the RAM disks */
/*
* 2000 Transmeta Corp.
* aops copied from ramfs.
*/
-static int ramdisk_readpage(struct file *file, struct page * page)
+static int ramdisk_readpage(struct file *file, struct page *page)
{
if (!PageUptodate(page)) {
void *kaddr = kmap_atomic(page, KM_USER0);
return 0;
}
-static int ramdisk_prepare_write(struct file *file, struct page *page, unsigned offset, unsigned to)
+static int ramdisk_prepare_write(struct file *file, struct page *page,
+ unsigned offset, unsigned to)
{
if (!PageUptodate(page)) {
void *kaddr = kmap_atomic(page, KM_USER0);
return 0;
}
-static int ramdisk_commit_write(struct file *file, struct page *page, unsigned offset, unsigned to)
+static int ramdisk_commit_write(struct file *file, struct page *page,
+ unsigned offset, unsigned to)
{
return 0;
}
* 19-JAN-1998 Richard Gooch <rgooch@atnf.csiro.au> Added devfs support
*
*/
-static int rd_make_request(request_queue_t * q, struct bio *bio)
+static int rd_make_request(request_queue_t *q, struct bio *bio)
{
struct block_device *bdev = bio->bi_bdev;
struct address_space * mapping = bdev->bd_inode->i_mapping;
return 0;
}
-static int rd_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+static int rd_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
{
int error;
struct block_device *bdev = inode->i_bdev;
if (cmd != BLKFLSBUF)
return -EINVAL;
- /* special: we want to release the ramdisk memory,
- it's not like with the other blockdevices where
- this ioctl only flushes away the buffer cache. */
+ /*
+ * special: we want to release the ramdisk memory, it's not like with
+ * the other blockdevices where this ioctl only flushes away the buffer
+ * cache
+ */
error = -EBUSY;
down(&bdev->bd_sem);
if (bdev->bd_openers <= 2) {
.memory_backed = 1, /* Does not contribute to dirty memory */
};
-static int rd_open(struct inode * inode, struct file * filp)
+static int rd_open(struct inode *inode, struct file *filp)
{
unsigned unit = iminor(inode);
.ioctl = rd_ioctl,
};
-/* Before freeing the module, invalidate all of the protected buffers! */
-static void __exit rd_cleanup (void)
+/*
+ * Before freeing the module, invalidate all of the protected buffers!
+ */
+static void __exit rd_cleanup(void)
{
int i;
- for (i = 0 ; i < NUM_RAMDISKS; i++) {
+ for (i = 0; i < NUM_RAMDISKS; i++) {
struct block_device *bdev = rd_bdev[i];
rd_bdev[i] = NULL;
if (bdev) {
invalidate_bdev(bdev, 1);
- blkdev_put(bdev, BDEV_FILE);
+ blkdev_put(bdev);
}
del_gendisk(rd_disks[i]);
put_disk(rd_disks[i]);
}
devfs_remove("rd");
- unregister_blkdev(RAMDISK_MAJOR, "ramdisk" );
+ unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
}
-/* This is the registration and initialization section of the RAM disk driver */
-static int __init rd_init (void)
+/*
+ * This is the registration and initialization section of the RAM disk driver
+ */
+static int __init rd_init(void)
{
int i;
int err = -ENOMEM;
if (rd_blocksize > PAGE_SIZE || rd_blocksize < 512 ||
- (rd_blocksize & (rd_blocksize-1))) {
+ (rd_blocksize & (rd_blocksize-1))) {
printk("RAMDISK: wrong blocksize %d, reverting to defaults\n",
rd_blocksize);
rd_blocksize = BLOCK_SIZE;
disk->first_minor = i;
disk->fops = &rd_bd_op;
disk->queue = rd_queue[i];
+ disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
sprintf(disk->disk_name, "ram%d", i);
sprintf(disk->devfs_name, "rd/%d", i);
set_capacity(disk, rd_size * 2);
/* rd_size is given in kB */
printk("RAMDISK driver initialized: "
- "%d RAM disks of %dK size %d blocksize\n",
- NUM_RAMDISKS, rd_size, rd_blocksize);
+ "%d RAM disks of %dK size %d blocksize\n",
+ NUM_RAMDISKS, rd_size, rd_blocksize);
return 0;
out_queue:
precision in some cases.
To compile this driver as a module, choose M here: the
- module will be called genrtc. To load the module automatically
- add 'alias char-major-10-135 genrtc' to your /etc/modules.conf
+ module will be called genrtc.
config GEN_RTC_X
bool "Extended RTC operation"
This option gives you AGP support for Apple machines with a
UniNorth bridge.
+config AGP_EFFICEON
+ tristate "Transmeta Efficeon support"
+ depends on AGP && X86 && !X86_64
+ help
+ This option gives you AGP support for the Transmeta Efficeon
+ series processors with integrated northbridges.
+
+ You should say Y here if you use XFree86 3.3.6 or 4.x and want to
+ use GLX or DRI. If unsure, say Y.
+
obj-$(CONFIG_AGP_AMD) += amd-k7-agp.o
obj-$(CONFIG_AGP_AMD64) += amd64-agp.o
obj-$(CONFIG_AGP_ALPHA_CORE) += alpha-agp.o
+obj-$(CONFIG_AGP_EFFICEON) += efficeon-agp.o
obj-$(CONFIG_AGP_HP_ZX1) += hp-agp.o
obj-$(CONFIG_AGP_I460) += i460-agp.o
obj-$(CONFIG_AGP_INTEL) += intel-agp.o
--- /dev/null
+/*
+ * Transmeta's Efficeon AGPGART driver.
+ *
+ * Based upon a diff by Linus around November '02.
+ *
+ * Ported to the 2.6 kernel by Carlos Puchol <cpglinux@puchol.com>
+ * and H. Peter Anvin <hpa@transmeta.com>.
+ */
+
+/*
+ * NOTE-cpg-040217:
+ *
+ * - when compiled as a module, after loading the module,
+ * it will refuse to unload, indicating it is in use,
+ * when it is not.
+ * - no s3 (suspend to ram) testing.
+ * - tested on the efficeon integrated northbridge for tens
+ * of iterations of starting x and glxgears.
+ * - tested with radeon 9000 and radeon mobility m9 cards
+ * - tested with c3/c4 enabled (with the mobility m9 card)
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/agp_backend.h>
+#include <linux/gfp.h>
+#include <linux/page-flags.h>
+#include <linux/mm.h>
+#include "agp.h"
+
+/*
+ * The real differences to the generic AGP code is
+ * in the GART mappings - a two-level setup with the
+ * first level being an on-chip 64-entry table.
+ *
+ * The page array is filled through the ATTPAGE register
+ * (Aperture Translation Table Page Register) at 0xB8. Bits:
+ * 31:20: physical page address
+ * 11:9: Page Attribute Table Index (PATI)
+ * must match the PAT index for the
+ * mapped pages (the 2nd level page table pages
+ * themselves should be just regular WB-cacheable,
+ * so this is normally zero.)
+ * 8: Present
+ * 7:6: reserved, write as zero
+ * 5:0: GATT directory index: which 1st-level entry
+ *
+ * The Efficeon AGP spec requires pages to be WB-cacheable
+ * but to be explicitly CLFLUSH'd after any changes.
+ */
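The ATTPAGE bit layout described above can be sketched as a small helper that packs a register value from a physical page address and a directory index. This is illustrative only; `attpage_value` is not a function in the driver:

```c
#include <stdint.h>

#define ATTPAGE_PATI	(0u << 9)	/* 11:9  PAT index, normally 0 */
#define ATTPAGE_PRESENT	(1u << 8)	/* 8     present bit */

/* Illustrative only (not part of the driver): compose an ATTPAGE
 * register value per the bit layout described above. */
static uint32_t attpage_value(uint32_t phys_page, uint32_t dir_idx)
{
	return (phys_page & 0xfff00000u)	/* 31:20 physical page address */
	     | ATTPAGE_PATI
	     | ATTPAGE_PRESENT
	     | (dir_idx & 0x3fu);		/* 5:0   GATT directory index */
}
```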
+#define EFFICEON_ATTPAGE 0xb8
+#define EFFICEON_L1_SIZE 64 /* Number of PDE pages */
+
+#define EFFICEON_PATI (0 << 9)
+#define EFFICEON_PRESENT (1 << 8)
+
+static struct _efficeon_private {
+ unsigned long l1_table[EFFICEON_L1_SIZE];
+} efficeon_private;
+
+static struct gatt_mask efficeon_generic_masks[] =
+{
+ {.mask = 0x00000001, .type = 0}
+};
+
+static struct aper_size_info_lvl2 efficeon_generic_sizes[4] =
+{
+ {256, 65536, 0},
+ {128, 32768, 32},
+ {64, 16384, 48},
+ {32, 8192, 56}
+};
+
+/*
+ * Control interfaces are largely identical to
+ * the legacy Intel 440BX.
+ */
+
+static int efficeon_fetch_size(void)
+{
+ int i;
+ u16 temp;
+ struct aper_size_info_lvl2 *values;
+
+ pci_read_config_word(agp_bridge->dev, INTEL_APSIZE, &temp);
+ values = A_SIZE_LVL2(agp_bridge->driver->aperture_sizes);
+
+ for (i = 0; i < agp_bridge->driver->num_aperture_sizes; i++) {
+ if (temp == values[i].size_value) {
+ agp_bridge->previous_size =
+ agp_bridge->current_size = (void *) (values + i);
+ agp_bridge->aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
+static void efficeon_tlbflush(struct agp_memory * mem)
+{
+ printk(KERN_DEBUG PFX "efficeon_tlbflush()\n");
+ pci_write_config_dword(agp_bridge->dev, INTEL_AGPCTRL, 0x2200);
+ pci_write_config_dword(agp_bridge->dev, INTEL_AGPCTRL, 0x2280);
+}
+
+static void efficeon_cleanup(void)
+{
+ u16 temp;
+ struct aper_size_info_lvl2 *previous_size;
+
+ printk(KERN_DEBUG PFX "efficeon_cleanup()\n");
+ previous_size = A_SIZE_LVL2(agp_bridge->previous_size);
+ pci_read_config_word(agp_bridge->dev, INTEL_NBXCFG, &temp);
+ pci_write_config_word(agp_bridge->dev, INTEL_NBXCFG, temp & ~(1 << 9));
+ pci_write_config_word(agp_bridge->dev, INTEL_APSIZE,
+ previous_size->size_value);
+}
+
+static int efficeon_configure(void)
+{
+ u32 temp;
+ u16 temp2;
+ struct aper_size_info_lvl2 *current_size;
+
+ printk(KERN_DEBUG PFX "efficeon_configure()\n");
+
+ current_size = A_SIZE_LVL2(agp_bridge->current_size);
+
+ /* aperture size */
+ pci_write_config_word(agp_bridge->dev, INTEL_APSIZE,
+ current_size->size_value);
+
+ /* address to map to */
+ pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
+ agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+
+ /* agpctrl */
+ pci_write_config_dword(agp_bridge->dev, INTEL_AGPCTRL, 0x2280);
+
+ /* paccfg/nbxcfg */
+ pci_read_config_word(agp_bridge->dev, INTEL_NBXCFG, &temp2);
+ pci_write_config_word(agp_bridge->dev, INTEL_NBXCFG,
+ (temp2 & ~(1 << 10)) | (1 << 9) | (1 << 11));
+ /* clear any possible error conditions */
+ pci_write_config_byte(agp_bridge->dev, INTEL_ERRSTS + 1, 7);
+ return 0;
+}
+
+static int efficeon_free_gatt_table(void)
+{
+ int index, freed = 0;
+
+ for (index = 0; index < EFFICEON_L1_SIZE; index++) {
+ unsigned long page = efficeon_private.l1_table[index];
+ if (page) {
+ efficeon_private.l1_table[index] = 0;
+ ClearPageReserved(virt_to_page((char *)page));
+ free_page(page);
+ freed++;
+ }
+ printk(KERN_DEBUG PFX "efficeon_free_gatt_table(%p, %02x, %08x)\n",
+ agp_bridge->dev, EFFICEON_ATTPAGE, index);
+ pci_write_config_dword(agp_bridge->dev,
+ EFFICEON_ATTPAGE, index);
+ }
+ printk(KERN_DEBUG PFX "efficeon_free_gatt_table() freed %d pages\n", freed);
+ return 0;
+}
+
+
+/*
+ * Since we don't need contiguous memory we just try
+ * to get the GATT table once
+ */
+
+#define GET_PAGE_DIR_OFF(addr) (addr >> 22)
+#define GET_PAGE_DIR_IDX(addr) (GET_PAGE_DIR_OFF(addr) - \
+ GET_PAGE_DIR_OFF(agp_bridge->gart_bus_addr))
+#define GET_GATT_OFF(addr) ((addr & 0x003ff000) >> 12)
+#undef GET_GATT
+#define GET_GATT(addr) (efficeon_private.gatt_pages[\
+ GET_PAGE_DIR_IDX(addr)]->remapped)
+
+static int efficeon_create_gatt_table(void)
+{
+ int index;
+ const int pati = EFFICEON_PATI;
+ const int present = EFFICEON_PRESENT;
+ const int clflush_chunk = ((cpuid_ebx(1) >> 8) & 0xff) << 3;
+ int num_entries, l1_pages;
+
+ num_entries = A_SIZE_LVL2(agp_bridge->current_size)->num_entries;
+
+ printk(KERN_DEBUG PFX "efficeon_create_gatt_table(%d)\n", num_entries);
+
+	/* There are 2^10 PTEs per PDE page */
+ BUG_ON(num_entries & 0x3ff);
+ l1_pages = num_entries >> 10;
+
+ for (index = 0 ; index < l1_pages ; index++) {
+ int offset;
+ unsigned long page;
+ unsigned long value;
+
+ page = efficeon_private.l1_table[index];
+ BUG_ON(page);
+
+ page = get_zeroed_page(GFP_KERNEL);
+ if (!page) {
+ efficeon_free_gatt_table();
+ return -ENOMEM;
+ }
+ SetPageReserved(virt_to_page((char *)page));
+
+ for (offset = 0; offset < PAGE_SIZE; offset += clflush_chunk)
+ asm volatile("clflush %0" : : "m" (*(char *)(page+offset)));
+
+ efficeon_private.l1_table[index] = page;
+
+ value = __pa(page) | pati | present | index;
+
+ pci_write_config_dword(agp_bridge->dev,
+ EFFICEON_ATTPAGE, value);
+ }
+
+ return 0;
+}
+
+static int efficeon_insert_memory(struct agp_memory * mem, off_t pg_start, int type)
+{
+ int i, count = mem->page_count, num_entries;
+ unsigned int *page, *last_page;
+ const int clflush_chunk = ((cpuid_ebx(1) >> 8) & 0xff) << 3;
+ const unsigned long clflush_mask = ~(clflush_chunk-1);
+
+ printk(KERN_DEBUG PFX "efficeon_insert_memory(%lx, %d)\n", pg_start, count);
+
+ num_entries = A_SIZE_LVL2(agp_bridge->current_size)->num_entries;
+ if ((pg_start + mem->page_count) > num_entries)
+ return -EINVAL;
+ if (type != 0 || mem->type != 0)
+ return -EINVAL;
+
+ if (mem->is_flushed == FALSE) {
+ global_cache_flush();
+ mem->is_flushed = TRUE;
+ }
+
+ last_page = NULL;
+ for (i = 0; i < count; i++) {
+ int index = pg_start + i;
+ unsigned long insert = mem->memory[i];
+
+ page = (unsigned int *) efficeon_private.l1_table[index >> 10];
+
+ if (!page)
+ continue;
+
+ page += (index & 0x3ff);
+ *page = insert;
+
+ /* clflush is slow, so don't clflush until we have to */
+ if ( last_page &&
+ ((unsigned long)page^(unsigned long)last_page) & clflush_mask )
+ asm volatile("clflush %0" : : "m" (*last_page));
+
+ last_page = page;
+ }
+
+ if ( last_page )
+ asm volatile("clflush %0" : : "m" (*last_page));
+
+ agp_bridge->driver->tlb_flush(mem);
+ return 0;
+}
+
+static int efficeon_remove_memory(struct agp_memory * mem, off_t pg_start, int type)
+{
+ int i, count = mem->page_count, num_entries;
+
+ printk(KERN_DEBUG PFX "efficeon_remove_memory(%lx, %d)\n", pg_start, count);
+
+ num_entries = A_SIZE_LVL2(agp_bridge->current_size)->num_entries;
+
+ if ((pg_start + mem->page_count) > num_entries)
+ return -EINVAL;
+ if (type != 0 || mem->type != 0)
+ return -EINVAL;
+
+ for (i = 0; i < count; i++) {
+ int index = pg_start + i;
+ unsigned int *page = (unsigned int *) efficeon_private.l1_table[index >> 10];
+
+ if (!page)
+ continue;
+ page += (index & 0x3ff);
+ *page = 0;
+ }
+ agp_bridge->driver->tlb_flush(mem);
+ return 0;
+}
+
+/* GATT entry: (physical address | 1) */
+static unsigned long efficeon_mask_memory(unsigned long addr, int type)
+{
+ /* Memory type is ignored */
+
+ return addr | agp_bridge->driver->masks[0].mask;
+}
+
+struct agp_bridge_driver efficeon_driver = {
+ .owner = THIS_MODULE,
+ .aperture_sizes = efficeon_generic_sizes,
+ .size_type = LVL2_APER_SIZE,
+ .num_aperture_sizes = 4,
+ .configure = efficeon_configure,
+ .fetch_size = efficeon_fetch_size,
+ .cleanup = efficeon_cleanup,
+ .tlb_flush = efficeon_tlbflush,
+ .mask_memory = efficeon_mask_memory,
+ .masks = efficeon_generic_masks,
+ .agp_enable = agp_generic_enable,
+ .cache_flush = global_cache_flush,
+
+ // Efficeon-specific GATT table setup / populate / teardown
+ .create_gatt_table = efficeon_create_gatt_table,
+ .free_gatt_table = efficeon_free_gatt_table,
+ .insert_memory = efficeon_insert_memory,
+ .remove_memory = efficeon_remove_memory,
+ .cant_use_aperture = 0, // 1 might be faster?
+
+ // Generic
+ .alloc_by_type = agp_generic_alloc_by_type,
+ .free_by_type = agp_generic_free_by_type,
+ .agp_alloc_page = agp_generic_alloc_page,
+ .agp_destroy_page = agp_generic_destroy_page,
+};
+
+
+static int agp_efficeon_resume(struct pci_dev *pdev)
+{
+ printk(KERN_DEBUG PFX "agp_efficeon_resume()\n");
+ return efficeon_configure();
+}
+
+static int __devinit agp_efficeon_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct agp_bridge_data *bridge;
+ u8 cap_ptr;
+ struct resource *r;
+
+ cap_ptr = pci_find_capability(pdev, PCI_CAP_ID_AGP);
+ if (!cap_ptr)
+ return -ENODEV;
+
+ /* Probe for Efficeon controller */
+ if (pdev->device != PCI_DEVICE_ID_EFFICEON) {
+ printk(KERN_ERR PFX "Unsupported Efficeon chipset (device id: %04x)\n",
+ pdev->device);
+ return -ENODEV;
+ }
+
+ printk(KERN_INFO PFX "Detected Transmeta Efficeon TM8000 series chipset\n");
+
+ bridge = agp_alloc_bridge();
+ if (!bridge)
+ return -ENOMEM;
+
+ bridge->driver = &efficeon_driver;
+ bridge->dev = pdev;
+ bridge->capndx = cap_ptr;
+
+ /*
+ * The following fixes the case where the BIOS has "forgotten" to
+ * provide an address range for the GART.
+ * 20030610 - hamish@zot.org
+ */
+ r = &pdev->resource[0];
+ if (!r->start && r->end) {
+ if(pci_assign_resource(pdev, 0)) {
+ printk(KERN_ERR PFX "could not assign resource 0\n");
+ return (-ENODEV);
+ }
+ }
+
+ /*
+	 * If the device has not been properly set up, the following will catch
+ * the problem and should stop the system from crashing.
+ * 20030610 - hamish@zot.org
+ */
+ if (pci_enable_device(pdev)) {
+		printk(KERN_ERR PFX "Unable to enable PCI device\n");
+ return (-ENODEV);
+ }
+
+ /* Fill in the mode register */
+ if (cap_ptr) {
+ pci_read_config_dword(pdev,
+ bridge->capndx+PCI_AGP_STATUS,
+ &bridge->mode);
+ }
+
+ pci_set_drvdata(pdev, bridge);
+ return agp_add_bridge(bridge);
+}
+
+static void __devexit agp_efficeon_remove(struct pci_dev *pdev)
+{
+ struct agp_bridge_data *bridge = pci_get_drvdata(pdev);
+
+ agp_remove_bridge(bridge);
+ agp_put_bridge(bridge);
+}
+
+static int agp_efficeon_suspend(struct pci_dev *dev, u32 state)
+{
+ return 0;
+}
+
+
+static struct pci_device_id agp_efficeon_pci_table[] = {
+ {
+ .class = (PCI_CLASS_BRIDGE_HOST << 8),
+ .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_TRANSMETA,
+ .device = PCI_ANY_ID,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ },
+ { }
+};
+
+MODULE_DEVICE_TABLE(pci, agp_efficeon_pci_table);
+
+static struct pci_driver agp_efficeon_pci_driver = {
+ .name = "agpgart-efficeon",
+ .id_table = agp_efficeon_pci_table,
+ .probe = agp_efficeon_probe,
+ .remove = agp_efficeon_remove,
+ .suspend = agp_efficeon_suspend,
+ .resume = agp_efficeon_resume,
+};
+
+static int __init agp_efficeon_init(void)
+{
+	static int agp_initialised = 0;
+
+	if (agp_initialised == 1)
+		return 0;
+	agp_initialised = 1;
+
+ return pci_module_init(&agp_efficeon_pci_driver);
+}
+
+static void __exit agp_efficeon_cleanup(void)
+{
+ pci_unregister_driver(&agp_efficeon_pci_driver);
+}
+
+module_init(agp_efficeon_init);
+module_exit(agp_efficeon_cleanup);
+
+MODULE_AUTHOR("Carlos Puchol <cpglinux@puchol.com>");
+MODULE_LICENSE("GPL and additional rights");
/* local variables
*/
-static int keep_module_locked = 1;
-
static void *zftc_wrk_mem = NULL;
static __u8 *zftc_buf = NULL;
static void *zftc_scratch_buf = NULL;
static void zftc_lock(void)
{
- MOD_INC_USE_COUNT; /* sets MOD_VISITED and MOD_USED_ONCE,
- * locking is done with can_unload()
- */
- keep_module_locked = 1;
}
/* this function is needed for zftape_reset_position in zftape-io.c
memset((void *)&cseg, '\0', sizeof(cseg));
zftc_stats();
- keep_module_locked = 0;
TRACE_EXIT;
}
int buf_pos_write = pos->seg_byte_pos;
TRACE_FUN(ft_t_flow);
- keep_module_locked = 1;
- MOD_INC_USE_COUNT; /* sets MOD_VISITED and MOD_USED_ONCE,
- * locking is done with can_unload()
- */
	/* Note: we do not unlock the module because
	 * there are some values cached in that `cseg' variable. We
	 * don't want to use this information when being
int remaining = to_do;
TRACE_FUN(ft_t_flow);
- keep_module_locked = 1;
- MOD_INC_USE_COUNT; /* sets MOD_VISITED and MOD_USED_ONCE,
- * locking is done with can_unload()
- */
TRACE_CATCH(zft_allocate_cmpr_mem(volume->blk_sz),);
if (pos->seg_byte_pos == 0) {
/* new segment just read
int fast_seek_trials = 0;
TRACE_FUN(ft_t_flow);
- keep_module_locked = 1;
- MOD_INC_USE_COUNT; /* sets MOD_VISITED and MOD_USED_ONCE,
- * locking is done with can_unload()
- */
if (new_block_pos == 0) {
pos->seg_pos = volume->start_seg;
pos->seg_byte_pos = 0;
*/
int init_module(void)
{
- int result;
-
-#if 0 /* FIXME --RR */
- if (!mod_member_present(&__this_module, can_unload))
- return -EBUSY;
- __this_module.can_unload = can_unload;
-#endif
- result = zft_compressor_init();
- keep_module_locked = 0;
- return result;
+ return zft_compressor_init();
}
-/* Called by modules package when removing the driver
- */
-void cleanup_module(void)
-{
- TRACE_FUN(ft_t_flow);
-
- if (zft_cmpr_unregister() != &cmpr_ops) {
- TRACE(ft_t_info, "failed");
- } else {
- TRACE(ft_t_info, "successful");
- }
- zftc_cleanup();
- printk(KERN_INFO "zft-compressor successfully unloaded.\n");
- TRACE_EXIT;
-}
#endif /* MODULE */
MODULE_AUTHOR("Richard Zidlicky");
MODULE_LICENSE("GPL");
-
+MODULE_ALIAS_MISCDEV(RTC_MINOR);
* console warning.
 * When the driver is loaded as a module these settings can be overridden on the
- * modprobe command line or on an option line in /etc/modules.conf.
+ * modprobe command line or on an option line in /etc/modprobe.conf.
* If the driver is built-in the configuration must be
* set here for ISA cards and address set to 1 and 2 for PCI and EISA.
*
/* this structure is zeroed out because the suggested method is to configure
* the driver as a module, set up the parameters with an options line in
- * /etc/modules.conf and load with modprobe, kerneld or kmod, the kernel
+ * /etc/modprobe.conf and load with modprobe or kmod, the kernel
* module loader
*/
* You can find the original tools for this direct from Multitech
* ftp://ftp.multitech.com/ISI-Cards/
*
- * Having installed the cards the module options (/etc/modules.conf)
+ * Having installed the cards the module options (/etc/modprobe.conf)
*
* options isicom io=card1,card2,card3,card4 irq=card1,card2,card3,card4
*
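The documentation hunks above retarget module configuration from modutils' /etc/modules.conf to the 2.6 module-init-tools /etc/modprobe.conf; the `options` and `alias` directive syntax is unchanged. A sketch of an equivalent fragment (the I/O and IRQ values are placeholders, not taken from this patch):

```shell
# /etc/modprobe.conf (module-init-tools, 2.6 kernels)
options isicom io=0x210,0x218 irq=11,12
# Static aliases like the next line are becoming unnecessary: in-kernel
# annotations such as the MODULE_ALIAS_MISCDEV(RTC_MINOR) added above
# export the same device-to-module mapping automatically.
alias char-major-10-135 rtc
```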
#include <linux/config.h>
#include <linux/mm.h>
#include <linux/miscdevice.h>
-#include <linux/tpqic02.h>
-#include <linux/ftape.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mman.h>
if (!bdev)
goto out;
igrab(bdev->bd_inode);
- err = blkdev_get(bdev, filp->f_mode, 0, BDEV_RAW);
+ err = blkdev_get(bdev, filp->f_mode, 0);
if (err)
goto out;
err = bd_claim(bdev, raw_open);
out2:
bd_release(bdev);
out1:
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
out:
up(&raw_mutex);
return err;
up(&raw_mutex);
bd_release(bdev);
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
return 0;
}
va_list args;
va_start(args, fmt);
- printed_len = vsnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+ printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
early_printk_sn_sal(printk_buf, printed_len);
va_end(args);
return printed_len;
spin_lock_irqsave(&consoleloglock, flags);
va_start(args, fmt);
- i = vsnprintf(buf, sizeof(buf) - 1, fmt, args);
+ i = vscnprintf(buf, sizeof(buf) - 1, fmt, args);
va_end(args);
buf[i++] = '\r';
HvCall_writeLogBuffer(buf, i);
else if (policy->policy == CPUFREQ_POLICY_PERFORMANCE)
return sprintf(buf, "performance\n");
else if (policy->governor)
- return snprintf(buf, CPUFREQ_NAME_LEN, "%s\n", policy->governor->name);
+ return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", policy->governor->name);
return -EINVAL;
}
*/
static ssize_t show_scaling_driver (struct cpufreq_policy * policy, char *buf)
{
- return snprintf(buf, CPUFREQ_NAME_LEN, "%s\n", cpufreq_driver->name);
+ return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", cpufreq_driver->name);
}
/**
list_for_each_entry(t, &cpufreq_governor_list, governor_list) {
if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) - (CPUFREQ_NAME_LEN + 2)))
goto out;
- i += snprintf(&buf[i], CPUFREQ_NAME_LEN, "%s ", t->name);
+ i += scnprintf(&buf[i], CPUFREQ_NAME_LEN, "%s ", t->name);
}
out:
i += sprintf(&buf[i], "\n");
break;
}
} else
- p += snprintf(p, CPUFREQ_NAME_LEN, "%s\n", policy.governor->name);
+ p += scnprintf(p, CPUFREQ_NAME_LEN, "%s\n", policy.governor->name);
}
end:
len = (p - page);
printk("%s: Can't allocate a cdrom structure\n", drive->name);
goto failed;
}
- if (ide_register_subdriver(drive, &ide_cdrom_driver, IDE_SUBDRIVER_VERSION)) {
+ if (ide_register_subdriver(drive, &ide_cdrom_driver)) {
printk("%s: Failed to register the driver with ide.c\n",
drive->name);
kfree(info);
static int idedefault_attach (ide_drive_t *drive)
{
- if (ide_register_subdriver(drive,
- &idedefault_driver, IDE_SUBDRIVER_VERSION)) {
+ if (ide_register_subdriver(drive, &idedefault_driver)) {
printk(KERN_ERR "ide-default: %s: Failed to register the "
"driver with ide.c\n", drive->name);
return 1;
ide_end_drive_cmd(drive, stat, err);
return ide_stopped;
}
-#if 0
- else if (rq->flags & REQ_DRIVE_TASKFILE) {
- rq->errors = 1;
- ide_end_taskfile(drive, stat, err);
- return ide_stopped;
- }
-#endif
#ifdef CONFIG_IDE_TASKFILE_IO
/* make rq completion pointers new submission pointers */
blk_rq_prep_restart(rq);
.busy = 0,
.supports_dsc_overlap = 0,
.cleanup = idedisk_cleanup,
- .flushcache = do_idedisk_flushcache,
.do_request = ide_do_rw_disk,
.sense = idedisk_dump_status,
.error = idedisk_error,
if (drive->media != ide_disk)
goto failed;
- if (ide_register_subdriver (drive, &idedisk_driver, IDE_SUBDRIVER_VERSION)) {
+ if (ide_register_subdriver(drive, &idedisk_driver)) {
printk (KERN_ERR "ide-disk: %s: Failed to register the driver with ide.c\n", drive->name);
goto failed;
}
printk (KERN_ERR "ide-floppy: %s: Can't allocate a floppy structure\n", drive->name);
goto failed;
}
- if (ide_register_subdriver (drive, &idefloppy_driver, IDE_SUBDRIVER_VERSION)) {
+ if (ide_register_subdriver(drive, &idefloppy_driver)) {
printk (KERN_ERR "ide-floppy: %s: Failed to register the driver with ide.c\n", drive->name);
kfree (floppy);
goto failed;
if (rq->flags & REQ_DRIVE_TASKFILE) {
rq->errors = 1;
ide_end_drive_cmd(drive, stat, err);
-// ide_end_taskfile(drive, stat, err);
return ide_stopped;
}
if (rq->flags & REQ_DRIVE_TASKFILE) {
rq->errors = 1;
ide_end_drive_cmd(drive, BUSY_STAT, 0);
-// ide_end_taskfile(drive, BUSY_STAT, 0);
return ide_stopped;
}
*/
if (id->config & (1<<7))
drive->removable = 1;
-
- /*
- * Prevent long system lockup probing later for non-existant
- * slave drive if the hwif is actually a flash memory card of
- * some variety:
- */
- drive->is_flash = 0;
- if (drive_is_flashcard(drive)) {
-#if 0
- /* The new IDE adapter widgets don't follow this heuristic
- so we must nowdays just bite the bullet and take the
- probe hit */
- ide_drive_t *mate = &hwif->drives[1^drive->select.b.unit];
- ide_drive_t *mate = &hwif->drives[1^drive->select.b.unit];
- if (!mate->ata_flash) {
- mate->present = 0;
- mate->noprobe = 1;
- }
-#endif
+
+ if (drive_is_flashcard(drive))
drive->is_flash = 1;
- }
drive->media = ide_disk;
printk("%s DISK drive\n", (drive->is_flash) ? "CFA" : "ATA" );
QUIRK_LIST(drive);
return -EINVAL;
}
-static struct proc_dir_entry * proc_ide_root = NULL;
-
-#ifdef CONFIG_BLK_DEV_IDEPCI
-#include <linux/delay.h>
-/*
- * This is the list of registered PCI chipset driver data structures.
- */
-static ide_pci_host_proc_t * ide_pci_host_proc_list;
-
-#endif /* CONFIG_BLK_DEV_IDEPCI */
-
static int proc_ide_write_config
(struct file *file, const char *buffer, unsigned long count, void *data)
{
}
}
+EXPORT_SYMBOL(create_proc_ide_interfaces);
+
#ifdef CONFIG_BLK_DEV_IDEPCI
-void ide_pci_register_host_proc (ide_pci_host_proc_t *p)
+void ide_pci_create_host_proc(const char *name, get_info_t *get_info)
{
- ide_pci_host_proc_t *tmp;
-
- if (!p) return;
- p->next = NULL;
- p->set = 1;
- if (ide_pci_host_proc_list) {
- tmp = ide_pci_host_proc_list;
- while (tmp->next) tmp = tmp->next;
- tmp->next = p;
- } else
- ide_pci_host_proc_list = p;
+ create_proc_info_entry(name, 0, proc_ide_root, get_info);
}
-EXPORT_SYMBOL(ide_pci_register_host_proc);
-
-#endif /* CONFIG_BLK_DEV_IDEPCI */
-
-EXPORT_SYMBOL(create_proc_ide_interfaces);
+EXPORT_SYMBOL_GPL(ide_pci_create_host_proc);
+#endif
void destroy_proc_ide_interfaces(void)
{
void proc_ide_create(void)
{
-#ifdef CONFIG_BLK_DEV_IDEPCI
- ide_pci_host_proc_t *p = ide_pci_host_proc_list;
-#endif /* CONFIG_BLK_DEV_IDEPCI */
struct proc_dir_entry *entry;
- proc_ide_root = proc_mkdir("ide", 0);
- if (!proc_ide_root) return;
+
+ if (!proc_ide_root)
+ return;
create_proc_ide_interfaces();
entry = create_proc_entry("drivers", 0, proc_ide_root);
if (entry)
entry->proc_fops = &ide_drivers_operations;
-
-#ifdef CONFIG_BLK_DEV_IDEPCI
- while (p != NULL)
- {
- if (p->name != NULL && p->set == 1 && p->get_info != NULL)
- {
- p->parent = proc_ide_root;
- create_proc_info_entry(p->name, 0, p->parent, p->get_info);
- p->set = 2;
- }
- p = p->next;
- }
-#endif /* CONFIG_BLK_DEV_IDEPCI */
}
EXPORT_SYMBOL(proc_ide_create);
void proc_ide_destroy(void)
{
-#ifdef CONFIG_BLK_DEV_IDEPCI
- ide_pci_host_proc_t *p;
-
- for (p = ide_pci_host_proc_list; p; p = p->next) {
- if (p->set == 2)
- remove_proc_entry(p->name, p->parent);
- }
-#endif /* CONFIG_BLK_DEV_IDEPCI */
remove_proc_entry("ide/drivers", proc_ide_root);
destroy_proc_ide_interfaces();
remove_proc_entry("ide", 0);
#include <asm/bitops.h>
/*
- * OnStream support
- */
-#define ONSTREAM_DEBUG (0)
-#define OS_CONFIG_PARTITION (0xff)
-#define OS_DATA_PARTITION (0)
-#define OS_PARTITION_VERSION (1)
-#define OS_EW 300
-#define OS_ADR_MINREV 2
-
-#define OS_DATA_STARTFRAME1 20
-#define OS_DATA_ENDFRAME1 2980
-/*
* partition
*/
typedef struct os_partition_s {
os_dat_entry_t dat_list[16];
} os_dat_t;
-/*
- * Frame types
- */
-#define OS_FRAME_TYPE_FILL (0)
-#define OS_FRAME_TYPE_EOD (1 << 0)
-#define OS_FRAME_TYPE_MARKER (1 << 1)
-#define OS_FRAME_TYPE_HEADER (1 << 3)
-#define OS_FRAME_TYPE_DATA (1 << 7)
-
-/*
- * AUX
- */
-typedef struct os_aux_s {
- __u32 format_id; /* hardware compatibility AUX is based on */
- char application_sig[4]; /* driver used to write this media */
- __u32 hdwr; /* reserved */
- __u32 update_frame_cntr; /* for configuration frame */
- __u8 frame_type;
- __u8 frame_type_reserved;
- __u8 reserved_18_19[2];
- os_partition_t partition;
- __u8 reserved_36_43[8];
- __u32 frame_seq_num;
- __u32 logical_blk_num_high;
- __u32 logical_blk_num;
- os_dat_t dat;
- __u8 reserved188_191[4];
- __u32 filemark_cnt;
- __u32 phys_fm;
- __u32 last_mark_addr;
- __u8 reserved204_223[20];
-
- /*
- * __u8 app_specific[32];
- *
- * Linux specific fields:
- */
- __u32 next_mark_addr; /* when known, points to next marker */
- __u8 linux_specific[28];
-
- __u8 reserved_256_511[256];
-} os_aux_t;
-
-typedef struct os_header_s {
- char ident_str[8];
- __u8 major_rev;
- __u8 minor_rev;
- __u8 reserved10_15[6];
- __u8 par_num;
- __u8 reserved1_3[3];
- os_partition_t partition;
-} os_header_t;
-
-/*
- * OnStream Tape Parameters Page
- */
-typedef struct {
- unsigned page_code :6; /* Page code - Should be 0x2b */
- unsigned reserved1_6 :1;
- unsigned ps :1;
- __u8 reserved2;
- __u8 density; /* kbpi */
- __u8 reserved3,reserved4;
- __u16 segtrk; /* segment of per track */
- __u16 trks; /* tracks per tape */
- __u8 reserved5,reserved6,reserved7,reserved8,reserved9,reserved10;
-} onstream_tape_paramtr_page_t;
-
-/*
- * OnStream ADRL frame
- */
-#define OS_FRAME_SIZE (32 * 1024 + 512)
-#define OS_DATA_SIZE (32 * 1024)
-#define OS_AUX_SIZE (512)
-
-/*
- * internal error codes for onstream
- */
-#define OS_PART_ERROR 2
-#define OS_WRITE_ERROR 1
-
#include <linux/mtio.h>
/**************************** Tunable parameters *****************************/
struct request rq; /* The corresponding request */
struct idetape_bh *bh; /* The data buffers */
struct idetape_stage_s *next; /* Pointer to the next stage */
- os_aux_t *aux; /* OnStream aux ptr */
} idetape_stage_t;
/*
char write_prot;
/*
- * OnStream flags
- */
- /* the tape is an OnStream tape */
- int onstream;
- /* OnStream raw access (32.5KB block size) */
- int raw;
- /* current number of frames in internal buffer */
- int cur_frames;
- /* max number of frames in internal buffer */
- int max_frames;
- /* logical block number */
- int logical_blk_num;
- /* write pass counter */
- __u16 wrt_pass_cntr;
- /* update frame counter */
- __u32 update_frame_cntr;
- struct completion *waiting;
- /* write error recovery active */
- int onstream_write_error;
- /* header frame verified ok */
- int header_ok;
- /* reading linux-specific media */
- int linux_media;
- int linux_media_version;
- /* application signature */
- char application_sig[5];
- int filemark_cnt;
- int first_mark_addr;
- int last_mark_addr;
- int eod_frame_addr;
- unsigned long cmd_start_time;
- unsigned long max_cmd_time;
- unsigned capacity;
-
- /*
- * Optimize the number of "buffer filling"
- * mode sense commands.
- */
- /* last time in which we issued fill cmd */
- unsigned long last_buffer_fill;
- /* buffer fill command requested */
- int req_buffer_fill;
- int writes_since_buffer_fill;
- int reads_since_buffer_fill;
-
- /*
* Limit the number of times a request can
* be postponed, to avoid an infinite postpone
* deadlock.
* Function declarations
*
*/
-static void idetape_onstream_mode_sense_tape_parameter_page(ide_drive_t *drive, int debug);
static int idetape_chrdev_release (struct inode *inode, struct file *filp);
static void idetape_write_release (ide_drive_t *drive, unsigned int minor);
#endif /* IDETAPE_DEBUG_LOG_VERBOSE */
#endif /* IDETAPE_DEBUG_LOG */
- if (tape->onstream && result->sense_key == 2 &&
- result->asc == 0x53 && result->ascq == 2) {
- clear_bit(PC_DMA_ERROR, &pc->flags);
- ide_stall_queue(drive, HZ / 2);
- return;
- }
-
/*
* Correct pc->actually_transferred by asking the tape.
*/
set_bit(PC_ABORT, &pc->flags);
}
if (!test_bit(PC_ABORT, &pc->flags) &&
- (tape->onstream || pc->actually_transferred))
+ pc->actually_transferred)
pc->retries = IDETAPE_MAX_PC_RETRIES + 1;
}
}
int error;
int remove_stage = 0;
idetape_stage_t *active_stage;
-#if ONSTREAM_DEBUG
- idetape_stage_t *stage;
- os_aux_t *aux;
- unsigned char *p;
-#endif
#if IDETAPE_DEBUG_LOG
if (tape->debug_level >= 4)
tape->active_data_request = NULL;
tape->nr_pending_stages--;
if (rq->cmd[0] & REQ_IDETAPE_WRITE) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2) {
- if (tape->onstream) {
- stage = tape->first_stage;
- aux = stage->aux;
- p = stage->bh->b_data;
- if (ntohl(aux->logical_blk_num) < 11300 && ntohl(aux->logical_blk_num) > 11100)
- printk(KERN_INFO "ide-tape: finished writing logical blk %u (data %x %x %x %x)\n", ntohl(aux->logical_blk_num), *p++, *p++, *p++, *p++);
- }
- }
-#endif
- if (tape->onstream && !tape->raw) {
- if (tape->first_frame_position == OS_DATA_ENDFRAME1) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk("ide-tape: %s: skipping over config partition.\n", tape->name);
-#endif
- tape->onstream_write_error = OS_PART_ERROR;
- if (tape->waiting) {
- rq->waiting = NULL;
- complete(tape->waiting);
- }
- }
- }
remove_stage = 1;
if (error) {
set_bit(IDETAPE_PIPELINE_ERROR, &tape->flags);
if (error == IDETAPE_ERROR_EOD)
idetape_abort_pipeline(drive, active_stage);
- if (tape->onstream && !tape->raw &&
- error == IDETAPE_ERROR_GENERAL &&
- tape->sense.sense_key == 3) {
- clear_bit(IDETAPE_PIPELINE_ERROR, &tape->flags);
- printk(KERN_ERR "ide-tape: %s: write error, enabling error recovery\n", tape->name);
- tape->onstream_write_error = OS_WRITE_ERROR;
- remove_stage = 0;
- tape->nr_pending_stages++;
- tape->next_stage = tape->first_stage;
- rq->current_nr_sectors = rq->nr_sectors;
- if (tape->waiting) {
- rq->waiting = NULL;
- complete(tape->waiting);
- }
- }
}
} else if (rq->cmd[0] & REQ_IDETAPE_READ) {
if (error == IDETAPE_ERROR_EOD) {
idetape_abort_pipeline(drive, active_stage);
}
}
- if (tape->next_stage != NULL && !tape->onstream_write_error) {
+ if (tape->next_stage != NULL) {
idetape_active_next_stage(drive);
/*
*/
(void) ide_do_drive_cmd(drive, tape->active_data_request, ide_end);
} else if (!error) {
- if (!tape->onstream)
idetape_increase_max_pipeline_stages(drive);
}
}
idetape_pc_t *pc = tape->pc;
unsigned int temp;
- unsigned long cmd_time;
#if SIMULATE_ERRORS
static int error_sim_count = 0;
#endif
/* No more interrupts */
if (!status.b.drq) {
- cmd_time = (jiffies - tape->cmd_start_time) * 1000 / HZ;
- tape->max_cmd_time = max(cmd_time, tape->max_cmd_time);
#if IDETAPE_DEBUG_LOG
if (tape->debug_level >= 2)
printk(KERN_INFO "ide-tape: Packet command completed, %d bytes transferred\n", pc->actually_transferred);
return idetape_retry_pc(drive);
}
pc->error = 0;
- if (!tape->onstream &&
- test_bit(PC_WAIT_FOR_DSC, &pc->flags) &&
+ if (test_bit(PC_WAIT_FOR_DSC, &pc->flags) &&
!status.b.dsc) {
/* Media access command */
tape->dsc_polling_start = jiffies;
"a packet command\n");
return ide_do_reset(drive);
}
- tape->cmd_start_time = jiffies;
/* Set the interrupt routine */
ide_set_handler(drive, &idetape_pc_intr, IDETAPE_WAIT_CMD, NULL);
#ifdef CONFIG_BLK_DEV_IDEDMA
tape->name, pc->c[0],
tape->sense_key, tape->asc,
tape->ascq);
- if (tape->onstream &&
- pc->c[0] == IDETAPE_READ_CMD &&
- tape->sense_key == 3 &&
- tape->asc == 0x11)
- /* AJN-1: 11 should be 0x11 */
- printk(KERN_ERR "ide-tape: %s: enabling read error recovery\n", tape->name);
}
/* Giving up */
pc->error = IDETAPE_ERROR_GENERAL;
pc->callback = &idetape_pc_callback;
}
-static ide_startstop_t idetape_onstream_buffer_fill_callback (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
-
- tape->max_frames = tape->pc->buffer[4 + 2];
- tape->cur_frames = tape->pc->buffer[4 + 3];
- if (tape->chrdev_direction == idetape_direction_write)
- tape->tape_head = tape->buffer_head - tape->cur_frames;
- else
- tape->tape_head = tape->buffer_head + tape->cur_frames;
- if (tape->tape_head != tape->last_tape_head) {
- tape->last_tape_head = tape->tape_head;
- tape->tape_still_time_begin = jiffies;
- if (tape->tape_still_time > 200)
- tape->measure_insert_time = 1;
- }
- tape->tape_still_time = (jiffies - tape->tape_still_time_begin) * 1000 / HZ;
-#if USE_IOTRACE
- IO_trace(IO_IDETAPE_FIFO, tape->pipeline_head, tape->buffer_head,
- tape->tape_head, tape->minor);
-#endif
-#if IDETAPE_DEBUG_LOG
- if (tape->debug_level >= 1)
- printk(KERN_INFO "ide-tape: buffer fill callback, %d/%d\n",
- tape->cur_frames, tape->max_frames);
-#endif
- idetape_end_request(drive, tape->pc->error ? 0 : 1, 0);
- return ide_stopped;
-}
-
-static void idetape_queue_onstream_buffer_fill (ide_drive_t *drive)
-{
- idetape_pc_t *pc;
- struct request *rq;
-
- pc = idetape_next_pc_storage(drive);
- rq = idetape_next_rq_storage(drive);
- idetape_create_mode_sense_cmd(pc, IDETAPE_BUFFER_FILLING_PAGE);
- pc->callback = idetape_onstream_buffer_fill_callback;
- idetape_queue_pc_head(drive, pc, rq);
-}
-
static void calculate_speeds(ide_drive_t *drive)
{
idetape_tape_t *tape = drive->driver_data;
idetape_pc_t *pc = tape->pc;
atapi_status_t status;
- if (tape->onstream)
- printk(KERN_INFO "ide-tape: bug: onstream, media_access_finished\n");
status.all = HWIF(drive)->INB(IDE_STATUS_REG);
if (status.b.dsc) {
if (status.b.check) {
static void idetape_create_read_cmd(idetape_tape_t *tape, idetape_pc_t *pc, unsigned int length, struct idetape_bh *bh)
{
- struct idetape_bh *p = bh;
idetape_init_pc(pc);
pc->c[0] = IDETAPE_READ_CMD;
put_unaligned(htonl(length), (unsigned int *) &pc->c[1]);
pc->bh = bh;
atomic_set(&bh->b_count, 0);
pc->buffer = NULL;
- if (tape->onstream) {
- while (p) {
- atomic_set(&p->b_count, 0);
- p = p->b_reqnext;
- }
- }
- if (!tape->onstream) {
- pc->request_transfer = pc->buffer_size = length * tape->tape_block_size;
- if (pc->request_transfer == tape->stage_size)
- set_bit(PC_DMA_RECOMMENDED, &pc->flags);
- } else {
- if (length) {
- pc->request_transfer = pc->buffer_size = 32768 + 512;
- set_bit(PC_DMA_RECOMMENDED, &pc->flags);
- } else
- pc->request_transfer = 0;
- }
+ pc->request_transfer = pc->buffer_size = length * tape->tape_block_size;
+ if (pc->request_transfer == tape->stage_size)
+ set_bit(PC_DMA_RECOMMENDED, &pc->flags);
}
static void idetape_create_read_buffer_cmd(idetape_tape_t *tape, idetape_pc_t *pc, unsigned int length, struct idetape_bh *bh)
static void idetape_create_write_cmd(idetape_tape_t *tape, idetape_pc_t *pc, unsigned int length, struct idetape_bh *bh)
{
- struct idetape_bh *p = bh;
-
idetape_init_pc(pc);
pc->c[0] = IDETAPE_WRITE_CMD;
put_unaligned(htonl(length), (unsigned int *) &pc->c[1]);
pc->c[1] = 1;
pc->callback = &idetape_rw_callback;
set_bit(PC_WRITING, &pc->flags);
- if (tape->onstream) {
- while (p) {
- atomic_set(&p->b_count, p->b_size);
- p = p->b_reqnext;
- }
- }
pc->bh = bh;
pc->b_data = bh->b_data;
pc->b_count = atomic_read(&bh->b_count);
pc->buffer = NULL;
- if (!tape->onstream) {
- pc->request_transfer = pc->buffer_size = length * tape->tape_block_size;
- if (pc->request_transfer == tape->stage_size)
- set_bit(PC_DMA_RECOMMENDED, &pc->flags);
- } else {
- if (length) {
- pc->request_transfer = pc->buffer_size = 32768 + 512;
- set_bit(PC_DMA_RECOMMENDED, &pc->flags);
- } else
- pc->request_transfer = 0;
- }
+ pc->request_transfer = pc->buffer_size = length * tape->tape_block_size;
+ if (pc->request_transfer == tape->stage_size)
+ set_bit(PC_DMA_RECOMMENDED, &pc->flags);
}
/*
*/
status.all = HWIF(drive)->INB(IDE_STATUS_REG);
- /*
- * The OnStream tape drive doesn't support DSC. Assume
- * that DSC is always set.
- */
- if (tape->onstream)
- status.b.dsc = 1;
if (!drive->dsc_overlap && !(rq->cmd[0] & REQ_IDETAPE_PC2))
set_bit(IDETAPE_IGNORE_DSC, &tape->flags);
- /*
- * For the OnStream tape, check the current status of the tape
- * internal buffer using data gathered from the buffer fill
- * mode page, and postpone our request, effectively "disconnecting"
- * from the IDE bus, in case the buffer is full (writing) or
- * empty (reading), and there is a danger that our request will
- * hold the IDE bus during actual media access.
- */
if (tape->tape_still_time > 100 && tape->tape_still_time < 200)
tape->measure_insert_time = 1;
- if (tape->req_buffer_fill &&
- (rq->cmd[0] & (REQ_IDETAPE_WRITE | REQ_IDETAPE_READ))) {
- tape->req_buffer_fill = 0;
- tape->writes_since_buffer_fill = 0;
- tape->reads_since_buffer_fill = 0;
- tape->last_buffer_fill = jiffies;
- idetape_queue_onstream_buffer_fill(drive);
- if (time_after(jiffies, tape->insert_time))
- tape->insert_speed = tape->insert_size / 1024 * HZ / (jiffies - tape->insert_time);
- return ide_stopped;
- }
if (time_after(jiffies, tape->insert_time))
tape->insert_speed = tape->insert_size / 1024 * HZ / (jiffies - tape->insert_time);
calculate_speeds(drive);
- if (tape->onstream && tape->max_frames &&
- (((rq->cmd[0] & REQ_IDETAPE_WRITE) &&
- ( tape->cur_frames == tape->max_frames ||
- ( tape->speed_control && tape->cur_frames > 5 &&
- (tape->insert_speed > tape->max_insert_speed ||
- (0 /* tape->cur_frames > 30 && tape->tape_still_time > 200 */) ) ) ) ) ||
- ((rq->cmd[0] & REQ_IDETAPE_READ) &&
- ( tape->cur_frames == 0 ||
- ( tape->speed_control && (tape->cur_frames < tape->max_frames - 5) &&
- tape->insert_speed > tape->max_insert_speed ) ) && rq->nr_sectors) ) ) {
-#if IDETAPE_DEBUG_LOG
- if (tape->debug_level >= 4)
- printk(KERN_INFO "ide-tape: postponing request, "
- "cmd %ld, cur %d, max %d\n",
- rq->cmd[0], tape->cur_frames, tape->max_frames);
-#endif
- if (tape->postpone_cnt++ < 500) {
- status.b.dsc = 0;
- tape->req_buffer_fill = 1;
- }
-#if ONSTREAM_DEBUG
- else if (tape->debug_level >= 4)
- printk(KERN_INFO "ide-tape: %s: postpone_cnt %d\n",
- tape->name, tape->postpone_cnt);
-#endif
- }
if (!test_and_clear_bit(IDETAPE_IGNORE_DSC, &tape->flags) &&
!status.b.dsc) {
if (postponed_rq == NULL) {
IO_trace(IO_IDETAPE_FIFO, tape->pipeline_head, tape->buffer_head, tape->tape_head, tape->minor);
#endif
tape->postpone_cnt = 0;
- tape->reads_since_buffer_fill++;
- if (tape->onstream) {
- if (tape->cur_frames - tape->reads_since_buffer_fill <= 0)
- tape->req_buffer_fill = 1;
- if (time_after(jiffies, tape->last_buffer_fill + 5 * HZ / 100))
- tape->req_buffer_fill = 1;
- }
pc = idetape_next_pc_storage(drive);
idetape_create_read_cmd(tape, pc, rq->current_nr_sectors, (struct idetape_bh *)rq->special);
goto out;
IO_trace(IO_IDETAPE_FIFO, tape->pipeline_head, tape->buffer_head, tape->tape_head, tape->minor);
#endif
tape->postpone_cnt = 0;
- tape->writes_since_buffer_fill++;
- if (tape->onstream) {
- if (tape->cur_frames + tape->writes_since_buffer_fill >= tape->max_frames)
- tape->req_buffer_fill = 1;
- if (time_after(jiffies, tape->last_buffer_fill + 5 * HZ / 100))
- tape->req_buffer_fill = 1;
- calculate_speeds(drive);
- }
pc = idetape_next_pc_storage(drive);
idetape_create_write_cmd(tape, pc, rq->current_nr_sectors, (struct idetape_bh *)rq->special);
goto out;
bh->b_size -= tape->excess_bh_size;
if (full)
atomic_sub(tape->excess_bh_size, &bh->b_count);
- if (tape->onstream)
- stage->aux = (os_aux_t *) (bh->b_data + bh->b_size - OS_AUX_SIZE);
return stage;
abort:
__idetape_kfree_stage(stage);
static void idetape_switch_buffers (idetape_tape_t *tape, idetape_stage_t *stage)
{
struct idetape_bh *tmp;
- os_aux_t *tmp_aux;
tmp = stage->bh;
- tmp_aux = stage->aux;
stage->bh = tape->merge_stage->bh;
- stage->aux = tape->merge_stage->aux;
tape->merge_stage->bh = tmp;
- tape->merge_stage->aux = tmp_aux;
idetape_init_merge_stage(tape);
}
}
/*
- * Initialize the OnStream AUX
- */
-static void idetape_init_stage (ide_drive_t *drive, idetape_stage_t *stage, int frame_type, int logical_blk_num)
-{
- idetape_tape_t *tape = drive->driver_data;
- os_aux_t *aux = stage->aux;
- os_partition_t *par = &aux->partition;
- os_dat_t *dat = &aux->dat;
-
- if (!tape->onstream || tape->raw)
- return;
- memset(aux, 0, sizeof(*aux));
- aux->format_id = htonl(0);
- memcpy(aux->application_sig, "LIN3", 4);
- aux->hdwr = htonl(0);
- aux->frame_type = frame_type;
-
- if (frame_type == OS_FRAME_TYPE_HEADER) {
- aux->update_frame_cntr = htonl(tape->update_frame_cntr);
- par->partition_num = OS_CONFIG_PARTITION;
- par->par_desc_ver = OS_PARTITION_VERSION;
- par->wrt_pass_cntr = htons(0xffff);
- par->first_frame_addr = htonl(0);
- par->last_frame_addr = htonl(0xbb7); /* 2999 */
- aux->frame_seq_num = htonl(0);
- aux->logical_blk_num_high = htonl(0);
- aux->logical_blk_num = htonl(0);
- aux->next_mark_addr = htonl(tape->first_mark_addr);
- } else {
- aux->update_frame_cntr = htonl(0);
- par->partition_num = OS_DATA_PARTITION;
- par->par_desc_ver = OS_PARTITION_VERSION;
- par->wrt_pass_cntr = htons(tape->wrt_pass_cntr);
- par->first_frame_addr = htonl(OS_DATA_STARTFRAME1);
- par->last_frame_addr = htonl(tape->capacity);
- aux->frame_seq_num = htonl(logical_blk_num);
- aux->logical_blk_num_high = htonl(0);
- aux->logical_blk_num = htonl(logical_blk_num);
- dat->dat_sz = 8;
- dat->reserved1 = 0;
- dat->entry_cnt = 1;
- dat->reserved3 = 0;
- if (frame_type == OS_FRAME_TYPE_DATA)
- dat->dat_list[0].blk_sz = htonl(32 * 1024);
- else
- dat->dat_list[0].blk_sz = 0;
- dat->dat_list[0].blk_cnt = htons(1);
- if (frame_type == OS_FRAME_TYPE_MARKER)
- dat->dat_list[0].flags = OS_DAT_FLAGS_MARK;
- else
- dat->dat_list[0].flags = OS_DAT_FLAGS_DATA;
- dat->dat_list[0].reserved = 0;
- }
- /* shouldn't this be htonl ?? */
- aux->filemark_cnt = ntohl(tape->filemark_cnt);
- /* shouldn't this be htonl ?? */
- aux->phys_fm = ntohl(0xffffffff);
- /* shouldn't this be htonl ?? */
- aux->last_mark_addr = ntohl(tape->last_mark_addr);
-}
-
-/*
* idetape_wait_for_request installs a completion in a pending request
* and sleeps until it is serviced.
*
}
#endif /* IDETAPE_DEBUG_BUGS */
rq->waiting = &wait;
- tape->waiting = &wait;
spin_unlock_irq(&tape->spinlock);
wait_for_completion(&wait);
/* The stage and its struct request have been deallocated */
- tape->waiting = NULL;
spin_lock_irq(&tape->spinlock);
}
*/
static void idetape_create_write_filemark_cmd (ide_drive_t *drive, idetape_pc_t *pc,int write_filemark)
{
- idetape_tape_t *tape = drive->driver_data;
-
idetape_init_pc(pc);
pc->c[0] = IDETAPE_WRITE_FILEMARK_CMD;
- if (tape->onstream)
- pc->c[1] = 1; /* Immed bit */
- pc->c[4] = write_filemark; /* not used for OnStream ?? */
+ pc->c[4] = write_filemark;
set_bit(PC_WAIT_FOR_DSC, &pc->flags);
pc->callback = &idetape_pc_callback;
}
static void idetape_create_load_unload_cmd (ide_drive_t *drive, idetape_pc_t *pc,int cmd)
{
- idetape_tape_t *tape = drive->driver_data;
-
idetape_init_pc(pc);
pc->c[0] = IDETAPE_LOAD_UNLOAD_CMD;
pc->c[4] = cmd;
- if (tape->onstream) {
- pc->c[1] = 1;
- if (cmd == !IDETAPE_LU_LOAD_MASK)
- pc->c[4] = 4;
- }
set_bit(PC_WAIT_FOR_DSC, &pc->flags);
pc->callback = &idetape_pc_callback;
}
static int idetape_queue_pc_tail (ide_drive_t *drive,idetape_pc_t *pc)
{
- idetape_tape_t *tape = drive->driver_data;
- int rc;
-
- rc = __idetape_queue_pc_tail(drive, pc);
- if (rc)
- return rc;
- if (tape->onstream && test_bit(PC_WAIT_FOR_DSC, &pc->flags)) {
- /* AJN-4: Changed from 5 to 10 minutes;
- * because retension takes approx.
- * 8:20 with Onstream 30GB tape
- */
- rc = idetape_wait_ready(drive, 60 * 10 * HZ);
- }
- return rc;
+ return __idetape_queue_pc_tail(drive, pc);
}
static int idetape_flush_tape_buffers (ide_drive_t *drive)
static void idetape_create_locate_cmd (ide_drive_t *drive, idetape_pc_t *pc, unsigned int block, u8 partition, int skip)
{
- idetape_tape_t *tape = drive->driver_data;
-
idetape_init_pc(pc);
pc->c[0] = IDETAPE_LOCATE_CMD;
- if (tape->onstream)
- pc->c[1] = 1; /* Immediate bit */
- else
- pc->c[1] = 2;
+ pc->c[1] = 2;
put_unaligned(htonl(block), (unsigned int *) &pc->c[3]);
pc->c[8] = partition;
- if (tape->onstream)
- /*
- * Set SKIP bit.
- * In case of write error this will write buffered
- * data in the drive to this new position!
- */
- pc->c[9] = skip << 7;
set_bit(PC_WAIT_FOR_DSC, &pc->flags);
pc->callback = &idetape_pc_callback;
}
cnt = __idetape_discard_read_pipeline(drive);
if (restore_position) {
position = idetape_read_position(drive);
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: address %u, nr_stages %d\n", position, cnt);
-#endif
seek = position > cnt ? position - cnt : 0;
if (idetape_position_tape(drive, seek, 0, 0)) {
printk(KERN_INFO "ide-tape: %s: position_tape failed in discard_pipeline()\n", tape->name);
}
}
-static void idetape_update_stats (ide_drive_t *drive)
-{
- idetape_pc_t pc;
-
- idetape_create_mode_sense_cmd(&pc, IDETAPE_BUFFER_FILLING_PAGE);
- pc.callback = idetape_onstream_buffer_fill_callback;
- (void) idetape_queue_pc_tail(drive, &pc);
-}
-
/*
* idetape_queue_rw_tail generates a read/write request for the block
* device interface and wait for it to be serviced.
rq.special = (void *)bh;
rq.sector = tape->first_frame_position;
rq.nr_sectors = rq.current_nr_sectors = blocks;
- if (tape->onstream)
- tape->postpone_cnt = 600;
(void) ide_do_drive_cmd(drive, &rq, ide_wait);
if ((cmd & (REQ_IDETAPE_READ | REQ_IDETAPE_WRITE)) == 0)
}
/*
- * Read back the drive's internal buffer contents, as a part
- * of the write error recovery mechanism for old OnStream
- * firmware revisions.
- */
-static void idetape_onstream_read_back_buffer (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
- int frames, i, logical_blk_num;
- idetape_stage_t *stage, *first = NULL, *last = NULL;
- os_aux_t *aux;
- struct request *rq;
- unsigned char *p;
- unsigned long flags;
-
- idetape_update_stats(drive);
- frames = tape->cur_frames;
- logical_blk_num = ntohl(tape->first_stage->aux->logical_blk_num) - frames;
- printk(KERN_INFO "ide-tape: %s: reading back %d frames from the drive's internal buffer\n", tape->name, frames);
- for (i = 0; i < frames; i++) {
- stage = __idetape_kmalloc_stage(tape, 0, 0);
- if (!first)
- first = stage;
- aux = stage->aux;
- p = stage->bh->b_data;
- idetape_queue_rw_tail(drive, REQ_IDETAPE_READ_BUFFER, tape->capabilities.ctl, stage->bh);
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: read back logical block %d, data %x %x %x %x\n", tape->name, logical_blk_num, *p++, *p++, *p++, *p++);
-#endif
- rq = &stage->rq;
- idetape_init_rq(rq, REQ_IDETAPE_WRITE);
- rq->sector = tape->first_frame_position;
- rq->nr_sectors = rq->current_nr_sectors = tape->capabilities.ctl;
- idetape_init_stage(drive, stage, OS_FRAME_TYPE_DATA, logical_blk_num++);
- stage->next = NULL;
- if (last)
- last->next = stage;
- last = stage;
- }
- if (frames) {
- spin_lock_irqsave(&tape->spinlock, flags);
- last->next = tape->first_stage;
- tape->next_stage = tape->first_stage = first;
- tape->nr_stages += frames;
- tape->nr_pending_stages += frames;
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- idetape_update_stats(drive);
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: frames left in buffer: %d\n", tape->name, tape->cur_frames);
-#endif
-}
-
-/*
- * Error recovery algorithm for the OnStream tape.
- */
-static void idetape_onstream_write_error_recovery (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
- unsigned int block;
-
- if (tape->onstream_write_error == OS_WRITE_ERROR) {
- printk(KERN_ERR "ide-tape: %s: onstream_write_error_recovery: detected physical bad block at %u, logical %u first frame %u last_frame %u bufblocks %u stages %u skipping %u frames\n",
- tape->name, ntohl(tape->sense.information), tape->logical_blk_num,
- tape->first_frame_position, tape->last_frame_position,
- tape->blocks_in_buffer, tape->nr_stages,
- (ntohl(tape->sense.command_specific) >> 16) & 0xff );
- block = ntohl(tape->sense.information) + ((ntohl(tape->sense.command_specific) >> 16) & 0xff);
- idetape_update_stats(drive);
- printk(KERN_ERR "ide-tape: %s: relocating %d buffered logical blocks to physical block %u\n", tape->name, tape->cur_frames, block);
-#if 0 /* isn't once enough ??? MM */
- idetape_update_stats(drive);
-#endif
- if (tape->firmware_revision_num >= 106)
- idetape_position_tape(drive, block, 0, 1);
- else {
- idetape_onstream_read_back_buffer(drive);
- idetape_position_tape(drive, block, 0, 0);
- }
-#if 0 /* already done in idetape_position_tape MM */
- idetape_read_position(drive);
-#endif
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 1)
- printk(KERN_ERR "ide-tape: %s: positioning complete, cur_frames %d, pos %d, tape pos %d\n", tape->name, tape->cur_frames, tape->first_frame_position, tape->last_frame_position);
-#endif
- } else if (tape->onstream_write_error == OS_PART_ERROR) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 1)
- printk(KERN_INFO "ide-tape: %s: skipping over config partition\n", tape->name);
-#endif
- idetape_flush_tape_buffers(drive);
- block = idetape_read_position(drive);
- if (block != OS_DATA_ENDFRAME1)
- printk(KERN_ERR "ide-tape: warning, current position %d, expected %d\n", block, OS_DATA_ENDFRAME1);
- idetape_position_tape(drive, 0xbb8, 0, 0); /* 3000 */
- }
- tape->onstream_write_error = 0;
-}
-
-/*
* idetape_insert_pipeline_into_queue is used to start servicing the
* pipeline stages, starting from tape->next_stage.
*/
if (tape->next_stage == NULL)
return;
if (!idetape_pipeline_active(tape)) {
- if (tape->onstream_write_error)
- idetape_onstream_write_error_recovery(drive);
set_bit(IDETAPE_PIPELINE_ACTIVE, &tape->flags);
idetape_active_next_stage(drive);
(void) ide_do_drive_cmd(drive, tape->active_data_request, ide_end);
static void idetape_create_rewind_cmd (ide_drive_t *drive, idetape_pc_t *pc)
{
- idetape_tape_t *tape = drive->driver_data;
-
idetape_init_pc(pc);
pc->c[0] = IDETAPE_REWIND_CMD;
- if (tape->onstream)
- pc->c[1] = 1;
set_bit(PC_WAIT_FOR_DSC, &pc->flags);
pc->callback = &idetape_pc_callback;
}
+#if 0
static void idetape_create_mode_select_cmd (idetape_pc_t *pc, int length)
{
idetape_init_pc(pc);
pc->request_transfer = 255;
pc->callback = &idetape_pc_callback;
}
+#endif
static void idetape_create_erase_cmd (idetape_pc_t *pc)
{
pc->callback = &idetape_pc_callback;
}
-/*
- * Verify that we have the correct tape frame
- */
-static int idetape_verify_stage (ide_drive_t *drive, idetape_stage_t *stage, int logical_blk_num, int quiet)
-{
- idetape_tape_t *tape = drive->driver_data;
- os_aux_t *aux = stage->aux;
- os_partition_t *par = &aux->partition;
- struct request *rq = &stage->rq;
- struct idetape_bh *bh;
-
- if (!tape->onstream)
- return 1;
- if (tape->raw) {
- if (rq->errors) {
- bh = stage->bh;
- while (bh) {
- memset(bh->b_data, 0, bh->b_size);
- bh = bh->b_reqnext;
- }
- strcpy(stage->bh->b_data, "READ ERROR ON FRAME");
- }
- return 1;
- }
- if (rq->errors == IDETAPE_ERROR_GENERAL) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, read error\n", tape->name, tape->first_frame_position);
- return 0;
- }
- if (rq->errors == IDETAPE_ERROR_EOD) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, eod\n", tape->name, tape->first_frame_position);
- return 0;
- }
- if (ntohl(aux->format_id) != 0) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, format_id %u\n", tape->name, tape->first_frame_position, ntohl(aux->format_id));
- return 0;
- }
- if (memcmp(aux->application_sig, tape->application_sig, 4) != 0) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, incorrect application signature\n", tape->name, tape->first_frame_position);
- return 0;
- }
- if (aux->frame_type != OS_FRAME_TYPE_DATA &&
- aux->frame_type != OS_FRAME_TYPE_EOD &&
- aux->frame_type != OS_FRAME_TYPE_MARKER) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, frame type %x\n", tape->name, tape->first_frame_position, aux->frame_type);
- return 0;
- }
- if (par->partition_num != OS_DATA_PARTITION) {
- if (!tape->linux_media || tape->linux_media_version != 2) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, partition num %d\n", tape->name, tape->first_frame_position, par->partition_num);
- return 0;
- }
- }
- if (par->par_desc_ver != OS_PARTITION_VERSION) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, partition version %d\n", tape->name, tape->first_frame_position, par->par_desc_ver);
- return 0;
- }
- if (ntohs(par->wrt_pass_cntr) != tape->wrt_pass_cntr) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, wrt_pass_cntr %d (expected %d)(logical_blk_num %u)\n", tape->name, tape->first_frame_position, ntohs(par->wrt_pass_cntr), tape->wrt_pass_cntr, ntohl(aux->logical_blk_num));
- return 0;
- }
- if (aux->frame_seq_num != aux->logical_blk_num) {
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, seq != logical\n", tape->name, tape->first_frame_position);
- return 0;
- }
- if (logical_blk_num != -1 && ntohl(aux->logical_blk_num) != logical_blk_num) {
- if (!quiet)
- printk(KERN_INFO "ide-tape: %s: skipping frame %d, logical_blk_num %u (expected %d)\n", tape->name, tape->first_frame_position, ntohl(aux->logical_blk_num), logical_blk_num);
- return 0;
- }
- if (aux->frame_type == OS_FRAME_TYPE_MARKER) {
- rq->errors = IDETAPE_ERROR_FILEMARK;
- rq->current_nr_sectors = rq->nr_sectors;
- }
- return 1;
-}
-
static void idetape_wait_first_stage (ide_drive_t *drive)
{
idetape_tape_t *tape = drive->driver_data;
rq->nr_sectors = rq->current_nr_sectors = blocks;
idetape_switch_buffers(tape, new_stage);
- idetape_init_stage(drive, new_stage, OS_FRAME_TYPE_DATA, tape->logical_blk_num);
- tape->logical_blk_num++;
idetape_add_stage_tail(drive, new_stage);
tape->pipeline_head++;
#if USE_IOTRACE
* writing anymore, wait for the pipeline to be full enough
* (90%) before starting to service requests, so that we will
* be able to keep up with the higher speeds of the tape.
- *
- * For the OnStream drive, we can query the number of pending
- * frames in the drive's internal buffer. As long as the tape
- * is still writing, it is better to write frames immediately
- * rather than gather them in the pipeline. This will give the
- * tape's firmware the ability to sense the current incoming
- * data rate more accurately, and since the OnStream tape
- * supports variable speeds, it can try to adjust itself to the
- * incoming data rate.
*/
if (!idetape_pipeline_active(tape)) {
if (tape->nr_stages >= tape->max_stages * 9 / 10 ||
tape->insert_size = 0;
tape->insert_speed = 0;
idetape_insert_pipeline_into_queue(drive);
- } else if (tape->onstream) {
- idetape_update_stats(drive);
- if (tape->cur_frames > 5)
- idetape_insert_pipeline_into_queue(drive);
}
}
if (test_and_clear_bit(IDETAPE_PIPELINE_ERROR, &tape->flags))
tape->restart_speed_control_req = 0;
tape->pipeline_head = 0;
- tape->buffer_head = tape->tape_head = tape->cur_frames;
tape->controlled_last_pipeline_head = tape->uncontrolled_last_pipeline_head = 0;
tape->controlled_previous_pipeline_head = tape->uncontrolled_previous_pipeline_head = 0;
tape->pipeline_head_speed = tape->controlled_pipeline_head_speed = 5000;
if ((tape->merge_stage = __idetape_kmalloc_stage(tape, 0, 0)) == NULL)
return -ENOMEM;
tape->chrdev_direction = idetape_direction_read;
- tape->logical_blk_num = 0;
/*
* Issue a read 0 command to ensure that DSC handshake
tape->insert_size = 0;
tape->insert_speed = 0;
idetape_insert_pipeline_into_queue(drive);
- } else if (tape->onstream) {
- idetape_update_stats(drive);
- if (tape->cur_frames < tape->max_frames - 5)
- idetape_insert_pipeline_into_queue(drive);
}
}
return 0;
}
-static int idetape_get_logical_blk (ide_drive_t *drive, int logical_blk_num, int max_stages, int quiet)
-{
- idetape_tape_t *tape = drive->driver_data;
- unsigned long flags;
- int cnt = 0, x, position;
-
- /*
- * Search and wait for the next logical tape block
- */
- while (1) {
- if (cnt++ > 1000) { /* AJN: was 100 */
- printk(KERN_INFO "ide-tape: %s: couldn't find logical block %d, aborting\n", tape->name, logical_blk_num);
- return 0;
- }
- idetape_initiate_read(drive, max_stages);
- if (tape->first_stage == NULL) {
- if (tape->onstream) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 1)
- printk(KERN_INFO "ide-tape: %s: first_stage == NULL, pipeline error %ld\n", tape->name, (long)test_bit(IDETAPE_PIPELINE_ERROR, &tape->flags));
-#endif
- clear_bit(IDETAPE_PIPELINE_ERROR, &tape->flags);
- position = idetape_read_position(drive);
- printk(KERN_INFO "ide-tape: %s: blank block detected at %d\n", tape->name, position);
- if (position >= 3000 && position < 3080)
- /* Why is this check and number ??? MM */
- position += 32;
- if (position >= OS_DATA_ENDFRAME1 &&
- position < 3000)
- position = 3000;
- else
- /*
- * compensate for write errors that
- * generally skip 80 frames, expect
- * around 20 read errors in a row...
- */
- position += 60;
- if (position >= OS_DATA_ENDFRAME1 &&
- position < 3000)
- position = 3000;
- printk(KERN_INFO "ide-tape: %s: positioning tape to block %d\n", tape->name, position);
-
- /* seems to be needed to correctly position
- * at block 3000 MM
- */
- if (position == 3000)
- idetape_position_tape(drive, 0, 0, 0);
- idetape_position_tape(drive, position, 0, 0);
- cnt += 40;
- continue;
- } else
- return 0;
- }
- idetape_wait_first_stage(drive);
- if (idetape_verify_stage(drive, tape->first_stage, logical_blk_num, quiet))
- break;
- if (tape->first_stage->rq.errors == IDETAPE_ERROR_EOD)
- cnt--;
- if (idetape_verify_stage(drive, tape->first_stage, -1, quiet)) {
- x = ntohl(tape->first_stage->aux->logical_blk_num);
- if (x > logical_blk_num) {
- printk(KERN_ERR "ide-tape: %s: couldn't find logical block %d, aborting (block %d found)\n", tape->name, logical_blk_num, x);
- return 0;
- }
- }
- spin_lock_irqsave(&tape->spinlock, flags);
- idetape_remove_stage_head(drive);
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- if (tape->onstream)
- tape->logical_blk_num = ntohl(tape->first_stage->aux->logical_blk_num);
- return 1;
-}
-
/*
* idetape_add_chrdev_read_request is called from idetape_chrdev_read
* to service a character device read request and add read-ahead
return 0;
/*
- * Wait for the next logical block to be available at the head
+ * Wait for the next block to be available at the head
* of the pipeline
*/
- if (!idetape_get_logical_blk(drive, tape->logical_blk_num, tape->max_stages, 0)) {
- if (tape->onstream) {
- set_bit(IDETAPE_READ_ERROR, &tape->flags);
- return 0;
- }
+ idetape_initiate_read(drive, tape->max_stages);
+ if (tape->first_stage == NULL) {
if (test_bit(IDETAPE_PIPELINE_ERROR, &tape->flags))
- return 0;
+ return 0;
return idetape_queue_rw_tail(drive, REQ_IDETAPE_READ, blocks, tape->merge_stage->bh);
}
+ idetape_wait_first_stage(drive);
rq_ptr = &tape->first_stage->rq;
bytes_read = tape->tape_block_size * (rq_ptr->nr_sectors - rq_ptr->current_nr_sectors);
rq_ptr->nr_sectors = rq_ptr->current_nr_sectors = 0;
- if (tape->onstream && !tape->raw &&
- tape->first_stage->aux->frame_type == OS_FRAME_TYPE_EOD) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: EOD reached\n",
- tape->name);
-#endif
- return 0;
- }
if (rq_ptr->errors == IDETAPE_ERROR_EOD)
return 0;
else {
idetape_switch_buffers(tape, tape->first_stage);
- if (rq_ptr->errors == IDETAPE_ERROR_GENERAL) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 1)
- printk(KERN_INFO "ide-tape: error detected, bytes_read %d\n", bytes_read);
-#endif
- }
if (rq_ptr->errors == IDETAPE_ERROR_FILEMARK)
set_bit(IDETAPE_FILEMARK, &tape->flags);
spin_lock_irqsave(&tape->spinlock, flags);
idetape_remove_stage_head(drive);
spin_unlock_irqrestore(&tape->spinlock, flags);
- tape->logical_blk_num++;
tape->pipeline_head++;
#if USE_IOTRACE
IO_trace(IO_IDETAPE_FIFO, tape->pipeline_head, tape->buffer_head, tape->tape_head, tape->minor);
{
int retval;
idetape_pc_t pc;
- idetape_tape_t *tape = drive->driver_data;
#if IDETAPE_DEBUG_LOG
+ idetape_tape_t *tape = drive->driver_data;
if (tape->debug_level >= 2)
printk(KERN_INFO "ide-tape: Reached idetape_rewind_tape\n");
#endif /* IDETAPE_DEBUG_LOG */
retval = idetape_queue_pc_tail(drive, &pc);
if (retval)
return retval;
- tape->logical_blk_num = 0;
return 0;
}
set_bit(IDETAPE_IGNORE_DSC, &tape->flags);
}
-static int idetape_onstream_space_over_filemarks_backward (ide_drive_t *drive,short mt_op,int mt_count)
+/*
+ * idetape_space_over_filemarks is now a bit more complicated than just
+ * passing the command to the tape since we may have crossed some
+ * filemarks during our pipelined read-ahead mode.
+ *
+ * As a minor side effect, the pipeline enables us to support MTFSFM when
+ * the filemark is in our internal pipeline even if the tape doesn't
+ * support spacing over filemarks in the reverse direction.
+ */
+static int idetape_space_over_filemarks (ide_drive_t *drive,short mt_op,int mt_count)
{
idetape_tape_t *tape = drive->driver_data;
- int cnt = 0;
- int last_mark_addr;
+ idetape_pc_t pc;
unsigned long flags;
+ int retval,count=0;
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks_bwd\n", tape->name);
- return -EIO;
- }
- while (cnt != mt_count) {
- last_mark_addr = ntohl(tape->first_stage->aux->last_mark_addr);
- if (last_mark_addr == -1)
- return -EIO;
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: positioning to last mark at %d\n", last_mark_addr);
-#endif
- idetape_position_tape(drive, last_mark_addr, 0, 0);
- cnt++;
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks\n", tape->name);
- return -EIO;
- }
- if (tape->first_stage->aux->frame_type != OS_FRAME_TYPE_MARKER) {
- printk(KERN_INFO "ide-tape: %s: expected to find marker at block %d, not found\n", tape->name, last_mark_addr);
- return -EIO;
- }
- }
- if (mt_op == MTBSFM) {
- spin_lock_irqsave(&tape->spinlock, flags);
- idetape_remove_stage_head(drive);
- tape->logical_blk_num++;
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- return 0;
-}
-
-/*
- * ADRL 1.1 compatible "slow" space filemarks fwd version
- *
- * Just scans for the filemark sequentially.
- */
-static int idetape_onstream_space_over_filemarks_forward_slow (ide_drive_t *drive,short mt_op,int mt_count)
-{
- idetape_tape_t *tape = drive->driver_data;
- int cnt = 0;
- unsigned long flags;
-
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks_fwd\n", tape->name);
- return -EIO;
- }
- while (1) {
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks\n", tape->name);
- return -EIO;
- }
- if (tape->first_stage->aux->frame_type == OS_FRAME_TYPE_MARKER)
- cnt++;
- if (tape->first_stage->aux->frame_type == OS_FRAME_TYPE_EOD) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: space_fwd: EOD reached\n", tape->name);
-#endif
- return -EIO;
- }
- if (cnt == mt_count)
- break;
- spin_lock_irqsave(&tape->spinlock, flags);
- idetape_remove_stage_head(drive);
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- if (mt_op == MTFSF) {
- spin_lock_irqsave(&tape->spinlock, flags);
- idetape_remove_stage_head(drive);
- tape->logical_blk_num++;
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- return 0;
-}
-
-
-/*
- * Fast linux specific version of OnStream FSF
- */
-static int idetape_onstream_space_over_filemarks_forward_fast (ide_drive_t *drive,short mt_op,int mt_count)
-{
- idetape_tape_t *tape = drive->driver_data;
- int cnt = 0, next_mark_addr;
- unsigned long flags;
-
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks_fwd\n", tape->name);
- return -EIO;
- }
-
- /*
- * Find nearest (usually previous) marker
- */
- while (1) {
- if (tape->first_stage->aux->frame_type == OS_FRAME_TYPE_MARKER)
- break;
- if (tape->first_stage->aux->frame_type == OS_FRAME_TYPE_EOD) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: space_fwd: EOD reached\n", tape->name);
-#endif
- return -EIO;
- }
- if (ntohl(tape->first_stage->aux->filemark_cnt) == 0) {
- if (tape->first_mark_addr == -1) {
- printk(KERN_INFO "ide-tape: %s: reverting to slow filemark space\n", tape->name);
- return idetape_onstream_space_over_filemarks_forward_slow(drive, mt_op, mt_count);
- }
- idetape_position_tape(drive, tape->first_mark_addr, 0, 0);
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks_fwd_fast\n", tape->name);
- return -EIO;
- }
- if (tape->first_stage->aux->frame_type != OS_FRAME_TYPE_MARKER) {
- printk(KERN_INFO "ide-tape: %s: expected to find filemark at %d\n", tape->name, tape->first_mark_addr);
- return -EIO;
- }
- } else {
- if (idetape_onstream_space_over_filemarks_backward(drive, MTBSF, 1) < 0)
- return -EIO;
- mt_count++;
- }
- }
- cnt++;
- while (cnt != mt_count) {
- next_mark_addr = ntohl(tape->first_stage->aux->next_mark_addr);
- if (!next_mark_addr || next_mark_addr > tape->eod_frame_addr) {
- printk(KERN_INFO "ide-tape: %s: reverting to slow filemark space\n", tape->name);
- return idetape_onstream_space_over_filemarks_forward_slow(drive, mt_op, mt_count - cnt);
-#if ONSTREAM_DEBUG
- } else if (tape->debug_level >= 2) {
- printk(KERN_INFO "ide-tape: positioning to next mark at %d\n", next_mark_addr);
-#endif
- }
- idetape_position_tape(drive, next_mark_addr, 0, 0);
- cnt++;
- if (!idetape_get_logical_blk(drive, -1, 10, 0)) {
- printk(KERN_INFO "ide-tape: %s: couldn't get logical blk num in space_filemarks\n", tape->name);
- return -EIO;
- }
- if (tape->first_stage->aux->frame_type != OS_FRAME_TYPE_MARKER) {
- printk(KERN_INFO "ide-tape: %s: expected to find marker at block %d, not found\n", tape->name, next_mark_addr);
- return -EIO;
- }
- }
- if (mt_op == MTFSF) {
- spin_lock_irqsave(&tape->spinlock, flags);
- idetape_remove_stage_head(drive);
- tape->logical_blk_num++;
- spin_unlock_irqrestore(&tape->spinlock, flags);
- }
- return 0;
-}
-
-/*
- * idetape_space_over_filemarks is now a bit more complicated than just
- * passing the command to the tape since we may have crossed some
- * filemarks during our pipelined read-ahead mode.
- *
- * As a minor side effect, the pipeline enables us to support MTFSFM when
- * the filemark is in our internal pipeline even if the tape doesn't
- * support spacing over filemarks in the reverse direction.
- */
-static int idetape_space_over_filemarks (ide_drive_t *drive,short mt_op,int mt_count)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_pc_t pc;
- unsigned long flags;
- int retval,count=0;
- int speed_control;
-
- if (tape->onstream) {
- if (tape->raw)
- return -EIO;
- speed_control = tape->speed_control;
- tape->speed_control = 0;
- if (mt_op == MTFSF || mt_op == MTFSFM) {
- if (tape->linux_media)
- retval = idetape_onstream_space_over_filemarks_forward_fast(drive, mt_op, mt_count);
- else
- retval = idetape_onstream_space_over_filemarks_forward_slow(drive, mt_op, mt_count);
- } else
- retval = idetape_onstream_space_over_filemarks_backward(drive, mt_op, mt_count);
- tape->speed_control = speed_control;
- tape->restart_speed_control_req = 1;
- return retval;
- }
-
- if (mt_count == 0)
- return 0;
- if (MTBSF == mt_op || MTBSFM == mt_op) {
- if (!tape->capabilities.sprev)
+ if (mt_count == 0)
+ return 0;
+ if (MTBSF == mt_op || MTBSFM == mt_op) {
+ if (!tape->capabilities.sprev)
return -EIO;
mt_count = - mt_count;
}
/* "A request was outside the capabilities of the device." */
return -ENXIO;
}
- if (tape->onstream && (count != tape->tape_block_size)) {
- printk(KERN_ERR "ide-tape: %s: use %d bytes as block size (%Zd used)\n", tape->name, tape->tape_block_size, count);
- return -EINVAL;
- }
#if IDETAPE_DEBUG_LOG
if (tape->debug_level >= 3)
printk(KERN_INFO "ide-tape: Reached idetape_chrdev_read, count %Zd\n", count);
idetape_space_over_filemarks(drive, MTFSF, 1);
return 0;
}
- if (tape->onstream && !actually_read &&
- test_and_clear_bit(IDETAPE_READ_ERROR, &tape->flags)) {
- printk(KERN_ERR "ide-tape: %s: unrecovered read error on "
- "logical block number %d, skipping\n",
- tape->name, tape->logical_blk_num);
- tape->logical_blk_num++;
- return -EIO;
- }
return actually_read;
}
-static void idetape_update_last_marker (ide_drive_t *drive, int last_mark_addr, int next_mark_addr)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_stage_t *stage;
- os_aux_t *aux;
- int position;
-
- if (!tape->onstream || tape->raw)
- return;
- if (last_mark_addr == -1)
- return;
- stage = __idetape_kmalloc_stage(tape, 0, 0);
- if (stage == NULL)
- return;
- idetape_flush_tape_buffers(drive);
- position = idetape_read_position(drive);
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: current position (2) %d, "
- "lblk %d\n", position, tape->logical_blk_num);
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: current position (2) "
- "tape block %d\n", tape->last_frame_position);
-#endif
- idetape_position_tape(drive, last_mark_addr, 0, 0);
- if (!idetape_queue_rw_tail(drive, REQ_IDETAPE_READ, 1, stage->bh)) {
- printk(KERN_INFO "ide-tape: %s: couldn't read last marker\n",
- tape->name);
- __idetape_kfree_stage(stage);
- idetape_position_tape(drive, position, 0, 0);
- return;
- }
- aux = stage->aux;
- if (aux->frame_type != OS_FRAME_TYPE_MARKER) {
- printk(KERN_INFO "ide-tape: %s: expected to find marker "
- "at addr %d\n", tape->name, last_mark_addr);
- __idetape_kfree_stage(stage);
- idetape_position_tape(drive, position, 0, 0);
- return;
- }
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: writing back marker\n");
-#endif
- aux->next_mark_addr = htonl(next_mark_addr);
- idetape_position_tape(drive, last_mark_addr, 0, 0);
- if (!idetape_queue_rw_tail(drive, REQ_IDETAPE_WRITE, 1, stage->bh)) {
- printk(KERN_INFO "ide-tape: %s: couldn't write back marker "
- "frame at %d\n", tape->name, last_mark_addr);
- __idetape_kfree_stage(stage);
- idetape_position_tape(drive, position, 0, 0);
- return;
- }
- __idetape_kfree_stage(stage);
- idetape_flush_tape_buffers(drive);
- idetape_position_tape(drive, position, 0, 0);
- return;
-}
-
-static void idetape_write_filler (ide_drive_t *drive, int block, int cnt)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_stage_t *stage;
- int rc;
-
- if (!tape->onstream || tape->raw)
- return;
- stage = __idetape_kmalloc_stage(tape, 1, 1);
- if (stage == NULL)
- return;
- idetape_init_stage(drive, stage, OS_FRAME_TYPE_FILL, 0);
- idetape_wait_ready(drive, 60 * 5 * HZ);
- rc = idetape_position_tape(drive, block, 0, 0);
-#if ONSTREAM_DEBUG
- printk(KERN_INFO "write_filler: positioning failed it returned %d\n", rc);
-#endif
- if (rc != 0)
- /* don't write fillers if we cannot position the tape. */
- return;
-
- strcpy(stage->bh->b_data, "Filler");
- while (cnt--) {
- if (!idetape_queue_rw_tail(drive, REQ_IDETAPE_WRITE, 1, stage->bh)) {
- printk(KERN_INFO "ide-tape: %s: write_filler: "
- "couldn't write header frame\n", tape->name);
- __idetape_kfree_stage(stage);
- return;
- }
- }
- __idetape_kfree_stage(stage);
-}
-
-static void __idetape_write_header (ide_drive_t *drive, int block, int cnt)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_stage_t *stage;
- os_header_t header;
-
- stage = __idetape_kmalloc_stage(tape, 1, 1);
- if (stage == NULL)
- return;
- idetape_init_stage(drive, stage, OS_FRAME_TYPE_HEADER, tape->logical_blk_num);
- idetape_wait_ready(drive, 60 * 5 * HZ);
- idetape_position_tape(drive, block, 0, 0);
- memset(&header, 0, sizeof(header));
- strcpy(header.ident_str, "ADR_SEQ");
- header.major_rev = 1;
- header.minor_rev = OS_ADR_MINREV;
- header.par_num = 1;
- header.partition.partition_num = OS_DATA_PARTITION;
- header.partition.par_desc_ver = OS_PARTITION_VERSION;
- header.partition.first_frame_addr = htonl(OS_DATA_STARTFRAME1);
- header.partition.last_frame_addr = htonl(tape->capacity);
- header.partition.wrt_pass_cntr = htons(tape->wrt_pass_cntr);
- header.partition.eod_frame_addr = htonl(tape->eod_frame_addr);
- memcpy(stage->bh->b_data, &header, sizeof(header));
- while (cnt--) {
- if (!idetape_queue_rw_tail(drive, REQ_IDETAPE_WRITE, 1, stage->bh)) {
- printk(KERN_INFO "ide-tape: %s: couldn't write "
- "header frame\n", tape->name);
- __idetape_kfree_stage(stage);
- return;
- }
- }
- __idetape_kfree_stage(stage);
- idetape_flush_tape_buffers(drive);
-}
-
-static void idetape_write_header (ide_drive_t *drive, int locate_eod)
-{
- idetape_tape_t *tape = drive->driver_data;
-
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: writing tape header\n",
- tape->name);
-#endif
- if (!tape->onstream || tape->raw)
- return;
- tape->update_frame_cntr++;
- __idetape_write_header(drive, 5, 5);
- __idetape_write_header(drive, 0xbae, 5); /* 2990 */
- if (locate_eod) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: locating back to eod "
- "frame addr %d\n", tape->name,
- tape->eod_frame_addr);
-#endif
- idetape_position_tape(drive, tape->eod_frame_addr, 0, 0);
- }
-}
-
static ssize_t idetape_chrdev_write (struct file *file, const char *buf,
size_t count, loff_t *ppos)
{
- struct inode *inode = file->f_dentry->d_inode;
ide_drive_t *drive = file->private_data;
idetape_tape_t *tape = drive->driver_data;
- unsigned int minor = iminor(inode);
ssize_t retval, actually_written = 0;
- int position;
if (ppos != &file->f_pos) {
/* "A request was outside the capabilities of the device." */
"count %Zd\n", count);
#endif /* IDETAPE_DEBUG_LOG */
- if (tape->onstream) {
- if (count != tape->tape_block_size) {
- printk(KERN_ERR "ide-tape: %s: chrdev_write: use %d "
- "bytes as block size (%Zd used)\n",
- tape->name, tape->tape_block_size, count);
- return -EINVAL;
- }
- /*
- * Check if we reach the end of the tape. Just assume the whole
- * pipeline is filled with write requests!
- */
- if (tape->first_frame_position + tape->nr_stages >= tape->capacity - OS_EW) {
-#if ONSTREAM_DEBUG
- printk(KERN_INFO, "chrdev_write: Write truncated at "
- "EOM early warning");
-#endif
- if (tape->chrdev_direction == idetape_direction_write)
- idetape_write_release(drive, minor);
- return -ENOSPC;
- }
- }
-
/* Initialize write operation */
if (tape->chrdev_direction != idetape_direction_write) {
if (tape->chrdev_direction == idetape_direction_read)
tape->chrdev_direction = idetape_direction_write;
idetape_init_merge_stage(tape);
- if (tape->onstream) {
- position = idetape_read_position(drive);
- if (position <= OS_DATA_STARTFRAME1) {
- tape->logical_blk_num = 0;
- tape->wrt_pass_cntr++;
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: logical block num 0, setting eod to %d\n", tape->name, OS_DATA_STARTFRAME1);
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: allocating new write pass counter %d\n", tape->name, tape->wrt_pass_cntr);
-#endif
- tape->filemark_cnt = 0;
- tape->eod_frame_addr = OS_DATA_STARTFRAME1;
- tape->first_mark_addr = tape->last_mark_addr = -1;
- idetape_write_header(drive, 1);
- }
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: positioning "
- "tape to eod at %d\n",
- tape->name, tape->eod_frame_addr);
-#endif
- position = idetape_read_position(drive);
- if (position != tape->eod_frame_addr)
- idetape_position_tape(drive, tape->eod_frame_addr, 0, 0);
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: "
- "first_frame_position %d\n",
- tape->name, tape->first_frame_position);
-#endif
- }
-
/*
* Issue a write 0 command to ensure that DSC handshake
* is switched from completion mode to buffer available
return retval;
}
}
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk("ide-tape: first_frame_position %d\n",
- tape->first_frame_position);
-#endif
}
if (count == 0)
return (0);
static int idetape_write_filemark (ide_drive_t *drive)
{
- idetape_tape_t *tape = drive->driver_data;
- int last_mark_addr;
idetape_pc_t pc;
- if (!tape->onstream) {
- /* Write a filemark */
- idetape_create_write_filemark_cmd(drive, &pc, 1);
- if (idetape_queue_pc_tail(drive, &pc)) {
- printk(KERN_ERR "ide-tape: Couldn't write a filemark\n");
- return -EIO;
- }
- } else if (!tape->raw) {
- last_mark_addr = idetape_read_position(drive);
- tape->merge_stage = __idetape_kmalloc_stage(tape, 1, 0);
- if (tape->merge_stage != NULL) {
- idetape_init_stage(drive, tape->merge_stage, OS_FRAME_TYPE_MARKER, tape->logical_blk_num);
- idetape_pad_zeros(drive, tape->stage_size);
- tape->logical_blk_num++;
- __idetape_kfree_stage(tape->merge_stage);
- tape->merge_stage = NULL;
- }
- if (tape->filemark_cnt)
- idetape_update_last_marker(drive, tape->last_mark_addr, last_mark_addr);
- tape->last_mark_addr = last_mark_addr;
- if (tape->filemark_cnt++ == 0)
- tape->first_mark_addr = last_mark_addr;
- }
- return 0;
-}
-
-static void idetape_write_eod (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
-
- if (!tape->onstream || tape->raw)
- return;
- tape->merge_stage = __idetape_kmalloc_stage(tape, 1, 0);
- if (tape->merge_stage != NULL) {
- tape->eod_frame_addr = idetape_read_position(drive);
- idetape_init_stage(drive, tape->merge_stage, OS_FRAME_TYPE_EOD, tape->logical_blk_num);
- idetape_pad_zeros(drive, tape->stage_size);
- __idetape_kfree_stage(tape->merge_stage);
- tape->merge_stage = NULL;
- }
- return;
-}
-
-int idetape_seek_logical_blk (ide_drive_t *drive, int logical_blk_num)
-{
- idetape_tape_t *tape = drive->driver_data;
- int estimated_address = logical_blk_num + 20;
- int retries = 0;
- int speed_control;
-
- speed_control = tape->speed_control;
- tape->speed_control = 0;
- if (logical_blk_num < 0)
- logical_blk_num = 0;
- if (idetape_get_logical_blk(drive, logical_blk_num, 10, 1))
- goto ok;
- while (++retries < 10) {
- idetape_discard_read_pipeline(drive, 0);
- idetape_position_tape(drive, estimated_address, 0, 0);
- if (idetape_get_logical_blk(drive, logical_blk_num, 10, 1))
- goto ok;
- if (!idetape_get_logical_blk(drive, -1, 10, 1))
- goto error;
- if (tape->logical_blk_num < logical_blk_num)
- estimated_address += logical_blk_num - tape->logical_blk_num;
- else
- break;
+ /* Write a filemark */
+ idetape_create_write_filemark_cmd(drive, &pc, 1);
+ if (idetape_queue_pc_tail(drive, &pc)) {
+ printk(KERN_ERR "ide-tape: Couldn't write a filemark\n");
+ return -EIO;
}
-error:
- tape->speed_control = speed_control;
- tape->restart_speed_control_req = 1;
- printk(KERN_INFO "ide-tape: %s: couldn't seek to logical block %d "
- "(at %d), %d retries\n", tape->name, logical_blk_num,
- tape->logical_blk_num, retries);
- return -EIO;
-ok:
- tape->speed_control = speed_control;
- tape->restart_speed_control_req = 1;
return 0;
}
idetape_discard_read_pipeline(drive, 0);
if (idetape_rewind_tape(drive))
return -EIO;
- if (tape->onstream && !tape->raw)
- return idetape_position_tape(drive, OS_DATA_STARTFRAME1, 0, 0);
return 0;
case MTLOAD:
idetape_discard_read_pipeline(drive, 0);
idetape_create_load_unload_cmd(drive, &pc,IDETAPE_LU_RETENSION_MASK | IDETAPE_LU_LOAD_MASK);
return (idetape_queue_pc_tail(drive, &pc));
case MTEOM:
- if (tape->onstream) {
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: positioning tape to eod at %d\n", tape->name, tape->eod_frame_addr);
-#endif
- idetape_position_tape(drive, tape->eod_frame_addr, 0, 0);
- if (!idetape_get_logical_blk(drive, -1, 10, 0))
- return -EIO;
- if (tape->first_stage->aux->frame_type != OS_FRAME_TYPE_EOD)
- return -EIO;
- return 0;
- }
idetape_create_space_cmd(&pc, 0, IDETAPE_SPACE_TO_EOD);
return (idetape_queue_pc_tail(drive, &pc));
case MTERASE:
- if (tape->onstream) {
- tape->eod_frame_addr = OS_DATA_STARTFRAME1;
- tape->logical_blk_num = 0;
- tape->first_mark_addr = tape->last_mark_addr = -1;
- idetape_position_tape(drive, tape->eod_frame_addr, 0, 0);
- idetape_write_eod(drive);
- idetape_flush_tape_buffers(drive);
- idetape_write_header(drive, 0);
- /*
- * write filler frames to the unused frames...
- * REMOVE WHEN going to LIN4 application type...
- */
- idetape_write_filler(drive, OS_DATA_STARTFRAME1 - 10, 10);
- idetape_write_filler(drive, OS_DATA_ENDFRAME1, 10);
- idetape_flush_tape_buffers(drive);
- (void) idetape_rewind_tape(drive);
- return 0;
- }
(void) idetape_rewind_tape(drive);
idetape_create_erase_cmd(&pc);
return (idetape_queue_pc_tail(drive, &pc));
case MTSETBLK:
- if (tape->onstream) {
- if (mt_count != tape->tape_block_size) {
- printk(KERN_INFO "ide-tape: %s: MTSETBLK %d -- only %d bytes block size supported\n", tape->name, mt_count, tape->tape_block_size);
- return -EINVAL;
- }
- return 0;
- }
if (mt_count) {
if (mt_count < tape->tape_block_size || mt_count % tape->tape_block_size)
return -EIO;
set_bit(IDETAPE_DETECT_BS, &tape->flags);
return 0;
case MTSEEK:
- if (!tape->onstream || tape->raw) {
- idetape_discard_read_pipeline(drive, 0);
- return idetape_position_tape(drive, mt_count * tape->user_bs_factor, tape->partition, 0);
- }
- return idetape_seek_logical_blk(drive, mt_count);
+ idetape_discard_read_pipeline(drive, 0);
+ return idetape_position_tape(drive, mt_count * tape->user_bs_factor, tape->partition, 0);
case MTSETPART:
idetape_discard_read_pipeline(drive, 0);
- if (tape->onstream)
- return -EIO;
return (idetape_position_tape(drive, 0, mt_count, 0));
case MTFSR:
case MTBSR:
- if (tape->onstream) {
- if (!idetape_get_logical_blk(drive, -1, 10, 0))
- return -EIO;
- if (mt_op == MTFSR)
- return idetape_seek_logical_blk(drive, tape->logical_blk_num + mt_count);
- else {
- idetape_discard_read_pipeline(drive, 0);
- return idetape_seek_logical_blk(drive, tape->logical_blk_num - mt_count);
- }
- }
case MTLOCK:
if (!idetape_create_prevent_cmd(drive, &pc, 1))
return 0;
case MTIOCGET:
memset(&mtget, 0, sizeof (struct mtget));
mtget.mt_type = MT_ISSCSI2;
- if (!tape->onstream || tape->raw)
- mtget.mt_blkno = position / tape->user_bs_factor - block_offset;
- else {
- if (!idetape_get_logical_blk(drive, -1, 10, 0))
- mtget.mt_blkno = -1;
- else
- mtget.mt_blkno = tape->logical_blk_num;
- }
+ mtget.mt_blkno = position / tape->user_bs_factor - block_offset;
mtget.mt_dsreg = ((tape->tape_block_size * tape->user_bs_factor) << MT_ST_BLKSIZE_SHIFT) & MT_ST_BLKSIZE_MASK;
- if (tape->onstream) {
- mtget.mt_gstat |= GMT_ONLINE(0xffffffff);
- if (tape->first_stage && tape->first_stage->aux->frame_type == OS_FRAME_TYPE_EOD)
- mtget.mt_gstat |= GMT_EOD(0xffffffff);
- if (position <= OS_DATA_STARTFRAME1)
- mtget.mt_gstat |= GMT_BOT(0xffffffff);
- } else if (tape->drv_write_prot) {
+ if (tape->drv_write_prot) {
mtget.mt_gstat |= GMT_WR_PROT(0xffffffff);
}
if (copy_to_user((char *) arg,(char *) &mtget, sizeof(struct mtget)))
return -EFAULT;
return 0;
case MTIOCPOS:
- if (tape->onstream && !tape->raw) {
- if (!idetape_get_logical_blk(drive, -1, 10, 0))
- return -EIO;
- mtpos.mt_blkno = tape->logical_blk_num;
- } else
- mtpos.mt_blkno = position / tape->user_bs_factor - block_offset;
+ mtpos.mt_blkno = position / tape->user_bs_factor - block_offset;
if (copy_to_user((char *) arg,(char *) &mtpos, sizeof(struct mtpos)))
return -EFAULT;
return 0;
}
}
-static int __idetape_analyze_headers (ide_drive_t *drive, int block)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_stage_t *stage;
- os_header_t *header;
- os_aux_t *aux;
-
- if (!tape->onstream || tape->raw) {
- tape->header_ok = tape->linux_media = 1;
- return 1;
- }
- tape->header_ok = tape->linux_media = 0;
- tape->update_frame_cntr = 0;
- tape->wrt_pass_cntr = 0;
- tape->eod_frame_addr = OS_DATA_STARTFRAME1;
- tape->first_mark_addr = tape->last_mark_addr = -1;
- stage = __idetape_kmalloc_stage(tape, 0, 0);
- if (stage == NULL)
- return 0;
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: reading header\n", tape->name);
-#endif
- idetape_position_tape(drive, block, 0, 0);
- if (!idetape_queue_rw_tail(drive, REQ_IDETAPE_READ, 1, stage->bh)) {
- printk(KERN_INFO "ide-tape: %s: couldn't read header frame\n",
- tape->name);
- __idetape_kfree_stage(stage);
- return 0;
- }
- header = (os_header_t *) stage->bh->b_data;
- aux = stage->aux;
- if (strncmp(header->ident_str, "ADR_SEQ", 7) != 0) {
- printk(KERN_INFO "ide-tape: %s: invalid header identification string\n", tape->name);
- __idetape_kfree_stage(stage);
- return 0;
- }
- if (header->major_rev != 1 || (header->minor_rev > OS_ADR_MINREV))
- printk(KERN_INFO "ide-tape: warning: revision %d.%d "
- "detected (up to 1.%d supported)\n",
- header->major_rev, header->minor_rev, OS_ADR_MINREV);
- if (header->par_num != 1)
- printk(KERN_INFO "ide-tape: warning: %d partitions defined, only one supported\n", header->par_num);
- tape->wrt_pass_cntr = ntohs(header->partition.wrt_pass_cntr);
- tape->eod_frame_addr = ntohl(header->partition.eod_frame_addr);
- tape->filemark_cnt = ntohl(aux->filemark_cnt);
- tape->first_mark_addr = ntohl(aux->next_mark_addr);
- tape->last_mark_addr = ntohl(aux->last_mark_addr);
- tape->update_frame_cntr = ntohl(aux->update_frame_cntr);
- memcpy(tape->application_sig, aux->application_sig, 4);
- tape->application_sig[4] = 0;
- if (memcmp(tape->application_sig, "LIN", 3) == 0) {
- tape->linux_media = 1;
- tape->linux_media_version = tape->application_sig[3] - '0';
- if (tape->linux_media_version != 3)
- printk(KERN_INFO "ide-tape: %s: Linux media version "
- "%d detected (current 3)\n",
- tape->name, tape->linux_media_version);
- } else {
- printk(KERN_INFO "ide-tape: %s: non Linux media detected "
- "(%s)\n", tape->name, tape->application_sig);
- tape->linux_media = 0;
- }
-#if ONSTREAM_DEBUG
- if (tape->debug_level >= 2)
- printk(KERN_INFO "ide-tape: %s: detected write pass counter "
- "%d, eod frame addr %d\n", tape->name,
- tape->wrt_pass_cntr, tape->eod_frame_addr);
-#endif
- __idetape_kfree_stage(stage);
- return 1;
-}
-
-static int idetape_analyze_headers (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
- int position, block;
-
- if (!tape->onstream || tape->raw) {
- tape->header_ok = tape->linux_media = 1;
- return 1;
- }
- tape->header_ok = tape->linux_media = 0;
- position = idetape_read_position(drive);
- for (block = 5; block < 10; block++)
- if (__idetape_analyze_headers(drive, block))
- goto ok;
- for (block = 0xbae; block < 0xbb3; block++) /* 2990 - 2994 */
- if (__idetape_analyze_headers(drive, block))
- goto ok;
- printk(KERN_ERR "ide-tape: %s: failed to find valid ADRL header\n", tape->name);
- return 0;
-ok:
- if (position < OS_DATA_STARTFRAME1)
- position = OS_DATA_STARTFRAME1;
- idetape_position_tape(drive, position, 0, 0);
- tape->header_ok = 1;
- return 1;
-}
-
static void idetape_get_blocksize_from_block_descriptor(ide_drive_t *drive);
/*
if (test_and_set_bit(IDETAPE_BUSY, &tape->flags))
return -EBUSY;
- if (tape->onstream) {
- if (minor & 64) {
- tape->tape_block_size = tape->stage_size = 32768 + 512;
- tape->raw = 1;
- } else {
- tape->tape_block_size = tape->stage_size = 32768;
- tape->raw = 0;
- }
- idetape_onstream_mode_sense_tape_parameter_page(drive, tape->debug_level);
- }
retval = idetape_wait_ready(drive, 60 * HZ);
if (retval) {
clear_bit(IDETAPE_BUSY, &tape->flags);
/*
* Lock the tape drive door so user can't eject.
- * Analyze headers for OnStream drives.
*/
if (tape->chrdev_direction == idetape_direction_none) {
if (idetape_create_prevent_cmd(drive, &pc, 1)) {
tape->door_locked = DOOR_LOCKED;
}
}
- idetape_analyze_headers(drive);
}
- tape->max_frames = tape->cur_frames = tape->req_buffer_fill = 0;
idetape_restart_speed_control(drive);
tape->restart_speed_control_req = 0;
return 0;
tape->merge_stage = NULL;
}
idetape_write_filemark(drive);
- idetape_write_eod(drive);
idetape_flush_tape_buffers(drive);
- idetape_write_header(drive, minor >= 128);
idetape_flush_tape_buffers(drive);
}
}
/*
- * Notify vendor ID to the OnStream tape drive
- */
-static void idetape_onstream_set_vendor (ide_drive_t *drive, char *vendor)
-{
- idetape_pc_t pc;
- idetape_mode_parameter_header_t *header;
-
- idetape_create_mode_select_cmd(&pc, sizeof(*header) + 8);
- pc.buffer[0] = 3 + 8; /* Mode Data Length */
- pc.buffer[1] = 0; /* Medium Type - ignoring */
- pc.buffer[2] = 0; /* Reserved */
- pc.buffer[3] = 0; /* Block Descriptor Length */
- pc.buffer[4 + 0] = 0x36 | (1 << 7);
- pc.buffer[4 + 1] = 6;
- pc.buffer[4 + 2] = vendor[0];
- pc.buffer[4 + 3] = vendor[1];
- pc.buffer[4 + 4] = vendor[2];
- pc.buffer[4 + 5] = vendor[3];
- pc.buffer[4 + 6] = 0;
- pc.buffer[4 + 7] = 0;
- if (idetape_queue_pc_tail(drive, &pc))
- printk(KERN_ERR "ide-tape: Couldn't set vendor name to %s\n", vendor);
-
-}
-
-/*
- * Various unused OnStream commands
- */
-#if ONSTREAM_DEBUG
-static void idetape_onstream_set_retries (ide_drive_t *drive, int retries)
-{
- idetape_pc_t pc;
-
- idetape_create_mode_select_cmd(&pc, sizeof(idetape_mode_parameter_header_t) + 4);
- pc.buffer[0] = 3 + 4;
- pc.buffer[1] = 0; /* Medium Type - ignoring */
- pc.buffer[2] = 0; /* Reserved */
- pc.buffer[3] = 0; /* Block Descriptor Length */
- pc.buffer[4 + 0] = 0x2f | (1 << 7);
- pc.buffer[4 + 1] = 2;
- pc.buffer[4 + 2] = 4;
- pc.buffer[4 + 3] = retries;
- if (idetape_queue_pc_tail(drive, &pc))
- printk(KERN_ERR "ide-tape: Couldn't set retries to %d\n", retries);
-}
-#endif
-
-/*
- * Configure 32.5KB block size.
- */
-static void idetape_onstream_configure_block_size (ide_drive_t *drive)
-{
- idetape_pc_t pc;
- idetape_mode_parameter_header_t *header;
- idetape_block_size_page_t *bs;
-
- /*
- * Get the current block size from the block size mode page
- */
- idetape_create_mode_sense_cmd(&pc, IDETAPE_BLOCK_SIZE_PAGE);
- if (idetape_queue_pc_tail(drive, &pc))
- printk(KERN_ERR "ide-tape: can't get tape block size mode page\n");
- header = (idetape_mode_parameter_header_t *) pc.buffer;
- bs = (idetape_block_size_page_t *) (pc.buffer + sizeof(idetape_mode_parameter_header_t) + header->bdl);
-
-#if IDETAPE_DEBUG_INFO
- printk(KERN_INFO "ide-tape: 32KB play back: %s\n", bs->play32 ? "Yes" : "No");
- printk(KERN_INFO "ide-tape: 32.5KB play back: %s\n", bs->play32_5 ? "Yes" : "No");
- printk(KERN_INFO "ide-tape: 32KB record: %s\n", bs->record32 ? "Yes" : "No");
- printk(KERN_INFO "ide-tape: 32.5KB record: %s\n", bs->record32_5 ? "Yes" : "No");
-#endif /* IDETAPE_DEBUG_INFO */
-
- /*
- * Configure default auto columns mode, 32.5KB block size
- */
- bs->one = 1;
- bs->play32 = 0;
- bs->play32_5 = 1;
- bs->record32 = 0;
- bs->record32_5 = 1;
- idetape_create_mode_select_cmd(&pc, sizeof(*header) + sizeof(*bs));
- if (idetape_queue_pc_tail(drive, &pc))
- printk(KERN_ERR "ide-tape: Couldn't set tape block size mode page\n");
-
-#if ONSTREAM_DEBUG
- /*
- * In debug mode, we want to see as many errors as possible
- * to test the error recovery mechanism.
- */
- idetape_onstream_set_retries(drive, 0);
-#endif
-}
-
-/*
* Use INQUIRY to get the firmware revision
*/
static void idetape_get_inquiry_results (ide_drive_t *drive)
r = tape->firmware_revision;
if (*(r + 1) == '.')
tape->firmware_revision_num = (*r - '0') * 100 + (*(r + 2) - '0') * 10 + *(r + 3) - '0';
- else if (tape->onstream)
- tape->firmware_revision_num = (*r - '0') * 100 + (*(r + 1) - '0') * 10 + *(r + 2) - '0';
printk(KERN_INFO "ide-tape: %s <-> %s: %s %s rev %s\n", drive->name, tape->name, tape->vendor_id, tape->product_id, tape->firmware_revision);
}
/*
- * Configure the OnStream ATAPI tape drive for default operation
- */
-static void idetape_configure_onstream (ide_drive_t *drive)
-{
- idetape_tape_t *tape = drive->driver_data;
-
- if (tape->firmware_revision_num < 105) {
- printk(KERN_INFO "ide-tape: %s: Old OnStream firmware revision detected (%s)\n", tape->name, tape->firmware_revision);
- printk(KERN_INFO "ide-tape: %s: An upgrade to version 1.05 or above is recommended\n", tape->name);
- }
-
- /*
- * Configure 32.5KB (data+aux) block size.
- */
- idetape_onstream_configure_block_size(drive);
-
- /*
- * Set vendor name to 'LIN3' for "Linux support version 3".
- */
- idetape_onstream_set_vendor(drive, "LIN3");
-}
-
-/*
- * idetape_get_mode_sense_parameters asks the tape about its various
- * parameters. This may work for other drives to???
- */
-static void idetape_onstream_mode_sense_tape_parameter_page(ide_drive_t *drive, int debug)
-{
- idetape_tape_t *tape = drive->driver_data;
- idetape_pc_t pc;
- idetape_mode_parameter_header_t *header;
- onstream_tape_paramtr_page_t *prm;
-
- idetape_create_mode_sense_cmd(&pc, IDETAPE_PARAMTR_PAGE);
- if (idetape_queue_pc_tail(drive, &pc)) {
- printk(KERN_ERR "ide-tape: Can't get tape parameters page - probably no tape inserted in onstream drive\n");
- return;
- }
- header = (idetape_mode_parameter_header_t *) pc.buffer;
- prm = (onstream_tape_paramtr_page_t *) (pc.buffer + sizeof(idetape_mode_parameter_header_t) + header->bdl);
-
- tape->capacity = ntohs(prm->segtrk) * ntohs(prm->trks);
- if (debug) {
- printk(KERN_INFO "ide-tape: %s <-> %s: Tape length %dMB (%d frames/track, %d tracks = %d blocks, density: %dKbpi)\n",
- drive->name, tape->name, tape->capacity/32, ntohs(prm->segtrk), ntohs(prm->trks), tape->capacity, prm->density);
- }
-
- return;
-}
-
-/*
* idetape_get_mode_sense_results asks the tape about its various
* parameters. In particular, we will adjust our data transfer buffer
* size to the recommended value as returned by the tape.
tape->tape_block_size = 512;
else if (capabilities->blk1024)
tape->tape_block_size = 1024;
- else if (tape->onstream && capabilities->blk32768)
- tape->tape_block_size = 32768;
#if IDETAPE_DEBUG_INFO
printk(KERN_INFO "ide-tape: Dumping the results of the MODE SENSE packet command\n");
ide_add_setting(drive, "pipeline_head_speed_u",SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->uncontrolled_pipeline_head_speed, NULL);
ide_add_setting(drive, "avg_speed", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->avg_speed, NULL);
ide_add_setting(drive, "debug_level",SETTING_RW, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->debug_level, NULL);
- if (tape->onstream) {
- ide_add_setting(drive, "cur_frames", SETTING_READ, -1, -1, TYPE_SHORT, 0, 0xffff, 1, 1, &tape->cur_frames, NULL);
- ide_add_setting(drive, "max_frames", SETTING_READ, -1, -1, TYPE_SHORT, 0, 0xffff, 1, 1, &tape->max_frames, NULL);
- ide_add_setting(drive, "insert_speed", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->insert_speed, NULL);
- ide_add_setting(drive, "speed_control",SETTING_RW, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->speed_control, NULL);
- ide_add_setting(drive, "tape_still_time",SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->tape_still_time, NULL);
- ide_add_setting(drive, "max_insert_speed",SETTING_RW, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->max_insert_speed, NULL);
- ide_add_setting(drive, "insert_size", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->insert_size, NULL);
- ide_add_setting(drive, "capacity", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->capacity, NULL);
- ide_add_setting(drive, "first_frame", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->first_frame_position, NULL);
- ide_add_setting(drive, "logical_blk", SETTING_READ, -1, -1, TYPE_INT, 0, 0xffff, 1, 1, &tape->logical_blk_num, NULL);
- }
}
/*
drive->driver_data = tape;
/* An ATAPI device ignores DRDY */
drive->ready_stat = 0;
- if (strstr(drive->id->model, "OnStream DI-"))
- tape->onstream = 1;
drive->dsc_overlap = 1;
#ifdef CONFIG_BLK_DEV_IDEPCI
- if (!tape->onstream && HWIF(drive)->pci_dev != NULL) {
+ if (HWIF(drive)->pci_dev != NULL) {
/*
* These two ide-pci host adapters appear to need DSC overlap disabled.
* This probably needs further analysis.
idetape_get_inquiry_results(drive);
idetape_get_mode_sense_results(drive);
idetape_get_blocksize_from_block_descriptor(drive);
- if (tape->onstream) {
- idetape_onstream_mode_sense_tape_parameter_page(drive, 1);
- idetape_configure_onstream(drive);
- }
tape->user_bs_factor = 1;
tape->stage_size = tape->capabilities.ctl * tape->tape_block_size;
while (tape->stage_size > 0xffff) {
tape->stage_size = tape->capabilities.ctl * tape->tape_block_size;
}
stage_size = tape->stage_size;
- if (tape->onstream)
- stage_size = 32768 + 512;
tape->pages_per_stage = stage_size / PAGE_SIZE;
if (stage_size % PAGE_SIZE) {
tape->pages_per_stage++;
printk(KERN_ERR "ide-tape: %s: Can't allocate a tape structure\n", drive->name);
goto failed;
}
- if (ide_register_subdriver (drive, &idetape_driver, IDE_SUBDRIVER_VERSION)) {
+ if (ide_register_subdriver(drive, &idetape_driver)) {
printk(KERN_ERR "ide-tape: %s: Failed to register the driver with ide.c\n", drive->name);
kfree(tape);
goto failed;
module_init(idetape_init);
module_exit(idetape_exit);
+MODULE_ALIAS_CHARDEV_MAJOR(IDETAPE_MAJOR);
EXPORT_SYMBOL(do_rw_taskfile);
/*
- * Clean up after success/failure of an explicit taskfile operation.
- */
-void ide_end_taskfile (ide_drive_t *drive, u8 stat, u8 err)
-{
- ide_hwif_t *hwif = HWIF(drive);
- unsigned long flags;
- struct request *rq;
- ide_task_t *args;
- task_ioreg_t command;
-
- spin_lock_irqsave(&ide_lock, flags);
- rq = HWGROUP(drive)->rq;
- spin_unlock_irqrestore(&ide_lock, flags);
- args = (ide_task_t *) rq->special;
-
- command = args->tfRegister[IDE_COMMAND_OFFSET];
-
- if (rq->errors == 0)
- rq->errors = !OK_STAT(stat,READY_STAT,BAD_STAT);
-
- if (args->tf_in_flags.b.data) {
- u16 data = hwif->INW(IDE_DATA_REG);
- args->tfRegister[IDE_DATA_OFFSET] = (data) & 0xFF;
- args->hobRegister[IDE_DATA_OFFSET_HOB] = (data >> 8) & 0xFF;
- }
- args->tfRegister[IDE_ERROR_OFFSET] = err;
- args->tfRegister[IDE_NSECTOR_OFFSET] = hwif->INB(IDE_NSECTOR_REG);
- args->tfRegister[IDE_SECTOR_OFFSET] = hwif->INB(IDE_SECTOR_REG);
- args->tfRegister[IDE_LCYL_OFFSET] = hwif->INB(IDE_LCYL_REG);
- args->tfRegister[IDE_HCYL_OFFSET] = hwif->INB(IDE_HCYL_REG);
- args->tfRegister[IDE_SELECT_OFFSET] = hwif->INB(IDE_SELECT_REG);
- args->tfRegister[IDE_STATUS_OFFSET] = stat;
- if ((drive->id->command_set_2 & 0x0400) &&
- (drive->id->cfs_enable_2 & 0x0400) &&
- (drive->addressing == 1)) {
- hwif->OUTB(drive->ctl|0x80, IDE_CONTROL_REG_HOB);
- args->hobRegister[IDE_FEATURE_OFFSET_HOB] = hwif->INB(IDE_FEATURE_REG);
- args->hobRegister[IDE_NSECTOR_OFFSET_HOB] = hwif->INB(IDE_NSECTOR_REG);
- args->hobRegister[IDE_SECTOR_OFFSET_HOB] = hwif->INB(IDE_SECTOR_REG);
- args->hobRegister[IDE_LCYL_OFFSET_HOB] = hwif->INB(IDE_LCYL_REG);
- args->hobRegister[IDE_HCYL_OFFSET_HOB] = hwif->INB(IDE_HCYL_REG);
- }
-
-#if 0
-/* taskfile_settings_update(drive, args, command); */
-
- if (args->posthandler != NULL)
- args->posthandler(drive, args);
-#endif
-
- spin_lock_irqsave(&ide_lock, flags);
- blkdev_dequeue_request(rq);
- HWGROUP(drive)->rq = NULL;
- end_that_request_last(rq);
- spin_unlock_irqrestore(&ide_lock, flags);
-}
-
-EXPORT_SYMBOL(ide_end_taskfile);
-
-/*
* set_multmode_intr() is invoked on completion of a WIN_SETMULT cmd.
*/
ide_startstop_t set_multmode_intr (ide_drive_t *drive)
*/
int ide_cmd_ioctl (ide_drive_t *drive, unsigned int cmd, unsigned long arg)
{
-#if 1
int err = 0;
u8 args[4], *argbuf = args;
u8 xfer_rate = 0;
if (argsize > 4)
kfree(argbuf);
return err;
-
-#else
-
- int err = -EIO;
- u8 args[4], *argbuf = args;
- u8 xfer_rate = 0;
- int argsize = 0;
- ide_task_t tfargs;
-
- if (NULL == (void *) arg) {
- struct request rq;
- ide_init_drive_cmd(&rq);
- return ide_do_drive_cmd(drive, &rq, ide_wait);
- }
-
- if (copy_from_user(args, (void *)arg, 4))
- return -EFAULT;
-
- memset(&tfargs, 0, sizeof(ide_task_t));
- tfargs.tfRegister[IDE_FEATURE_OFFSET] = args[2];
- tfargs.tfRegister[IDE_NSECTOR_OFFSET] = args[3];
- tfargs.tfRegister[IDE_SECTOR_OFFSET] = args[1];
- tfargs.tfRegister[IDE_LCYL_OFFSET] = 0x00;
- tfargs.tfRegister[IDE_HCYL_OFFSET] = 0x00;
- tfargs.tfRegister[IDE_SELECT_OFFSET] = 0x00;
- tfargs.tfRegister[IDE_COMMAND_OFFSET] = args[0];
-
- if (args[3]) {
- argsize = (SECTOR_WORDS * 4 * args[3]);
- argbuf = kmalloc(argsize, GFP_KERNEL);
- if (argbuf == NULL)
- return -ENOMEM;
- }
-
- if (set_transfer(drive, &tfargs)) {
- xfer_rate = args[1];
- if (ide_ata66_check(drive, &tfargs))
- goto abort;
- }
-
- tfargs.command_type = ide_cmd_type_parser(&tfargs);
- err = ide_raw_taskfile(drive, &tfargs, argbuf);
-
- if (!err && xfer_rate) {
- /* active-retuning-calls future */
- ide_set_xfer_rate(driver, xfer_rate);
- ide_driveid_update(drive);
- }
-abort:
- args[0] = tfargs.tfRegister[IDE_COMMAND_OFFSET];
- args[1] = tfargs.tfRegister[IDE_FEATURE_OFFSET];
- args[2] = tfargs.tfRegister[IDE_NSECTOR_OFFSET];
- args[3] = 0;
-
- if (copy_to_user((void *)arg, argbuf, 4))
- err = -EFAULT;
- if (argbuf != NULL) {
- if (copy_to_user((void *)arg, argbuf + 4, argsize))
- err = -EFAULT;
- kfree(argbuf);
- }
- return err;
-
-#endif
}
EXPORT_SYMBOL(ide_cmd_ioctl);
};
#ifdef CONFIG_PROC_FS
+struct proc_dir_entry *proc_ide_root;
+
ide_proc_entry_t generic_subdriver_entries[] = {
{ "capacity", S_IFREG|S_IRUGO, proc_ide_read_capacity, NULL },
{ NULL, 0, NULL, NULL }
drive = &hwif->drives[unit];
if (!drive->present)
continue;
- if (drive->usage)
- goto abort;
- if (DRIVER(drive)->shutdown(drive))
+ if (drive->usage || DRIVER(drive)->busy)
goto abort;
+ drive->dead = 1;
}
hwif->present = 0;
ide_add_setting(drive, "keepsettings", SETTING_RW, HDIO_GET_KEEPSETTINGS, HDIO_SET_KEEPSETTINGS, TYPE_BYTE, 0, 1, 1, 1, &drive->keep_settings, NULL);
ide_add_setting(drive, "nice1", SETTING_RW, -1, -1, TYPE_BYTE, 0, 1, 1, 1, &drive->nice1, NULL);
ide_add_setting(drive, "pio_mode", SETTING_WRITE, -1, HDIO_SET_PIO_MODE, TYPE_BYTE, 0, 255, 1, 1, NULL, set_pio_mode);
- ide_add_setting(drive, "slow", SETTING_RW, -1, -1, TYPE_BYTE, 0, 1, 1, 1, &drive->slow, NULL);
ide_add_setting(drive, "unmaskirq", drive->no_unmask ? SETTING_READ : SETTING_RW, HDIO_GET_UNMASKINTR, HDIO_SET_UNMASKINTR, TYPE_BYTE, 0, 1, 1, 1, &drive->unmask, NULL);
ide_add_setting(drive, "using_dma", SETTING_RW, HDIO_GET_DMA, HDIO_SET_DMA, TYPE_BYTE, 0, 1, 1, 1, &drive->using_dma, set_using_dma);
ide_add_setting(drive, "init_speed", SETTING_RW, -1, -1, TYPE_BYTE, 0, 70, 1, 1, &drive->init_speed, NULL);
/*
* ide_setup() gets called VERY EARLY during initialization,
- * to handle kernel "command line" strings beginning with "hdx="
- * or "ide". Here is the complete set currently supported:
- *
- * "hdx=" is recognized for all "x" from "a" to "h", such as "hdc".
- * "idex=" is recognized for all "x" from "0" to "3", such as "ide1".
- *
- * "hdx=noprobe" : drive may be present, but do not probe for it
- * "hdx=none" : drive is NOT present, ignore cmos and do not probe
- * "hdx=nowerr" : ignore the WRERR_STAT bit on this drive
- * "hdx=cdrom" : drive is present, and is a cdrom drive
- * "hdx=cyl,head,sect" : disk drive is present, with specified geometry
- * "hdx=remap63" : add 63 to all sector numbers (for OnTrack DM)
- * "hdx=remap" : remap 0->1 (for EZDrive)
- * "hdx=autotune" : driver will attempt to tune interface speed
- * to the fastest PIO mode supported,
- * if possible for this drive only.
- * Not fully supported by all chipset types,
- * and quite likely to cause trouble with
- * older/odd IDE drives.
- * "hdx=slow" : insert a huge pause after each access to the data
- * port. Should be used only as a last resort.
- *
- * "hdx=swapdata" : when the drive is a disk, byte swap all data
- * "hdx=bswap" : same as above..........
- * "hdxlun=xx" : set the drive last logical unit.
- * "hdx=flash" : allows for more than one ata_flash disk to be
- * registered. In most cases, only one device
- * will be present.
- * "hdx=scsi" : the return of the ide-scsi flag, this is useful for
- * allowing ide-floppy, ide-tape, and ide-cdrom|writers
- * to use ide-scsi emulation on a device specific option.
- * "idebus=xx" : inform IDE driver of VESA/PCI bus speed in MHz,
- * where "xx" is between 20 and 66 inclusive,
- * used when tuning chipset PIO modes.
- * For PCI bus, 25 is correct for a P75 system,
- * 30 is correct for P90,P120,P180 systems,
- * and 33 is used for P100,P133,P166 systems.
- * If in doubt, use idebus=33 for PCI.
- * As for VLB, it is safest to not specify it.
+ * to handle kernel "command line" strings beginning with "hdx=" or "ide".
*
- * "idex=noprobe" : do not attempt to access/use this interface
- * "idex=base" : probe for an interface at the addr specified,
- * where "base" is usually 0x1f0 or 0x170
- * and "ctl" is assumed to be "base"+0x206
- * "idex=base,ctl" : specify both base and ctl
- * "idex=base,ctl,irq" : specify base, ctl, and irq number
- * "idex=autotune" : driver will attempt to tune interface speed
- * to the fastest PIO mode supported,
- * for all drives on this interface.
- * Not fully supported by all chipset types,
- * and quite likely to cause trouble with
- * older/odd IDE drives.
- * "idex=noautotune" : driver will NOT attempt to tune interface speed
- * This is the default for most chipsets,
- * except the cmd640.
- * "idex=serialize" : do not overlap operations on idex and ide(x^1)
- * "idex=four" : four drives on idex and ide(x^1) share same ports
- * "idex=reset" : reset interface before first use
- * "idex=dma" : enable DMA by default on both drives if possible
- * "idex=ata66" : informs the interface that it has an 80c cable
- * for chipsets that are ATA-66 capable, but
- * the ablity to bit test for detection is
- * currently unknown.
- * "ide=reverse" : Formerly called to pci sub-system, but now local.
- *
- * The following are valid ONLY on ide0, (except dc4030)
- * and the defaults for the base,ctl ports must not be altered.
- *
- * "ide0=dtc2278" : probe/support DTC2278 interface
- * "ide0=ht6560b" : probe/support HT6560B interface
- * "ide0=cmd640_vlb" : *REQUIRED* for VLB cards with the CMD640 chip
- * (not for PCI -- automatically detected)
- * "ide0=qd65xx" : probe/support qd65xx interface
- * "ide0=ali14xx" : probe/support ali14xx chipsets (ALI M1439, M1443, M1445)
- * "ide0=umc8672" : probe/support umc8672 chipsets
- * "idex=dc4030" : probe/support Promise DC4030VL interface
- * "ide=doubler" : probe/support IDE doublers on Amiga
+ * Remember to update Documentation/ide.txt if you change something here.
*/
int __init ide_setup (char *s)
{
if (s[0] == 'h' && s[1] == 'd' && s[2] >= 'a' && s[2] <= max_drive) {
const char *hd_words[] = {
"none", "noprobe", "nowerr", "cdrom", "serialize",
- "autotune", "noautotune", "slow", "swapdata", "bswap",
- "flash", "remap", "remap63", "scsi", NULL };
+ "autotune", "noautotune", "minus8", "swapdata", "bswap",
+ "minus11", "remap", "remap63", "scsi", NULL };
unit = s[2] - 'a';
hw = unit / MAX_DRIVES;
unit = unit % MAX_DRIVES;
}
switch (match_parm(&s[3], hd_words, vals, 3)) {
case -1: /* "none" */
- drive->nobios = 1; /* drop into "noprobe" */
case -2: /* "noprobe" */
drive->noprobe = 1;
goto done;
case -7: /* "noautotune" */
drive->autotune = IDE_TUNE_NOAUTO;
goto done;
- case -8: /* "slow" */
- drive->slow = 1;
- goto done;
case -9: /* "swapdata" */
case -10: /* "bswap" */
drive->bswap = 1;
goto done;
- case -11: /* "flash" */
- drive->ata_flash = 1;
- goto done;
case -12: /* "remap" */
drive->remap_0_to_1 = 1;
goto done;
return ide_unregister_subdriver(drive);
}
-/*
- * Check if we can unregister the subdriver. Called with the
- * request lock held.
- */
-
-static int default_shutdown(ide_drive_t *drive)
-{
- if (drive->usage || DRIVER(drive)->busy) {
- return 1;
- }
- drive->dead = 1;
- return 0;
-}
-
-/*
- * Default function to use for the cache flush operation. This
- * must be replaced for disk devices (see ATA specification
- * documents on cache flush and drive suspend rules)
- *
- * If we have no device attached or the device is not writable
- * this handler is sufficient.
- */
-
-static int default_flushcache (ide_drive_t *drive)
-{
- return 0;
-}
-
static ide_startstop_t default_do_request (ide_drive_t *drive, struct request *rq, sector_t block)
{
ide_end_request(drive, 0, 0);
static void setup_driver_defaults (ide_driver_t *d)
{
if (d->cleanup == NULL) d->cleanup = default_cleanup;
- if (d->shutdown == NULL) d->shutdown = default_shutdown;
- if (d->flushcache == NULL) d->flushcache = default_flushcache;
if (d->do_request == NULL) d->do_request = default_do_request;
if (d->end_request == NULL) d->end_request = default_end_request;
if (d->sense == NULL) d->sense = default_sense;
d->start_power_step = default_start_power_step;
}
-int ide_register_subdriver (ide_drive_t *drive, ide_driver_t *driver, int version)
+int ide_register_subdriver(ide_drive_t *drive, ide_driver_t *driver)
{
unsigned long flags;
-
- BUG_ON(drive->driver == NULL);
-
+
+ BUG_ON(!drive->driver);
+
spin_lock_irqsave(&ide_lock, flags);
- if (version != IDE_SUBDRIVER_VERSION || !drive->present ||
- drive->driver != &idedefault_driver || drive->usage || drive->dead) {
+ if (!drive->present || drive->driver != &idedefault_driver ||
+ drive->usage || drive->dead) {
spin_unlock_irqrestore(&ide_lock, flags);
return 1;
}
printk(KERN_ERR "%s: cleanup_module() called while still busy\n", drive->name);
BUG();
}
- /* We must remove proc entries defined in this module.
- Otherwise we oops while accessing these entries */
-#ifdef CONFIG_PROC_FS
- if (drive->proc)
- ide_remove_proc_entries(drive->proc, driver->proc);
-#endif
ata_attach(drive);
}
}
init_ide_data();
+#ifdef CONFIG_PROC_FS
+ proc_ide_root = proc_mkdir("ide", 0);
+#endif
+
#ifdef CONFIG_BLK_DEV_ALI14XX
if (probe_ali14xx)
(void)ali14xx_init();
if (!aec62xx_proc) {
aec62xx_proc = 1;
- ide_pci_register_host_proc(&aec62xx_procs[0]);
+ ide_pci_create_host_proc("aec62xx", aec62xx_get_info);
}
#endif /* DISPLAY_AEC62XX_TIMINGS && CONFIG_PROC_FS */
#define BUSCLOCK(D) \
((struct chipset_bus_clock_list_entry *) pci_get_drvdata((D)))
-#if defined(DISPLAY_AEC62XX_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 aec62xx_proc;
-
-static int aec62xx_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t aec62xx_procs[] = {
- {
- .name = "aec62xx",
- .set = 1,
- .get_info = aec62xx_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_AEC62XX_TIMINGS && CONFIG_PROC_FS */
-
static void init_setup_aec6x80(struct pci_dev *, ide_pci_device_t *);
static void init_setup_aec62xx(struct pci_dev *, ide_pci_device_t *);
static unsigned int init_chipset_aec62xx(struct pci_dev *, const char *);
if (!ali_proc) {
ali_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&ali_procs[0]);
+ ide_pci_create_host_proc("ali", ali_get_info);
}
#endif /* defined(DISPLAY_ALI_TIMINGS) && defined(CONFIG_PROC_FS) */
#define DISPLAY_ALI_TIMINGS
-#if defined(DISPLAY_ALI_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 ali_proc;
-
-static int ali_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t ali_procs[] = {
- {
- .name = "ali",
- .set = 1,
- .get_info = ali_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_ALI_TIMINGS && CONFIG_PROC_FS */
-
static unsigned int init_chipset_ali15x3(struct pci_dev *, const char *);
static void init_hwif_common_ali15x3(ide_hwif_t *);
static void init_hwif_ali15x3(ide_hwif_t *);
#include <linux/stat.h>
#include <linux/proc_fs.h>
+static u8 amd74xx_proc;
+
static unsigned char amd_udma2cyc[] = { 4, 6, 8, 10, 3, 2, 1, 15 };
static unsigned long amd_base;
static struct pci_dev *bmide_dev;
if (!amd74xx_proc) {
amd_base = pci_resource_start(dev, 4);
bmide_dev = dev;
- ide_pci_register_host_proc(&amd74xx_procs[0]);
+ ide_pci_create_host_proc("amd74xx", amd74xx_get_info);
amd74xx_proc = 1;
}
#endif /* DISPLAY_AMD_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_AMD_TIMINGS
-#if defined(DISPLAY_AMD_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 amd74xx_proc;
-
-static int amd74xx_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t amd74xx_procs[] = {
- {
- .name = "amd74xx",
- .set = 1,
- .get_info = amd74xx_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_AMD_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_amd74xx(struct pci_dev *, const char *);
static void init_hwif_amd74xx(ide_hwif_t *);
if (!cmd64x_proc) {
cmd64x_proc = 1;
- ide_pci_register_host_proc(&cmd64x_procs[0]);
+ ide_pci_create_host_proc("cmd64x", cmd64x_get_info);
}
#endif /* DISPLAY_CMD64X_TIMINGS && CONFIG_PROC_FS */
#define UDIDETCR1 0x7B
#define DTPR1 0x7C
-#if defined(DISPLAY_CMD64X_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 cmd64x_proc;
-
-static char * print_cmd64x_get_info(char *, struct pci_dev *, int);
-static int cmd64x_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t cmd64x_procs[] = {
- {
- .name = "cmd64x",
- .set = 1,
- .get_info = cmd64x_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_CMD64X_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_cmd64x(struct pci_dev *, const char *);
static void init_hwif_cmd64x(ide_hwif_t *);
if (!cs5520_proc) {
cs5520_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&cs5520_procs[0]);
+ ide_pci_create_host_proc("cs5520", cs5520_get_info);
}
#endif /* DISPLAY_CS5520_TIMINGS && CONFIG_PROC_FS */
return 0;
#define DISPLAY_CS5520_TIMINGS
-#if defined(DISPLAY_CS5520_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 cs5520_proc;
-
-static int cs5520_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t cs5520_procs[] = {
- {
- .name = "cs5520",
- .set = 1,
- .get_info = cs5520_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_CS5520_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_cs5520(struct pci_dev *, const char *);
static void init_hwif_cs5520(ide_hwif_t *);
static void cs5520_init_setup_dma(struct pci_dev *dev, struct ide_pci_device_s *d, ide_hwif_t *hwif);
if (!cs5530_proc) {
cs5530_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&cs5530_procs[0]);
+ ide_pci_create_host_proc("cs5530", cs5530_get_info);
}
#endif /* DISPLAY_CS5530_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_CS5530_TIMINGS
-#if defined(DISPLAY_CS5530_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 cs5530_proc;
-
-static int cs5530_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t cs5530_procs[] = {
- {
- .name = "cs5530",
- .set = 1,
- .get_info = cs5530_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_CS5530_TIMINGS && CONFIG_PROC_FS */
-
static unsigned int init_chipset_cs5530(struct pci_dev *, const char *);
static void init_hwif_cs5530(ide_hwif_t *);
if (!hpt34x_proc) {
hpt34x_proc = 1;
- ide_pci_register_host_proc(&hpt34x_procs[0]);
+ ide_pci_create_host_proc("hpt34x", hpt34x_get_info);
}
#endif /* DISPLAY_HPT34X_TIMINGS && CONFIG_PROC_FS */
#undef DISPLAY_HPT34X_TIMINGS
-#if defined(DISPLAY_HPT34X_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 hpt34x_proc;
-
-static int hpt34x_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t hpt34x_procs[] = {
- {
- .name = "hpt34x",
- .set = 1,
- .get_info = hpt34x_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_HPT34X_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_hpt34x(struct pci_dev *, const char *);
static void init_hwif_hpt34x(ide_hwif_t *);
if (!hpt366_proc) {
hpt366_proc = 1;
- ide_pci_register_host_proc(&hpt366_procs[0]);
+ ide_pci_create_host_proc("hpt366", hpt366_get_info);
}
#endif /* DISPLAY_HPT366_TIMINGS && CONFIG_PROC_FS */
#define F_LOW_PCI_50 0x2d
#define F_LOW_PCI_66 0x42
-#if defined(DISPLAY_HPT366_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 hpt366_proc;
-
-static int hpt366_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t hpt366_procs[] = {
- {
- .name = "hpt366",
- .set = 1,
- .get_info = hpt366_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_HPT366_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static void init_setup_hpt366(struct pci_dev *, ide_pci_device_t *);
static void init_setup_hpt37x(struct pci_dev *, ide_pci_device_t *);
static void init_setup_hpt374(struct pci_dev *, ide_pci_device_t *);
if (!pdcnew_proc) {
pdcnew_proc = 1;
- ide_pci_register_host_proc(&pdcnew_procs[0]);
+ ide_pci_create_host_proc("pdcnew", pdcnew_get_info);
}
#endif /* DISPLAY_PDC202XX_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_PDC202XX_TIMINGS
-#if defined(DISPLAY_PDC202XX_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 pdcnew_proc;
-
-static int pdcnew_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t pdcnew_procs[] = {
- {
- .name = "pdcnew",
- .set = 1,
- .get_info = pdcnew_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_PDC202XX_TIMINGS && CONFIG_PROC_FS */
-
-
static void init_setup_pdcnew(struct pci_dev *, ide_pci_device_t *);
static void init_setup_pdc20270(struct pci_dev *, ide_pci_device_t *);
static void init_setup_pdc20276(struct pci_dev *dev, ide_pci_device_t *d);
if (!pdc202xx_proc) {
pdc202xx_proc = 1;
- ide_pci_register_host_proc(&pdc202xx_procs[0]);
+ ide_pci_create_host_proc("pdc202xx", pdc202xx_get_info);
}
#endif /* DISPLAY_PDC202XX_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_PDC202XX_TIMINGS
-#if defined(DISPLAY_PDC202XX_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 pdc202xx_proc;
-
-static int pdc202xx_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t pdc202xx_procs[] = {
- {
- .name = "pdc202xx",
- .set = 1,
- .get_info = pdc202xx_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_PDC202XX_TIMINGS && CONFIG_PROC_FS */
-
-
static void init_setup_pdc202ata4(struct pci_dev *dev, ide_pci_device_t *d);
static void init_setup_pdc20265(struct pci_dev *, ide_pci_device_t *);
static void init_setup_pdc202xx(struct pci_dev *, ide_pci_device_t *);
if (!piix_proc) {
piix_proc = 1;
- ide_pci_register_host_proc(&piix_procs[0]);
+ ide_pci_create_host_proc("piix", piix_get_info);
}
#endif /* DISPLAY_PIIX_TIMINGS && CONFIG_PROC_FS */
return 0;
#define DISPLAY_PIIX_TIMINGS
-#if defined(DISPLAY_PIIX_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 piix_proc;
-
-static int piix_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t piix_procs[] = {
- {
- .name = "piix",
- .set = 1,
- .get_info = piix_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_PIIX_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static void init_setup_piix(struct pci_dev *, ide_pci_device_t *);
static unsigned int __devinit init_chipset_piix(struct pci_dev *, const char *);
static void init_hwif_piix(ide_hwif_t *);
if (!bmide_dev) {
sc1200_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&sc1200_procs[0]);
+ ide_pci_create_host_proc("sc1200", sc1200_get_info);
}
#endif /* DISPLAY_SC1200_TIMINGS && CONFIG_PROC_FS */
return 0;
#define DISPLAY_SC1200_TIMINGS
-#if defined(DISPLAY_SC1200_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 sc1200_proc;
-
-static int sc1200_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t sc1200_procs[] = {
- {
- .name = "sc1200",
- .set = 1,
- .get_info = sc1200_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_SC1200_TIMINGS && CONFIG_PROC_FS */
-
static unsigned int init_chipset_sc1200(struct pci_dev *, const char *);
static void init_hwif_sc1200(ide_hwif_t *);
if (!svwks_proc) {
svwks_proc = 1;
- ide_pci_register_host_proc(&svwks_procs[0]);
+ ide_pci_create_host_proc("svwks", svwks_get_info);
}
#endif /* DISPLAY_SVWKS_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_SVWKS_TIMINGS 1
-#if defined(DISPLAY_SVWKS_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 svwks_proc;
-
-static int svwks_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t svwks_procs[] = {
-{
- .name = "svwks",
- .set = 1,
- .get_info = svwks_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_SVWKS_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static void init_setup_svwks(struct pci_dev *, ide_pci_device_t *);
static void init_setup_csb6(struct pci_dev *, ide_pci_device_t *);
static unsigned int init_chipset_svwks(struct pci_dev *, const char *);
if (!siimage_proc) {
siimage_proc = 1;
- ide_pci_register_host_proc(&siimage_procs[0]);
+ ide_pci_create_host_proc("siimage", siimage_get_info);
}
#endif /* DISPLAY_SIIMAGE_TIMINGS && CONFIG_PROC_FS */
}
#define siiprintk(x...)
#endif
-
-#if defined(DISPLAY_SIIMAGE_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static char * print_siimage_get_info(char *, struct pci_dev *, int);
-static int siimage_get_info(char *, char **, off_t, int);
-
-static u8 siimage_proc;
-
-static ide_pci_host_proc_t siimage_procs[] = {
- {
- .name = "siimage",
- .set = 1,
- .get_info = siimage_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_SIIMAGE_TIMINGS && CONFIG_PROC_FS */
-
static unsigned int init_chipset_siimage(struct pci_dev *, const char *);
static void init_iops_siimage(ide_hwif_t *);
static void init_hwif_siimage(ide_hwif_t *);
if (!sis_proc) {
sis_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&sis_procs[0]);
+ ide_pci_create_host_proc("sis", sis_get_info);
}
#endif
}
#define DISPLAY_SIS_TIMINGS
-#if defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 sis_proc;
-
-static int sis_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t sis_procs[] = {
-{
- .name = "sis",
- .set = 1,
- .get_info = sis_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_sis5513(struct pci_dev *, const char *);
static void init_hwif_sis5513(ide_hwif_t *);
if (!slc90e66_proc) {
slc90e66_proc = 1;
bmide_dev = dev;
- ide_pci_register_host_proc(&slc90e66_procs[0]);
+ ide_pci_create_host_proc("slc90e66", slc90e66_get_info);
}
#endif /* DISPLAY_SLC90E66_TIMINGS && CONFIG_PROC_FS */
return 0;
#define SLC90E66_DEBUG_DRIVE_INFO 0
-#if defined(DISPLAY_SLC90E66_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 slc90e66_proc;
-
-static int slc90e66_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t slc90e66_procs[] = {
- {
- .name = "slc90e66",
- .set = 1,
- .get_info = slc90e66_get_info,
- .parent = NULL,
- },
-};
-#endif /* defined(DISPLAY_SLC90E66_TIMINGS) && defined(CONFIG_PROC_FS) */
-
static unsigned int init_chipset_slc90e66(struct pci_dev *, const char *);
static void init_hwif_slc90e66(ide_hwif_t *);
const char *name)
{
#ifdef CONFIG_PROC_FS
- ide_pci_register_host_proc(&triflex_proc);
+ ide_pci_create_host_proc("triflex", triflex_get_info);
#endif
return 0;
}
static unsigned int __devinit init_chipset_triflex(struct pci_dev *, const char *);
static void init_hwif_triflex(ide_hwif_t *);
-#ifdef CONFIG_PROC_FS
-static int triflex_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t triflex_proc = {
- .name = "triflex",
- .set = 1,
- .get_info = triflex_get_info,
-};
-#endif
static ide_pci_device_t triflex_devices[] __devinitdata = {
{
via_base = pci_resource_start(dev, 4);
bmide_dev = dev;
isa_dev = isa;
- ide_pci_register_host_proc(&via_procs[0]);
+ ide_pci_create_host_proc("via", via_get_info);
via_proc = 1;
}
#endif /* DISPLAY_VIA_TIMINGS && CONFIG_PROC_FS */
#define DISPLAY_VIA_TIMINGS
-#if defined(DISPLAY_VIA_TIMINGS) && defined(CONFIG_PROC_FS)
-#include <linux/stat.h>
-#include <linux/proc_fs.h>
-
-static u8 via_proc;
-
-static int via_get_info(char *, char **, off_t, int);
-
-static ide_pci_host_proc_t via_procs[] = {
- {
- .name = "via",
- .set = 1,
- .get_info = via_get_info,
- .parent = NULL,
- },
-};
-#endif /* DISPLAY_VIA_TIMINGS && CONFIG_PROC_FS */
-
static unsigned int init_chipset_via82cxxx(struct pci_dev *, const char *);
static void init_hwif_via82cxxx(ide_hwif_t *);
#define IDE_PMAC_DEBUG
-#define DMA_WAIT_TIMEOUT 100
+#define DMA_WAIT_TIMEOUT 50
typedef struct pmac_ide_hwif {
unsigned long regbase;
dstat = readl(&dma->status);
writel(((RUN|WAKE|DEAD) << 16), &dma->control);
pmac_ide_destroy_dmatable(drive);
- /* verify good dma status */
- return (dstat & (RUN|DEAD|ACTIVE)) != RUN;
+ /* verify good dma status. we don't check for ACTIVE being 0. We should...
+ * in theory, but with ATAPI devices doing buffer underruns, that would
+ * cause us to disable DMA, which isn't what we want
+ */
+ return (dstat & (RUN|DEAD)) != RUN;
}
/*
{
pmac_ide_hwif_t* pmif = (pmac_ide_hwif_t *)HWIF(drive)->hwif_data;
volatile struct dbdma_regs *dma;
- unsigned long status;
+ unsigned long status, timeout;
if (pmif == NULL)
return 0;
* - The dbdma fifo hasn't yet finished flushing to
* to system memory when the disk interrupt occurs.
*
- * The trick here is to increment drive->waiting_for_dma,
- * and return as if no interrupt occurred. If the counter
- * reach a certain timeout value, we then return 1. If
- * we really got the interrupt, it will happen right away
- * again.
- * Apple's solution here may be more elegant. They issue
- * a DMA channel interrupt (a separate irq line) via a DBDMA
- * NOP command just before the STOP, and wait for both the
- * disk and DBDMA interrupts to have completed.
*/
-
+
/* If ACTIVE is cleared, the STOP command has passed and
* transfer is complete.
*/
called while not waiting\n", HWIF(drive)->index);
/* If dbdma didn't execute the STOP command yet, the
- * active bit is still set */
- drive->waiting_for_dma++;
- if (drive->waiting_for_dma >= DMA_WAIT_TIMEOUT) {
- printk(KERN_WARNING "ide%d, timeout waiting \
- for dbdma command stop\n", HWIF(drive)->index);
- return 1;
- }
- udelay(5);
- return 0;
+ * active bit is still set. We consider that we aren't
+ * sharing interrupts (which is hopefully the case with
+ * those controllers) and so we just try to flush the
+ * channel for pending data in the fifo
+ */
+ udelay(1);
+ writel((FLUSH << 16) | FLUSH, &dma->control);
+ timeout = 0;
+ for (;;) {
+ udelay(1);
+ status = readl(&dma->status);
+ if ((status & FLUSH) == 0)
+ break;
+ if (++timeout > 100) {
+ printk(KERN_WARNING "ide%d, ide_dma_test_irq "
+ "timeout flushing channel\n", HWIF(drive)->index);
+ break;
+ }
+ }
+ return 1;
}
static int __pmac
if (request_region(portbase, ACT2000_PORTLEN, "act2000isa")) {
ret = act2000_isa_reset(portbase);
- release_region(portbase, ISA_REGION);
+ release_region(portbase, ISA_REGION);
}
return ret;
}
debugl1(cs, "hdlc_empty_fifo: incoming packet too large");
return;
}
- ptr = (u_int *) p = bcs->hw.hdlc.rcvbuf + bcs->hw.hdlc.rcvidx;
+ p = bcs->hw.hdlc.rcvbuf + bcs->hw.hdlc.rcvidx;
+ ptr = (u_int *)p;
bcs->hw.hdlc.rcvidx += count;
if (cs->subtyp == AVM_FRITZ_PCI) {
outl(idx, cs->hw.avm.cfg_reg + 4);
}
if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
debugl1(cs, "hdlc_fill_fifo %d/%ld", count, bcs->tx_skb->len);
- ptr = (u_int *) p = bcs->tx_skb->data;
+ p = bcs->tx_skb->data;
+ ptr = (u_int *)p;
skb_pull(bcs->tx_skb, count);
bcs->tx_cnt -= count;
bcs->hw.hdlc.count += count;
ic.driver = cs->myid;
ic.command = ISDN_STAT_REDIR;
ic.arg = chan;
- (ulong)(ic.parm.num[0]) = result;
+ ic.parm.num[0] = result;
cs->iif.statcallb(&ic);
} /* stat_redir_result */
/* Allocate memory for FIFOS */
/* Because the HFC-PCI needs a 32K physical alignment, we */
/* need to allocate the double mem and align the address */
- if (!((void *) cs->hw.hfcpci.share_start = kmalloc(65536, GFP_KERNEL))) {
+ if (!(cs->hw.hfcpci.share_start = kmalloc(65536, GFP_KERNEL))) {
printk(KERN_WARNING "HFC-PCI: Error allocating memory for FIFO!\n");
return 0;
}
- (ulong) cs->hw.hfcpci.fifos =
+ cs->hw.hfcpci.fifos = (void *)
(((ulong) cs->hw.hfcpci.share_start) & ~0x7FFF) + 0x8000;
pci_write_config_dword(cs->hw.hfcpci.dev, 0x80, (u_int) virt_to_bus(cs->hw.hfcpci.fifos));
cs->hw.hfcpci.pci_io = ioremap((ulong) cs->hw.hfcpci.pci_io, 256);
spin_unlock_irqrestore(&bcs->aclock, flags);
schedule_event(bcs, B_ACKPENDING);
}
- }
+ }
dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hscx.count = 0;
+ bcs->hw.hscx.count = 0;
bcs->tx_skb = NULL;
- }
+ }
if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
bcs->hw.hscx.count = 0;
set_bit(BC_FLG_BUSY, &bcs->Flag);
return (0);
inf->usage_cnt--; /* new usage count */
- (struct log_data **) file->private_data = &inf->next; /* next structure */
+ file->private_data = &inf->next; /* next structure */
if ((len = strlen(inf->log_start)) <= count) {
if (copy_to_user(buf, inf->log_start, len))
return -EFAULT;
cli();
pd->if_used++;
if (pd->log_head)
- (struct log_data **) filep->private_data = &(pd->log_tail->next);
+ filep->private_data = &pd->log_tail->next;
else
- (struct log_data **) filep->private_data = &(pd->log_head);
+ filep->private_data = &pd->log_head;
restore_flags(flags);
} else { /* simultaneous read/write access forbidden ! */
unlock_kernel();
return -EINVAL;
switch (cmd) {
-
#define PPP_VERSION "2.3.7"
case SIOCGPPPVER:
r = (char *) ifr->ifr_ifru.ifru_data;
Recent tools use a new version of the ioctl interface, only
select this option if you intend using such tools.
+config DM_CRYPT
+ tristate "Crypt target support"
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select CRYPTO
+ ---help---
+ This device-mapper target allows you to create a device that
+ transparently encrypts the data on it. You'll need to activate
+ the ciphers you're going to use in the cryptoapi configuration.
+
+ Information on how to use dm-crypt can be found on
+
+ http://www.saout.de/misc/dm-crypt/
+
+ To compile this code as a module, choose M here: the module will
+ be called dm-crypt.
+
+ If unsure, say N.
+
endmenu
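As the help text notes, dm-crypt devices are assembled through device-mapper tables with the userspace dmsetup tool. An illustrative invocation is sketched below; device names, the sector count, and the hex key are placeholders, and the general table layout is `<start> <length> crypt <cipher> <key> <iv_offset> <device> <offset>`:

```shell
# Illustrative only: requires root, a real block device, and a real key.
echo "0 204800 crypt aes-plain 00112233445566778899aabbccddeeff 0 /dev/sda1 0" | \
	dmsetup create cryptvol
```

The `aes-plain` cipher spec selects AES with the "plain" IV generator implemented by crypt_iv_plain() in the new dm-crypt.c below.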
obj-$(CONFIG_MD_MULTIPATH) += multipath.o
obj-$(CONFIG_BLK_DEV_MD) += md.o
obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
+obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
quiet_cmd_unroll = UNROLL $@
cmd_unroll = $(PERL) $(srctree)/$(src)/unroll.pl $(UNROLL) \
--- /dev/null
+/*
+ * Copyright (C) 2004 Red Hat UK Ltd.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_BIO_LIST_H
+#define DM_BIO_LIST_H
+
+#include <linux/bio.h>
+
+struct bio_list {
+ struct bio *head;
+ struct bio *tail;
+};
+
+static inline void bio_list_init(struct bio_list *bl)
+{
+ bl->head = bl->tail = NULL;
+}
+
+static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
+{
+ bio->bi_next = NULL;
+
+ if (bl->tail)
+ bl->tail->bi_next = bio;
+ else
+ bl->head = bio;
+
+ bl->tail = bio;
+}
+
+static inline void bio_list_merge(struct bio_list *bl, struct bio_list *bl2)
+{
+ if (bl->tail)
+ bl->tail->bi_next = bl2->head;
+ else
+ bl->head = bl2->head;
+
+ bl->tail = bl2->tail;
+}
+
+static inline struct bio *bio_list_pop(struct bio_list *bl)
+{
+ struct bio *bio = bl->head;
+
+ if (bio) {
+ bl->head = bl->head->bi_next;
+ if (!bl->head)
+ bl->tail = NULL;
+
+ bio->bi_next = NULL;
+ }
+
+ return bio;
+}
+
+static inline struct bio *bio_list_get(struct bio_list *bl)
+{
+ struct bio *bio = bl->head;
+
+ bl->head = bl->tail = NULL;
+
+ return bio;
+}
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2003 Christophe Saout <christophe@saout.de>
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/bio.h>
+#include <linux/mempool.h>
+#include <linux/slab.h>
+#include <linux/crypto.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <asm/scatterlist.h>
+
+#include "dm.h"
+
+/*
+ * per bio private data
+ */
+struct crypt_io {
+ struct dm_target *target;
+ struct bio *bio;
+ struct bio *first_clone;
+ struct work_struct work;
+ atomic_t pending;
+ int error;
+};
+
+/*
+ * context holding the current state of a multi-part conversion
+ */
+struct convert_context {
+ struct bio *bio_in;
+ struct bio *bio_out;
+ unsigned int offset_in;
+ unsigned int offset_out;
+ int idx_in;
+ int idx_out;
+ sector_t sector;
+ int write;
+};
+
+/*
+ * Crypt: maps a linear range of a block device
+ * and encrypts / decrypts at the same time.
+ */
+struct crypt_config {
+ struct dm_dev *dev;
+ sector_t start;
+
+ /*
+ * pool for per bio private data and
+ * for encryption buffer pages
+ */
+ mempool_t *io_pool;
+ mempool_t *page_pool;
+
+ /*
+ * crypto related data
+ */
+ struct crypto_tfm *tfm;
+ sector_t iv_offset;
+ int (*iv_generator)(struct crypt_config *cc, u8 *iv, sector_t sector);
+ int iv_size;
+ int key_size;
+ u8 key[0];
+};
+
+#define MIN_IOS 256
+#define MIN_POOL_PAGES 32
+#define MIN_BIO_PAGES 8
+
+static kmem_cache_t *_crypt_io_pool;
+
+/*
+ * Mempool alloc and free functions for the page
+ */
+static void *mempool_alloc_page(int gfp_mask, void *data)
+{
+ return alloc_page(gfp_mask);
+}
+
+static void mempool_free_page(void *page, void *data)
+{
+ __free_page(page);
+}
+
+/*
+ * Different IV generation algorithms
+ */
+static int crypt_iv_plain(struct crypt_config *cc, u8 *iv, sector_t sector)
+{
+ *(u32 *)iv = cpu_to_le32(sector & 0xffffffff);
+ if (cc->iv_size > sizeof(u32) / sizeof(u8))
+ memset(iv + (sizeof(u32) / sizeof(u8)), 0,
+ cc->iv_size - (sizeof(u32) / sizeof(u8)));
+
+ return 0;
+}
+
+static inline int
+crypt_convert_scatterlist(struct crypt_config *cc, struct scatterlist *out,
+ struct scatterlist *in, unsigned int length,
+ int write, sector_t sector)
+{
+ u8 iv[cc->iv_size];
+ int r;
+
+ if (cc->iv_generator) {
+ r = cc->iv_generator(cc, iv, sector);
+ if (r < 0)
+ return r;
+
+ if (write)
+ r = crypto_cipher_encrypt_iv(cc->tfm, out, in, length, iv);
+ else
+ r = crypto_cipher_decrypt_iv(cc->tfm, out, in, length, iv);
+ } else {
+ if (write)
+ r = crypto_cipher_encrypt(cc->tfm, out, in, length);
+ else
+ r = crypto_cipher_decrypt(cc->tfm, out, in, length);
+ }
+
+ return r;
+}
+
+static void
+crypt_convert_init(struct crypt_config *cc, struct convert_context *ctx,
+ struct bio *bio_out, struct bio *bio_in,
+ sector_t sector, int write)
+{
+ ctx->bio_in = bio_in;
+ ctx->bio_out = bio_out;
+ ctx->offset_in = 0;
+ ctx->offset_out = 0;
+ ctx->idx_in = bio_in ? bio_in->bi_idx : 0;
+ ctx->idx_out = bio_out ? bio_out->bi_idx : 0;
+ ctx->sector = sector + cc->iv_offset;
+ ctx->write = write;
+}
+
+/*
+ * Encrypt / decrypt data from one bio to another one (can be the same one)
+ */
+static int crypt_convert(struct crypt_config *cc,
+ struct convert_context *ctx)
+{
+ int r = 0;
+
+ while(ctx->idx_in < ctx->bio_in->bi_vcnt &&
+ ctx->idx_out < ctx->bio_out->bi_vcnt) {
+ struct bio_vec *bv_in = bio_iovec_idx(ctx->bio_in, ctx->idx_in);
+ struct bio_vec *bv_out = bio_iovec_idx(ctx->bio_out, ctx->idx_out);
+ struct scatterlist sg_in = {
+ .page = bv_in->bv_page,
+ .offset = bv_in->bv_offset + ctx->offset_in,
+ .length = 1 << SECTOR_SHIFT
+ };
+ struct scatterlist sg_out = {
+ .page = bv_out->bv_page,
+ .offset = bv_out->bv_offset + ctx->offset_out,
+ .length = 1 << SECTOR_SHIFT
+ };
+
+ ctx->offset_in += sg_in.length;
+ if (ctx->offset_in >= bv_in->bv_len) {
+ ctx->offset_in = 0;
+ ctx->idx_in++;
+ }
+
+ ctx->offset_out += sg_out.length;
+ if (ctx->offset_out >= bv_out->bv_len) {
+ ctx->offset_out = 0;
+ ctx->idx_out++;
+ }
+
+ r = crypt_convert_scatterlist(cc, &sg_out, &sg_in, sg_in.length,
+ ctx->write, ctx->sector);
+ if (r < 0)
+ break;
+
+ ctx->sector++;
+ }
+
+ return r;
+}
+
+/*
+ * Generate a new unfragmented bio with the given size.
+ * This should never violate the device limitations.
+ * May return a smaller bio when running out of pages.
+ */
+static struct bio *
+crypt_alloc_buffer(struct crypt_config *cc, unsigned int size,
+ struct bio *base_bio, int *bio_vec_idx)
+{
+ struct bio *bio;
+ int nr_iovecs = dm_div_up(size, PAGE_SIZE);
+ int gfp_mask = GFP_NOIO | __GFP_HIGHMEM;
+ int flags = current->flags;
+ int i;
+
+ /*
+ * Tell VM to act less aggressively and fail earlier.
+ * This is not necessary but increases throughput.
+ * FIXME: Is this really intelligent?
+ */
+ current->flags &= ~PF_MEMALLOC;
+
+ if (base_bio)
+ bio = bio_clone(base_bio, GFP_NOIO);
+ else
+ bio = bio_alloc(GFP_NOIO, nr_iovecs);
+ if (!bio) {
+ if (flags & PF_MEMALLOC)
+ current->flags |= PF_MEMALLOC;
+ return NULL;
+ }
+
+ /* if the last bio was not complete, continue where that one ended */
+ bio->bi_idx = *bio_vec_idx;
+ bio->bi_vcnt = *bio_vec_idx;
+ bio->bi_size = 0;
+ bio->bi_flags &= ~(1 << BIO_SEG_VALID);
+
+ /* bio->bi_idx pages have already been allocated */
+ size -= bio->bi_idx * PAGE_SIZE;
+
+ for(i = bio->bi_idx; i < nr_iovecs; i++) {
+ struct bio_vec *bv = bio_iovec_idx(bio, i);
+
+ bv->bv_page = mempool_alloc(cc->page_pool, gfp_mask);
+ if (!bv->bv_page)
+ break;
+
+ /*
+ * if additional pages cannot be allocated without waiting,
+ * return a partially allocated bio; the caller will then try
+ * to allocate additional bios while submitting this partial bio
+ */
+ if ((i - bio->bi_idx) == (MIN_BIO_PAGES - 1))
+ gfp_mask = (gfp_mask | __GFP_NOWARN) & ~__GFP_WAIT;
+
+ bv->bv_offset = 0;
+ if (size > PAGE_SIZE)
+ bv->bv_len = PAGE_SIZE;
+ else
+ bv->bv_len = size;
+
+ bio->bi_size += bv->bv_len;
+ bio->bi_vcnt++;
+ size -= bv->bv_len;
+ }
+
+ if (flags & PF_MEMALLOC)
+ current->flags |= PF_MEMALLOC;
+
+ if (!bio->bi_size) {
+ bio_put(bio);
+ return NULL;
+ }
+
+ /*
+ * Remember the last bio_vec allocated to be able
+ * to correctly continue after the splitting.
+ */
+ *bio_vec_idx = bio->bi_vcnt;
+
+ return bio;
+}
+
+static void crypt_free_buffer_pages(struct crypt_config *cc,
+ struct bio *bio, unsigned int bytes)
+{
+ unsigned int start, end;
+ struct bio_vec *bv;
+ int i;
+
+ /*
+ * This is ugly, but Jens Axboe thinks that using bi_idx in the
+ * endio function is too dangerous at the moment, so I calculate the
+ * correct position using bi_vcnt and bi_size.
+ * The bv_offset and bv_len fields might already be modified but we
+ * know that we always allocated whole pages.
+ * A fix to the bi_idx issue in the kernel is in the works, so
+ * we will hopefully be able to revert to the cleaner solution soon.
+ */
+ i = bio->bi_vcnt - 1;
+ bv = bio_iovec_idx(bio, i);
+ end = (i << PAGE_SHIFT) + (bv->bv_offset + bv->bv_len) - bio->bi_size;
+ start = end - bytes;
+
+ start >>= PAGE_SHIFT;
+ if (!bio->bi_size)
+ end = bio->bi_vcnt;
+ else
+ end >>= PAGE_SHIFT;
+
+ for(i = start; i < end; i++) {
+ bv = bio_iovec_idx(bio, i);
+ BUG_ON(!bv->bv_page);
+ mempool_free(bv->bv_page, cc->page_pool);
+ bv->bv_page = NULL;
+ }
+}
+
+/*
+ * One of the bios was finished. Check for completion of
+ * the whole request and correctly clean up the buffer.
+ */
+static void dec_pending(struct crypt_io *io, int error)
+{
+ struct crypt_config *cc = (struct crypt_config *) io->target->private;
+
+ if (error < 0)
+ io->error = error;
+
+ if (!atomic_dec_and_test(&io->pending))
+ return;
+
+ if (io->first_clone)
+ bio_put(io->first_clone);
+
+ bio_endio(io->bio, io->bio->bi_size, io->error);
+
+ mempool_free(io, cc->io_pool);
+}
+
+/*
+ * kcryptd:
+ *
+ * Needed because it would be very unwise to do decryption in an
+ * interrupt context, so bios returning from read requests get
+ * queued here.
+ */
+static struct workqueue_struct *_kcryptd_workqueue;
+
+static void kcryptd_do_work(void *data)
+{
+ struct crypt_io *io = (struct crypt_io *) data;
+ struct crypt_config *cc = (struct crypt_config *) io->target->private;
+ struct convert_context ctx;
+ int r;
+
+ crypt_convert_init(cc, &ctx, io->bio, io->bio,
+ io->bio->bi_sector - io->target->begin, 0);
+ r = crypt_convert(cc, &ctx);
+
+ dec_pending(io, r);
+}
+
+static void kcryptd_queue_io(struct crypt_io *io)
+{
+ INIT_WORK(&io->work, kcryptd_do_work, io);
+ queue_work(_kcryptd_workqueue, &io->work);
+}
+
+/*
+ * Decode key from its hex representation
+ */
+static int crypt_decode_key(u8 *key, char *hex, int size)
+{
+ char buffer[3];
+ char *endp;
+ int i;
+
+ buffer[2] = '\0';
+
+ for(i = 0; i < size; i++) {
+ buffer[0] = *hex++;
+ buffer[1] = *hex++;
+
+ key[i] = (u8)simple_strtoul(buffer, &endp, 16);
+
+ if (endp != &buffer[2])
+ return -EINVAL;
+ }
+
+ if (*hex != '\0')
+ return -EINVAL;
+
+ return 0;
+}
+
+/*
+ * Encode key into its hex representation
+ */
+static void crypt_encode_key(char *hex, u8 *key, int size)
+{
+ int i;
+
+ for(i = 0; i < size; i++) {
+ sprintf(hex, "%02x", *key);
+ hex += 2;
+ key++;
+ }
+}
+
+/*
+ * Construct an encryption mapping:
+ * <cipher> <key> <iv_offset> <dev_path> <start>
+ */
+static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ struct crypt_config *cc;
+ struct crypto_tfm *tfm;
+ char *tmp;
+ char *cipher;
+ char *mode;
+ int crypto_flags;
+ int key_size;
+
+ if (argc != 5) {
+ ti->error = "dm-crypt: Not enough arguments";
+ return -EINVAL;
+ }
+
+ tmp = argv[0];
+ cipher = strsep(&tmp, "-");
+ mode = strsep(&tmp, "-");
+
+ if (tmp)
+ DMWARN("dm-crypt: Unexpected additional cipher options");
+
+ key_size = strlen(argv[1]) >> 1;
+
+ cc = kmalloc(sizeof(*cc) + key_size * sizeof(u8), GFP_KERNEL);
+ if (cc == NULL) {
+ ti->error =
+ "dm-crypt: Cannot allocate transparent encryption context";
+ return -ENOMEM;
+ }
+
+ if (!mode || strcmp(mode, "plain") == 0)
+ cc->iv_generator = crypt_iv_plain;
+ else if (strcmp(mode, "ecb") == 0)
+ cc->iv_generator = NULL;
+ else {
+ ti->error = "dm-crypt: Invalid chaining mode";
+ goto bad1;
+ }
+
+ if (cc->iv_generator)
+ crypto_flags = CRYPTO_TFM_MODE_CBC;
+ else
+ crypto_flags = CRYPTO_TFM_MODE_ECB;
+
+ tfm = crypto_alloc_tfm(cipher, crypto_flags);
+ if (!tfm) {
+ ti->error = "dm-crypt: Error allocating crypto tfm";
+ goto bad1;
+ }
+
+ if (tfm->crt_u.cipher.cit_decrypt_iv && tfm->crt_u.cipher.cit_encrypt_iv)
+ /* at least a 32 bit sector number should fit in our buffer */
+ cc->iv_size = max(crypto_tfm_alg_ivsize(tfm),
+ (unsigned int)(sizeof(u32) / sizeof(u8)));
+ else {
+ cc->iv_size = 0;
+ if (cc->iv_generator) {
+ DMWARN("dm-crypt: Selected cipher does not support IVs");
+ cc->iv_generator = NULL;
+ }
+ }
+
+ cc->io_pool = mempool_create(MIN_IOS, mempool_alloc_slab,
+ mempool_free_slab, _crypt_io_pool);
+ if (!cc->io_pool) {
+ ti->error = "dm-crypt: Cannot allocate crypt io mempool";
+ goto bad2;
+ }
+
+ cc->page_pool = mempool_create(MIN_POOL_PAGES, mempool_alloc_page,
+ mempool_free_page, NULL);
+ if (!cc->page_pool) {
+ ti->error = "dm-crypt: Cannot allocate page mempool";
+ goto bad3;
+ }
+
+ cc->tfm = tfm;
+ cc->key_size = key_size;
+ if ((key_size == 0 && strcmp(argv[1], "-") != 0)
+ || crypt_decode_key(cc->key, argv[1], key_size) < 0) {
+ ti->error = "dm-crypt: Error decoding key";
+ goto bad4;
+ }
+
+ if (tfm->crt_u.cipher.cit_setkey(tfm, cc->key, key_size) < 0) {
+ ti->error = "dm-crypt: Error setting key";
+ goto bad4;
+ }
+
+ if (sscanf(argv[2], SECTOR_FORMAT, &cc->iv_offset) != 1) {
+ ti->error = "dm-crypt: Invalid iv_offset sector";
+ goto bad4;
+ }
+
+ if (sscanf(argv[4], SECTOR_FORMAT, &cc->start) != 1) {
+ ti->error = "dm-crypt: Invalid device sector";
+ goto bad4;
+ }
+
+ if (dm_get_device(ti, argv[3], cc->start, ti->len,
+ dm_table_get_mode(ti->table), &cc->dev)) {
+ ti->error = "dm-crypt: Device lookup failed";
+ goto bad4;
+ }
+
+ ti->private = cc;
+ return 0;
+
+bad4:
+ mempool_destroy(cc->page_pool);
+bad3:
+ mempool_destroy(cc->io_pool);
+bad2:
+ crypto_free_tfm(tfm);
+bad1:
+ kfree(cc);
+ return -EINVAL;
+}
+
+static void crypt_dtr(struct dm_target *ti)
+{
+ struct crypt_config *cc = (struct crypt_config *) ti->private;
+
+ mempool_destroy(cc->page_pool);
+ mempool_destroy(cc->io_pool);
+
+ crypto_free_tfm(cc->tfm);
+ dm_put_device(ti, cc->dev);
+ kfree(cc);
+}
+
+static int crypt_endio(struct bio *bio, unsigned int done, int error)
+{
+ struct crypt_io *io = (struct crypt_io *) bio->bi_private;
+ struct crypt_config *cc = (struct crypt_config *) io->target->private;
+
+ if (bio_data_dir(bio) == WRITE) {
+ /*
+ * free the processed pages, even if
+ * it's only a partially completed write
+ */
+ crypt_free_buffer_pages(cc, bio, done);
+ }
+
+ if (bio->bi_size)
+ return 1;
+
+ bio_put(bio);
+
+ /*
+ * successful reads are decrypted by the worker thread
+ */
+ if ((bio_data_dir(bio) == READ)
+ && bio_flagged(bio, BIO_UPTODATE)) {
+ kcryptd_queue_io(io);
+ return 0;
+ }
+
+ dec_pending(io, error);
+ return error;
+}
+
+static inline struct bio *
+crypt_clone(struct crypt_config *cc, struct crypt_io *io, struct bio *bio,
+ sector_t sector, int *bvec_idx, struct convert_context *ctx)
+{
+ struct bio *clone;
+
+ if (bio_data_dir(bio) == WRITE) {
+ clone = crypt_alloc_buffer(cc, bio->bi_size,
+ io->first_clone, bvec_idx);
+ if (clone) {
+ ctx->bio_out = clone;
+ if (crypt_convert(cc, ctx) < 0) {
+ crypt_free_buffer_pages(cc, clone,
+ clone->bi_size);
+ bio_put(clone);
+ return NULL;
+ }
+ }
+ } else
+ clone = bio_clone(bio, GFP_NOIO);
+
+ if (!clone)
+ return NULL;
+
+ clone->bi_private = io;
+ clone->bi_end_io = crypt_endio;
+ clone->bi_bdev = cc->dev->bdev;
+ clone->bi_sector = cc->start + sector;
+ clone->bi_rw = bio->bi_rw;
+
+ return clone;
+}
+
+static int crypt_map(struct dm_target *ti, struct bio *bio)
+{
+ struct crypt_config *cc = (struct crypt_config *) ti->private;
+ struct crypt_io *io = mempool_alloc(cc->io_pool, GFP_NOIO);
+ struct convert_context ctx;
+ struct bio *clone;
+ unsigned int remaining = bio->bi_size;
+ sector_t sector = bio->bi_sector - ti->begin;
+ int bvec_idx = 0;
+
+ io->target = ti;
+ io->bio = bio;
+ io->first_clone = NULL;
+ io->error = 0;
+ atomic_set(&io->pending, 1); /* hold a reference */
+
+ if (bio_data_dir(bio) == WRITE)
+ crypt_convert_init(cc, &ctx, NULL, bio, sector, 1);
+
+ /*
+ * The allocated buffers can be smaller than the whole bio,
+ * so repeat the whole process until all the data can be handled.
+ */
+ while (remaining) {
+ clone = crypt_clone(cc, io, bio, sector, &bvec_idx, &ctx);
+ if (!clone)
+ goto cleanup;
+
+ if (!io->first_clone) {
+ /*
+ * hold a reference to the first clone, because it
+ * holds the bio_vec array and that can't be freed
+ * before all other clones are released
+ */
+ bio_get(clone);
+ io->first_clone = clone;
+ }
+ atomic_inc(&io->pending);
+
+ remaining -= clone->bi_size;
+ sector += bio_sectors(clone);
+
+ generic_make_request(clone);
+
+ /* out of memory -> run queues */
+ if (remaining)
+ blk_run_queues();
+ }
+
+ /* drop reference, clones could have returned before we reach this */
+ dec_pending(io, 0);
+ return 0;
+
+cleanup:
+ if (io->first_clone) {
+ dec_pending(io, -ENOMEM);
+ return 0;
+ }
+
+ /* if no bio has been dispatched yet, we can directly return the error */
+ mempool_free(io, cc->io_pool);
+ return -ENOMEM;
+}
+
+static int crypt_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned int maxlen)
+{
+ struct crypt_config *cc = (struct crypt_config *) ti->private;
+ char buffer[32];
+ const char *cipher;
+ const char *mode = NULL;
+ int offset;
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ result[0] = '\0';
+ break;
+
+ case STATUSTYPE_TABLE:
+ cipher = crypto_tfm_alg_name(cc->tfm);
+
+ switch(cc->tfm->crt_u.cipher.cit_mode) {
+ case CRYPTO_TFM_MODE_CBC:
+ mode = "plain";
+ break;
+ case CRYPTO_TFM_MODE_ECB:
+ mode = "ecb";
+ break;
+ default:
+ BUG();
+ }
+
+ snprintf(result, maxlen, "%s-%s ", cipher, mode);
+ offset = strlen(result);
+
+ if (cc->key_size > 0) {
+ if ((maxlen - offset) < ((cc->key_size << 1) + 1))
+ return -ENOMEM;
+
+ crypt_encode_key(result + offset, cc->key, cc->key_size);
+ offset += cc->key_size << 1;
+ } else {
+ if (offset >= maxlen)
+ return -ENOMEM;
+ result[offset++] = '-';
+ }
+
+ format_dev_t(buffer, cc->dev->bdev->bd_dev);
+ snprintf(result + offset, maxlen - offset, " " SECTOR_FORMAT
+ " %s " SECTOR_FORMAT, cc->iv_offset,
+ buffer, cc->start);
+ break;
+ }
+ return 0;
+}
+
+static struct target_type crypt_target = {
+ .name = "crypt",
+ .module = THIS_MODULE,
+ .ctr = crypt_ctr,
+ .dtr = crypt_dtr,
+ .map = crypt_map,
+ .status = crypt_status,
+};
+
+static int __init dm_crypt_init(void)
+{
+ int r;
+
+ _crypt_io_pool = kmem_cache_create("dm-crypt_io",
+ sizeof(struct crypt_io),
+ 0, 0, NULL, NULL);
+ if (!_crypt_io_pool)
+ return -ENOMEM;
+
+ _kcryptd_workqueue = create_workqueue("kcryptd");
+ if (!_kcryptd_workqueue) {
+ r = -ENOMEM;
+ DMERR("couldn't create kcryptd");
+ goto bad1;
+ }
+
+ r = dm_register_target(&crypt_target);
+ if (r < 0) {
+ DMERR("crypt: register failed %d", r);
+ goto bad2;
+ }
+
+ return 0;
+
+bad2:
+ destroy_workqueue(_kcryptd_workqueue);
+bad1:
+ kmem_cache_destroy(_crypt_io_pool);
+ return r;
+}
+
+static void __exit dm_crypt_exit(void)
+{
+ int r = dm_unregister_target(&crypt_target);
+
+ if (r < 0)
+ DMERR("crypt: unregister failed %d", r);
+
+ destroy_workqueue(_kcryptd_workqueue);
+ kmem_cache_destroy(_crypt_io_pool);
+}
+
+module_init(dm_crypt_init);
+module_exit(dm_crypt_exit);
+
+MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption");
+MODULE_LICENSE("GPL");
break;
case STATUSTYPE_TABLE:
- offset = snprintf(result, maxlen, "%d " SECTOR_FORMAT,
+ offset = scnprintf(result, maxlen, "%d " SECTOR_FORMAT,
sc->stripes, sc->chunk_mask + 1);
for (i = 0; i < sc->stripes; i++) {
format_dev_t(buffer, sc->stripe[i].dev->bdev->bd_dev);
offset +=
- snprintf(result + offset, maxlen - offset,
+ scnprintf(result + offset, maxlen - offset,
" %s " SECTOR_FORMAT, buffer,
sc->stripe[i].physical_start);
}
return 0;
}
-static void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size)
+void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size)
{
unsigned long size;
void *addr;
int dm_table_create(struct dm_table **result, int mode, unsigned num_targets)
{
- struct dm_table *t = kmalloc(sizeof(*t), GFP_NOIO);
+ struct dm_table *t = kmalloc(sizeof(*t), GFP_KERNEL);
if (!t)
return -ENOMEM;
if (d->bdev)
BUG();
- bdev = open_by_devnum(dev, d->mode, BDEV_RAW);
+ bdev = open_by_devnum(dev, d->mode);
if (IS_ERR(bdev))
return PTR_ERR(bdev);
r = bd_claim(bdev, _claim_ptr);
if (r)
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
else
d->bdev = bdev;
return r;
return;
bd_release(d->bdev);
- blkdev_put(d->bdev, BDEV_RAW);
+ blkdev_put(d->bdev);
d->bdev = NULL;
}
memset(tgt, 0, sizeof(*tgt));
set_default_limits(&tgt->limits);
+ if (!len) {
+ tgt->error = "zero-length target";
+ return -EINVAL;
+ }
+
tgt->type = dm_get_target_type(type);
if (!tgt->type) {
tgt->error = "unknown target type";
}
+EXPORT_SYMBOL(dm_vcalloc);
EXPORT_SYMBOL(dm_get_device);
EXPORT_SYMBOL(dm_put_device);
EXPORT_SYMBOL(dm_table_event);
*/
#include "dm.h"
+#include "dm-bio-list.h"
#include <linux/init.h>
#include <linux/module.h>
atomic_t io_count;
};
-struct deferred_io {
- struct bio *bio;
- struct deferred_io *next;
-};
-
/*
* Bits for the md->flags field.
*/
*/
atomic_t pending;
wait_queue_head_t wait;
- struct deferred_io *deferred;
+ struct bio_list deferred;
/*
* The current mapping.
mempool_free(io, md->io_pool);
}
-static inline struct deferred_io *alloc_deferred(void)
-{
- return kmalloc(sizeof(struct deferred_io), GFP_NOIO);
-}
-
-static inline void free_deferred(struct deferred_io *di)
-{
- kfree(di);
-}
-
/*
* Add the bio to the list of deferred io.
*/
static int queue_io(struct mapped_device *md, struct bio *bio)
{
- struct deferred_io *di;
-
- di = alloc_deferred();
- if (!di)
- return -ENOMEM;
-
down_write(&md->lock);
if (!test_bit(DMF_BLOCK_IO, &md->flags)) {
up_write(&md->lock);
- free_deferred(di);
return 1;
}
- di->bio = bio;
- di->next = md->deferred;
- md->deferred = di;
+ bio_list_add(&md->deferred, bio);
up_write(&md->lock);
return 0; /* deferred successfully */
* interests of getting something for people to use I give
* you this clearly demarcated crap.
*---------------------------------------------------------------*/
-static inline sector_t to_sector(unsigned int bytes)
-{
- return bytes >> SECTOR_SHIFT;
-}
-
-static inline unsigned int to_bytes(sector_t sector)
-{
- return sector << SECTOR_SHIFT;
-}
/*
* Decrements the number of outstanding ios that a bio has been
*/
static inline void dec_pending(struct dm_io *io, int error)
{
- static spinlock_t _uptodate_lock = SPIN_LOCK_UNLOCKED;
- unsigned long flags;
-
- if (error) {
- spin_lock_irqsave(&_uptodate_lock, flags);
+ if (error)
io->error = error;
- spin_unlock_irqrestore(&_uptodate_lock, flags);
- }
if (atomic_dec_and_test(&io->io_count)) {
if (atomic_dec_and_test(&io->md->pending))
clone->bi_idx = idx;
clone->bi_vcnt = idx + bv_count;
clone->bi_size = to_bytes(len);
+ clone->bi_flags &= ~(1 << BIO_SEG_VALID);
return clone;
}
/* get a minor number for the dev */
r = persistent ? specific_minor(minor) : next_free_minor(&minor);
- if (r < 0) {
- kfree(md);
- return NULL;
- }
+ if (r < 0)
+ goto bad1;
memset(md, 0, sizeof(*md));
init_rwsem(&md->lock);
atomic_set(&md->holders, 1);
md->queue = blk_alloc_queue(GFP_KERNEL);
- if (!md->queue) {
- kfree(md);
- return NULL;
- }
+ if (!md->queue)
+ goto bad1;
md->queue->queuedata = md;
blk_queue_make_request(md->queue, dm_request);
md->io_pool = mempool_create(MIN_IOS, mempool_alloc_slab,
mempool_free_slab, _io_cache);
- if (!md->io_pool) {
- free_minor(minor);
- blk_put_queue(md->queue);
- kfree(md);
- return NULL;
- }
+ if (!md->io_pool)
+ goto bad2;
md->disk = alloc_disk(1);
- if (!md->disk) {
- mempool_destroy(md->io_pool);
- free_minor(minor);
- blk_put_queue(md->queue);
- kfree(md);
- return NULL;
- }
+ if (!md->disk)
+ goto bad3;
md->disk->major = _major;
md->disk->first_minor = minor;
init_waitqueue_head(&md->eventq);
return md;
+
+ bad3:
+ mempool_destroy(md->io_pool);
+ bad2:
+ blk_put_queue(md->queue);
+ free_minor(minor);
+ bad1:
+ kfree(md);
+ return NULL;
}
static void free_dev(struct mapped_device *md)
/*
* Requeue the deferred bios by calling generic_make_request.
*/
-static void flush_deferred_io(struct deferred_io *c)
+static void flush_deferred_io(struct bio *c)
{
- struct deferred_io *n;
+ struct bio *n;
while (c) {
- n = c->next;
- generic_make_request(c->bio);
- free_deferred(c);
+ n = c->bi_next;
+ c->bi_next = NULL;
+ generic_make_request(c);
c = n;
}
}
int dm_resume(struct mapped_device *md)
{
- struct deferred_io *def;
+ struct bio *def;
down_write(&md->lock);
if (!md->map ||
dm_table_resume_targets(md->map);
clear_bit(DMF_SUSPENDED, &md->flags);
clear_bit(DMF_BLOCK_IO, &md->flags);
- def = md->deferred;
- md->deferred = NULL;
+ def = bio_list_get(&md->deferred);
up_write(&md->lock);
flush_deferred_io(def);
return dm_round_up(n, size) / size;
}
+static inline sector_t to_sector(unsigned long n)
+{
+ return (n >> 9);
+}
+
+static inline unsigned long to_bytes(sector_t n)
+{
+ return (n << 9);
+}
+
/*
* The device-mapper can be driven through one of two interfaces;
* ioctl or filesystem, depending which patch you have applied.
int dm_stripe_init(void);
void dm_stripe_exit(void);
+void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
+
#endif
#define MAJOR_NR MD_MAJOR
#define MD_DRIVER
+/* 63 partitions with the alternate major number (mdp) */
+#define MdpMinorShift 6
+
#define DEBUG 0
#define dprintk(x...) ((void)(DEBUG && printk(x)))
spin_unlock(&all_mddevs_lock);
}
-static mddev_t * mddev_find(int unit)
+static mddev_t * mddev_find(dev_t unit)
{
mddev_t *mddev, *new = NULL;
retry:
spin_lock(&all_mddevs_lock);
list_for_each_entry(mddev, &all_mddevs, all_mddevs)
- if (mdidx(mddev) == unit) {
+ if (mddev->unit == unit) {
mddev_get(mddev);
spin_unlock(&all_mddevs_lock);
if (new)
memset(new, 0, sizeof(*new));
- new->__minor = unit;
+ new->unit = unit;
+ if (MAJOR(unit) == MD_MAJOR)
+ new->md_minor = MINOR(unit);
+ else
+ new->md_minor = MINOR(unit) >> MdpMinorShift;
+
init_MUTEX(&new->reconfig_sem);
INIT_LIST_HEAD(&new->disks);
INIT_LIST_HEAD(&new->all_mddevs);
sb->level = mddev->level;
sb->size = mddev->size;
sb->raid_disks = mddev->raid_disks;
- sb->md_minor = mddev->__minor;
+ sb->md_minor = mddev->md_minor;
sb->not_persistent = !mddev->persistent;
sb->utime = mddev->utime;
sb->state = 0;
int err = 0;
struct block_device *bdev;
- bdev = open_by_devnum(dev, FMODE_READ|FMODE_WRITE, BDEV_RAW);
+ bdev = open_by_devnum(dev, FMODE_READ|FMODE_WRITE);
if (IS_ERR(bdev))
return PTR_ERR(bdev);
err = bd_claim(bdev, rdev);
if (err) {
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
return err;
}
rdev->bdev = bdev;
if (!bdev)
MD_BUG();
bd_release(bdev);
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
}
void md_autodetect_dev(dev_t dev);
return 1;
}
+static int mdp_major = 0;
static struct kobject *md_probe(dev_t dev, int *part, void *data)
{
static DECLARE_MUTEX(disks_sem);
- int unit = *part;
- mddev_t *mddev = mddev_find(unit);
+ mddev_t *mddev = mddev_find(dev);
struct gendisk *disk;
+ int partitioned = (MAJOR(dev) != MD_MAJOR);
+ int shift = partitioned ? MdpMinorShift : 0;
+ int unit = MINOR(dev) >> shift;
if (!mddev)
return NULL;
mddev_put(mddev);
return NULL;
}
- disk = alloc_disk(1);
+ disk = alloc_disk(1 << shift);
if (!disk) {
up(&disks_sem);
mddev_put(mddev);
return NULL;
}
- disk->major = MD_MAJOR;
- disk->first_minor = mdidx(mddev);
- sprintf(disk->disk_name, "md%d", mdidx(mddev));
+ disk->major = MAJOR(dev);
+ disk->first_minor = unit << shift;
+ if (partitioned)
+ sprintf(disk->disk_name, "md_d%d", unit);
+ else
+ sprintf(disk->disk_name, "md%d", unit);
disk->fops = &md_fops;
disk->private_data = mddev;
disk->queue = mddev->queue;
mdk_rdev_t *rdev;
struct gendisk *disk;
char b[BDEVNAME_SIZE];
- int unit;
if (list_empty(&mddev->disks)) {
MD_BUG();
invalidate_bdev(rdev->bdev, 0);
}
- unit = mdidx(mddev);
- md_probe(0, &unit, NULL);
+ md_probe(mddev->unit, NULL, NULL);
disk = mddev->gendisk;
if (!disk)
return -ENOMEM;
mddev->queue->queuedata = mddev;
mddev->queue->make_request_fn = mddev->pers->make_request;
+ mddev->changed = 1;
return 0;
}
disk = mddev->gendisk;
if (disk)
set_capacity(disk, 0);
+ mddev->changed = 1;
} else
printk(KERN_INFO "md: %s switched to read-only mode.\n",
mdname(mddev));
printk(KERN_INFO "md: autorun ...\n");
while (!list_empty(&pending_raid_disks)) {
+ dev_t dev;
rdev0 = list_entry(pending_raid_disks.next,
mdk_rdev_t, same_set);
* mostly sane superblocks. It's time to allocate the
* mddev.
*/
-
- mddev = mddev_find(rdev0->preferred_minor);
+ if (rdev0->preferred_minor < 0 || rdev0->preferred_minor >= MAX_MD_DEVS) {
+ printk(KERN_INFO "md: unit number in %s is bad: %d\n",
+ bdevname(rdev0->bdev, b), rdev0->preferred_minor);
+ break;
+ }
+ dev = MKDEV(MD_MAJOR, rdev0->preferred_minor);
+ md_probe(dev, NULL, NULL);
+ mddev = mddev_find(dev);
if (!mddev) {
printk(KERN_ERR
"md: cannot allocate memory for md drive.\n");
"md: %s already running, cannot run %s\n",
mdname(mddev), bdevname(rdev0->bdev,b));
mddev_unlock(mddev);
- } else if (rdev0->preferred_minor >= 0 && rdev0->preferred_minor < MAX_MD_DEVS) {
+ } else {
printk(KERN_INFO "md: created %s\n", mdname(mddev));
ITERATE_RDEV_GENERIC(candidates,rdev,tmp) {
list_del_init(&rdev->same_set);
}
autorun_array(mddev);
mddev_unlock(mddev);
- } else
- printk(KERN_WARNING "md: %s had invalid preferred minor %d\n",
- bdevname(rdev->bdev, b), rdev0->preferred_minor);
+ }
/* on success, candidates will be empty, on error
* it won't...
*/
info.size = mddev->size;
info.nr_disks = nr;
info.raid_disks = mddev->raid_disks;
- info.md_minor = mddev->__minor;
+ info.md_minor = mddev->md_minor;
info.not_persistent= !mddev->persistent;
info.utime = mddev->utime;
mddev->level = info->level;
mddev->size = info->size;
mddev->raid_disks = info->raid_disks;
- /* don't set __minor, it is determined by which /dev/md* was
+ /* don't set md_minor, it is determined by which /dev/md* was
 * opened
*/
if (info->state & (1<<MD_SB_CLEAN))
unsigned int cmd, unsigned long arg)
{
char b[BDEVNAME_SIZE];
- unsigned int minor = iminor(inode);
int err = 0;
struct hd_geometry *loc = (struct hd_geometry *) arg;
mddev_t *mddev = NULL;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- if (minor >= MAX_MD_DEVS) {
- MD_BUG();
- return -EINVAL;
- }
-
/*
* Commands dealing with the RAID driver but not any
* particular array:
/* START_ARRAY doesn't need to lock the array as autostart_array
* does the locking, and it could even be a different array
*/
+ static int cnt = 3;
+ if (cnt > 0) {
+ printk(KERN_WARNING
+ "md: %s(pid %d) used deprecated START_ARRAY ioctl. "
+ "This will not be supported beyond 2.6\n",
+ current->comm, current->pid);
+ cnt--;
+ }
err = autostart_array(new_decode_dev(arg));
if (err) {
printk(KERN_WARNING "md: autostart %s failed!\n",
mddev_get(mddev);
mddev_unlock(mddev);
+ check_disk_change(inode->i_bdev);
out:
return err;
}
return 0;
}
+static int md_media_changed(struct gendisk *disk)
+{
+ mddev_t *mddev = disk->private_data;
+
+ return mddev->changed;
+}
+
+static int md_revalidate(struct gendisk *disk)
+{
+ mddev_t *mddev = disk->private_data;
+
+ mddev->changed = 0;
+ return 0;
+}
static struct block_device_operations md_fops =
{
.owner = THIS_MODULE,
.open = md_open,
.release = md_release,
.ioctl = md_ioctl,
+ .media_changed = md_media_changed,
+ .revalidate_disk= md_revalidate,
};
int md_thread(void * arg)
if (register_blkdev(MAJOR_NR, "md"))
return -1;
-
+ if ((mdp_major=register_blkdev(0, "mdp"))<=0) {
+ unregister_blkdev(MAJOR_NR, "md");
+ return -1;
+ }
devfs_mk_dir("md");
blk_register_region(MKDEV(MAJOR_NR, 0), MAX_MD_DEVS, THIS_MODULE,
md_probe, NULL, NULL);
+ blk_register_region(MKDEV(mdp_major, 0), MAX_MD_DEVS<<MdpMinorShift, THIS_MODULE,
+ md_probe, NULL, NULL);
- for (minor=0; minor < MAX_MD_DEVS; ++minor) {
+ for (minor=0; minor < MAX_MD_DEVS; ++minor)
devfs_mk_bdev(MKDEV(MAJOR_NR, minor),
S_IFBLK|S_IRUSR|S_IWUSR,
"md/%d", minor);
- }
+
+ for (minor=0; minor < MAX_MD_DEVS; ++minor)
+ devfs_mk_bdev(MKDEV(mdp_major, minor<<MdpMinorShift),
+ S_IFBLK|S_IRUSR|S_IWUSR,
+ "md/d%d", minor);
+
register_reboot_notifier(&md_notifier);
raid_table_header = register_sysctl_table(raid_root_table, 1);
struct list_head *tmp;
int i;
blk_unregister_region(MKDEV(MAJOR_NR,0), MAX_MD_DEVS);
+ blk_unregister_region(MKDEV(mdp_major,0), MAX_MD_DEVS << MdpMinorShift);
for (i=0; i < MAX_MD_DEVS; i++)
devfs_remove("md/%d", i);
+ for (i=0; i < MAX_MD_DEVS; i++)
+ devfs_remove("md/d%d", i);
+
devfs_remove("md");
unregister_blkdev(MAJOR_NR,"md");
+ unregister_blkdev(mdp_major, "mdp");
unregister_reboot_notifier(&md_notifier);
unregister_sysctl_table(raid_table_header);
remove_proc_entry("mdstat", NULL);
mddev_t *mddev = data;
r1bio_t *r1_bio;
- /* allocate a r1bio with room for raid_disks entries in the write_bios array */
+ /* allocate a r1bio with room for raid_disks entries in the bios array */
r1_bio = kmalloc(sizeof(r1bio_t) + sizeof(struct bio*)*mddev->raid_disks,
gfp_flags);
if (r1_bio)
kfree(r1_bio);
}
-//#define RESYNC_BLOCK_SIZE (64*1024)
-#define RESYNC_BLOCK_SIZE PAGE_SIZE
+#define RESYNC_BLOCK_SIZE (64*1024)
+//#define RESYNC_BLOCK_SIZE PAGE_SIZE
#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
#define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
#define RESYNC_WINDOW (2048*1024)
r1_bio = r1bio_pool_alloc(gfp_flags, conf->mddev);
if (!r1_bio)
return NULL;
- bio = bio_alloc(gfp_flags, RESYNC_PAGES);
- if (!bio)
- goto out_free_r1_bio;
+ /*
+ * Allocate bios: 1 for reading, n-1 for writing
+ */
+ for (j = conf->raid_disks ; j-- ; ) {
+ bio = bio_alloc(gfp_flags, RESYNC_PAGES);
+ if (!bio)
+ goto out_free_bio;
+ r1_bio->bios[j] = bio;
+ }
+ /*
+ * Allocate RESYNC_PAGES data pages and attach them to
+ * the first bio;
+ */
+ bio = r1_bio->bios[0];
for (i = 0; i < RESYNC_PAGES; i++) {
page = alloc_page(gfp_flags);
if (unlikely(!page))
goto out_free_pages;
bio->bi_io_vec[i].bv_page = page;
- bio->bi_io_vec[i].bv_len = PAGE_SIZE;
- bio->bi_io_vec[i].bv_offset = 0;
}
- /*
- * Allocate a single data page for this iovec.
- */
- bio->bi_vcnt = RESYNC_PAGES;
- bio->bi_idx = 0;
- bio->bi_size = RESYNC_BLOCK_SIZE;
- bio->bi_end_io = NULL;
- atomic_set(&bio->bi_cnt, 1);
-
r1_bio->master_bio = bio;
return r1_bio;
out_free_pages:
- for (j = 0; j < i; j++)
- __free_page(bio->bi_io_vec[j].bv_page);
- bio_put(bio);
-out_free_r1_bio:
+ for ( ; i > 0 ; i--)
+ __free_page(bio->bi_io_vec[i-1].bv_page);
+out_free_bio:
+ while ( j < conf->raid_disks )
+ bio_put(r1_bio->bios[++j]);
r1bio_pool_free(r1_bio, conf->mddev);
return NULL;
}
int i;
conf_t *conf = data;
r1bio_t *r1bio = __r1_bio;
- struct bio *bio = r1bio->master_bio;
+ struct bio *bio = r1bio->bios[0];
- if (atomic_read(&bio->bi_cnt) != 1)
- BUG();
for (i = 0; i < RESYNC_PAGES; i++) {
__free_page(bio->bi_io_vec[i].bv_page);
bio->bi_io_vec[i].bv_page = NULL;
}
- if (atomic_read(&bio->bi_cnt) != 1)
- BUG();
- bio_put(bio);
+ for (i=0 ; i < conf->raid_disks; i++)
+ bio_put(r1bio->bios[i]);
+
r1bio_pool_free(r1bio, conf->mddev);
}
{
int i;
- if (r1_bio->read_bio) {
- if (atomic_read(&r1_bio->read_bio->bi_cnt) != 1)
- BUG();
- bio_put(r1_bio->read_bio);
- r1_bio->read_bio = NULL;
- }
for (i = 0; i < conf->raid_disks; i++) {
- struct bio **bio = r1_bio->write_bios + i;
- if (*bio) {
- if (atomic_read(&(*bio)->bi_cnt) != 1)
- BUG();
+ struct bio **bio = r1_bio->bios + i;
+ if (*bio)
bio_put(*bio);
- }
*bio = NULL;
}
}
static inline void put_buf(r1bio_t *r1_bio)
{
conf_t *conf = mddev_to_conf(r1_bio->mddev);
- struct bio *bio = r1_bio->master_bio;
unsigned long flags;
- /*
- * undo any possible partial request fixup magic:
- */
- if (bio->bi_size != RESYNC_BLOCK_SIZE)
- bio->bi_io_vec[bio->bi_vcnt-1].bv_len = PAGE_SIZE;
- put_all_bios(conf, r1_bio);
mempool_free(r1_bio, conf->r1buf_pool);
spin_lock_irqsave(&conf->resync_lock, flags);
conf_t *conf = mddev_to_conf(r1_bio->mddev);
conf->mirrors[disk].head_position =
- r1_bio->sector + (r1_bio->master_bio->bi_size >> 9);
+ r1_bio->sector + (r1_bio->sectors);
}
-static int raid1_end_request(struct bio *bio, unsigned int bytes_done, int error)
+static int raid1_end_read_request(struct bio *bio, unsigned int bytes_done, int error)
{
int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
r1bio_t * r1_bio = (r1bio_t *)(bio->bi_private);
if (bio->bi_size)
return 1;
- if (r1_bio->cmd == READ || r1_bio->cmd == READA)
- mirror = r1_bio->read_disk;
- else {
- for (mirror = 0; mirror < conf->raid_disks; mirror++)
- if (r1_bio->write_bios[mirror] == bio)
- break;
- }
+ mirror = r1_bio->read_disk;
/*
* this branch is our 'one mirror IO has finished' event handler:
*/
set_bit(R1BIO_Uptodate, &r1_bio->state);
update_head_pos(mirror, r1_bio);
- if ((r1_bio->cmd == READ) || (r1_bio->cmd == READA)) {
- if (!r1_bio->read_bio)
- BUG();
+
+ /*
+ * we have only one bio on the read side
+ */
+ if (uptodate)
+ raid_end_bio_io(r1_bio);
+ else {
/*
- * we have only one bio on the read side
+ * oops, read error:
*/
- if (uptodate)
- raid_end_bio_io(r1_bio);
- else {
- /*
- * oops, read error:
- */
- char b[BDEVNAME_SIZE];
- printk(KERN_ERR "raid1: %s: rescheduling sector %llu\n",
- bdevname(conf->mirrors[mirror].rdev->bdev,b), (unsigned long long)r1_bio->sector);
- reschedule_retry(r1_bio);
- }
- } else {
+ char b[BDEVNAME_SIZE];
+ printk(KERN_ERR "raid1: %s: rescheduling sector %llu\n",
+ bdevname(conf->mirrors[mirror].rdev->bdev,b), (unsigned long long)r1_bio->sector);
+ reschedule_retry(r1_bio);
+ }
- if (r1_bio->read_bio)
- BUG();
+ atomic_dec(&conf->mirrors[mirror].rdev->nr_pending);
+ return 0;
+}
+
+static int raid1_end_write_request(struct bio *bio, unsigned int bytes_done, int error)
+{
+ int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
+ r1bio_t * r1_bio = (r1bio_t *)(bio->bi_private);
+ int mirror;
+ conf_t *conf = mddev_to_conf(r1_bio->mddev);
+
+ if (bio->bi_size)
+ return 1;
+
+ for (mirror = 0; mirror < conf->raid_disks; mirror++)
+ if (r1_bio->bios[mirror] == bio)
+ break;
+
+ /*
+ * this branch is our 'one mirror IO has finished' event handler:
+ */
+ if (!uptodate)
+ md_error(r1_bio->mddev, conf->mirrors[mirror].rdev);
+ else
/*
- * WRITE:
+ * Set R1BIO_Uptodate in our master bio, so that
+	 * we will return a good error code to the higher
+ * levels even if IO on some other mirrored buffer fails.
*
- * Let's see if all mirrored write operations have finished
- * already.
+ * The 'master' represents the composite IO operation to
+ * user-side. So if something waits for IO, then it will
+ * wait for the 'master' bio.
*/
- if (atomic_dec_and_test(&r1_bio->remaining)) {
- md_write_end(r1_bio->mddev);
- raid_end_bio_io(r1_bio);
- }
+ set_bit(R1BIO_Uptodate, &r1_bio->state);
+
+ update_head_pos(mirror, r1_bio);
+
+	/*
+	 * Let's see if all mirrored write operations have finished
+ * already.
+ */
+ if (atomic_dec_and_test(&r1_bio->remaining)) {
+ md_write_end(r1_bio->mddev);
+ raid_end_bio_io(r1_bio);
}
+
atomic_dec(&conf->mirrors[mirror].rdev->nr_pending);
return 0;
}
+
/*
* This routine returns the disk from which the requested read should
* be done. There is a per-array 'next expected sequential IO' sector
r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
r1_bio->master_bio = bio;
+ r1_bio->sectors = bio->bi_size >> 9;
r1_bio->mddev = mddev;
r1_bio->sector = bio->bi_sector;
- r1_bio->cmd = bio_data_dir(bio);
- if (r1_bio->cmd == READ) {
+ if (bio_data_dir(bio) == READ) {
/*
* read balancing logic:
*/
mirror = conf->mirrors + read_balance(conf, bio, r1_bio);
read_bio = bio_clone(bio, GFP_NOIO);
- if (r1_bio->read_bio)
- BUG();
- r1_bio->read_bio = read_bio;
+
+ r1_bio->bios[r1_bio->read_disk] = read_bio;
read_bio->bi_sector = r1_bio->sector + mirror->rdev->data_offset;
read_bio->bi_bdev = mirror->rdev->bdev;
- read_bio->bi_end_io = raid1_end_request;
- read_bio->bi_rw = r1_bio->cmd;
+ read_bio->bi_end_io = raid1_end_read_request;
+ read_bio->bi_rw = READ;
read_bio->bi_private = r1_bio;
generic_make_request(read_bio);
*/
/* first select target devices under spinlock and
* inc refcount on their rdev. Record them by setting
- * write_bios[x] to bio
+ * bios[x] to bio
*/
spin_lock_irq(&conf->device_lock);
for (i = 0; i < disks; i++) {
if (conf->mirrors[i].rdev &&
!conf->mirrors[i].rdev->faulty) {
atomic_inc(&conf->mirrors[i].rdev->nr_pending);
- r1_bio->write_bios[i] = bio;
+ r1_bio->bios[i] = bio;
} else
- r1_bio->write_bios[i] = NULL;
+ r1_bio->bios[i] = NULL;
}
spin_unlock_irq(&conf->device_lock);
md_write_start(mddev);
for (i = 0; i < disks; i++) {
struct bio *mbio;
- if (!r1_bio->write_bios[i])
+ if (!r1_bio->bios[i])
continue;
mbio = bio_clone(bio, GFP_NOIO);
- r1_bio->write_bios[i] = mbio;
+ r1_bio->bios[i] = mbio;
mbio->bi_sector = r1_bio->sector + conf->mirrors[i].rdev->data_offset;
mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
- mbio->bi_end_io = raid1_end_request;
- mbio->bi_rw = r1_bio->cmd;
+ mbio->bi_end_io = raid1_end_write_request;
+ mbio->bi_rw = WRITE;
mbio->bi_private = r1_bio;
atomic_inc(&r1_bio->remaining);
if (bio->bi_size)
return 1;
- if (r1_bio->read_bio != bio)
+ if (r1_bio->bios[r1_bio->read_disk] != bio)
BUG();
update_head_pos(r1_bio->read_disk, r1_bio);
/*
return 1;
for (i = 0; i < conf->raid_disks; i++)
- if (r1_bio->write_bios[i] == bio) {
+ if (r1_bio->bios[i] == bio) {
mirror = i;
break;
}
update_head_pos(mirror, r1_bio);
if (atomic_dec_and_test(&r1_bio->remaining)) {
- md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9, uptodate);
+ md_done_sync(mddev, r1_bio->sectors, uptodate);
put_buf(r1_bio);
}
atomic_dec(&conf->mirrors[mirror].rdev->nr_pending);
conf_t *conf = mddev_to_conf(mddev);
int i;
int disks = conf->raid_disks;
- struct bio *bio, *mbio;
+ struct bio *bio, *wbio;
- bio = r1_bio->master_bio;
+ bio = r1_bio->bios[r1_bio->read_disk];
/*
- * have to allocate lots of bio structures and
* schedule writes
*/
if (!test_bit(R1BIO_Uptodate, &r1_bio->state)) {
" for block %llu\n",
bdevname(bio->bi_bdev,b),
(unsigned long long)r1_bio->sector);
- md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9, 0);
+ md_done_sync(mddev, r1_bio->sectors, 0);
put_buf(r1_bio);
return;
}
- spin_lock_irq(&conf->device_lock);
- for (i = 0; i < disks ; i++) {
- r1_bio->write_bios[i] = NULL;
- if (!conf->mirrors[i].rdev ||
- conf->mirrors[i].rdev->faulty)
- continue;
- if (conf->mirrors[i].rdev->bdev == bio->bi_bdev)
- /*
- * we read from here, no need to write
- */
- continue;
- if (conf->mirrors[i].rdev->in_sync &&
- r1_bio->sector + (bio->bi_size>>9) <= mddev->recovery_cp)
- /*
- * don't need to write this we are just rebuilding
- */
- continue;
- atomic_inc(&conf->mirrors[i].rdev->nr_pending);
- r1_bio->write_bios[i] = bio;
- }
- spin_unlock_irq(&conf->device_lock);
-
atomic_set(&r1_bio->remaining, 1);
- for (i = disks; i-- ; ) {
- if (!r1_bio->write_bios[i])
+ for (i = 0; i < disks ; i++) {
+ wbio = r1_bio->bios[i];
+ if (wbio->bi_end_io != end_sync_write)
continue;
- mbio = bio_clone(bio, GFP_NOIO);
- r1_bio->write_bios[i] = mbio;
- mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
- mbio->bi_sector = r1_bio->sector + conf->mirrors[i].rdev->data_offset;
- mbio->bi_end_io = end_sync_write;
- mbio->bi_rw = WRITE;
- mbio->bi_private = r1_bio;
+ atomic_inc(&conf->mirrors[i].rdev->nr_pending);
atomic_inc(&r1_bio->remaining);
- md_sync_acct(conf->mirrors[i].rdev, mbio->bi_size >> 9);
- generic_make_request(mbio);
+ md_sync_acct(conf->mirrors[i].rdev, wbio->bi_size >> 9);
+ generic_make_request(wbio);
}
if (atomic_dec_and_test(&r1_bio->remaining)) {
- md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9, 1);
+ md_done_sync(mddev, r1_bio->sectors, 1);
put_buf(r1_bio);
}
}
mddev = r1_bio->mddev;
conf = mddev_to_conf(mddev);
bio = r1_bio->master_bio;
- switch(r1_bio->cmd) {
- case SPECIAL:
+ if (test_bit(R1BIO_IsSync, &r1_bio->state)) {
sync_request_write(mddev, r1_bio);
- break;
- case READ:
- case READA:
+ } else {
if (map(mddev, &rdev) == -1) {
printk(KERN_ALERT "raid1: %s: unrecoverable I/O"
- " read error for block %llu\n",
- bdevname(bio->bi_bdev,b),
- (unsigned long long)r1_bio->sector);
+ " read error for block %llu\n",
+ bdevname(bio->bi_bdev,b),
+ (unsigned long long)r1_bio->sector);
raid_end_bio_io(r1_bio);
- break;
+ } else {
+ printk(KERN_ERR "raid1: %s: redirecting sector %llu to"
+ " another mirror\n",
+ bdevname(rdev->bdev,b),
+ (unsigned long long)r1_bio->sector);
+ bio->bi_bdev = rdev->bdev;
+ bio->bi_sector = r1_bio->sector + rdev->data_offset;
+ bio->bi_rw = READ;
+
+ generic_make_request(bio);
}
- printk(KERN_ERR "raid1: %s: redirecting sector %llu to"
- " another mirror\n",
- bdevname(rdev->bdev,b),
- (unsigned long long)r1_bio->sector);
- bio->bi_bdev = rdev->bdev;
- bio->bi_sector = r1_bio->sector + rdev->data_offset;
- bio->bi_rw = r1_bio->cmd;
-
- generic_make_request(bio);
- break;
}
}
spin_unlock_irqrestore(&retry_list_lock, flags);
conf_t *conf = mddev_to_conf(mddev);
mirror_info_t *mirror;
r1bio_t *r1_bio;
- struct bio *read_bio, *bio;
+ struct bio *bio;
sector_t max_sector, nr_sectors;
- int disk, partial;
+ int disk;
+ int i;
if (!conf->r1buf_pool)
if (init_resync(conf))
r1_bio->mddev = mddev;
r1_bio->sector = sector_nr;
- r1_bio->cmd = SPECIAL;
+ set_bit(R1BIO_IsSync, &r1_bio->state);
r1_bio->read_disk = disk;
- bio = r1_bio->master_bio;
- nr_sectors = RESYNC_BLOCK_SIZE >> 9;
- if (max_sector - sector_nr < nr_sectors)
- nr_sectors = max_sector - sector_nr;
- bio->bi_size = nr_sectors << 9;
- bio->bi_vcnt = (bio->bi_size + PAGE_SIZE-1) / PAGE_SIZE;
- /*
- * Is there a partial page at the end of the request?
- */
- partial = bio->bi_size % PAGE_SIZE;
- if (partial)
- bio->bi_io_vec[bio->bi_vcnt-1].bv_len = partial;
-
-
- read_bio = bio_clone(r1_bio->master_bio, GFP_NOIO);
-
- read_bio->bi_sector = sector_nr + mirror->rdev->data_offset;
- read_bio->bi_bdev = mirror->rdev->bdev;
- read_bio->bi_end_io = end_sync_read;
- read_bio->bi_rw = READ;
- read_bio->bi_private = r1_bio;
-
- if (r1_bio->read_bio)
- BUG();
- r1_bio->read_bio = read_bio;
+ for (i=0; i < conf->raid_disks; i++) {
+ bio = r1_bio->bios[i];
+
+ /* take from bio_init */
+ bio->bi_next = NULL;
+ bio->bi_flags |= 1 << BIO_UPTODATE;
+ bio->bi_rw = 0;
+ bio->bi_vcnt = 0;
+ bio->bi_idx = 0;
+ bio->bi_phys_segments = 0;
+ bio->bi_hw_segments = 0;
+ bio->bi_size = 0;
+ bio->bi_end_io = NULL;
+ bio->bi_private = NULL;
+
+ if (i == disk) {
+ bio->bi_rw = READ;
+ bio->bi_end_io = end_sync_read;
+ } else if (conf->mirrors[i].rdev &&
+ !conf->mirrors[i].rdev->faulty &&
+ (!conf->mirrors[i].rdev->in_sync ||
+ sector_nr + RESYNC_SECTORS > mddev->recovery_cp)) {
+ bio->bi_rw = WRITE;
+ bio->bi_end_io = end_sync_write;
+ } else
+ continue;
+ bio->bi_sector = sector_nr + conf->mirrors[i].rdev->data_offset;
+ bio->bi_bdev = conf->mirrors[i].rdev->bdev;
+ bio->bi_private = r1_bio;
+ }
+ nr_sectors = 0;
+ do {
+ struct page *page;
+ int len = PAGE_SIZE;
+ if (sector_nr + (len>>9) > max_sector)
+ len = (max_sector - sector_nr) << 9;
+ if (len == 0)
+ break;
+ for (i=0 ; i < conf->raid_disks; i++) {
+ bio = r1_bio->bios[i];
+ if (bio->bi_end_io) {
+ page = r1_bio->bios[0]->bi_io_vec[bio->bi_vcnt].bv_page;
+ if (bio_add_page(bio, page, len, 0) == 0) {
+ /* stop here */
+ r1_bio->bios[0]->bi_io_vec[bio->bi_vcnt].bv_page = page;
+ while (i > 0) {
+ i--;
+ bio = r1_bio->bios[i];
+ if (bio->bi_end_io==NULL) continue;
+ /* remove last page from this bio */
+ bio->bi_vcnt--;
+ bio->bi_size -= len;
+ bio->bi_flags &= ~(1<< BIO_SEG_VALID);
+ }
+ goto bio_full;
+ }
+ }
+ }
+ nr_sectors += len>>9;
+ sector_nr += len>>9;
+ } while (r1_bio->bios[disk]->bi_vcnt < RESYNC_PAGES);
+ bio_full:
+ bio = r1_bio->bios[disk];
+ r1_bio->sectors = nr_sectors;
md_sync_acct(mirror->rdev, nr_sectors);
- generic_make_request(read_bio);
+ generic_make_request(bio);
return nr_sectors;
}
kmem_cache_t *sc;
int devs = conf->raid_disks;
- sprintf(conf->cache_name, "md/raid5-%d", conf->mddev->__minor);
+ sprintf(conf->cache_name, "raid5/%s", mdname(conf->mddev));
sc = kmem_cache_create(conf->cache_name,
sizeof(struct stripe_head)+(devs-1)*sizeof(struct r5dev),
kmem_cache_t *sc;
int devs = conf->raid_disks;
- sprintf(conf->cache_name, "md/raid6-%d", conf->mddev->__minor);
+ sprintf(conf->cache_name, "raid6/%s", mdname(conf->mddev));
sc = kmem_cache_create(conf->cache_name,
sizeof(struct stripe_head)+(devs-1)*sizeof(struct r5dev),
Please report problems regarding this driver to the LinuxDVB
mailing list.
- You might want add the following lines to your /etc/modules.conf:
+ You might want to add the following lines to your /etc/modprobe.conf:
alias char-major-250 dvb
alias dvb dvb-ttpci
- below dvb-ttpci alps_bsru6 alps_bsrv2 \
+ install dvb-ttpci /sbin/modprobe --first-time -i dvb-ttpci && \
+ /sbin/modprobe -a alps_bsru6 alps_bsrv2 \
grundig_29504-401 grundig_29504-491 \
ves1820
net->base_addr = pid;
if ((result = register_netdev(net)) < 0) {
- kfree(net);
+ dvbnet->device[if_num] = NULL;
+ free_netdev(net);
return result;
}
flush_scheduled_work();
unregister_netdev(net);
dvbnet->state[num]=0;
+ dvbnet->device[num] = NULL;
free_netdev(net);
return 0;
To compile this driver as a module, choose M here: the
module will be called radio-sf16fmi.
+config RADIO_SF16FMR2
+ tristate "SF16FMR2 Radio"
+ depends on ISA && VIDEO_DEV
+ ---help---
+ Choose Y here if you have one of these FM radio cards.
+
+ In order to control your radio card, you will need to use programs
+ that are compatible with the Video For Linux API. Information on
+ this API and pointers to "v4l" programs may be found on the WWW at
+ <http://roadrunner.swansea.uk.linux.org/v4l.shtml>.
+
+ To compile this driver as a module, choose M here: the
+ module will be called radio-sf16fmr2.
+
config RADIO_TERRATEC
tristate "TerraTec ActiveRadio ISA Standalone"
depends on ISA && VIDEO_DEV
obj-$(CONFIG_RADIO_AZTECH) += radio-aztech.o
obj-$(CONFIG_RADIO_RTRACK2) += radio-rtrack2.o
obj-$(CONFIG_RADIO_SF16FMI) += radio-sf16fmi.o
+obj-$(CONFIG_RADIO_SF16FMR2) += radio-sf16fmr2.o
obj-$(CONFIG_RADIO_CADET) += radio-cadet.o
obj-$(CONFIG_RADIO_TYPHOON) += radio-typhoon.o
obj-$(CONFIG_RADIO_TERRATEC) += radio-terratec.o
--- /dev/null
+/* SF16FMR2 radio driver for Linux radio support
+ * heavily based on fmi driver...
+ * (c) 2000-2002 Ziglio Frediano, freddy77@angelfire.com
+ *
+ * Notes on the hardware
+ *
+ * Frequency control is done digitally -- i.e. out(port,encodefreq(95.8));
+ * No volume control - only mute/unmute - you have to use line volume
+ *
+ * To read the stereo/mono status you must wait 0.1 sec after setting the
+ * frequency with the card unmuted, so the frequency is set on unmute.
+ * Signal strength reporting seems to work only while autoscanning (not implemented)
+ */
+
+#include <linux/module.h> /* Modules */
+#include <linux/init.h> /* Initdata */
+#include <linux/ioport.h> /* check_region, request_region */
+#include <linux/delay.h> /* udelay */
+#include <asm/io.h> /* outb, outb_p */
+#include <asm/uaccess.h> /* copy to/from user */
+#include <linux/videodev.h> /* kernel radio structs */
+#include <asm/semaphore.h>
+
+static struct semaphore lock;
+
+#undef DEBUG
+//#define DEBUG 1
+
+#ifdef DEBUG
+# define debug_print(s) printk s
+#else
+# define debug_print(s)
+#endif
+
+/* this should be static vars for module size */
+struct fmr2_device
+{
+ int port;
+ int curvol; /* 0-65535, if not volume 0 or 65535 */
+ int mute;
+ int stereo; /* card is producing stereo audio */
+ unsigned long curfreq; /* freq in kHz */
+ int card_type;
+ __u32 flags;
+};
+
+static int io = 0x384;
+static int radio_nr = -1;
+
+/* hw precision is 12.5 kHz
+ * Frequencies are only meaningful in steps of 200 units (= 0.0125 MHz);
+ * other bits will be truncated
+ */
+#define RSF16_ENCODE(x) ((x)/200+856)
+#define RSF16_MINFREQ 87*16000
+#define RSF16_MAXFREQ 108*16000
+
+/* from radio-aimslab */
+static void sleep_delay(unsigned long n)
+{
+ unsigned d=n/(1000000U/HZ);
+ if (!d)
+ udelay(n);
+ else
+ {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(d);
+ }
+}
+
+static inline void wait(int n,int port)
+{
+ for (;n;--n) inb(port);
+}
+
+static void outbits(int bits, unsigned int data, int nWait, int port)
+{
+ int bit;
+ for(;--bits>=0;) {
+ bit = (data>>bits) & 1;
+ outb(bit,port);
+ wait(nWait,port);
+ outb(bit|2,port);
+ wait(nWait,port);
+ outb(bit,port);
+ wait(nWait,port);
+ }
+}
+
+static inline void fmr2_mute(int port)
+{
+ outb(0x00, port);
+ wait(4,port);
+}
+
+static inline void fmr2_unmute(int port)
+{
+ outb(0x04, port);
+ wait(4,port);
+}
+
+static inline int fmr2_stereo_mode(int port)
+{
+ int n = inb(port);
+ outb(6,port);
+ inb(port);
+ n = ((n>>3)&1)^1;
+ debug_print((KERN_DEBUG "stereo: %d\n", n));
+ return n;
+}
+
+static int fmr2_product_info(struct fmr2_device *dev)
+{
+ int n = inb(dev->port);
+ n &= 0xC1;
+ if (n == 0)
+ {
+ /* this should support volume set */
+ dev->card_type = 12;
+ return 0;
+ }
+	/* no volume control (my card reports 11) */
+ dev->card_type = (n==128)?11:0;
+ return n;
+}
+
+static inline int fmr2_getsigstr(struct fmr2_device *dev)
+{
+	/* !!! works only while scanning for a frequency */
+ int port = dev->port, res = 0xffff;
+ outb(5,port);
+ wait(4,port);
+ if (!(inb(port)&1)) res = 0;
+ debug_print((KERN_DEBUG "signal: %d\n", res));
+ return res;
+}
+
+/* set frequency and unmute card */
+static int fmr2_setfreq(struct fmr2_device *dev)
+{
+ int port = dev->port;
+ unsigned long freq = dev->curfreq;
+
+ fmr2_mute(port);
+
+ /* 0x42 for mono output
+ * 0x102 forward scanning
+	 * 0x182 forward scanning
+ */
+ outbits(9,0x2,3,port);
+ outbits(16,RSF16_ENCODE(freq),2,port);
+
+ fmr2_unmute(port);
+
+ /* wait 0.11 sec */
+ sleep_delay(110000LU);
+
+	/* NOTE: muting stops the radio, so the
+	   frequency must be set again on unmute */
+ dev->stereo = fmr2_stereo_mode(port);
+ return 0;
+}
+
+/* !!! not tested, on my card this doesn't work !!! */
+static int fmr2_setvolume(struct fmr2_device *dev)
+{
+ int i,a,n, port = dev->port;
+
+ if (dev->card_type != 11) return 1;
+
+ switch( (dev->curvol+(1<<11)) >> 12 )
+ {
+ case 0: case 1: n = 0x21; break;
+ case 2: n = 0x84; break;
+ case 3: n = 0x90; break;
+ case 4: n = 0x104; break;
+ case 5: n = 0x110; break;
+ case 6: n = 0x204; break;
+ case 7: n = 0x210; break;
+ case 8: n = 0x402; break;
+ case 9: n = 0x404; break;
+ default:
+ case 10: n = 0x408; break;
+ case 11: n = 0x410; break;
+ case 12: n = 0x801; break;
+ case 13: n = 0x802; break;
+ case 14: n = 0x804; break;
+ case 15: n = 0x808; break;
+ case 16: n = 0x810; break;
+ }
+ for(i=12;--i>=0;)
+ {
+ a = ((n >> i) & 1) << 6; /* if (a=0) a= 0; else a= 0x40; */
+ outb(a|4, port);
+ wait(4,port);
+ outb(a|0x24, port);
+ wait(4,port);
+ outb(a|4, port);
+ wait(4,port);
+ }
+ for(i=6;--i>=0;)
+ {
+ a = ((0x18 >> i) & 1) << 6;
+ outb(a|4, port);
+ wait(4,port);
+ outb(a|0x24, port);
+ wait(4,port);
+ outb(a|4, port);
+ wait(4,port);
+ }
+ wait(4,port);
+ outb(0x14, port);
+
+ return 0;
+}
+
+static int fmr2_do_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, void *arg)
+{
+ struct video_device *dev = video_devdata(file);
+ struct fmr2_device *fmr2 = dev->priv;
+ debug_print((KERN_DEBUG "freq %ld flags %d vol %d mute %d "
+ "stereo %d type %d\n",
+ fmr2->curfreq, fmr2->flags, fmr2->curvol, fmr2->mute,
+ fmr2->stereo, fmr2->card_type));
+
+ switch(cmd)
+ {
+ case VIDIOCGCAP:
+ {
+ struct video_capability *v = arg;
+ memset(v,0,sizeof(*v));
+ strcpy(v->name, "SF16-FMR2 radio");
+ v->type=VID_TYPE_TUNER;
+ v->channels=1;
+ v->audios=1;
+ return 0;
+ }
+ case VIDIOCGTUNER:
+ {
+ struct video_tuner *v = arg;
+ int mult;
+
+ if(v->tuner) /* Only 1 tuner */
+ return -EINVAL;
+ strcpy(v->name, "FM");
+ mult = (fmr2->flags & VIDEO_TUNER_LOW) ? 1 : 1000;
+ v->rangelow = RSF16_MINFREQ/mult;
+ v->rangehigh = RSF16_MAXFREQ/mult;
+ v->flags = fmr2->flags | VIDEO_AUDIO_MUTABLE;
+ if (fmr2->mute)
+ v->flags |= VIDEO_AUDIO_MUTE;
+ v->mode=VIDEO_MODE_AUTO;
+ down(&lock);
+ v->signal = fmr2_getsigstr(fmr2);
+ up(&lock);
+ return 0;
+ }
+ case VIDIOCSTUNER:
+ {
+ struct video_tuner *v = arg;
+ if (v->tuner!=0)
+ return -EINVAL;
+ fmr2->flags = v->flags & VIDEO_TUNER_LOW;
+ return 0;
+ }
+ case VIDIOCGFREQ:
+ {
+ unsigned long *freq = arg;
+ *freq = fmr2->curfreq;
+ if (!(fmr2->flags & VIDEO_TUNER_LOW))
+ *freq /= 1000;
+ return 0;
+ }
+ case VIDIOCSFREQ:
+ {
+ unsigned long *freq = arg;
+ if (!(fmr2->flags & VIDEO_TUNER_LOW))
+ *freq *= 1000;
+ if ( *freq < RSF16_MINFREQ || *freq > RSF16_MAXFREQ )
+ return -EINVAL;
+		/* rounding in steps of 200 to match the freq
+ * that will be used
+ */
+ fmr2->curfreq = (*freq/200)*200;
+
+ /* set card freq (if not muted) */
+ if (fmr2->curvol && !fmr2->mute)
+ {
+ down(&lock);
+ fmr2_setfreq(fmr2);
+ up(&lock);
+ }
+ return 0;
+ }
+ case VIDIOCGAUDIO:
+ {
+ struct video_audio *v = arg;
+ memset(v,0,sizeof(*v));
+ /* !!! do not return VIDEO_AUDIO_MUTE */
+ v->flags = VIDEO_AUDIO_MUTABLE;
+ strcpy(v->name, "Radio");
+ /* get current stereo mode */
+ v->mode = fmr2->stereo ? VIDEO_SOUND_STEREO: VIDEO_SOUND_MONO;
+ /* volume supported ? */
+ if (fmr2->card_type == 11)
+ {
+ v->flags |= VIDEO_AUDIO_VOLUME;
+ v->step = 1 << 12;
+ v->volume = fmr2->curvol;
+ }
+ debug_print((KERN_DEBUG "Get flags %d vol %d\n", v->flags, v->volume));
+ return 0;
+ }
+ case VIDIOCSAUDIO:
+ {
+ struct video_audio *v = arg;
+ if(v->audio)
+ return -EINVAL;
+ debug_print((KERN_DEBUG "Set flags %d vol %d\n", v->flags, v->volume));
+ /* set volume */
+ if (v->flags & VIDEO_AUDIO_VOLUME)
+ fmr2->curvol = v->volume; /* !!! set with precision */
+ if (fmr2->card_type != 11) fmr2->curvol = 65535;
+ fmr2->mute = 0;
+ if (v->flags & VIDEO_AUDIO_MUTE)
+ fmr2->mute = 1;
+#ifdef DEBUG
+ if (fmr2->curvol && !fmr2->mute)
+ printk(KERN_DEBUG "unmute\n");
+ else
+ printk(KERN_DEBUG "mute\n");
+#endif
+ down(&lock);
+ if (fmr2->curvol && !fmr2->mute)
+ {
+ fmr2_setvolume(fmr2);
+ fmr2_setfreq(fmr2);
+ }
+ else fmr2_mute(fmr2->port);
+ up(&lock);
+ return 0;
+ }
+ case VIDIOCGUNIT:
+ {
+ struct video_unit *v = arg;
+ v->video=VIDEO_NO_UNIT;
+ v->vbi=VIDEO_NO_UNIT;
+ v->radio=dev->minor;
+ v->audio=0; /* How do we find out this??? */
+ v->teletext=VIDEO_NO_UNIT;
+ return 0;
+ }
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+static int fmr2_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+ {
+ return video_usercopy(inode, file, cmd, arg, fmr2_do_ioctl);
+}
+
+static struct fmr2_device fmr2_unit;
+
+static struct file_operations fmr2_fops = {
+ .owner = THIS_MODULE,
+ .open = video_exclusive_open,
+ .release = video_exclusive_release,
+ .ioctl = fmr2_ioctl,
+ .llseek = no_llseek,
+};
+
+static struct video_device fmr2_radio=
+{
+ .owner = THIS_MODULE,
+ .name = "SF16FMR2 radio",
+	.type = VID_TYPE_TUNER,
+ .hardware = VID_HARDWARE_SF16FMR2,
+ .fops = &fmr2_fops,
+};
+
+static int __init fmr2_init(void)
+{
+ fmr2_unit.port = io;
+ fmr2_unit.curvol = 0;
+ fmr2_unit.mute = 0;
+ fmr2_unit.curfreq = 0;
+ fmr2_unit.stereo = 1;
+ fmr2_unit.flags = VIDEO_TUNER_LOW;
+ fmr2_unit.card_type = 0;
+ fmr2_radio.priv = &fmr2_unit;
+
+ init_MUTEX(&lock);
+
+	if (!request_region(io, 2, "sf16fmr2"))
+ {
+ printk(KERN_ERR "fmr2: port 0x%x already in use\n", io);
+ return -EBUSY;
+ }
+
+ if(video_register_device(&fmr2_radio, VFL_TYPE_RADIO, radio_nr)==-1)
+ {
+ release_region(io, 2);
+ return -EINVAL;
+ }
+
+ printk(KERN_INFO "SF16FMR2 radio card driver at 0x%x.\n", io);
+ debug_print((KERN_DEBUG "Mute %d Low %d\n",VIDEO_AUDIO_MUTE,VIDEO_TUNER_LOW));
+ /* mute card - prevents noisy bootups */
+ down(&lock);
+ fmr2_mute(io);
+ fmr2_product_info(&fmr2_unit);
+ up(&lock);
+ debug_print((KERN_DEBUG "card_type %d\n", fmr2_unit.card_type));
+ return 0;
+}
+
+MODULE_AUTHOR("Ziglio Frediano, freddy77@angelfire.com");
+MODULE_DESCRIPTION("A driver for the SF16FMR2 radio.");
+MODULE_LICENSE("GPL");
+
+MODULE_PARM(io, "i");
+MODULE_PARM_DESC(io, "I/O address of the SF16FMR2 card (should be 0x384; if that does not work, try 0x284)");
+MODULE_PARM(radio_nr, "i");
+
+static void __exit fmr2_cleanup_module(void)
+{
+ video_unregister_device(&fmr2_radio);
+ release_region(io,2);
+}
+
+module_init(fmr2_init);
+module_exit(fmr2_cleanup_module);
+
+#ifndef MODULE
+
+static int __init fmr2_setup_io(char *str)
+{
+ get_option(&str, &io);
+ return 1;
+}
+
+__setup("sf16fmr2=", fmr2_setup_io);
+
+#endif
16*157.25,16*454.00,0xa0,0x90,0x30,0x8e,732},
{ "Philips NTSC MK3 (FM1236MK3 or FM1236/F)", Philips, NTSC,
16*160.00,16*442.00,0x01,0x02,0x04,0x8,732},
+
+ { "Philips 4 in 1 (ATI TV Wonder Pro/Conexant)", Philips, NTSC,
+ 16*160.00,16*442.00,0x01,0x02,0x04,0x8e,732},
+ { "Microtune 4049 FM5",Microtune,PAL,
+ 16*141.00,16*464.00,0xa0,0x90,0x30,0x8e,623},
+
};
#define TUNERS ARRAY_SIZE(tuners)
t->radio_freq(c,freq);
}
-static void set_type(struct i2c_client *c, unsigned int type)
+static void set_type(struct i2c_client *c, unsigned int type, char *source)
{
struct tuner *t = i2c_get_clientdata(c);
if (t->type != UNSET) {
- printk("tuner: type already set (%d)\n",t->type);
+ if (t->type != type)
+ printk("tuner: type already set to %d, "
+ "ignoring request for %d\n", t->type, type);
return;
}
if (type >= TUNERS)
return;
t->type = type;
- printk("tuner: type set to %d (%s)\n", t->type,tuners[t->type].name);
+ printk("tuner: type set to %d (%s) by %s\n",
+ t->type,tuners[t->type].name, source);
strlcpy(c->name, tuners[t->type].name, sizeof(c->name));
switch (t->type) {
client_template.adapter = adap;
client_template.addr = addr;
- printk("tuner: chip found @ 0x%x\n", addr<<1);
+ printk("tuner: chip found at addr 0x%x i2c-bus %s\n",
+ addr<<1, adap->name);
if (NULL == (client = kmalloc(sizeof(struct i2c_client), GFP_KERNEL)))
return -ENOMEM;
t->radio_if2 = 10700*1000; // 10.7MHz - FM radio
i2c_attach_client(client);
- if (type < TUNERS) {
- t->type = type;
- printk("tuner: type forced to %d (%s) [insmod]\n",
- t->type,tuners[t->type].name);
- set_type(client,type);
- }
+ if (type < TUNERS)
+ set_type(client, type, "insmod option");
return 0;
}
/* --- configuration --- */
case TUNER_SET_TYPE:
- set_type(client,*iarg);
+ set_type(client,*iarg,client->adapter->name);
break;
case AUDC_SET_RADIO:
if (!t->radio) {
videocodec_buf = (char *) kmalloc(size, GFP_KERNEL);
i = 0;
- i += snprintf(videocodec_buf + i, size - 1,
+ i += scnprintf(videocodec_buf + i, size - 1,
"<S>lave or attached <M>aster name type flags magic ");
- i += snprintf(videocodec_buf + i, size - 1, "(connected as)\n");
+ i += scnprintf(videocodec_buf + i, size -i - 1, "(connected as)\n");
h = codeclist_top;
while (h) {
if (i > (size - LINESIZE))
break; // security check
- i += snprintf(videocodec_buf + i, size,
+ i += scnprintf(videocodec_buf + i, size -i -1,
"S %32s %04x %08lx %08lx (TEMPLATE)\n",
h->codec->name, h->codec->type,
h->codec->flags, h->codec->magic);
while (a) {
if (i > (size - LINESIZE))
break; // security check
- i += snprintf(videocodec_buf + i, size,
+ i += scnprintf(videocodec_buf + i, size -i -1,
"M %32s %04x %08lx %08lx (%s)\n",
a->codec->master_data->name,
a->codec->master_data->type,
SET_MODULE_OWNER(dev);
if (register_netdev(dev) != 0) {
- kfree(dev);
+ free_netdev(dev);
dev = NULL;
}
return dev;
if(dev->blkdev) {
invalidate_inode_pages(dev->blkdev->bd_inode->i_mapping);
- close_bdev_excl(dev->blkdev, BDEV_RAW);
+ close_bdev_excl(dev->blkdev);
}
kfree(dev);
}
#ifdef MODULE
mode = (readonly) ? O_RDONLY : O_RDWR;
- bdev = open_bdev_excl(devname, mode, BDEV_RAW, NULL);
+ bdev = open_bdev_excl(devname, mode, NULL);
#else
mode = (readonly) ? FMODE_READ : FMODE_WRITE;
- bdev = open_by_devnum(name_to_dev_t(devname), mode, BDEV_RAW);
+ bdev = open_by_devnum(name_to_dev_t(devname), mode);
#endif
if(IS_ERR(bdev)) {
err("error: cannot open device %s", devname);
if(MAJOR(bdev->bd_dev) == MTD_BLOCK_MAJOR) {
err("attempting to use an MTD device as a block device");
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
return NULL;
}
dev = kmalloc(sizeof(struct blkmtd_dev), GFP_KERNEL);
if(dev == NULL) {
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
return NULL;
}
nr_parts = parse_mtd_partitions(flash_mtd, probes, &parsed_parts, 0);
-#if CONFIG_MTD_SUPERH_RESERVE
+#ifdef CONFIG_MTD_SUPERH_RESERVE
if (nr_parts <= 0) {
printk(KERN_NOTICE "Using configured partition at 0x%08x.\n",
CONFIG_MTD_SUPERH_RESERVE);
err = el3_common_init(dev);
if (err) {
+ device->driver_data = NULL;
+ free_netdev(dev);
return -ENOMEM;
}
err = el3_common_init(dev);
if (err) {
+ eisa_set_drvdata (edev, NULL);
+ free_netdev(dev);
return err;
}
MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(hw_checksums, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(flow_ctrl, "1-" __MODULE_STRING(8) "i");
+MODULE_PARM(global_enable_wol, "i");
+MODULE_PARM(enable_wol, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(rx_copybreak, "i");
MODULE_PARM(max_interrupt_work, "i");
MODULE_PARM(compaq_ioaddr, "i");
MODULE_PARM_DESC(global_full_duplex, "3c59x: same as full_duplex, but applies to all NICs if options is unset");
MODULE_PARM_DESC(hw_checksums, "3c59x Hardware checksum checking by adapter(s) (0-1)");
MODULE_PARM_DESC(flow_ctrl, "3c59x 802.3x flow control usage (PAUSE only) (0-1)");
+MODULE_PARM_DESC(enable_wol, "3c59x: Turn on Wake-on-LAN for adapter(s) (0-1)");
+MODULE_PARM_DESC(global_enable_wol, "3c59x: same as enable_wol, but applies to all NICs if options is unset");
MODULE_PARM_DESC(rx_copybreak, "3c59x copy breakpoint for copy-only-tiny-frames");
MODULE_PARM_DESC(max_interrupt_work, "3c59x maximum events handled per interrupt");
MODULE_PARM_DESC(compaq_ioaddr, "3c59x PCI I/O base address (Compaq BIOS problem workaround)");
flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */
partner_flow_ctrl:1, /* Partner supports flow control */
has_nway:1,
+ enable_wol:1, /* Wake-on-LAN is enabled */
pm_state_valid:1, /* power_state[] has sane contents */
open:1,
medialock:1,
static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int hw_checksums[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int flow_ctrl[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
+static int enable_wol[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int global_options = -1;
static int global_full_duplex = -1;
+static int global_enable_wol = -1;
/* #define dev_alloc_skb dev_alloc_skb_debug */
vortex_debug = 7;
if (option & 0x4000)
vortex_debug = 2;
+ if (option & 0x0400)
+ vp->enable_wol = 1;
}
print_info = (vortex_debug > 1);
if (global_full_duplex > 0)
vp->full_duplex = 1;
+ if (global_enable_wol > 0)
+ vp->enable_wol = 1;
if (card_idx < MAX_UNITS) {
if (full_duplex[card_idx] > 0)
vp->full_duplex = 1;
if (flow_ctrl[card_idx] > 0)
vp->flow_ctrl = 1;
+ if (enable_wol[card_idx] > 0)
+ vp->enable_wol = 1;
}
vp->force_fd = vp->full_duplex;
dev->set_multicast_list = set_rx_mode;
dev->tx_timeout = vortex_tx_timeout;
dev->watchdog_timeo = (watchdog * HZ) / 1000;
- if (pdev) {
+ if (pdev && vp->enable_wol) {
vp->pm_state_valid = 1;
pci_save_state(VORTEX_PCI(vp), vp->power_state);
acpi_set_WOL(dev);
unsigned int config;
int i;
- if (VORTEX_PCI(vp)) {
+ if (VORTEX_PCI(vp) && vp->enable_wol) {
pci_set_power_state(VORTEX_PCI(vp), 0); /* Go active */
pci_restore_state(VORTEX_PCI(vp), vp->power_state);
}
if (vp->full_bus_master_tx)
outl(0, ioaddr + DownListPtr);
- if (VORTEX_PCI(vp)) {
+ if (VORTEX_PCI(vp) && vp->enable_wol) {
pci_save_state(VORTEX_PCI(vp), vp->power_state);
acpi_set_WOL(dev);
}
/* Should really use issue_and_wait() here */
outw(TotalReset|0x14, dev->base_addr + EL3_CMD);
- if (VORTEX_PCI(vp)) {
+ if (VORTEX_PCI(vp) && vp->enable_wol) {
pci_set_power_state(VORTEX_PCI(vp), 0); /* Go active */
if (vp->pm_state_valid)
pci_restore_state(VORTEX_PCI(vp), vp->power_state);
say N.
config E100
- tristate "EtherExpressPro/100 support (e100, Alternate Intel driver)"
+ tristate "Intel(R) PRO/100+ support"
depends on NET_PCI && PCI
+ select MII
---help---
This driver supports Intel(R) PRO/100 family of adapters, which
includes:
<file:Documentation/networking/net-modules.txt>. The module
will be called e100.
+config E100_NAPI
+ bool "Use Rx Polling (NAPI)"
+ depends on E100
+
config LNE390
tristate "Mylex EISA LNE390A/B support (EXPERIMENTAL)"
depends on NET_PCI && EISA && EXPERIMENTAL
<file:Documentation/Changes>) and you can say N here.
Laptop users should read the Linux Laptop home page at
- <http://www.linux-on-laptops.com/>.
+ <http://www.linux-on-laptops.com/> or
+ Tuxmobil - Linux on Mobile Computers at <http://www.tuxmobil.org/>.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
- Allied Telesyn AT-2970TX/2TX Gigabit Ethernet Adapter
- Allied Telesyn AT-2971SX Gigabit Ethernet Adapter
- Allied Telesyn AT-2971T Gigabit Ethernet Adapter
+ - Belkin Gigabit Desktop Card 10/100/1000Base-T Adapter, Copper RJ-45
- DGE-530T Gigabit Ethernet Adapter
- EG1032 v2 Instant Gigabit Network Adapter
- EG1064 v2 Instant Gigabit Network Adapter
- Marvell 88E8001 Gigabit LOM Ethernet Adapter (Foxconn)
- Marvell 88E8001 Gigabit LOM Ethernet Adapter (Gigabyte)
- Marvell 88E8001 Gigabit LOM Ethernet Adapter (Iwill)
+ - Marvell 88E8050 Gigabit LOM Ethernet Adapter (Intel)
- Marvell RDK-8001 Adapter
- Marvell RDK-8002 Adapter
- Marvell RDK-8003 Adapter
- Marvell RDK-8010 Adapter
- Marvell RDK-8011 Adapter
- Marvell RDK-8012 Adapter
+ - Marvell RDK-8052 Adapter
- Marvell Yukon Gigabit Ethernet 10/100/1000Base-T Adapter (32 bit)
- Marvell Yukon Gigabit Ethernet 10/100/1000Base-T Adapter (64 bit)
- N-Way PCI-Bus Giga-Card 1000/100/10Mbps(L)
obj-$(CONFIG_ISDN) += slhc.o
endif
-obj-$(CONFIG_E100) += e100/
obj-$(CONFIG_E1000) += e1000/
obj-$(CONFIG_IXGB) += ixgb/
obj-$(CONFIG_BONDING) += bonding/
obj-$(CONFIG_NE2K_PCI) += ne2k-pci.o 8390.o
obj-$(CONFIG_PCNET32) += pcnet32.o
obj-$(CONFIG_EEPRO100) += eepro100.o
+obj-$(CONFIG_E100) += e100.o
obj-$(CONFIG_TLAN) += tlan.o
obj-$(CONFIG_EPIC100) += epic100.o
obj-$(CONFIG_SIS190) += sis190.o
break;
}
- if (register_netdev(dev)) {
- printk(KERN_ERR "acenic: device registration failed\n");
- free_netdev(dev);
- continue;
- }
-
switch(pdev->vendor) {
case PCI_VENDOR_ID_ALTEON:
if (pdev->device == PCI_DEVICE_ID_FARALLON_PN9100T) {
continue;
}
+ if (register_netdev(dev)) {
+ printk(KERN_ERR "acenic: device registration failed\n");
+ ace_init_cleanup(dev);
+ free_netdev(dev);
+ continue;
+ }
+
if (ap->pci_using_dac)
dev->features |= NETIF_F_HIGHDMA;
while (root_dev) {
ap = root_dev->priv;
next = ap->next;
+ unregister_netdev(root_dev);
regs = ap->regs;
if (dev->irq)
free_irq(dev->irq, dev);
- unregister_netdev(dev);
iounmap(ap->regs);
}
pcmcia_reset();
+ release_region(IOBASE, 0x20);
+
free_netdev(apne_dev);
}
out1:
cleanup_card(dev);
out:
- kfree(dev);
+ free_netdev(dev);
return ERR_PTR(err);
}
dev->base_addr = 0x220;
dev->irq = IRQ_EBSA110_ETHERNET;
+ ret = -ENODEV;
+ if (!request_region(dev->base_addr, 0x18, dev->name))
+ goto nodev;
+
/*
* Reset the device.
*/
* Check the manufacturer part of the
* ether address.
*/
- ret = -ENODEV;
if (inb(dev->base_addr) != 0x08 ||
inb(dev->base_addr + 2) != 0x00 ||
inb(dev->base_addr + 4) != 0x2b)
- goto nodev;
-
- if (!request_region(dev->base_addr, 0x18, dev->name))
- goto nodev;
+ goto release;
am79c961_banner();
printk(KERN_INFO "%s: ether address ", dev->name);
/*
- * Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
+ * Copyright(c) 1999 - 2004 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* problem on very high Tx traffic load where packets may get dropped
* by the slave.
*
- * 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
*/
int agg_id;
int i;
struct ad_info ad_info;
+ int res = 1;
/* make sure that the slaves list will
* not change during tx
read_lock(&bond->lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
printk(KERN_DEBUG "ERROR: bond_3ad_get_active_agg_info failed\n");
- goto free_out;
+ goto out;
}
slaves_in_agg = ad_info.ports;
if (slaves_in_agg == 0) {
/*the aggregator is empty*/
printk(KERN_DEBUG "ERROR: active aggregator is empty\n");
- goto free_out;
+ goto out;
}
slave_agg_no = (data->h_dest[5]^bond->dev->dev_addr[5]) % slaves_in_agg;
if (slave_agg_no >= 0) {
printk(KERN_ERR DRV_NAME ": Error: Couldn't find a slave to tx on for aggregator ID %d\n", agg_id);
- goto free_out;
+ goto out;
}
start_at = slave;
slave_agg_id = agg->aggregator_identifier;
}
- if (SLAVE_IS_OK(slave) &&
- agg && (slave_agg_id == agg_id)) {
- skb->dev = slave->dev;
- skb->priority = 1;
- dev_queue_xmit(skb);
-
- goto out;
+ if (SLAVE_IS_OK(slave) && agg && (slave_agg_id == agg_id)) {
+ res = bond_dev_queue_xmit(bond, skb, slave->dev);
+ break;
}
}
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
read_unlock(&bond->lock);
return 0;
-
-free_out:
- /* no suitable interface, frame not sent */
- dev_kfree_skb(skb);
- goto out;
}
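The transmit path above selects an aggregator port by XOR-ing the low byte of the destination MAC with the low byte of the bond's own MAC, then reducing modulo the number of slaves in the active aggregator. A standalone sketch of that selection (hypothetical helper name, not part of the driver):

```c
#include <assert.h>

/* Mirror of the 802.3ad tx hash used in bond_3ad_xmit(): XOR the
 * low address bytes, then reduce modulo the slave count. Returns a
 * slave index in [0, slaves_in_agg); caller must ensure
 * slaves_in_agg is non-zero, as the driver does. */
static unsigned int ad_slave_index(unsigned char dest_lsb,
				   unsigned char bond_lsb,
				   unsigned int slaves_in_agg)
{
	return (unsigned int)(dest_lsb ^ bond_lsb) % slaves_in_agg;
}
```

Because only the low address byte feeds the hash, traffic to a single peer always lands on the same slave, which keeps per-flow packet ordering intact.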
int bond_3ad_lacpdu_recv(struct sk_buff *skb, struct net_device *dev, struct packet_type* ptype)
/*
- * Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
+ * Copyright(c) 1999 - 2004 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* - Renamed bond_3ad_link_status_changed() to
* bond_3ad_handle_link_change() for compatibility with TLB.
*
- * 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
*/
/*
- * Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
+ * Copyright(c) 1999 - 2004 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* - Add support for setting bond's MAC address with special
* handling required for ALB/TLB.
*
- * 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
+ *
+ * 2003/12/30 - Amir Noam <amir.noam at intel dot com>
+ * - Fixed: Cannot remove and re-enslave the original active slave.
+ *
+ * 2004/01/14 - Shmulik Hen <shmulik.hen at intel dot com>
+ * - Add capability to tag self generated packets in ALB/TLB modes.
*/
//#define BONDING_DEBUG 1
#include <linux/if_arp.h>
#include <linux/if_ether.h>
#include <linux/if_bonding.h>
+#include <linux/if_vlan.h>
#include <net/ipx.h>
#include <net/arp.h>
#include <asm/byteorder.h>
#define TLB_NULL_INDEX 0xffffffff
-#define MAX_LP_RETRY 3
+#define MAX_LP_BURST 3
/* rlb defs */
#define RLB_HASH_TABLE_SIZE 256
}
for (i = 0; i < RLB_ARP_BURST_SIZE; i++) {
- arp_send(ARPOP_REPLY, ETH_P_ARP,
- client_info->ip_dst,
- client_info->slave->dev,
- client_info->ip_src,
- client_info->mac_dst,
- client_info->slave->dev->dev_addr,
- client_info->mac_dst);
+ struct sk_buff *skb;
+
+ skb = arp_create(ARPOP_REPLY, ETH_P_ARP,
+ client_info->ip_dst,
+ client_info->slave->dev,
+ client_info->ip_src,
+ client_info->mac_dst,
+ client_info->slave->dev->dev_addr,
+ client_info->mac_dst);
+ if (!skb) {
+ printk(KERN_ERR DRV_NAME
+ ": Error: failed to create an ARP packet\n");
+ continue;
+ }
+
+ skb->dev = client_info->slave->dev;
+
+ if (client_info->tag) {
+ skb = vlan_put_tag(skb, client_info->vlan_id);
+ if (!skb) {
+ printk(KERN_ERR DRV_NAME
+ ": Error: failed to insert VLAN tag\n");
+ continue;
+ }
+ }
+
+ arp_xmit(skb);
}
}
}
/* Caller must hold both bond and ptr locks for read */
-struct slave *rlb_choose_channel(struct bonding *bond, struct arp_pkt *arp)
+struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
+ struct arp_pkt *arp = (struct arp_pkt *)skb->nh.raw;
struct slave *assigned_slave;
struct rlb_client_info *client_info;
u32 hash_index = 0;
client_info->ntt = 0;
}
+ if (!list_empty(&bond->vlan_list)) {
+ unsigned short vlan_id;
+ int res = vlan_get_tag(skb, &vlan_id);
+ if (!res) {
+ client_info->tag = 1;
+ client_info->vlan_id = vlan_id;
+ }
+ }
+
if (!client_info->assigned) {
u32 prev_tbl_head = bond_info->rx_hashtbl_head;
bond_info->rx_hashtbl_head = hash_index;
/* the arp must be sent on the selected
* rx channel
*/
- tx_slave = rlb_choose_channel(bond, arp);
+ tx_slave = rlb_choose_channel(skb, bond);
if (tx_slave) {
memcpy(arp->mac_src,tx_slave->dev->dev_addr, ETH_ALEN);
}
* When the arp reply is received the entry will be updated
* with the correct unicast address of the client.
*/
- rlb_choose_channel(bond, arp);
+ rlb_choose_channel(skb, bond);
	 /* The ARP reply packets must be delayed so that
* they can cancel out the influence of the ARP request.
kfree(bond_info->rx_hashtbl);
bond_info->rx_hashtbl = NULL;
+ bond_info->rx_hashtbl_head = RLB_NULL_INDEX;
+
+ _unlock_rx_hashtbl(bond);
+}
+
+static void rlb_clear_vlan(struct bonding *bond, unsigned short vlan_id)
+{
+ struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
+ u32 curr_index;
+
+ _lock_rx_hashtbl(bond);
+
+ curr_index = bond_info->rx_hashtbl_head;
+ while (curr_index != RLB_NULL_INDEX) {
+ struct rlb_client_info *curr = &(bond_info->rx_hashtbl[curr_index]);
+ u32 next_index = bond_info->rx_hashtbl[curr_index].next;
+ u32 prev_index = bond_info->rx_hashtbl[curr_index].prev;
+
+ if (curr->tag && (curr->vlan_id == vlan_id)) {
+ if (curr_index == bond_info->rx_hashtbl_head) {
+ bond_info->rx_hashtbl_head = next_index;
+ }
+ if (prev_index != RLB_NULL_INDEX) {
+ bond_info->rx_hashtbl[prev_index].next = next_index;
+ }
+ if (next_index != RLB_NULL_INDEX) {
+ bond_info->rx_hashtbl[next_index].prev = prev_index;
+ }
+
+ rlb_init_table_entry(curr);
+ }
+
+ curr_index = next_index;
+ }
_unlock_rx_hashtbl(bond);
}
static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[])
{
+ struct bonding *bond = bond_get_bond_by_slave(slave);
struct learning_pkt pkt;
int size = sizeof(struct learning_pkt);
int i;
memcpy(pkt.mac_src, mac_addr, ETH_ALEN);
pkt.type = __constant_htons(ETH_P_LOOP);
- for (i = 0; i < MAX_LP_RETRY; i++) {
+ for (i = 0; i < MAX_LP_BURST; i++) {
struct sk_buff *skb;
char *data;
skb->priority = TC_PRIO_CONTROL;
skb->dev = slave->dev;
+ if (!list_empty(&bond->vlan_list)) {
+ struct vlan_entry *vlan;
+
+ vlan = bond_next_vlan(bond,
+ bond->alb_info.current_alb_vlan);
+
+ bond->alb_info.current_alb_vlan = vlan;
+ if (!vlan) {
+ kfree_skb(skb);
+ continue;
+ }
+
+ skb = vlan_put_tag(skb, vlan->vlan_id);
+ if (!skb) {
+ printk(KERN_ERR DRV_NAME
+ ": Error: failed to insert VLAN tag\n");
+ continue;
+ }
+ }
+
dev_queue_xmit(skb);
}
}
static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slave *slave)
{
struct slave *tmp_slave1, *tmp_slave2, *free_mac_slave;
+ struct slave *has_bond_addr = bond->curr_active_slave;
int i, j, found = 0;
if (bond->slave_cnt == 0) {
free_mac_slave = tmp_slave1;
break;
}
+
+ if (!has_bond_addr) {
+ if (!memcmp(tmp_slave1->dev->dev_addr,
+ bond->dev->dev_addr,
+ ETH_ALEN)) {
+
+ has_bond_addr = tmp_slave1;
+ }
+ }
}
if (free_mac_slave) {
": Warning: the hw address of slave %s is in use by "
"the bond; giving it the hw address of %s\n",
slave->dev->name, free_mac_slave->dev->name);
- } else {
+
+ } else if (has_bond_addr) {
printk(KERN_ERR DRV_NAME
": Error: the hw address of slave %s is in use by the "
"bond; couldn't find a slave with a free hw address to "
int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
{
struct bonding *bond = bond_dev->priv;
- struct ethhdr *eth_data = (struct ethhdr *)skb->mac.raw = skb->data;
+ struct ethhdr *eth_data;
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
struct slave *tx_slave = NULL;
static u32 ip_bcast = 0xffffffff;
int do_tx_balance = 1;
u32 hash_index = 0;
u8 *hash_start = NULL;
+ int res = 1;
+
+ skb->mac.raw = (unsigned char *)skb->data;
+ eth_data = (struct ethhdr *)skb->data;
/* make sure that the curr_active_slave and the slaves list do
* not change during tx
read_lock(&bond->curr_slave_lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
switch (ntohs(skb->protocol)) {
break;
}
- if (ipx_hdr(skb)->ipx_type !=
- __constant_htons(IPX_TYPE_NCP)) {
+ if (ipx_hdr(skb)->ipx_type != IPX_TYPE_NCP) {
/* The only protocol worth balancing in
* this family since it has an "ARP" like
* mechanism
}
if (tx_slave && SLAVE_IS_OK(tx_slave)) {
- skb->dev = tx_slave->dev;
if (tx_slave != bond->curr_active_slave) {
memcpy(eth_data->h_source,
tx_slave->dev->dev_addr,
ETH_ALEN);
}
- dev_queue_xmit(skb);
+
+ res = bond_dev_queue_xmit(bond, skb, tx_slave->dev);
} else {
- /* no suitable interface, frame not sent */
if (tx_slave) {
tlb_clear_slave(bond, tx_slave, 0);
}
- goto free_out;
}
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
read_unlock(&bond->curr_slave_lock);
read_unlock(&bond->lock);
return 0;
-
-free_out:
- dev_kfree_skb(skb);
- goto out;
}
void bond_alb_monitor(struct bonding *bond)
return 0;
}
+void bond_alb_clear_vlan(struct bonding *bond, unsigned short vlan_id)
+{
+ if (bond->alb_info.current_alb_vlan &&
+ (bond->alb_info.current_alb_vlan->vlan_id == vlan_id)) {
+ bond->alb_info.current_alb_vlan = NULL;
+ }
+
+ if (bond->alb_info.rlb_enabled) {
+ rlb_clear_vlan(bond, vlan_id);
+ }
+}
+
/*
- * Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
+ * Copyright(c) 1999 - 2004 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* - Add support for setting bond's MAC address with special
* handling required for ALB/TLB.
*
- * 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
*/
u8 assigned; /* checking whether this entry is assigned */
u8 ntt; /* flag - need to transmit client info */
struct slave *slave; /* the slave assigned to this client */
+ u8 tag; /* flag - need to tag skb */
+ unsigned short vlan_id; /* VLAN tag associated with IP address */
};
struct tlb_slave_info {
* rx traffic should be
* rebalanced
*/
+ struct vlan_entry *current_alb_vlan;
};
int bond_alb_initialize(struct bonding *bond, int rlb_enabled);
int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev);
void bond_alb_monitor(struct bonding *bond);
int bond_alb_set_mac_address(struct net_device *bond_dev, void *addr);
-
+void bond_alb_clear_vlan(struct bonding *bond, unsigned short vlan_id);
#endif /* __BOND_ALB_H__ */
* o Change struct member names and types.
* o Chomp trailing spaces, remove empty lines, fix indentations.
* o Re-organize code according to context.
+ *
+ * 2003/12/30 - Amir Noam <amir.noam at intel dot com>
+ * - Fixed: Cannot remove and re-enslave the original active slave.
+ * - Fixed: Releasing the original active slave causes mac address
+ * duplication.
+ * - Add support for slaves that use ethtool_ops.
+ * Set version to 2.5.3.
+ *
+ * 2004/01/05 - Amir Noam <amir.noam at intel dot com>
+ * - Save bonding parameters per bond instead of using the global values.
+ * Set version to 2.5.4.
+ *
+ * 2004/01/14 - Shmulik Hen <shmulik.hen at intel dot com>
+ * - Enhance VLAN support:
+ * * Add support for VLAN hardware acceleration capable slaves.
+ * * Add capability to tag self generated packets in ALB/TLB modes.
+ * Set version to 2.6.0.
*/
//#define BONDING_DEBUG 1
#include <net/arp.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
#include <linux/if_bonding.h>
#include "bonding.h"
#include "bond_3ad.h"
/* monitor all links that often (in milliseconds). <=0 disables monitoring */
#define BOND_LINK_MON_INTERV 0
#define BOND_LINK_ARP_INTERV 0
-#define MAX_ARP_IP_TARGETS 16
static int max_bonds = BOND_DEFAULT_MAX_BONDS;
static int miimon = BOND_LINK_MON_INTERV;
static char *primary = NULL;
static char *lacp_rate = NULL;
static int arp_interval = BOND_LINK_ARP_INTERV;
-static char *arp_ip_target[MAX_ARP_IP_TARGETS] = { NULL, };
+static char *arp_ip_target[BOND_MAX_ARP_TARGETS] = { NULL, };
MODULE_PARM(max_bonds, "i");
MODULE_PARM_DESC(max_bonds, "Max number of bonded devices");
MODULE_PARM_DESC(lacp_rate, "LACPDU tx rate to request from 802.3ad partner (slow/fast)");
MODULE_PARM(arp_interval, "i");
MODULE_PARM_DESC(arp_interval, "arp interval in milliseconds");
-MODULE_PARM(arp_ip_target, "1-" __MODULE_STRING(MAX_ARP_IP_TARGETS) "s");
+MODULE_PARM(arp_ip_target, "1-" __MODULE_STRING(BOND_MAX_ARP_TARGETS) "s");
MODULE_PARM_DESC(arp_ip_target, "arp targets in n.n.n.n form");
/*----------------------------- Global variables ----------------------------*/
static struct proc_dir_entry *bond_proc_dir = NULL;
#endif
-static u32 arp_target[MAX_ARP_IP_TARGETS] = { 0, } ;
+static u32 arp_target[BOND_MAX_ARP_TARGETS] = { 0, } ;
static int arp_ip_count = 0;
static u32 my_ip = 0;
static int bond_mode = BOND_MODE_ROUNDROBIN;
{ NULL, -1},
};
+/*-------------------------- Forward declarations ---------------------------*/
+
+static inline void bond_set_mode_ops(struct net_device *bond_dev, int mode);
+
/*---------------------------- General routines -----------------------------*/
-static const char *bond_mode_name(void)
+static const char *bond_mode_name(int mode)
{
- switch (bond_mode) {
+ switch (mode) {
case BOND_MODE_ROUNDROBIN :
return "load balancing (round-robin)";
case BOND_MODE_ACTIVEBACKUP :
}
}
+/*---------------------------------- VLAN -----------------------------------*/
+
+/**
+ * bond_add_vlan - add a new vlan id on bond
+ * @bond: bond that got the notification
+ * @vlan_id: the vlan id to add
+ *
+ * Returns -ENOMEM if allocation failed.
+ */
+static int bond_add_vlan(struct bonding *bond, unsigned short vlan_id)
+{
+ struct vlan_entry *vlan;
+
+ dprintk("bond: %s, vlan id %d\n",
+	        (bond ? bond->dev->name : "None"), vlan_id);
+
+ vlan = kmalloc(sizeof(struct vlan_entry), GFP_KERNEL);
+ if (!vlan) {
+ return -ENOMEM;
+ }
+
+ INIT_LIST_HEAD(&vlan->vlan_list);
+ vlan->vlan_id = vlan_id;
+
+ write_lock_bh(&bond->lock);
+
+ list_add_tail(&vlan->vlan_list, &bond->vlan_list);
+
+ write_unlock_bh(&bond->lock);
+
+ dprintk("added VLAN ID %d on bond %s\n", vlan_id, bond->dev->name);
+
+ return 0;
+}
+
+/**
+ * bond_del_vlan - delete a vlan id from bond
+ * @bond: bond that got the notification
+ * @vlan_id: the vlan id to delete
+ *
+ * Returns -ENODEV if @vlan_id was not found in @bond.
+ */
+static int bond_del_vlan(struct bonding *bond, unsigned short vlan_id)
+{
+ struct vlan_entry *vlan, *next;
+ int res = -ENODEV;
+
+ dprintk("bond: %s, vlan id %d\n", bond->dev->name, vlan_id);
+
+ write_lock_bh(&bond->lock);
+
+ list_for_each_entry_safe(vlan, next, &bond->vlan_list, vlan_list) {
+ if (vlan->vlan_id == vlan_id) {
+ list_del(&vlan->vlan_list);
+
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
+ bond_alb_clear_vlan(bond, vlan_id);
+ }
+
+ dprintk("removed VLAN ID %d from bond %s\n", vlan_id,
+ bond->dev->name);
+
+ kfree(vlan);
+
+ if (list_empty(&bond->vlan_list) &&
+ (bond->slave_cnt == 0)) {
+ /* Last VLAN removed and no slaves, so
+ * restore block on adding VLANs. This will
+ * be removed once new slaves that are not
+ * VLAN challenged will be added.
+ */
+ bond->dev->features |= NETIF_F_VLAN_CHALLENGED;
+ }
+
+ res = 0;
+ goto out;
+ }
+ }
+
+ dprintk("couldn't find VLAN ID %d in bond %s\n", vlan_id,
+ bond->dev->name);
+
+out:
+ write_unlock_bh(&bond->lock);
+ return res;
+}
+
+/**
+ * bond_has_challenged_slaves
+ * @bond: the bond we're working on
+ *
+ * Searches the slave list. Returns 1 if a vlan challenged slave
+ * was found, 0 otherwise.
+ *
+ * Assumes bond->lock is held.
+ */
+static int bond_has_challenged_slaves(struct bonding *bond)
+{
+ struct slave *slave;
+ int i;
+
+ bond_for_each_slave(bond, slave, i) {
+ if (slave->dev->features & NETIF_F_VLAN_CHALLENGED) {
+ dprintk("found VLAN challenged slave - %s\n",
+ slave->dev->name);
+ return 1;
+ }
+ }
+
+ dprintk("no VLAN challenged slaves found\n");
+ return 0;
+}
+
+/**
+ * bond_next_vlan - safely skip to the next item in the vlans list.
+ * @bond: the bond we're working on
+ * @curr: item we're advancing from
+ *
+ * Returns %NULL if the list is empty, the first entry in @bond's
+ * vlan_list if @curr is %NULL, or @curr->next otherwise (even if
+ * it is @curr itself again).
+ *
+ * Caller must hold bond->lock
+ */
+struct vlan_entry *bond_next_vlan(struct bonding *bond, struct vlan_entry *curr)
+{
+ struct vlan_entry *next, *last;
+
+ if (list_empty(&bond->vlan_list)) {
+ return NULL;
+ }
+
+ if (!curr) {
+ next = list_entry(bond->vlan_list.next,
+ struct vlan_entry, vlan_list);
+ } else {
+ last = list_entry(bond->vlan_list.prev,
+ struct vlan_entry, vlan_list);
+ if (last == curr) {
+ next = list_entry(bond->vlan_list.next,
+ struct vlan_entry, vlan_list);
+ } else {
+ next = list_entry(curr->vlan_list.next,
+ struct vlan_entry, vlan_list);
+ }
+ }
+
+ return next;
+}
+
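bond_next_vlan() above treats the VLAN list as circular: with no current entry it starts at the head, and after the last entry it wraps back to the head. A minimal stand-in over an indexed list illustrates the same advance rule (hypothetical helper, not the kernel's list_head walk):

```c
#include <stddef.h>

/* Circular-advance rule from bond_next_vlan(), restated over an
 * array of n vlan entries. curr < 0 means "no current entry".
 * Returns -1 for an empty list, mirroring the %NULL return. */
static int next_vlan_index(size_t n, int curr)
{
	if (n == 0)
		return -1;		/* empty list */
	if (curr < 0)
		return 0;		/* no current entry: start at head */
	return (curr + 1) % (int)n;	/* wrap from tail back to head */
}
```

alb_send_learning_packets() relies on this wrap-around so successive learning packets rotate through every configured VLAN.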
+/**
+ * bond_dev_queue_xmit - Prepare skb for xmit.
+ *
+ * @bond: bond device that got this skb for tx.
+ * @skb: hw accel VLAN tagged skb to transmit
+ * @slave_dev: slave that is supposed to xmit this skbuff
+ *
+ * When the bond gets an skb to transmit that is
+ * already hardware accelerated VLAN tagged, and it
+ * needs to relay this skb to a slave that is not
+ * hw accel capable, the skb needs to be "unaccelerated",
+ * i.e. strip the hwaccel tag and re-insert it as part
+ * of the payload.
+ *
+ * Assumption - once a VLAN device is created over the bond device, all
+ * packets are going to be hardware accelerated VLAN tagged since the IP
+ * binding is done over the VLAN device
+ */
+int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, struct net_device *slave_dev)
+{
+ unsigned short vlan_id;
+ int res;
+
+ if (!list_empty(&bond->vlan_list) &&
+ !(slave_dev->features & NETIF_F_HW_VLAN_TX)) {
+ res = vlan_get_tag(skb, &vlan_id);
+ if (res) {
+ return -EINVAL;
+ }
+
+ skb->dev = slave_dev;
+ skb = vlan_put_tag(skb, vlan_id);
+ if (!skb) {
+ /* vlan_put_tag() frees the skb in case of error,
+ * so return success here so the calling functions
+			 * won't attempt to free it again.
+ */
+ return 0;
+ }
+ } else {
+ skb->dev = slave_dev;
+ }
+
+ skb->priority = 1;
+ dev_queue_xmit(skb);
+
+ return 0;
+}
+
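The "unaccelerate" step that bond_dev_queue_xmit() delegates to vlan_put_tag() amounts to pushing a 4-byte 802.1Q header (TPID 0x8100 plus the TCI) between the Ethernet addresses and the original EtherType. A byte-level sketch of that insertion (hypothetical helper operating on a flat buffer, not the kernel's skb path):

```c
#include <string.h>

/* Insert an 802.1Q tag into an untagged Ethernet frame of len
 * bytes. buf must have room for len + 4 bytes. PCP/DEI are left
 * zero, as for ordinary best-effort traffic. Returns the new
 * frame length. */
static size_t insert_8021q_tag(unsigned char *buf, size_t len,
			       unsigned short vlan_id)
{
	/* make room after the 12 address bytes */
	memmove(buf + 16, buf + 12, len - 12);
	buf[12] = 0x81;				/* TPID, network order */
	buf[13] = 0x00;
	buf[14] = (unsigned char)(vlan_id >> 8);   /* TCI high byte */
	buf[15] = (unsigned char)(vlan_id & 0xff); /* TCI low byte */
	return len + 4;
}
```

This is why the slave needs headroom for the extra 4 bytes: a frame destined for a non-accelerating slave grows by exactly one tag header.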
+/*
+ * In the following 3 functions, bond_vlan_rx_register(), bond_vlan_rx_add_vid
+ * and bond_vlan_rx_kill_vid, we don't protect the slave list iteration with a
+ * lock because:
+ * a. This operation is performed in IOCTL context,
+ * b. The operation is protected by the RTNL semaphore in the 8021q code,
+ * c. Holding a lock with BH disabled while directly calling a base driver
+ * entry point is generally a BAD idea.
+ *
+ * The design of synchronization/protection for this operation in the 8021q
+ * module is good for one or more VLAN devices over a single physical device
+ * and cannot be extended for a teaming solution like bonding, so there is a
+ * potential race condition here where a net device from the vlan group might
+ * be referenced (either by a base driver or the 8021q code) while it is being
+ * removed from the system. However, it turns out we're not making matters
+ * worse, and if it works for regular VLAN usage it will work here too.
+*/
+
+/**
+ * bond_vlan_rx_register - Propagates registration to slaves
+ * @bond_dev: bonding net device that got called
+ * @grp: vlan group being registered
+ */
+static void bond_vlan_rx_register(struct net_device *bond_dev, struct vlan_group *grp)
+{
+ struct bonding *bond = bond_dev->priv;
+ struct slave *slave;
+ int i;
+
+ bond->vlgrp = grp;
+
+ bond_for_each_slave(bond, slave, i) {
+ struct net_device *slave_dev = slave->dev;
+
+ if ((slave_dev->features & NETIF_F_HW_VLAN_RX) &&
+ slave_dev->vlan_rx_register) {
+ slave_dev->vlan_rx_register(slave_dev, grp);
+ }
+ }
+}
+
+/**
+ * bond_vlan_rx_add_vid - Propagates adding an id to slaves
+ * @bond_dev: bonding net device that got called
+ * @vid: vlan id being added
+ */
+static void bond_vlan_rx_add_vid(struct net_device *bond_dev, uint16_t vid)
+{
+ struct bonding *bond = bond_dev->priv;
+ struct slave *slave;
+ int i, res;
+
+ bond_for_each_slave(bond, slave, i) {
+ struct net_device *slave_dev = slave->dev;
+
+ if ((slave_dev->features & NETIF_F_HW_VLAN_FILTER) &&
+ slave_dev->vlan_rx_add_vid) {
+ slave_dev->vlan_rx_add_vid(slave_dev, vid);
+ }
+ }
+
+ res = bond_add_vlan(bond, vid);
+ if (res) {
+ printk(KERN_ERR DRV_NAME
+ ": %s: Failed to add vlan id %d\n",
+ bond_dev->name, vid);
+ }
+}
+
+/**
+ * bond_vlan_rx_kill_vid - Propagates deleting an id to slaves
+ * @bond_dev: bonding net device that got called
+ * @vid: vlan id being removed
+ */
+static void bond_vlan_rx_kill_vid(struct net_device *bond_dev, uint16_t vid)
+{
+ struct bonding *bond = bond_dev->priv;
+ struct slave *slave;
+ struct net_device *vlan_dev;
+ int i, res;
+
+ bond_for_each_slave(bond, slave, i) {
+ struct net_device *slave_dev = slave->dev;
+
+ if ((slave_dev->features & NETIF_F_HW_VLAN_FILTER) &&
+ slave_dev->vlan_rx_kill_vid) {
+ /* Save and then restore vlan_dev in the grp array,
+ * since the slave's driver might clear it.
+ */
+ vlan_dev = bond->vlgrp->vlan_devices[vid];
+ slave_dev->vlan_rx_kill_vid(slave_dev, vid);
+ bond->vlgrp->vlan_devices[vid] = vlan_dev;
+ }
+ }
+
+ res = bond_del_vlan(bond, vid);
+ if (res) {
+ printk(KERN_ERR DRV_NAME
+ ": %s: Failed to remove vlan id %d\n",
+ bond_dev->name, vid);
+ }
+}
+
+static void bond_add_vlans_on_slave(struct bonding *bond, struct net_device *slave_dev)
+{
+ struct vlan_entry *vlan;
+
+ write_lock_bh(&bond->lock);
+
+ if (list_empty(&bond->vlan_list)) {
+ goto out;
+ }
+
+ if ((slave_dev->features & NETIF_F_HW_VLAN_RX) &&
+ slave_dev->vlan_rx_register) {
+ slave_dev->vlan_rx_register(slave_dev, bond->vlgrp);
+ }
+
+ if (!(slave_dev->features & NETIF_F_HW_VLAN_FILTER) ||
+ !(slave_dev->vlan_rx_add_vid)) {
+ goto out;
+ }
+
+ list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
+ slave_dev->vlan_rx_add_vid(slave_dev, vlan->vlan_id);
+ }
+
+out:
+ write_unlock_bh(&bond->lock);
+}
+
+static void bond_del_vlans_from_slave(struct bonding *bond, struct net_device *slave_dev)
+{
+ struct vlan_entry *vlan;
+ struct net_device *vlan_dev;
+
+ write_lock_bh(&bond->lock);
+
+ if (list_empty(&bond->vlan_list)) {
+ goto out;
+ }
+
+ if (!(slave_dev->features & NETIF_F_HW_VLAN_FILTER) ||
+ !(slave_dev->vlan_rx_kill_vid)) {
+ goto unreg;
+ }
+
+ list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
+ /* Save and then restore vlan_dev in the grp array,
+ * since the slave's driver might clear it.
+ */
+ vlan_dev = bond->vlgrp->vlan_devices[vlan->vlan_id];
+ slave_dev->vlan_rx_kill_vid(slave_dev, vlan->vlan_id);
+ bond->vlgrp->vlan_devices[vlan->vlan_id] = vlan_dev;
+ }
+
+unreg:
+ if ((slave_dev->features & NETIF_F_HW_VLAN_RX) &&
+ slave_dev->vlan_rx_register) {
+ slave_dev->vlan_rx_register(slave_dev, NULL);
+ }
+
+out:
+ write_unlock_bh(&bond->lock);
+}
+
/*------------------------------- Link status -------------------------------*/
/*
struct ifreq ifr;
struct ethtool_cmd etool;
- ioctl = slave_dev->do_ioctl;
- if (ioctl) {
- etool.cmd = ETHTOOL_GSET;
- ifr.ifr_data = (char*)&etool;
- if (IOCTL(slave_dev, &ifr, SIOCETHTOOL) == 0) {
- slave->speed = etool.speed;
- slave->duplex = etool.duplex;
- } else {
- goto err_out;
+ /* Fake speed and duplex */
+ slave->speed = SPEED_100;
+ slave->duplex = DUPLEX_FULL;
+
+ if (slave_dev->ethtool_ops) {
+ u32 res;
+
+ if (!slave_dev->ethtool_ops->get_settings) {
+ return -1;
}
- } else {
- goto err_out;
+
+ res = slave_dev->ethtool_ops->get_settings(slave_dev, &etool);
+ if (res < 0) {
+ return -1;
+ }
+
+ goto verify;
+ }
+
+ ioctl = slave_dev->do_ioctl;
+ strncpy(ifr.ifr_name, slave_dev->name, IFNAMSIZ);
+ etool.cmd = ETHTOOL_GSET;
+ ifr.ifr_data = (char*)&etool;
+ if (!ioctl || (IOCTL(slave_dev, &ifr, SIOCETHTOOL) < 0)) {
+ return -1;
}
- switch (slave->speed) {
+verify:
+ switch (etool.speed) {
case SPEED_10:
case SPEED_100:
case SPEED_1000:
break;
default:
- goto err_out;
+ return -1;
}
- switch (slave->duplex) {
+ switch (etool.duplex) {
case DUPLEX_FULL:
case DUPLEX_HALF:
break;
default:
- goto err_out;
+ return -1;
}
- return 0;
+ slave->speed = etool.speed;
+ slave->duplex = etool.duplex;
-err_out:
- /* Fake speed and duplex */
- slave->speed = SPEED_100;
- slave->duplex = DUPLEX_FULL;
- return -1;
+ return 0;
}
/*
* It'd be nice if there was a good way to tell if a driver supports
* netif_carrier, but there really isn't.
*/
-static int bond_check_dev_link(struct net_device *slave_dev, int reporting)
+static int bond_check_dev_link(struct bonding *bond, struct net_device *slave_dev, int reporting)
{
static int (* ioctl)(struct net_device *, struct ifreq *, int);
struct ifreq ifr;
struct mii_ioctl_data *mii;
struct ethtool_value etool;
- if (use_carrier) {
+ if (bond->params.use_carrier) {
return netif_carrier_ok(slave_dev) ? BMSR_LSTATUS : 0;
}
*/
/* Yes, the mii is overlaid on the ifreq.ifr_ifru */
+ strncpy(ifr.ifr_name, slave_dev->name, IFNAMSIZ);
mii = (struct mii_ioctl_data *)&ifr.ifr_data;
if (IOCTL(slave_dev, &ifr, SIOCGMIIPHY) == 0) {
mii->reg_num = MII_BMSR;
return (mii->val_out & BMSR_LSTATUS);
}
}
+ }
- /* try SIOCETHTOOL ioctl, some drivers cache ETHTOOL_GLINK */
- /* for a period of time so we attempt to get link status */
- /* from it last if the above MII ioctls fail... */
+ /* try SIOCETHTOOL ioctl, some drivers cache ETHTOOL_GLINK */
+ /* for a period of time so we attempt to get link status */
+ /* from it last if the above MII ioctls fail... */
+ if (slave_dev->ethtool_ops) {
+ if (slave_dev->ethtool_ops->get_link) {
+ u32 link;
+
+ link = slave_dev->ethtool_ops->get_link(slave_dev);
+
+ return link ? BMSR_LSTATUS : 0;
+ }
+ }
+
+ if (ioctl) {
+ strncpy(ifr.ifr_name, slave_dev->name, IFNAMSIZ);
etool.cmd = ETHTOOL_GLINK;
ifr.ifr_data = (char*)&etool;
if (IOCTL(slave_dev, &ifr, SIOCETHTOOL) == 0) {
*/
static void bond_set_promiscuity(struct bonding *bond, int inc)
{
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave) {
dev_set_promiscuity(bond->curr_active_slave->dev, inc);
*/
static void bond_set_allmulti(struct bonding *bond, int inc)
{
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave) {
dev_set_allmulti(bond->curr_active_slave->dev, inc);
*/
static void bond_mc_add(struct bonding *bond, void *addr, int alen)
{
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave) {
dev_mc_add(bond->curr_active_slave->dev, addr, alen, 0);
*/
static void bond_mc_delete(struct bonding *bond, void *addr, int alen)
{
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave) {
dev_mc_delete(bond->curr_active_slave->dev, addr, alen, 0);
*/
static void bond_mc_list_flush(struct net_device *bond_dev, struct net_device *slave_dev)
{
+ struct bonding *bond = bond_dev->priv;
struct dev_mc_list *dmi;
for (dmi = bond_dev->mc_list; dmi; dmi = dmi->next) {
dev_mc_delete(slave_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0);
}
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
/* del lacpdu mc addr from mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
{
struct dev_mc_list *dmi;
- if (!USES_PRIMARY(bond_mode)) {
+ if (!USES_PRIMARY(bond->params.mode)) {
/* nothing to do - mc list is already up-to-date on
* all slaves
*/
{
struct slave *new_active, *old_active;
struct slave *bestslave = NULL;
- int mintime;
+ int mintime = bond->params.updelay;
int i;
new_active = old_active = bond->curr_active_slave;
}
}
- mintime = updelay;
-
/* first try the primary link; if arping, a link must tx/rx traffic
* before it can be considered the curr_active_slave - also, we would skip
* slaves between the curr_active_slave and primary_slave that may be up
* and able to arp
*/
if ((bond->primary_slave) &&
- (!arp_interval) &&
+ (!bond->params.arp_interval) &&
(IS_UP(bond->primary_slave->dev))) {
new_active = bond->primary_slave;
}
if (new_active) {
if (new_active->link == BOND_LINK_BACK) {
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
printk(KERN_INFO DRV_NAME
": %s: making interface %s the new "
"active one %d ms earlier.\n",
bond->dev->name, new_active->dev->name,
- (updelay - new_active->delay) * miimon);
+ (bond->params.updelay - new_active->delay) * bond->params.miimon);
}
new_active->delay = 0;
new_active->link = BOND_LINK_UP;
new_active->jiffies = jiffies;
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
bond_3ad_handle_link_change(new_active, BOND_LINK_UP);
}
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
bond_alb_handle_link_change(bond, new_active, BOND_LINK_UP);
}
} else {
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
printk(KERN_INFO DRV_NAME
": %s: making interface %s the new "
"active one.\n",
}
}
- if (bond_mode == BOND_MODE_ACTIVEBACKUP) {
+ if (bond->params.mode == BOND_MODE_ACTIVEBACKUP) {
if (old_active) {
bond_set_slave_inactive_flags(old_active);
}
}
}
- if (USES_PRIMARY(bond_mode)) {
+ if (USES_PRIMARY(bond->params.mode)) {
bond_mc_swap(bond, new_active, old_active);
}
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
bond_alb_handle_active_change(bond, new_active);
} else {
bond->curr_active_slave = new_active;
struct dev_mc_list *dmi;
struct sockaddr addr;
int link_reporting;
+ int old_features = bond_dev->features;
int res = 0;
if (slave_dev->do_ioctl == NULL) {
return -EBUSY;
}
+ /* vlan challenged mutual exclusion */
+ /* no need to lock since we're protected by rtnl_lock */
+ if (slave_dev->features & NETIF_F_VLAN_CHALLENGED) {
+ dprintk("%s: NETIF_F_VLAN_CHALLENGED\n", slave_dev->name);
+ if (!list_empty(&bond->vlan_list)) {
+ printk(KERN_ERR DRV_NAME
+ ": Error: cannot enslave VLAN "
+ "challenged slave %s on VLAN enabled "
+ "bond %s\n", slave_dev->name,
+ bond_dev->name);
+ return -EPERM;
+ } else {
+ printk(KERN_WARNING DRV_NAME
+ ": Warning: enslaved VLAN challenged "
+ "slave %s. Adding VLANs will be blocked as "
+ "long as %s is part of bond %s\n",
+ slave_dev->name, slave_dev->name,
+ bond_dev->name);
+ bond_dev->features |= NETIF_F_VLAN_CHALLENGED;
+ }
+ } else {
+ dprintk("%s: ! NETIF_F_VLAN_CHALLENGED\n", slave_dev->name);
+ if (bond->slave_cnt == 0) {
+ /* First slave, and it is not VLAN challenged,
+ * so lift the block on adding VLANs over the bond.
+ */
+ bond_dev->features &= ~NETIF_F_VLAN_CHALLENGED;
+ }
+ }
+
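The VLAN-challenged check added above can be summarized as a small decision tree: a challenged slave is rejected if the bond already carries VLANs, otherwise it marks the whole bond as challenged. A minimal userspace sketch of that logic, with stand-in names (none of these identifiers are from the driver):

```c
/* Illustrative sketch of the VLAN-challenged enslave check: a
 * VLAN-challenged device may not join a bond that already carries VLANs,
 * and enslaving one marks the whole bond as challenged. Names are stand-ins. */
#include <assert.h>
#include <stdbool.h>

enum enslave_result { ENSLAVE_OK, ENSLAVE_OK_MARK_BOND, ENSLAVE_REJECT };

/* Mirrors the decision tree: reject a challenged slave on a VLAN-enabled
 * bond, otherwise propagate the challenged flag onto the bond. */
static enum enslave_result check_vlan_challenged(bool slave_challenged,
                                                bool bond_has_vlans)
{
	if (slave_challenged)
		return bond_has_vlans ? ENSLAVE_REJECT : ENSLAVE_OK_MARK_BOND;
	return ENSLAVE_OK;
}
```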
if (app_abi_ver >= 1) {
/* The application is using an ABI, which requires the
* slave interface to be closed.
printk(KERN_ERR DRV_NAME
": Error: %s is up\n",
slave_dev->name);
- return -EPERM;
+ res = -EPERM;
+ goto err_undo_flags;
}
if (slave_dev->set_mac_address == NULL) {
"Your kernel likely does not support slave "
"devices.\n");
- return -EOPNOTSUPP;
+ res = -EOPNOTSUPP;
+ goto err_undo_flags;
}
} else {
/* The application is not using an ABI, which requires the
printk(KERN_ERR DRV_NAME
": Error: %s is not running\n",
slave_dev->name);
- return -EINVAL;
+ res = -EINVAL;
+ goto err_undo_flags;
}
- if ((bond_mode == BOND_MODE_8023AD) ||
- (bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_8023AD) ||
+ (bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
printk(KERN_ERR DRV_NAME
": Error: to use %s mode, you must upgrade "
"ifenslave.\n",
- bond_mode_name());
- return -EOPNOTSUPP;
+ bond_mode_name(bond->params.mode));
+ res = -EOPNOTSUPP;
+ goto err_undo_flags;
}
}
new_slave = kmalloc(sizeof(struct slave), GFP_KERNEL);
if (!new_slave) {
- return -ENOMEM;
+ res = -ENOMEM;
+ goto err_undo_flags;
}
memset(new_slave, 0, sizeof(struct slave));
new_slave->dev = slave_dev;
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
/* bond_alb_init_slave() must be called before all other stages since
* it might fail and we do not want to have to undo everything
*/
* curr_active_slave, and that is taken care of later when calling
* bond_change_active()
*/
- if (!USES_PRIMARY(bond_mode)) {
+ if (!USES_PRIMARY(bond->params.mode)) {
/* set promiscuity level to new slave */
if (bond_dev->flags & IFF_PROMISC) {
dev_set_promiscuity(slave_dev, 1);
}
}
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
/* add lacpdu mc addr to mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_add(slave_dev, lacpdu_multicast, ETH_ALEN, 0);
}
+ bond_add_vlans_on_slave(bond, slave_dev);
+
write_lock_bh(&bond->lock);
bond_attach_slave(bond, new_slave);
new_slave->delay = 0;
new_slave->link_failure_count = 0;
- if (miimon && !use_carrier) {
- link_reporting = bond_check_dev_link(slave_dev, 1);
+ if (bond->params.miimon && !bond->params.use_carrier) {
+ link_reporting = bond_check_dev_link(bond, slave_dev, 1);
- if ((link_reporting == -1) && !arp_interval) {
+ if ((link_reporting == -1) && !bond->params.arp_interval) {
/*
* miimon is set but a bonded network driver
* does not support ETHTOOL/MII and
}
/* check for initial state */
- if (!miimon ||
- (bond_check_dev_link(slave_dev, 0) == BMSR_LSTATUS)) {
- if (updelay) {
+ if (!bond->params.miimon ||
+ (bond_check_dev_link(bond, slave_dev, 0) == BMSR_LSTATUS)) {
+ if (bond->params.updelay) {
dprintk("Initial state of slave_dev is "
"BOND_LINK_BACK\n");
new_slave->link = BOND_LINK_BACK;
- new_slave->delay = updelay;
+ new_slave->delay = bond->params.updelay;
} else {
dprintk("Initial state of slave_dev is "
"BOND_LINK_UP\n");
"forced to 100Mbps, duplex forced to Full.\n",
new_slave->dev->name);
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
printk(KERN_WARNING
"Operation of 802.3ad mode requires ETHTOOL "
"support in base driver for proper aggregator "
}
}
- if (USES_PRIMARY(bond_mode) && primary) {
+ if (USES_PRIMARY(bond->params.mode) && bond->params.primary[0]) {
/* if there is a primary slave, remember it */
- if (strcmp(primary, new_slave->dev->name) == 0) {
+ if (strcmp(bond->params.primary, new_slave->dev->name) == 0) {
bond->primary_slave = new_slave;
}
}
- switch (bond_mode) {
+ switch (bond->params.mode) {
case BOND_MODE_ACTIVEBACKUP:
/* if we're in active-backup mode, we need one and only one active
* interface. The backup interfaces will have their NOARP flag set
* can be called only after the mac address of the bond is set
*/
bond_3ad_initialize(bond, 1000/AD_TIMER_INTERVAL,
- lacp_fast);
+ bond->params.lacp_fast);
} else {
SLAVE_AD_INFO(new_slave).id =
SLAVE_AD_INFO(new_slave->prev).id + 1;
err_free:
kfree(new_slave);
+
+err_undo_flags:
+ bond_dev->features = old_features;
+
return res;
}
static int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
{
struct bonding *bond = bond_dev->priv;
- struct slave *slave;
+ struct slave *slave, *oldcurrent;
struct sockaddr addr;
int mac_addr_differ;
}
/* Inform AD package of unbinding of slave. */
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
/* must be called before the slave is
* detached from the list
*/
? "active" : "backup",
slave_dev->name);
+ oldcurrent = bond->curr_active_slave;
+
bond->current_arp_slave = NULL;
/* release the slave from its bond */
bond->primary_slave = NULL;
}
- if (bond->curr_active_slave == slave) {
+ if (oldcurrent == slave) {
bond_change_active_slave(bond, NULL);
- bond_select_active_slave(bond);
- }
-
- if (!bond->curr_active_slave) {
- printk(KERN_INFO DRV_NAME
- ": %s: now running without any active "
- "interface !\n",
- bond_dev->name);
}
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
- /* must be called only after the slave has been
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
+ /* Must be called only after the slave has been
* detached from the list and the curr_active_slave
- * has been replaced (if our_slave == old_current)
+ * has been cleared (if our_slave == old_current),
+ * but before a new active slave is selected.
*/
bond_alb_deinit_slave(bond, slave);
}
+ if (oldcurrent == slave) {
+ bond_select_active_slave(bond);
+
+ if (!bond->curr_active_slave) {
+ printk(KERN_INFO DRV_NAME
+ ": %s: now running without any active "
+ "interface!\n",
+ bond_dev->name);
+ }
+ }
+
+ if (bond->slave_cnt == 0) {
+ /* if the last slave was removed, zero the mac address
+ * of the master so it will be set by the application
+ * to the mac address of the first slave
+ */
+ memset(bond_dev->dev_addr, 0, bond_dev->addr_len);
+
+ if (list_empty(&bond->vlan_list)) {
+ bond_dev->features |= NETIF_F_VLAN_CHALLENGED;
+ } else {
+ printk(KERN_WARNING DRV_NAME
+ ": Warning: clearing HW address of %s while it "
+ "still has VLANs.\n",
+ bond_dev->name);
+ printk(KERN_WARNING DRV_NAME
+ ": When re-adding slaves, make sure the bond's "
+ "HW address matches its VLANs'.\n");
+ }
+ } else if ((bond_dev->features & NETIF_F_VLAN_CHALLENGED) &&
+ !bond_has_challenged_slaves(bond)) {
+ printk(KERN_INFO DRV_NAME
+ ": last VLAN challenged slave %s "
+ "left bond %s. VLAN blocking is removed.\n",
+ slave_dev->name, bond_dev->name);
+ bond_dev->features &= ~NETIF_F_VLAN_CHALLENGED;
+ }
+
write_unlock_bh(&bond->lock);
+ bond_del_vlans_from_slave(bond, slave_dev);
+
/* If the mode USES_PRIMARY, then we should only remove its
* promisc and mc settings if it was the curr_active_slave, but that was
* already taken care of above when we detached the slave
*/
- if (!USES_PRIMARY(bond_mode)) {
+ if (!USES_PRIMARY(bond->params.mode)) {
/* unset promiscuity level from slave */
if (bond_dev->flags & IFF_PROMISC) {
dev_set_promiscuity(slave_dev, -1);
kfree(slave);
- /* if the last slave was removed, zero the mac address
- * of the master so it will be set by the application
- * to the mac address of the first slave
- */
- if (bond->slave_cnt == 0) {
- memset(bond_dev->dev_addr, 0, bond_dev->addr_len);
- }
-
return 0; /* deletion OK */
}
/* Inform AD package of unbinding of slave
* before slave is detached from the list.
*/
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
bond_3ad_unbind_slave(slave);
}
slave_dev = slave->dev;
bond_detach_slave(bond, slave);
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
/* must be called only after the slave
* has been detached from the list
*/
*/
write_unlock_bh(&bond->lock);
+ bond_del_vlans_from_slave(bond, slave_dev);
+
/* If the mode USES_PRIMARY, then we should only remove its
* promisc and mc settings if it was the curr_active_slave, but that was
* already taken care of above when we detached the slave
*/
- if (!USES_PRIMARY(bond_mode)) {
+ if (!USES_PRIMARY(bond->params.mode)) {
/* unset promiscuity level from slave */
if (bond_dev->flags & IFF_PROMISC) {
dev_set_promiscuity(slave_dev, -1);
*/
memset(bond_dev->dev_addr, 0, bond_dev->addr_len);
+ if (list_empty(&bond->vlan_list)) {
+ bond_dev->features |= NETIF_F_VLAN_CHALLENGED;
+ } else {
+ printk(KERN_WARNING DRV_NAME
+ ": Warning: clearing HW address of %s while it "
+ "still has VLANs.\n",
+ bond_dev->name);
+ printk(KERN_WARNING DRV_NAME
+ ": When re-adding slaves, make sure the bond's "
+ "HW address matches its VLANs'.\n");
+ }
+
printk(KERN_INFO DRV_NAME
": %s: released all slaves\n",
bond_dev->name);
struct slave *new_active = NULL;
int res = 0;
+ if (!USES_PRIMARY(bond->params.mode)) {
+ return -EINVAL;
+ }
+
/* Verify that master_dev is indeed the master of slave_dev */
if (!(slave_dev->flags & IFF_SLAVE) ||
(slave_dev->master != bond_dev)) {
{
struct bonding *bond = bond_dev->priv;
- info->bond_mode = bond_mode;
- info->miimon = miimon;
+ info->bond_mode = bond->params.mode;
+ info->miimon = bond->params.miimon;
read_lock_bh(&bond->lock);
info->num_slaves = bond->slave_cnt;
struct bonding *bond = bond_dev->priv;
struct slave *slave, *oldcurrent;
int do_failover = 0;
- int delta_in_ticks = (miimon * HZ) / 1000;
+ int delta_in_ticks;
int i;
read_lock(&bond->lock);
+ delta_in_ticks = (bond->params.miimon * HZ) / 1000;
+
if (bond->kill_timers) {
goto out;
}
u16 old_speed = slave->speed;
u8 old_duplex = slave->duplex;
- link_state = bond_check_dev_link(slave_dev, 0);
+ link_state = bond_check_dev_link(bond, slave_dev, 0);
switch (slave->link) {
case BOND_LINK_UP: /* the link was up */
break;
} else { /* link going down */
slave->link = BOND_LINK_FAIL;
- slave->delay = downdelay;
+ slave->delay = bond->params.downdelay;
if (slave->link_failure_count < UINT_MAX) {
slave->link_failure_count++;
}
- if (downdelay) {
+ if (bond->params.downdelay) {
printk(KERN_INFO DRV_NAME
": %s: link status down for %s "
"interface %s, disabling it in "
"%d ms.\n",
bond_dev->name,
IS_UP(slave_dev)
- ? ((bond_mode == BOND_MODE_ACTIVEBACKUP)
+ ? ((bond->params.mode == BOND_MODE_ACTIVEBACKUP)
? ((slave == oldcurrent)
? "active " : "backup ")
: "")
: "idle ",
slave_dev->name,
- downdelay * miimon);
+ bond->params.downdelay * bond->params.miimon);
}
}
/* no break ! fall through the BOND_LINK_FAIL test to
/* in active/backup mode, we must
* completely disable this interface
*/
- if ((bond_mode == BOND_MODE_ACTIVEBACKUP) ||
- (bond_mode == BOND_MODE_8023AD)) {
+ if ((bond->params.mode == BOND_MODE_ACTIVEBACKUP) ||
+ (bond->params.mode == BOND_MODE_8023AD)) {
bond_set_slave_inactive_flags(slave);
}
slave_dev->name);
/* notify ad that the link status has changed */
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
bond_3ad_handle_link_change(slave, BOND_LINK_DOWN);
}
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
bond_alb_handle_link_change(bond, slave, BOND_LINK_DOWN);
}
": %s: link status up again after %d "
"ms for interface %s.\n",
bond_dev->name,
- (downdelay - slave->delay) * miimon,
+ (bond->params.downdelay - slave->delay) * bond->params.miimon,
slave_dev->name);
}
break;
break;
} else { /* link going up */
slave->link = BOND_LINK_BACK;
- slave->delay = updelay;
+ slave->delay = bond->params.updelay;
- if (updelay) {
+ if (bond->params.updelay) {
/* if updelay == 0, no need to
advertise about a 0 ms delay */
printk(KERN_INFO DRV_NAME
"in %d ms.\n",
bond_dev->name,
slave_dev->name,
- updelay * miimon);
+ bond->params.updelay * bond->params.miimon);
}
}
/* no break ! fall through the BOND_LINK_BACK state in
": %s: link status down again after %d "
"ms for interface %s.\n",
bond_dev->name,
- (updelay - slave->delay) * miimon,
+ (bond->params.updelay - slave->delay) * bond->params.miimon,
slave_dev->name);
} else {
/* link stays up */
slave->link = BOND_LINK_UP;
slave->jiffies = jiffies;
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
/* prevent it from being the active one */
slave->state = BOND_STATE_BACKUP;
- } else if (bond_mode != BOND_MODE_ACTIVEBACKUP) {
+ } else if (bond->params.mode != BOND_MODE_ACTIVEBACKUP) {
/* make it immediately active */
slave->state = BOND_STATE_ACTIVE;
} else if (slave != bond->primary_slave) {
slave_dev->name);
/* notify ad that the link status has changed */
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
bond_3ad_handle_link_change(slave, BOND_LINK_UP);
}
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
bond_alb_handle_link_change(bond, slave, BOND_LINK_UP);
}
bond_update_speed_duplex(slave);
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
if (old_speed != slave->speed) {
bond_3ad_adapter_speed_changed(slave);
}
}
re_arm:
- mod_timer(&bond->mii_timer, jiffies + delta_in_ticks);
+ if (bond->params.miimon) {
+ mod_timer(&bond->mii_timer, jiffies + delta_in_ticks);
+ }
out:
read_unlock(&bond->lock);
}
-static void bond_arp_send_all(struct slave *slave)
+static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
{
int i;
+ u32 *targets = bond->params.arp_targets;
- for (i = 0; (i<MAX_ARP_IP_TARGETS) && arp_target[i]; i++) {
- arp_send(ARPOP_REQUEST, ETH_P_ARP, arp_target[i], slave->dev,
+ for (i = 0; (i < BOND_MAX_ARP_TARGETS) && targets[i]; i++) {
+ arp_send(ARPOP_REQUEST, ETH_P_ARP, targets[i], slave->dev,
my_ip, NULL, slave->dev->dev_addr,
NULL);
}
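The reworked `bond_arp_send_all()` walks a fixed-size target array and stops early at the first zero entry, so the array acts as a zero-terminated list. A standalone sketch of that iteration pattern (`MAX_TARGETS` and the addresses are stand-ins for `BOND_MAX_ARP_TARGETS` and real targets):

```c
/* Illustrative sketch of the target-list walk in bond_arp_send_all():
 * a fixed-size array of IPv4 addresses where a zero entry terminates
 * the list early. */
#include <assert.h>
#include <stdint.h>

#define MAX_TARGETS 4	/* stands in for BOND_MAX_ARP_TARGETS */

/* Count how many ARP probes the loop would send: it stops at the array
 * bound or at the first zero entry, whichever comes first. */
static int count_probes(const uint32_t *targets)
{
	int i;

	for (i = 0; i < MAX_TARGETS && targets[i]; i++)
		;	/* one arp_send() per iteration in the real driver */
	return i;
}

/* Two targets followed by a zero terminator. */
static int demo_probe_count(void)
{
	const uint32_t demo[MAX_TARGETS] = { 0x0a000001, 0x0a000002, 0, 0 };

	return count_probes(demo);
}
```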
struct bonding *bond = bond_dev->priv;
struct slave *slave, *oldcurrent;
int do_failover = 0;
- int delta_in_ticks = (arp_interval * HZ) / 1000;
+ int delta_in_ticks;
int i;
read_lock(&bond->lock);
+ delta_in_ticks = (bond->params.arp_interval * HZ) / 1000;
+
if (bond->kill_timers) {
goto out;
}
* to be unstable during low/no traffic periods
*/
if (IS_UP(slave->dev)) {
- bond_arp_send_all(slave);
+ bond_arp_send_all(bond, slave);
}
}
}
re_arm:
- mod_timer(&bond->arp_timer, jiffies + delta_in_ticks);
+ if (bond->params.arp_interval) {
+ mod_timer(&bond->arp_timer, jiffies + delta_in_ticks);
+ }
out:
read_unlock(&bond->lock);
}
{
struct bonding *bond = bond_dev->priv;
struct slave *slave;
- int delta_in_ticks = (arp_interval * HZ) / 1000;
+ int delta_in_ticks;
int i;
read_lock(&bond->lock);
+ delta_in_ticks = (bond->params.arp_interval * HZ) / 1000;
+
if (bond->kill_timers) {
goto out;
}
* rx traffic
*/
if (slave && my_ip) {
- bond_arp_send_all(slave);
+ bond_arp_send_all(bond, slave);
}
}
if (IS_UP(slave->dev)) {
slave->link = BOND_LINK_BACK;
bond_set_slave_active_flags(slave);
- bond_arp_send_all(slave);
+ bond_arp_send_all(bond, slave);
slave->jiffies = jiffies;
bond->current_arp_slave = slave;
break;
}
re_arm:
- mod_timer(&bond->arp_timer, jiffies + delta_in_ticks);
+ if (bond->params.arp_interval) {
+ mod_timer(&bond->arp_timer, jiffies + delta_in_ticks);
+ }
out:
read_unlock(&bond->lock);
}
read_unlock(&dev_base_lock);
}
-static void bond_info_show_master(struct seq_file *seq, struct bonding *bond)
+static void bond_info_show_master(struct seq_file *seq)
{
+ struct bonding *bond = seq->private;
struct slave *curr;
read_lock(&bond->curr_slave_lock);
curr = bond->curr_active_slave;
read_unlock(&bond->curr_slave_lock);
- seq_printf(seq, "Bonding Mode: %s\n", bond_mode_name());
+ seq_printf(seq, "Bonding Mode: %s\n",
+ bond_mode_name(bond->params.mode));
- if (USES_PRIMARY(bond_mode)) {
- if (curr) {
- seq_printf(seq,
- "Currently Active Slave: %s\n",
- curr->dev->name);
- }
+ if (USES_PRIMARY(bond->params.mode)) {
+ seq_printf(seq, "Primary Slave: %s\n",
+ (bond->params.primary[0]) ?
+ bond->params.primary : "None");
+
+ seq_printf(seq, "Currently Active Slave: %s\n",
+ (curr) ? curr->dev->name : "None");
}
seq_printf(seq, "MII Status: %s\n", (curr) ? "up" : "down");
- seq_printf(seq, "MII Polling Interval (ms): %d\n", miimon);
- seq_printf(seq, "Up Delay (ms): %d\n", updelay * miimon);
- seq_printf(seq, "Down Delay (ms): %d\n", downdelay * miimon);
+ seq_printf(seq, "MII Polling Interval (ms): %d\n", bond->params.miimon);
+ seq_printf(seq, "Up Delay (ms): %d\n",
+ bond->params.updelay * bond->params.miimon);
+ seq_printf(seq, "Down Delay (ms): %d\n",
+ bond->params.downdelay * bond->params.miimon);
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
struct ad_info ad_info;
seq_puts(seq, "\n802.3ad info\n");
+ seq_printf(seq, "LACP rate: %s\n",
+ (bond->params.lacp_fast) ? "fast" : "slow");
if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
seq_printf(seq, "bond %s has no active aggregator\n",
static void bond_info_show_slave(struct seq_file *seq, const struct slave *slave)
{
+ struct bonding *bond = seq->private;
+
seq_printf(seq, "\nSlave Interface: %s\n", slave->dev->name);
seq_printf(seq, "MII Status: %s\n",
(slave->link == BOND_LINK_UP) ? "up" : "down");
slave->perm_hwaddr[5]);
}
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
const struct aggregator *agg
= SLAVE_AD_INFO(slave).port.aggregator;
{
if (v == SEQ_START_TOKEN) {
seq_printf(seq, "%s\n", version);
- bond_info_show_master(seq, seq->private);
+ bond_info_show_master(seq);
} else {
bond_info_show_slave(seq, v);
}
bond->kill_timers = 0;
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
struct timer_list *alb_timer = &(BOND_ALB_INFO(bond).alb_timer);
/* bond_alb_initialize must be called before the timer
* is started.
*/
- if (bond_alb_initialize(bond, (bond_mode == BOND_MODE_ALB))) {
+ if (bond_alb_initialize(bond, (bond->params.mode == BOND_MODE_ALB))) {
/* something went wrong - fail the open operation */
return -1;
}
add_timer(alb_timer);
}
- if (miimon) { /* link check interval, in milliseconds. */
+ if (bond->params.miimon) { /* link check interval, in milliseconds. */
init_timer(mii_timer);
mii_timer->expires = jiffies + 1;
mii_timer->data = (unsigned long)bond_dev;
add_timer(mii_timer);
}
- if (arp_interval) { /* arp interval, in milliseconds. */
+ if (bond->params.arp_interval) { /* arp interval, in milliseconds. */
init_timer(arp_timer);
arp_timer->expires = jiffies + 1;
arp_timer->data = (unsigned long)bond_dev;
- if (bond_mode == BOND_MODE_ACTIVEBACKUP) {
+ if (bond->params.mode == BOND_MODE_ACTIVEBACKUP) {
arp_timer->function = (void *)&bond_activebackup_arp_mon;
} else {
arp_timer->function = (void *)&bond_loadbalance_arp_mon;
add_timer(arp_timer);
}
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
struct timer_list *ad_timer = &(BOND_AD_INFO(bond).ad_timer);
init_timer(ad_timer);
ad_timer->expires = jiffies + 1;
bond_mc_list_destroy(bond);
- if (bond_mode == BOND_MODE_8023AD) {
+ if (bond->params.mode == BOND_MODE_8023AD) {
/* Unregister the receive of LACPDUs */
bond_unregister_lacpdu(bond);
}
* because a running timer might be trying to hold it too
*/
- if (miimon) { /* link check interval, in milliseconds. */
+ if (bond->params.miimon) { /* link check interval, in milliseconds. */
del_timer_sync(&bond->mii_timer);
}
- if (arp_interval) { /* arp interval, in milliseconds. */
+ if (bond->params.arp_interval) { /* arp interval, in milliseconds. */
del_timer_sync(&bond->arp_timer);
}
- switch (bond_mode) {
+ switch (bond->params.mode) {
case BOND_MODE_8023AD:
del_timer_sync(&(BOND_AD_INFO(bond).ad_timer));
break;
/* Release the bonded slaves */
bond_release_all(bond_dev);
- if ((bond_mode == BOND_MODE_TLB) ||
- (bond_mode == BOND_MODE_ALB)) {
+ if ((bond->params.mode == BOND_MODE_TLB) ||
+ (bond->params.mode == BOND_MODE_ALB)) {
/* Must be called only after all
* slaves have been released
*/
break;
case BOND_CHANGE_ACTIVE_OLD:
case SIOCBONDCHANGEACTIVE:
- if (USES_PRIMARY(bond_mode)) {
- res = bond_ioctl_change_active(bond_dev, slave_dev);
- } else {
- res = -EINVAL;
- }
+ res = bond_ioctl_change_active(bond_dev, slave_dev);
break;
default:
res = -EOPNOTSUPP;
struct bonding *bond = bond_dev->priv;
struct slave *slave, *start_at;
int i;
+ int res = 1;
read_lock(&bond->lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
read_lock(&bond->curr_slave_lock);
read_unlock(&bond->curr_slave_lock);
if (!slave) {
- goto free_out;
+ goto out;
}
bond_for_each_slave_from(bond, slave, i, start_at) {
if (IS_UP(slave->dev) &&
(slave->link == BOND_LINK_UP) &&
(slave->state == BOND_STATE_ACTIVE)) {
- skb->dev = slave->dev;
- skb->priority = 1;
- dev_queue_xmit(skb);
+ res = bond_dev_queue_xmit(bond, skb, slave->dev);
write_lock(&bond->curr_slave_lock);
bond->curr_active_slave = slave->next;
write_unlock(&bond->curr_slave_lock);
- goto out;
+ break;
}
}
+
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
read_unlock(&bond->lock);
return 0;
-
-free_out:
- /* no suitable interface, frame not sent */
- dev_kfree_skb(skb);
- goto out;
}
/*
static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_dev)
{
struct bonding *bond = bond_dev->priv;
+ int res = 1;
/* if we are sending arp packets, try to at least
identify our own ip address */
- if (arp_interval && !my_ip &&
+ if (bond->params.arp_interval && !my_ip &&
(skb->protocol == __constant_htons(ETH_P_ARP))) {
char *the_ip = (char *)skb->data +
sizeof(struct ethhdr) +
read_lock(&bond->curr_slave_lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
if (bond->curr_active_slave) { /* one usable interface */
- skb->dev = bond->curr_active_slave->dev;
- skb->priority = 1;
- dev_queue_xmit(skb);
- goto out;
- } else {
- goto free_out;
+ res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev);
}
+
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
read_unlock(&bond->curr_slave_lock);
read_unlock(&bond->lock);
return 0;
-
-free_out:
- /* no suitable interface, frame not sent */
- dev_kfree_skb(skb);
- goto out;
}
/*
struct slave *slave, *start_at;
int slave_no;
int i;
+ int res = 1;
read_lock(&bond->lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
slave_no = (data->h_dest[5]^bond_dev->dev_addr[5]) % bond->slave_cnt;
if (IS_UP(slave->dev) &&
(slave->link == BOND_LINK_UP) &&
(slave->state == BOND_STATE_ACTIVE)) {
- skb->dev = slave->dev;
- skb->priority = 1;
- dev_queue_xmit(skb);
-
- goto out;
+ res = bond_dev_queue_xmit(bond, skb, slave->dev);
+ break;
}
}
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
read_unlock(&bond->lock);
return 0;
-
-free_out:
- /* no suitable interface, frame not sent */
- dev_kfree_skb(skb);
- goto out;
}
/*
struct slave *slave, *start_at;
struct net_device *tx_dev = NULL;
int i;
+ int res = 1;
read_lock(&bond->lock);
if (!BOND_IS_OK(bond)) {
- goto free_out;
+ goto out;
}
read_lock(&bond->curr_slave_lock);
read_unlock(&bond->curr_slave_lock);
if (!start_at) {
- goto free_out;
+ goto out;
}
bond_for_each_slave_from(bond, slave, i, start_at) {
continue;
}
- skb2->dev = tx_dev;
- skb2->priority = 1;
- dev_queue_xmit(skb2);
+ res = bond_dev_queue_xmit(bond, skb2, tx_dev);
+ if (res) {
+ dev_kfree_skb(skb2);
+ continue;
+ }
}
tx_dev = slave->dev;
}
}
if (tx_dev) {
- skb->dev = tx_dev;
- skb->priority = 1;
- dev_queue_xmit(skb);
- } else {
- goto free_out;
+ res = bond_dev_queue_xmit(bond, skb, tx_dev);
}
out:
+ if (res) {
+ /* no suitable interface, frame not sent */
+ dev_kfree_skb(skb);
+ }
/* frame sent to all suitable interfaces */
read_unlock(&bond->lock);
return 0;
-
-free_out:
- /* no suitable interface, frame not sent */
- dev_kfree_skb(skb);
- goto out;
}
#ifdef CONFIG_NET_FASTROUTE
/*------------------------- Device initialization ---------------------------*/
/*
+ * set bond mode specific net device operations
+ */
+static inline void bond_set_mode_ops(struct net_device *bond_dev, int mode)
+{
+ switch (mode) {
+ case BOND_MODE_ROUNDROBIN:
+ bond_dev->hard_start_xmit = bond_xmit_roundrobin;
+ break;
+ case BOND_MODE_ACTIVEBACKUP:
+ bond_dev->hard_start_xmit = bond_xmit_activebackup;
+ break;
+ case BOND_MODE_XOR:
+ bond_dev->hard_start_xmit = bond_xmit_xor;
+ break;
+ case BOND_MODE_BROADCAST:
+ bond_dev->hard_start_xmit = bond_xmit_broadcast;
+ break;
+ case BOND_MODE_8023AD:
+ bond_dev->hard_start_xmit = bond_3ad_xmit_xor;
+ break;
+ case BOND_MODE_TLB:
+ case BOND_MODE_ALB:
+ bond_dev->hard_start_xmit = bond_alb_xmit;
+ bond_dev->set_mac_address = bond_alb_set_mac_address;
+ break;
+ default:
+ /* Should never happen, mode already checked */
+ printk(KERN_ERR DRV_NAME
+ ": Error: Unknown bonding mode %d\n",
+ mode);
+ break;
+ }
+}
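The new `bond_set_mode_ops()` is a switch-based dispatch from a mode number to the per-mode transmit handler. A userspace sketch of the same shape, with stand-in mode constants and handlers (not the kernel symbols):

```c
/* Illustrative sketch of mode-to-handler dispatch, mirroring the switch
 * in bond_set_mode_ops(). Mode constants and handlers are stand-ins. */
#include <assert.h>
#include <stddef.h>

enum { MODE_ROUNDROBIN, MODE_ACTIVEBACKUP, MODE_XOR };

typedef int (*xmit_fn)(void);

static int xmit_roundrobin(void)   { return 0; }
static int xmit_activebackup(void) { return 1; }
static int xmit_xor(void)          { return 2; }

/* Returns the handler for a mode, or NULL for an unknown mode
 * (the "should never happen" default branch). */
static xmit_fn select_xmit(int mode)
{
	switch (mode) {
	case MODE_ROUNDROBIN:   return xmit_roundrobin;
	case MODE_ACTIVEBACKUP: return xmit_activebackup;
	case MODE_XOR:          return xmit_xor;
	default:                return NULL;
	}
}
```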
+
+/*
* Does not allocate but creates a /proc entry.
* Allowed to fail.
*/
-static int __init bond_init(struct net_device *bond_dev)
+static int __init bond_init(struct net_device *bond_dev, struct bond_params *params)
{
struct bonding *bond = bond_dev->priv;
- int count;
dprintk("Begin bond_init for %s\n", bond_dev->name);
rwlock_init(&bond->lock);
rwlock_init(&bond->curr_slave_lock);
+ bond->params = *params; /* copy params struct */
+
/* Initialize pointers */
bond->first_slave = NULL;
bond->curr_active_slave = NULL;
bond->current_arp_slave = NULL;
bond->primary_slave = NULL;
bond->dev = bond_dev;
+ INIT_LIST_HEAD(&bond->vlan_list);
/* Initialize the device entry points */
bond_dev->open = bond_open;
bond_dev->change_mtu = bond_change_mtu;
bond_dev->set_mac_address = bond_set_mac_address;
- switch (bond_mode) {
- case BOND_MODE_ROUNDROBIN:
- bond_dev->hard_start_xmit = bond_xmit_roundrobin;
- break;
- case BOND_MODE_ACTIVEBACKUP:
- bond_dev->hard_start_xmit = bond_xmit_activebackup;
- break;
- case BOND_MODE_XOR:
- bond_dev->hard_start_xmit = bond_xmit_xor;
- break;
- case BOND_MODE_BROADCAST:
- bond_dev->hard_start_xmit = bond_xmit_broadcast;
- break;
- case BOND_MODE_8023AD:
- bond_dev->hard_start_xmit = bond_3ad_xmit_xor; /* extern */
- break;
- case BOND_MODE_TLB:
- case BOND_MODE_ALB:
- bond_dev->hard_start_xmit = bond_alb_xmit; /* extern */
- bond_dev->set_mac_address = bond_alb_set_mac_address; /* extern */
- break;
- default:
- printk(KERN_ERR DRV_NAME
- ": Error: Unknown bonding mode %d\n",
- bond_mode);
- return -EINVAL;
- }
+ bond_set_mode_ops(bond_dev, bond->params.mode);
bond_dev->destructor = free_netdev;
#ifdef CONFIG_NET_FASTROUTE
bond_dev->tx_queue_len = 0;
bond_dev->flags |= IFF_MASTER|IFF_MULTICAST;
- printk(KERN_INFO DRV_NAME ": %s registered with", bond_dev->name);
- if (miimon) {
- printk(" MII link monitoring set to %d ms", miimon);
- updelay /= miimon;
- downdelay /= miimon;
- } else {
- printk("out MII link monitoring");
- }
- printk(", in %s mode.\n", bond_mode_name());
+ /* At first, we block adding VLANs. That's the only way to
+ * prevent problems that occur when adding VLANs over an
+ * empty bond. The block will be removed once non-challenged
+ * slaves are enslaved.
+ */
+ bond_dev->features |= NETIF_F_VLAN_CHALLENGED;
- printk(KERN_INFO DRV_NAME ": %s registered with", bond_dev->name);
- if (arp_interval > 0) {
- printk(" ARP monitoring set to %d ms with %d target(s):",
- arp_interval, arp_ip_count);
- for (count=0 ; count<arp_ip_count ; count++) {
- printk(" %s", arp_ip_target[count]);
- }
- printk("\n");
- } else {
- printk("out ARP monitoring\n");
- }
+ /* By default, we declare the bond to be fully
+ * VLAN hardware accelerated capable. Special
+ * care is taken in the various xmit functions
+ * when there are slaves that are not hw accel
+ * capable
+ */
+ bond_dev->vlan_rx_register = bond_vlan_rx_register;
+ bond_dev->vlan_rx_add_vid = bond_vlan_rx_add_vid;
+ bond_dev->vlan_rx_kill_vid = bond_vlan_rx_kill_vid;
+ bond_dev->features |= (NETIF_F_HW_VLAN_TX |
+ NETIF_F_HW_VLAN_RX |
+ NETIF_F_HW_VLAN_FILTER);
#ifdef CONFIG_PROC_FS
bond_create_proc_entry(bond);
return -1;
}
-static int bond_check_params(void)
+static int bond_check_params(struct bond_params *params)
{
/*
* Convert string parameters.
if (bond_mode != BOND_MODE_8023AD) {
printk(KERN_INFO DRV_NAME
": lacp_rate param is irrelevant in mode %s\n",
- bond_mode_name());
+ bond_mode_name(bond_mode));
} else {
lacp_fast = bond_parse_parm(lacp_rate, bond_lacp_tbl);
if (lacp_fast == -1) {
downdelay = 0;
}
+ if ((use_carrier != 0) && (use_carrier != 1)) {
+ printk(KERN_WARNING DRV_NAME
+ ": Warning: use_carrier module parameter (%d), "
+		       "is not a valid value (0/1), so it was set to 1\n",
+ use_carrier);
+ use_carrier = 1;
+ }
+
/* reset values for 802.3ad */
if (bond_mode == BOND_MODE_8023AD) {
- if (arp_interval) {
- printk(KERN_WARNING DRV_NAME
- ": Warning: ARP monitoring can't be used "
- "simultaneously with 802.3ad, disabling ARP "
- "monitoring\n");
- arp_interval = 0;
- }
-
- if (miimon) {
+ if (!miimon) {
printk(KERN_WARNING DRV_NAME
": Warning: miimon must be specified, "
"otherwise bonding will not detect link "
}
if ((updelay % miimon) != 0) {
- /* updelay will be rounded in bond_init() when it
- * is divided by miimon, we just inform user here
- */
printk(KERN_WARNING DRV_NAME
": Warning: updelay (%d) is not a multiple "
"of miimon (%d), updelay rounded to %d ms\n",
updelay, miimon, (updelay / miimon) * miimon);
}
+ updelay /= miimon;
+
if ((downdelay % miimon) != 0) {
- /* downdelay will be rounded in bond_init() when it
- * is divided by miimon, we just inform user here
- */
printk(KERN_WARNING DRV_NAME
": Warning: downdelay (%d) is not a multiple "
"of miimon (%d), downdelay rounded to %d ms\n",
downdelay, miimon,
(downdelay / miimon) * miimon);
}
+
+ downdelay /= miimon;
}
if (arp_interval < 0) {
}
for (arp_ip_count = 0;
- (arp_ip_count < MAX_ARP_IP_TARGETS) && arp_ip_target[arp_ip_count];
+ (arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[arp_ip_count];
arp_ip_count++) {
/* not complete check, but should be good enough to
catch mistakes */
arp_interval = 0;
}
- if (!miimon && !arp_interval) {
+ if (miimon) {
+ printk(KERN_INFO DRV_NAME
+ ": MII link monitoring set to %d ms\n",
+ miimon);
+ } else if (arp_interval) {
+ int i;
+
+ printk(KERN_INFO DRV_NAME
+ ": ARP monitoring set to %d ms with %d target(s):",
+ arp_interval, arp_ip_count);
+
+ for (i = 0; i < arp_ip_count; i++)
+			printk(" %s", arp_ip_target[i]);
+
+ printk("\n");
+
+ } else {
/* miimon and arp_interval not set, we need one so things
* work as expected, see bonding.txt for details
*/
printk(KERN_WARNING DRV_NAME
": Warning: %s primary device specified but has no "
"effect in %s mode\n",
- primary, bond_mode_name());
+ primary, bond_mode_name(bond_mode));
primary = NULL;
}
+ /* fill params struct with the proper values */
+ params->mode = bond_mode;
+ params->miimon = miimon;
+ params->arp_interval = arp_interval;
+ params->updelay = updelay;
+ params->downdelay = downdelay;
+ params->use_carrier = use_carrier;
+ params->lacp_fast = lacp_fast;
+ params->primary[0] = 0;
+
+ if (primary) {
+ strncpy(params->primary, primary, IFNAMSIZ);
+ params->primary[IFNAMSIZ - 1] = 0;
+ }
+
+ memcpy(params->arp_targets, arp_target, sizeof(arp_target));
+
return 0;
}
static int __init bonding_init(void)
{
+ struct bond_params params;
int i;
int res;
printk(KERN_INFO "%s", version);
- res = bond_check_params();
+	res = bond_check_params(&params);
if (res) {
return res;
}
* /proc files), but before register_netdevice(), because we
* need to set function pointers.
*/
- res = bond_init(bond_dev);
+	res = bond_init(bond_dev, &params);
if (res < 0) {
free_netdev(bond_dev);
goto out_err;
* 2003/05/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Added support for Transmit load balancing mode.
*
- * 2003/09/24 - Shmulik Hen <shmulik.hen at intel dot com>
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
* - Code cleanup and style changes
*/
#include "bond_3ad.h"
#include "bond_alb.h"
-#define DRV_VERSION "2.5.0"
-#define DRV_RELDATE "December 1, 2003"
+#define DRV_VERSION "2.6.0"
+#define DRV_RELDATE "January 14, 2004"
#define DRV_NAME "bonding"
#define DRV_DESCRIPTION "Ethernet Channel Bonding Driver"
+#define BOND_MAX_ARP_TARGETS 16
+
#ifdef BONDING_DEBUG
#define dprintk(fmt, args...) \
printk(KERN_DEBUG \
bond_for_each_slave_from(bond, pos, cnt, (bond)->first_slave)
+struct bond_params {
+ int mode;
+ int miimon;
+ int arp_interval;
+ int use_carrier;
+ int updelay;
+ int downdelay;
+ int lacp_fast;
+ char primary[IFNAMSIZ];
+ u32 arp_targets[BOND_MAX_ARP_TARGETS];
+};
+
+struct vlan_entry {
+ struct list_head vlan_list;
+ unsigned short vlan_id;
+};
+
struct slave {
	struct net_device *dev; /* first - useful for panic debug */
struct slave *next;
u16 flags;
struct ad_bond_info ad_info;
struct alb_bond_info alb_info;
+ struct bond_params params;
+ struct list_head vlan_list;
+ struct vlan_group *vlgrp;
};
/**
slave->dev->flags &= ~IFF_NOARP;
}
+struct vlan_entry *bond_next_vlan(struct bonding *bond, struct vlan_entry *curr);
+int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, struct net_device *slave_dev);
+
#endif /* _LINUX_BONDING_H */
module_init(bsdcomp_init);
module_exit(bsdcomp_cleanup);
MODULE_LICENSE("Dual BSD/GPL");
+MODULE_ALIAS("ppp-compress-" __stringify(CI_BSD_COMPRESS));
out_unclaim:
mca_device_set_claim(mdev, 0);
- return err;;
+ return err;
}
#endif
{
int err = 0;
-#if CONFIG_MCA
+#ifdef CONFIG_MCA
err = mca_register_driver (&depca_mca_driver);
#endif
#ifdef CONFIG_EISA
static void __exit depca_module_exit (void)
{
int i;
-#if CONFIG_MCA
+#ifdef CONFIG_MCA
mca_unregister_driver (&depca_mca_driver);
#endif
#ifdef CONFIG_EISA
#include "dgrs_asstruct.h"
#include "dgrs_bcomm.h"
+#ifdef CONFIG_PCI
static struct pci_device_id dgrs_pci_tbl[] = {
{ SE6_PCI_VENDOR_ID, SE6_PCI_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID, },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(pci, dgrs_pci_tbl);
+#endif
+
+#ifdef CONFIG_EISA
+static struct eisa_device_id dgrs_eisa_tbl[] = {
+ { "DBI0A01" },
+ { }
+};
+MODULE_DEVICE_TABLE(eisa, dgrs_eisa_tbl);
+#endif
+
MODULE_LICENSE("GPL");
static int dgrs_nicmode;
/*
- * Chain of device structures
- */
-static struct net_device *dgrs_root_dev;
-
-/*
* Private per-board data structure (dev->priv)
*/
typedef struct
/*
* Stuff for generic ethercard I/F
*/
- struct net_device *next_dev;
struct net_device_stats stats;
/*
priv->intrcnt = 0;
for (i = jiffies + 2*HZ + HZ/2; time_after(i, jiffies); )
{
- barrier(); /* gcc 2.95 needs this */
+ cpu_relax();
if (priv->intrcnt >= 2)
break;
}
}
/*
- * Register the /proc/ioports information...
- */
- if (!request_region(dev->base_addr, 256, "RightSwitch")) {
- printk(KERN_ERR "%s: io 0x%3lX, which is busy.\n", dev->name,
- dev->base_addr);
- rc = -EBUSY;
- goto err_free_irq;
- }
-
- /*
* Entry points...
*/
dev->open = &dgrs_open;
return (0);
}
-static int __init
+static struct net_device * __init
dgrs_found_device(
int io,
ulong mem,
int irq,
ulong plxreg,
- ulong plxdma
+ ulong plxdma,
+ struct device *pdev
)
{
- DGRS_PRIV *priv;
- struct net_device *dev, *aux;
- int i, ret;
+ DGRS_PRIV *priv;
+ struct net_device *dev;
+ int i, ret = -ENOMEM;
dev = alloc_etherdev(sizeof(DGRS_PRIV));
if (!dev)
- return -ENOMEM;
+ goto err0;
priv = (DGRS_PRIV *)dev->priv;
priv->chan = 1;
priv->devtbl[0] = dev;
- dev->init = dgrs_probe1;
SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, pdev);
+
+ ret = dgrs_probe1(dev);
+ if (ret)
+ goto err1;
- if (register_netdev(dev) != 0) {
- free_netdev(dev);
- return -EIO;
- }
-
- priv->next_dev = dgrs_root_dev;
- dgrs_root_dev = dev;
+ ret = register_netdev(dev);
+ if (ret)
+ goto err2;
if ( !dgrs_nicmode )
- return (0); /* Switch mode, we are done */
+ return dev; /* Switch mode, we are done */
/*
* Operating card as N separate NICs
if (!devN)
goto fail;
- /* Make it an exact copy of dev[0]... */
- *devN = *dev;
+ /* Don't copy the network device structure! */
/* copy the priv structure of dev[0] */
privN = (DGRS_PRIV *)devN->priv;
devN->irq = 0;
/* ... and base MAC address off address of 1st port */
devN->dev_addr[5] += i;
- /* ... choose a new name */
- strncpy(devN->name, "eth%d", IFNAMSIZ);
- devN->init = dgrs_initclone;
+
+ ret = dgrs_initclone(devN);
+ if (ret)
+ goto fail;
+
SET_MODULE_OWNER(devN);
+		SET_NETDEV_DEV(devN, pdev);
- ret = -EIO;
- if (register_netdev(devN)) {
+ ret = register_netdev(devN);
+ if (ret) {
free_netdev(devN);
goto fail;
}
privN->chan = i+1;
priv->devtbl[i] = devN;
- privN->next_dev = dgrs_root_dev;
- dgrs_root_dev = devN;
}
- return 0;
-fail: aux = priv->next_dev;
- while (dgrs_root_dev != aux) {
- struct net_device *d = dgrs_root_dev;
-
- dgrs_root_dev = ((DGRS_PRIV *)d->priv)->next_dev;
+ return dev;
+
+ fail:
+ while (i >= 0) {
+ struct net_device *d = priv->devtbl[i--];
unregister_netdev(d);
free_netdev(d);
}
- return ret;
+
+ err2:
+ free_irq(dev->irq, dev);
+ err1:
+ free_netdev(dev);
+ err0:
+ return ERR_PTR(ret);
}
-/*
- * Scan for all boards
- */
-static int is2iv[8] __initdata = { 0, 3, 5, 7, 10, 11, 12, 15 };
+static void __devexit dgrs_remove(struct net_device *dev)
+{
+ DGRS_PRIV *priv = dev->priv;
+ int i;
+
+ unregister_netdev(dev);
+
+ for (i = 1; i < priv->nports; ++i) {
+ struct net_device *d = priv->devtbl[i];
+ if (d) {
+ unregister_netdev(d);
+ free_netdev(d);
+ }
+ }
+
+ proc_reset(priv->devtbl[0], 1);
+
+ if (priv->vmem)
+ iounmap(priv->vmem);
+ if (priv->vplxdma)
+ iounmap((uchar *) priv->vplxdma);
+
+ if (dev->irq)
+ free_irq(dev->irq, dev);
-static int __init dgrs_scan(void)
+}
+
+#ifdef CONFIG_PCI
+static int __init dgrs_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
- int cards_found = 0;
+ struct net_device *dev;
+ int err;
uint io;
uint mem;
uint irq;
uint plxreg;
uint plxdma;
- struct pci_dev *pdev = NULL;
/*
- * First, check for PCI boards
+ * Get and check the bus-master and latency values.
+ * Some PCI BIOSes fail to set the master-enable bit,
+ * and the latency timer must be set to the maximum
+ * value to avoid data corruption that occurs when the
+ * timer expires during a transfer. Yes, it's a bug.
*/
- while ((pdev = pci_find_device(SE6_PCI_VENDOR_ID, SE6_PCI_DEVICE_ID, pdev)) != NULL)
- {
- /*
- * Get and check the bus-master and latency values.
- * Some PCI BIOSes fail to set the master-enable bit,
- * and the latency timer must be set to the maximum
- * value to avoid data corruption that occurs when the
- * timer expires during a transfer. Yes, it's a bug.
- */
- if (pci_enable_device(pdev))
- continue;
- pci_set_master(pdev);
+ err = pci_enable_device(pdev);
+ if (err)
+ return err;
+ err = pci_request_regions(pdev, "RightSwitch");
+ if (err)
+ return err;
+
+ pci_set_master(pdev);
+
+ plxreg = pci_resource_start (pdev, 0);
+ io = pci_resource_start (pdev, 1);
+ mem = pci_resource_start (pdev, 2);
+ pci_read_config_dword(pdev, 0x30, &plxdma);
+ irq = pdev->irq;
+ plxdma &= ~15;
- plxreg = pci_resource_start (pdev, 0);
- io = pci_resource_start (pdev, 1);
- mem = pci_resource_start (pdev, 2);
- pci_read_config_dword(pdev, 0x30, &plxdma);
- irq = pdev->irq;
- plxdma &= ~15;
+ /*
+ * On some BIOSES, the PLX "expansion rom" (used for DMA)
+ * address comes up as "0". This is probably because
+ * the BIOS doesn't see a valid 55 AA ROM signature at
+ * the "ROM" start and zeroes the address. To get
+ * around this problem the SE-6 is configured to ask
+ * for 4 MB of space for the dual port memory. We then
+ * must set its range back to 2 MB, and use the upper
+ * half for DMA register access
+ */
+ OUTL(io + PLX_SPACE0_RANGE, 0xFFE00000L);
+ if (plxdma == 0)
+ plxdma = mem + (2048L * 1024L);
+ pci_write_config_dword(pdev, 0x30, plxdma + 1);
+ pci_read_config_dword(pdev, 0x30, &plxdma);
+ plxdma &= ~15;
+
+ dev = dgrs_found_device(io, mem, irq, plxreg, plxdma, &pdev->dev);
+ if (IS_ERR(dev)) {
+ pci_release_regions(pdev);
+ return PTR_ERR(dev);
+ }
- /*
- * On some BIOSES, the PLX "expansion rom" (used for DMA)
- * address comes up as "0". This is probably because
- * the BIOS doesn't see a valid 55 AA ROM signature at
- * the "ROM" start and zeroes the address. To get
- * around this problem the SE-6 is configured to ask
- * for 4 MB of space for the dual port memory. We then
- * must set its range back to 2 MB, and use the upper
- * half for DMA register access
- */
- OUTL(io + PLX_SPACE0_RANGE, 0xFFE00000L);
- if (plxdma == 0)
- plxdma = mem + (2048L * 1024L);
- pci_write_config_dword(pdev, 0x30, plxdma + 1);
- pci_read_config_dword(pdev, 0x30, &plxdma);
- plxdma &= ~15;
+ pci_set_drvdata(pdev, dev);
+ return 0;
+}
- dgrs_found_device(io, mem, irq, plxreg, plxdma);
+static void __devexit dgrs_pci_remove(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
- cards_found++;
- }
+ dgrs_remove(dev);
+ pci_release_regions(pdev);
+ free_netdev(dev);
+}
- /*
- * Second, check for EISA boards
- */
- if (EISA_bus)
- {
- for (io = 0x1000; io < 0x9000; io += 0x1000)
- {
- if (inb(io+ES4H_MANUFmsb) != 0x10
- || inb(io+ES4H_MANUFlsb) != 0x49
- || inb(io+ES4H_PRODUCT) != ES4H_PRODUCT_CODE)
- continue;
+static struct pci_driver dgrs_pci_driver = {
+ .name = "dgrs",
+ .id_table = dgrs_pci_tbl,
+ .probe = dgrs_pci_probe,
+ .remove = __devexit_p(dgrs_pci_remove),
+};
+#endif
- if ( ! (inb(io+ES4H_EC) & ES4H_EC_ENABLE) )
- continue; /* Not EISA configured */
- mem = (inb(io+ES4H_AS_31_24) << 24)
- + (inb(io+ES4H_AS_23_16) << 16);
+#ifdef CONFIG_EISA
+static int is2iv[8] __initdata = { 0, 3, 5, 7, 10, 11, 12, 15 };
- irq = is2iv[ inb(io+ES4H_IS) & ES4H_IS_INTMASK ];
+static int __init dgrs_eisa_probe (struct device *gendev)
+{
+ struct net_device *dev;
+ struct eisa_device *edev = to_eisa_device(gendev);
+ uint io = edev->base_addr;
+ uint mem;
+ uint irq;
+ int rc = -ENODEV; /* Not EISA configured */
- dgrs_found_device(io, mem, irq, 0L, 0L);
+ if (!request_region(io, 256, "RightSwitch")) {
+		printk(KERN_ERR "dgrs: io 0x%3X, which is busy.\n", io);
+ return -EBUSY;
+ }
- ++cards_found;
- }
+ if ( ! (inb(io+ES4H_EC) & ES4H_EC_ENABLE) )
+ goto err_out;
+
+ mem = (inb(io+ES4H_AS_31_24) << 24)
+ + (inb(io+ES4H_AS_23_16) << 16);
+
+ irq = is2iv[ inb(io+ES4H_IS) & ES4H_IS_INTMASK ];
+
+ dev = dgrs_found_device(io, mem, irq, 0L, 0L, gendev);
+ if (IS_ERR(dev)) {
+ rc = PTR_ERR(dev);
+ goto err_out;
}
- return cards_found;
+ gendev->driver_data = dev;
+ return 0;
+ err_out:
+ release_region(io, 256);
+ return rc;
}
+static int __devexit dgrs_eisa_remove(struct device *gendev)
+{
+ struct net_device *dev = gendev->driver_data;
+
+ dgrs_remove(dev);
+
+ release_region(dev->base_addr, 256);
+
+ free_netdev(dev);
+ return 0;
+}
+
+
+static struct eisa_driver dgrs_eisa_driver = {
+ .id_table = dgrs_eisa_tbl,
+ .driver = {
+ .name = "dgrs",
+ .probe = dgrs_eisa_probe,
+ .remove = __devexit_p(dgrs_eisa_remove),
+ }
+};
+#endif
/*
* Variables that can be overriden from module command line
static int __init dgrs_init_module (void)
{
- int cards_found;
int i;
+ int eisacount = 0, pcicount = 0;
/*
* Command line variable overrides
/*
* Find and configure all the cards
*/
- dgrs_root_dev = NULL;
- cards_found = dgrs_scan();
-
- return cards_found ? 0 : -ENODEV;
+#ifdef CONFIG_EISA
+ eisacount = eisa_driver_register(&dgrs_eisa_driver);
+ if (eisacount < 0)
+ return eisacount;
+#endif
+#ifdef CONFIG_PCI
+ pcicount = pci_register_driver(&dgrs_pci_driver);
+ if (pcicount < 0)
+ return pcicount;
+#endif
+ return (eisacount + pcicount) == 0 ? -ENODEV : 0;
}
static void __exit dgrs_cleanup_module (void)
{
- while (dgrs_root_dev)
- {
- struct net_device *next_dev;
- DGRS_PRIV *priv;
-
- priv = (DGRS_PRIV *) dgrs_root_dev->priv;
- next_dev = priv->next_dev;
- unregister_netdev(dgrs_root_dev);
-
- proc_reset(priv->devtbl[0], 1);
-
- if (priv->vmem)
- iounmap(priv->vmem);
- if (priv->vplxdma)
- iounmap((uchar *) priv->vplxdma);
-
- release_region(dgrs_root_dev->base_addr, 256);
-
- if (dgrs_root_dev->irq)
- free_irq(dgrs_root_dev->irq, dgrs_root_dev);
-
- free_netdev(dgrs_root_dev);
- dgrs_root_dev = next_dev;
- }
+#ifdef CONFIG_EISA
+ eisa_driver_unregister (&dgrs_eisa_driver);
+#endif
+#ifdef CONFIG_PCI
+ pci_unregister_driver (&dgrs_pci_driver);
+#endif
}
module_init(dgrs_init_module);
--- /dev/null
+/*******************************************************************************
+
+
+ Copyright(c) 1999 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the Free
+ Software Foundation; either version 2 of the License, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ Linux NICS <linux.nics@intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+*******************************************************************************/
+
+/*
+ * e100.c: Intel(R) PRO/100 ethernet driver
+ *
+ * (Re)written 2003 by scott.feldman@intel.com. Based loosely on
+ * original e100 driver, but better described as a munging of
+ * e100, e1000, eepro100, tg3, 8139cp, and other drivers.
+ *
+ * References:
+ * Intel 8255x 10/100 Mbps Ethernet Controller Family,
+ * Open Source Software Developers Manual,
+ * http://sourceforge.net/projects/e1000
+ *
+ *
+ * Theory of Operation
+ *
+ * I. General
+ *
+ * The driver supports Intel(R) 10/100 Mbps PCI Fast Ethernet
+ * controller family, which includes the 82557, 82558, 82559, 82550,
+ * 82551, and 82562 devices. 82558 and greater controllers
+ * integrate the Intel 82555 PHY. The controllers are used in
+ * server and client network interface cards, as well as in
+ * LAN-On-Motherboard (LOM), CardBus, MiniPCI, and ICHx
+ * configurations. 8255x supports a 32-bit linear addressing
+ *	mode and operates at a 33 MHz PCI clock rate.
+ *
+ * II. Driver Operation
+ *
+ * Memory-mapped mode is used exclusively to access the device's
+ * shared-memory structure, the Control/Status Registers (CSR). All
+ * setup, configuration, and control of the device, including queuing
+ * of Tx, Rx, and configuration commands is through the CSR.
+ * cmd_lock serializes accesses to the CSR command register. cb_lock
+ * protects the shared Command Block List (CBL).
+ *
+ *	8255x is highly MII-compliant and all access to the PHY goes
+ * through the Management Data Interface (MDI). Consequently, the
+ * driver leverages the mii.c library shared with other MII-compliant
+ * devices.
+ *
+ * Big- and Little-Endian byte order as well as 32- and 64-bit
+ * archs are supported. Weak-ordered memory and non-cache-coherent
+ * archs are supported.
+ *
+ * III. Transmit
+ *
+ * A Tx skb is mapped and hangs off of a TCB. TCBs are linked
+ * together in a fixed-size ring (CBL) thus forming the flexible mode
+ * memory structure. A TCB marked with the suspend-bit indicates
+ * the end of the ring. The last TCB processed suspends the
+ *	controller, and the controller can be restarted by issuing a CU
+ * resume command to continue from the suspend point, or a CU start
+ * command to start at a given position in the ring.
+ *
+ * Non-Tx commands (config, multicast setup, etc) are linked
+ * into the CBL ring along with Tx commands. The common structure
+ * used for both Tx and non-Tx commands is the Command Block (CB).
+ *
+ * cb_to_use is the next CB to use for queuing a command; cb_to_clean
+ * is the next CB to check for completion; cb_to_send is the first
+ * CB to start on in case of a previous failure to resume. CB clean
+ * up happens in interrupt context in response to a CU interrupt, or
+ * in dev->poll in the case where NAPI is enabled. cbs_avail keeps
+ * track of number of free CB resources available.
+ *
+ * Hardware padding of short packets to minimum packet size is
+ * enabled. 82557 pads with 7Eh, while the later controllers pad
+ * with 00h.
+ *
+ * IV. Receive
+ *
+ * The Receive Frame Area (RFA) comprises a ring of Receive Frame
+ * Descriptors (RFD) + data buffer, thus forming the simplified mode
+ * memory structure. Rx skbs are allocated to contain both the RFD
+ * and the data buffer, but the RFD is pulled off before the skb is
+ * indicated. The data buffer is aligned such that encapsulated
+ * protocol headers are u32-aligned. Since the RFD is part of the
+ * mapped shared memory, and completion status is contained within
+ * the RFD, the RFD must be dma_sync'ed to maintain a consistent
+ * view from software and hardware.
+ *
+ *	Under typical operation, the receive unit (RU) is started once,
+ * and the controller happily fills RFDs as frames arrive. If
+ * replacement RFDs cannot be allocated, or the RU goes non-active,
+ * the RU must be restarted. Frame arrival generates an interrupt,
+ * and Rx indication and re-allocation happen in the same context,
+ * therefore no locking is required. If NAPI is enabled, this work
+ *	happens in dev->poll.  A software interrupt is generated from
+ *	the watchdog to recover from a failed-allocation scenario where
+ *	all Rx resources have been indicated and none replaced.
+ *
+ * V. Miscellaneous
+ *
+ * VLAN offloading of tagging, stripping and filtering is not
+ * supported, but driver will accommodate the extra 4-byte VLAN tag
+ * for processing by upper layers. Tx/Rx Checksum offloading is not
+ *	supported.  Tx Scatter/Gather is not supported.  Jumbo Frames are
+ * not supported (hardware limitation).
+ *
+ * NAPI support is enabled with CONFIG_E100_NAPI.
+ *
+ * MagicPacket(tm) WoL support is enabled/disabled via ethtool.
+ *
+ * Thanks to JC (jchapman@katalix.com) for helping with
+ * testing/troubleshooting the development driver.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/skbuff.h>
+#include <linux/ethtool.h>
+#include <linux/string.h>
+#include <asm/unaligned.h>
+
+
+#define DRV_NAME "e100"
+#define DRV_VERSION "3.0.13_dev"
+#define DRV_DESCRIPTION "Intel(R) PRO/100 Network Driver"
+#define DRV_COPYRIGHT "Copyright(c) 1999-2004 Intel Corporation"
+#define PFX DRV_NAME ": "
+
+#define E100_WATCHDOG_PERIOD	(2 * HZ)
+#define E100_NAPI_WEIGHT 16
+
+MODULE_DESCRIPTION(DRV_DESCRIPTION);
+MODULE_AUTHOR(DRV_COPYRIGHT);
+MODULE_LICENSE("GPL");
+
+static int debug = 3;
+module_param(debug, int, 0);
+MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
+#define DPRINTK(nlevel, klevel, fmt, args...) \
+ (void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
+ printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
+ __FUNCTION__ , ## args))
+
+#define INTEL_8255X_ETHERNET_DEVICE(device_id, ich) {\
+ PCI_VENDOR_ID_INTEL, device_id, PCI_ANY_ID, PCI_ANY_ID, \
+ PCI_CLASS_NETWORK_ETHERNET << 8, 0xFFFF00, ich }
+static struct pci_device_id e100_id_table[] = {
+ INTEL_8255X_ETHERNET_DEVICE(0x1029, 0),
+ INTEL_8255X_ETHERNET_DEVICE(0x1030, 0),
+ INTEL_8255X_ETHERNET_DEVICE(0x1031, 3),
+ INTEL_8255X_ETHERNET_DEVICE(0x1032, 3),
+ INTEL_8255X_ETHERNET_DEVICE(0x1033, 3),
+ INTEL_8255X_ETHERNET_DEVICE(0x1034, 3),
+ INTEL_8255X_ETHERNET_DEVICE(0x1038, 3),
+ INTEL_8255X_ETHERNET_DEVICE(0x1039, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x103A, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x103B, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x103C, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x103D, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x103E, 4),
+ INTEL_8255X_ETHERNET_DEVICE(0x1050, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1051, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1052, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1053, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1054, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1055, 5),
+ INTEL_8255X_ETHERNET_DEVICE(0x1064, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1065, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1066, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1067, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1068, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1069, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x106A, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x106B, 6),
+ INTEL_8255X_ETHERNET_DEVICE(0x1059, 0),
+ INTEL_8255X_ETHERNET_DEVICE(0x1209, 0),
+ INTEL_8255X_ETHERNET_DEVICE(0x1229, 0),
+ INTEL_8255X_ETHERNET_DEVICE(0x2449, 2),
+ INTEL_8255X_ETHERNET_DEVICE(0x2459, 2),
+ INTEL_8255X_ETHERNET_DEVICE(0x245D, 2),
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, e100_id_table);
+
+enum mac {
+ mac_82557_D100_A = 0,
+ mac_82557_D100_B = 1,
+ mac_82557_D100_C = 2,
+ mac_82558_D101_A4 = 4,
+ mac_82558_D101_B0 = 5,
+ mac_82559_D101M = 8,
+ mac_82559_D101S = 9,
+ mac_82550_D102 = 12,
+ mac_82550_D102_C = 13,
+ mac_82551_E = 14,
+ mac_82551_F = 15,
+ mac_82551_10 = 16,
+ mac_unknown = 0xFF,
+};
+
+enum phy {
+ phy_100a = 0x000003E0,
+ phy_100c = 0x035002A8,
+ phy_82555_tx = 0x015002A8,
+ phy_nsc_tx = 0x5C002000,
+ phy_82562_et = 0x033002A8,
+ phy_82562_em = 0x032002A8,
+ phy_82562_eh = 0x017002A8,
+ phy_unknown = 0xFFFFFFFF,
+};
+
+/* CSR (Control/Status Registers) */
+struct csr {
+ struct {
+ u8 status;
+ u8 stat_ack;
+ u8 cmd_lo;
+ u8 cmd_hi;
+ u32 gen_ptr;
+ } scb;
+ u32 port;
+ u16 flash_ctrl;
+ u8 eeprom_ctrl_lo;
+ u8 eeprom_ctrl_hi;
+ u32 mdi_ctrl;
+ u32 rx_dma_count;
+};
+
+enum scb_status {
+ rus_ready = 0x10,
+ rus_mask = 0x3C,
+};
+
+enum scb_stat_ack {
+ stat_ack_not_ours = 0x00,
+ stat_ack_sw_gen = 0x04,
+ stat_ack_rnr = 0x10,
+ stat_ack_cu_idle = 0x20,
+ stat_ack_frame_rx = 0x40,
+ stat_ack_cu_cmd_done = 0x80,
+ stat_ack_not_present = 0xFF,
+ stat_ack_rx = (stat_ack_sw_gen | stat_ack_rnr | stat_ack_frame_rx),
+ stat_ack_tx = (stat_ack_cu_idle | stat_ack_cu_cmd_done),
+};
+
+enum scb_cmd_hi {
+ irq_mask_none = 0x00,
+ irq_mask_all = 0x01,
+ irq_sw_gen = 0x02,
+};
+
+enum scb_cmd_lo {
+ ruc_start = 0x01,
+ ruc_load_base = 0x06,
+ cuc_start = 0x10,
+ cuc_resume = 0x20,
+ cuc_dump_addr = 0x40,
+ cuc_dump_stats = 0x50,
+ cuc_load_base = 0x60,
+ cuc_dump_reset = 0x70,
+};
+
+enum port {
+ software_reset = 0x0000,
+ selftest = 0x0001,
+ selective_reset = 0x0002,
+};
+
+enum eeprom_ctrl_lo {
+ eesk = 0x01,
+ eecs = 0x02,
+ eedi = 0x04,
+ eedo = 0x08,
+};
+
+enum mdi_ctrl {
+ mdi_write = 0x04000000,
+ mdi_read = 0x08000000,
+ mdi_ready = 0x10000000,
+};
+
+enum eeprom_op {
+ op_write = 0x05,
+ op_read = 0x06,
+ op_ewds = 0x10,
+ op_ewen = 0x13,
+};
+
+enum eeprom_offsets {
+ eeprom_id = 0x0A,
+ eeprom_config_asf = 0x0D,
+ eeprom_smbus_addr = 0x90,
+};
+
+enum eeprom_id {
+ eeprom_id_wol = 0x0020,
+};
+
+enum eeprom_config_asf {
+ eeprom_asf = 0x8000,
+ eeprom_gcl = 0x4000,
+};
+
+enum cb_status {
+ cb_complete = 0x8000,
+ cb_ok = 0x2000,
+};
+
+enum cb_command {
+ cb_iaaddr = 0x0001,
+ cb_config = 0x0002,
+ cb_multi = 0x0003,
+ cb_tx = 0x0004,
+ cb_dump = 0x0006,
+ cb_tx_sf = 0x0008,
+ cb_cid = 0x1f00,
+ cb_i = 0x2000,
+ cb_s = 0x4000,
+ cb_el = 0x8000,
+};
+
+struct rfd {
+ u16 status;
+ u16 command;
+ u32 link;
+ u32 rbd;
+ u16 actual_size;
+ u16 size;
+};
+
+struct rx {
+ struct rx *next, *prev;
+ struct sk_buff *skb;
+ dma_addr_t dma_addr;
+};
+
+#if defined(__BIG_ENDIAN_BITFIELD)
+#define X(a,b) b,a
+#else
+#define X(a,b) a,b
+#endif
+struct config {
+/*0*/ u8 X(byte_count:6, pad0:2);
+/*1*/ u8 X(X(rx_fifo_limit:4, tx_fifo_limit:3), pad1:1);
+/*2*/ u8 adaptive_ifs;
+/*3*/ u8 X(X(X(X(mwi_enable:1, type_enable:1), read_align_enable:1),
+ term_write_cache_line:1), pad3:4);
+/*4*/ u8 X(rx_dma_max_count:7, pad4:1);
+/*5*/ u8 X(tx_dma_max_count:7, dma_max_count_enable:1);
+/*6*/ u8 X(X(X(X(X(X(X(late_scb_update:1, direct_rx_dma:1),
+ tno_intr:1), cna_intr:1), standard_tcb:1), standard_stat_counter:1),
+ rx_discard_overruns:1), rx_save_bad_frames:1);
+/*7*/ u8 X(X(X(X(X(rx_discard_short_frames:1, tx_underrun_retry:2),
+ pad7:2), rx_extended_rfd:1), tx_two_frames_in_fifo:1),
+ tx_dynamic_tbd:1);
+/*8*/ u8 X(X(mii_mode:1, pad8:6), csma_disabled:1);
+/*9*/ u8 X(X(X(X(X(rx_tcpudp_checksum:1, pad9:3), vlan_arp_tco:1),
+ link_status_wake:1), arp_wake:1), mcmatch_wake:1);
+/*10*/ u8 X(X(X(pad10:3, no_source_addr_insertion:1), preamble_length:2),
+ loopback:2);
+/*11*/ u8 X(linear_priority:3, pad11:5);
+/*12*/ u8 X(X(linear_priority_mode:1, pad12:3), ifs:4);
+/*13*/ u8 ip_addr_lo;
+/*14*/ u8 ip_addr_hi;
+/*15*/ u8 X(X(X(X(X(X(X(promiscuous_mode:1, broadcast_disabled:1),
+ wait_after_win:1), pad15_1:1), ignore_ul_bit:1), crc_16_bit:1),
+ pad15_2:1), crs_or_cdt:1);
+/*16*/ u8 fc_delay_lo;
+/*17*/ u8 fc_delay_hi;
+/*18*/ u8 X(X(X(X(X(rx_stripping:1, tx_padding:1), rx_crc_transfer:1),
+ rx_long_ok:1), fc_priority_threshold:3), pad18:1);
+/*19*/ u8 X(X(X(X(X(X(X(addr_wake:1, magic_packet_disable:1),
+ fc_disable:1), fc_restop:1), fc_restart:1), fc_reject:1),
+ full_duplex_force:1), full_duplex_pin:1);
+/*20*/ u8 X(X(X(pad20_1:5, fc_priority_location:1), multi_ia:1), pad20_2:1);
+/*21*/ u8 X(X(pad21_1:3, multicast_all:1), pad21_2:4);
+/*22*/ u8 X(X(rx_d102_mode:1, rx_vlan_drop:1), pad22:6);
+ u8 pad_d102[9];
+};
+
+#define E100_MAX_MULTICAST_ADDRS 64
+struct multi {
+ u16 count;
+ u8 addr[E100_MAX_MULTICAST_ADDRS * ETH_ALEN + 2/*pad*/];
+};
+
+/* Important: keep total struct u32-aligned */
+struct cb {
+ u16 status;
+ u16 command;
+ u32 link;
+ union {
+ u8 iaaddr[ETH_ALEN];
+ struct config config;
+ struct multi multi;
+ struct {
+ u32 tbd_array;
+ u16 tcb_byte_count;
+ u8 threshold;
+ u8 tbd_count;
+ struct {
+ u32 buf_addr;
+ u16 size;
+ u16 eol;
+ } tbd;
+ } tcb;
+ u32 dump_buffer_addr;
+ } u;
+ struct cb *next, *prev;
+ dma_addr_t dma_addr;
+ struct sk_buff *skb;
+};
+
+enum loopback {
+ lb_none = 0, lb_mac = 1, lb_phy = 3,
+};
+
+struct stats {
+ u32 tx_good_frames, tx_max_collisions, tx_late_collisions,
+ tx_underruns, tx_lost_crs, tx_deferred, tx_single_collisions,
+ tx_multiple_collisions, tx_total_collisions;
+ u32 rx_good_frames, rx_crc_errors, rx_alignment_errors,
+ rx_resource_errors, rx_overrun_errors, rx_cdt_errors,
+ rx_short_frame_errors;
+ u32 fc_xmt_pause, fc_rcv_pause, fc_rcv_unsupported;
+ u16 xmt_tco_frames, rcv_tco_frames;
+ u32 complete;
+};
+
+struct mem {
+ struct {
+ u32 signature;
+ u32 result;
+ } selftest;
+ struct stats stats;
+ u8 dump_buf[596];
+};
+
+struct param_range {
+ u32 min;
+ u32 max;
+ u32 count;
+};
+
+struct params {
+ struct param_range rfds;
+ struct param_range cbs;
+};
+
+struct nic {
+ /* Begin: frequently used values: keep adjacent for cache effect */
+ u32 msg_enable ____cacheline_aligned;
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+
+ struct rx *rxs ____cacheline_aligned;
+ struct rx *rx_to_use;
+ struct rx *rx_to_clean;
+ struct rfd blank_rfd;
+ int ru_running;
+
+ spinlock_t cb_lock ____cacheline_aligned;
+ spinlock_t cmd_lock;
+ struct csr *csr;
+ enum scb_cmd_lo cuc_cmd;
+ unsigned int cbs_avail;
+ struct cb *cbs;
+ struct cb *cb_to_use;
+ struct cb *cb_to_send;
+ struct cb *cb_to_clean;
+ u16 tx_command;
+ /* End: frequently used values: keep adjacent for cache effect */
+
+ enum {
+ ich = (1 << 0),
+ promiscuous = (1 << 1),
+ multicast_all = (1 << 2),
+ wol_magic = (1 << 3),
+ } flags ____cacheline_aligned;
+
+ enum mac mac;
+ enum phy phy;
+ struct params params;
+ struct net_device_stats net_stats;
+ struct timer_list watchdog;
+ struct timer_list blink_timer;
+ struct mii_if_info mii;
+ enum loopback loopback;
+
+ struct mem *mem;
+ dma_addr_t dma_addr;
+
+ dma_addr_t cbs_dma_addr;
+ u8 adaptive_ifs;
+ u8 tx_threshold;
+ u32 tx_frames;
+ u32 tx_collisions;
+ u32 tx_deferred;
+ u32 tx_single_collisions;
+ u32 tx_multiple_collisions;
+ u32 tx_fc_pause;
+ u32 tx_tco_frames;
+
+ u32 rx_fc_pause;
+ u32 rx_fc_unsupported;
+ u32 rx_tco_frames;
+
+ u8 rev_id;
+ u16 leds;
+ u16 eeprom_wc;
+ u16 eeprom[256];
+ u32 pm_state[16];
+};
+
+static inline void e100_write_flush(struct nic *nic)
+{
+ /* Flush previous PCI writes through intermediate bridges
+ * by doing a benign read */
+ (void)readb(&nic->csr->scb.status);
+}
+
+static inline void e100_enable_irq(struct nic *nic)
+{
+ writeb(irq_mask_none, &nic->csr->scb.cmd_hi);
+ e100_write_flush(nic);
+}
+
+static inline void e100_disable_irq(struct nic *nic)
+{
+ writeb(irq_mask_all, &nic->csr->scb.cmd_hi);
+ e100_write_flush(nic);
+}
+
+static void e100_hw_reset(struct nic *nic)
+{
+ /* Put CU and RU into idle with a selective reset to get
+ * device off of PCI bus */
+ writel(selective_reset, &nic->csr->port);
+ e100_write_flush(nic); udelay(20);
+
+ /* Now fully reset device */
+ writel(software_reset, &nic->csr->port);
+ e100_write_flush(nic); udelay(20);
+
+ /* TCO workaround - 82559 and greater */
+ if(nic->mac >= mac_82559_D101M) {
+ /* Issue a redundant CU load base without setting
+ * general pointer, and without waiting for scb to
+ * clear. This gets us into post-driver. Finally,
+ * wait 20 msec for reset to take effect. */
+ writeb(cuc_load_base, &nic->csr->scb.cmd_lo);
+ mdelay(20);
+ }
+
+ /* Mask off our interrupt line - it's unmasked after reset */
+ e100_disable_irq(nic);
+}
+
+static int e100_self_test(struct nic *nic)
+{
+ u32 dma_addr = nic->dma_addr + offsetof(struct mem, selftest);
+
+ /* Passing the self-test is a pretty good indication
+ * that the device can DMA to/from host memory */
+
+ nic->mem->selftest.signature = 0;
+ nic->mem->selftest.result = 0xFFFFFFFF;
+
+ writel(selftest | dma_addr, &nic->csr->port);
+ e100_write_flush(nic);
+ /* Wait 10 msec for self-test to complete */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ / 100 + 1);
+
+ /* Interrupts are enabled after self-test */
+ e100_disable_irq(nic);
+
+ /* Check results of self-test */
+ if(nic->mem->selftest.result != 0) {
+ DPRINTK(HW, ERR, "Self-test failed: result=0x%08X\n",
+ nic->mem->selftest.result);
+ return -ETIMEDOUT;
+ }
+ if(nic->mem->selftest.signature == 0) {
+ DPRINTK(HW, ERR, "Self-test failed: timed out\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static void e100_eeprom_write(struct nic *nic, u16 addr_len, u16 addr, u16 data)
+{
+ u32 cmd_addr_data[3];
+ u8 ctrl;
+ int i, j;
+
+ /* Three cmds: write/erase enable, write data, write/erase disable */
+ cmd_addr_data[0] = op_ewen << (addr_len - 2);
+ cmd_addr_data[1] = (((op_write << addr_len) | addr) << 16) | data;
+ cmd_addr_data[2] = op_ewds << (addr_len - 2);
+
+ /* Bit-bang cmds to write word to eeprom */
+ for(j = 0; j < 3; j++) {
+
+ /* Chip select */
+ writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+
+ for(i = 31; i >= 0; i--) {
+ ctrl = (cmd_addr_data[j] & (1 << i)) ?
+ eecs | eedi : eecs;
+ writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+ writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+ }
+ /* Wait 10 msec for cmd to complete */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ / 100 + 1);
+
+ /* Chip deselect */
+ writeb(0, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+ }
+}
+
+/* General technique stolen from the eepro100 driver - very clever */
+static u16 e100_eeprom_read(struct nic *nic, u16 *addr_len, u16 addr)
+{
+ u32 cmd_addr_data;
+ u16 data = 0;
+ u8 ctrl;
+ int i;
+
+ cmd_addr_data = ((op_read << *addr_len) | addr) << 16;
+
+ /* Chip select */
+ writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+
+ /* Bit-bang to read word from eeprom */
+ for(i = 31; i >= 0; i--) {
+ ctrl = (cmd_addr_data & (1 << i)) ? eecs | eedi : eecs;
+ writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+ writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+ /* Eeprom drives a dummy zero to EEDO after receiving
+ * complete address. Use this to adjust addr_len. */
+ ctrl = readb(&nic->csr->eeprom_ctrl_lo);
+ if(!(ctrl & eedo) && i > 16) {
+ *addr_len -= (i - 16);
+ i = 17;
+ }
+ data = (data << 1) | (ctrl & eedo ? 1 : 0);
+ }
+
+ /* Chip deselect */
+ writeb(0, &nic->csr->eeprom_ctrl_lo);
+ e100_write_flush(nic); udelay(4);
+
+ return data;
+}
+
+/* Load entire EEPROM image into driver cache and validate checksum */
+static int e100_eeprom_load(struct nic *nic)
+{
+ u16 addr, addr_len = 8, checksum = 0;
+
+ /* Try reading with an 8-bit addr len to discover actual addr len */
+ e100_eeprom_read(nic, &addr_len, 0);
+ nic->eeprom_wc = 1 << addr_len;
+
+ for(addr = 0; addr < nic->eeprom_wc; addr++) {
+ nic->eeprom[addr] = e100_eeprom_read(nic, &addr_len, addr);
+ if(addr < nic->eeprom_wc - 1)
+ checksum += nic->eeprom[addr];
+ }
+
+ /* The checksum, stored in the last word, is calculated such that
+ * the sum of words should be 0xBABA */
+ checksum = 0xBABA - checksum;
+ if(checksum != nic->eeprom[nic->eeprom_wc - 1]) {
+ DPRINTK(PROBE, ERR, "EEPROM corrupted\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+/* Save (portion of) driver EEPROM cache to device and update checksum */
+static int e100_eeprom_save(struct nic *nic, u16 start, u16 count)
+{
+ u16 addr, addr_len = 8, checksum = 0;
+
+ /* Try reading with an 8-bit addr len to discover actual addr len */
+ e100_eeprom_read(nic, &addr_len, 0);
+ nic->eeprom_wc = 1 << addr_len;
+
+ if(start + count >= nic->eeprom_wc)
+ return -EINVAL;
+
+ for(addr = start; addr < start + count; addr++)
+ e100_eeprom_write(nic, addr_len, addr, nic->eeprom[addr]);
+
+ /* The checksum, stored in the last word, is calculated such that
+ * the sum of words should be 0xBABA */
+ for(addr = 0; addr < nic->eeprom_wc - 1; addr++)
+ checksum += nic->eeprom[addr];
+ nic->eeprom[nic->eeprom_wc - 1] = 0xBABA - checksum;
+ e100_eeprom_write(nic, addr_len, nic->eeprom_wc - 1, 0xBABA - checksum);
+
+ return 0;
+}
+
+#define E100_WAIT_SCB_TIMEOUT 40
+static inline int e100_exec_cmd(struct nic *nic, u8 cmd, dma_addr_t dma_addr)
+{
+ unsigned long flags;
+ unsigned int i;
+ int err = 0;
+
+ spin_lock_irqsave(&nic->cmd_lock, flags);
+
+ /* Previous command is accepted when SCB clears */
+ for(i = 0; i < E100_WAIT_SCB_TIMEOUT; i++) {
+ if(likely(!readb(&nic->csr->scb.cmd_lo)))
+ break;
+ cpu_relax();
+ if(unlikely(i > (E100_WAIT_SCB_TIMEOUT >> 1)))
+ udelay(5);
+ }
+ if(unlikely(i == E100_WAIT_SCB_TIMEOUT)) {
+ err = -EAGAIN;
+ goto err_unlock;
+ }
+
+ if(unlikely(cmd != cuc_resume))
+ writel(dma_addr, &nic->csr->scb.gen_ptr);
+ writeb(cmd, &nic->csr->scb.cmd_lo);
+
+err_unlock:
+ spin_unlock_irqrestore(&nic->cmd_lock, flags);
+
+ return err;
+}
+
+static inline int e100_exec_cb(struct nic *nic, struct sk_buff *skb,
+ void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
+{
+ struct cb *cb;
+ unsigned long flags;
+ int err = 0;
+
+ spin_lock_irqsave(&nic->cb_lock, flags);
+
+ if(unlikely(!nic->cbs_avail)) {
+ err = -ENOMEM;
+ goto err_unlock;
+ }
+
+ cb = nic->cb_to_use;
+ nic->cb_to_use = cb->next;
+ nic->cbs_avail--;
+ cb->skb = skb;
+
+ if(unlikely(!nic->cbs_avail))
+ err = -ENOSPC;
+
+ cb_prepare(nic, cb, skb);
+
+	/* Order is important, otherwise we'll be in a race with h/w:
+	 * set S-bit in current first, then clear S-bit in previous. */
+ cb->command |= cpu_to_le16(cb_s);
+ cb->prev->command &= cpu_to_le16(~cb_s);
+
+ while(nic->cb_to_send != nic->cb_to_use) {
+ if(unlikely((err = e100_exec_cmd(nic, nic->cuc_cmd,
+ nic->cb_to_send->dma_addr)))) {
+ /* Ok, here's where things get sticky. It's
+ * possible that we can't schedule the command
+ * because the controller is too busy, so
+ * let's just queue the command and try again
+ * when another command is scheduled. */
+ break;
+ } else {
+ nic->cuc_cmd = cuc_resume;
+ nic->cb_to_send = nic->cb_to_send->next;
+ }
+ }
+
+err_unlock:
+ spin_unlock_irqrestore(&nic->cb_lock, flags);
+
+ return err;
+}
+
+static u16 mdio_ctrl(struct nic *nic, u32 addr, u32 dir, u32 reg, u16 data)
+{
+ u32 data_out = 0;
+ unsigned int i;
+
+ writel((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
+
+ for(i = 0; i < 100; i++) {
+ udelay(20);
+ if((data_out = readl(&nic->csr->mdi_ctrl)) & mdi_ready)
+ break;
+ }
+
+ DPRINTK(HW, DEBUG,
+ "%s:addr=%d, reg=%d, data_in=0x%04X, data_out=0x%04X\n",
+ dir == mdi_read ? "READ" : "WRITE", addr, reg, data, data_out);
+ return (u16)data_out;
+}
+
+static int mdio_read(struct net_device *netdev, int addr, int reg)
+{
+ return mdio_ctrl(netdev->priv, addr, mdi_read, reg, 0);
+}
+
+static void mdio_write(struct net_device *netdev, int addr, int reg, int data)
+{
+ mdio_ctrl(netdev->priv, addr, mdi_write, reg, data);
+}
+
+static void e100_get_defaults(struct nic *nic)
+{
+ struct param_range rfds = { .min = 64, .max = 256, .count = 64 };
+ struct param_range cbs = { .min = 64, .max = 256, .count = 64 };
+
+ pci_read_config_byte(nic->pdev, PCI_REVISION_ID, &nic->rev_id);
+ /* MAC type is encoded as rev ID; exception: ICH is treated as 82559 */
+ nic->mac = (nic->flags & ich) ? mac_82559_D101M : nic->rev_id;
+ if(nic->mac == mac_unknown)
+ nic->mac = mac_82557_D100_A;
+
+ nic->params.rfds = rfds;
+ nic->params.cbs = cbs;
+
+ /* Quadwords to DMA into FIFO before starting frame transmit */
+ nic->tx_threshold = 0xE0;
+
+ nic->tx_command = cpu_to_le16(cb_tx | cb_i | cb_tx_sf |
+ ((nic->mac >= mac_82558_D101_A4) ? cb_cid : 0));
+
+ /* Template for a freshly allocated RFD */
+ nic->blank_rfd.command = cpu_to_le16(cb_el);
+ nic->blank_rfd.rbd = 0xFFFFFFFF;
+ nic->blank_rfd.size = cpu_to_le16(VLAN_ETH_FRAME_LEN);
+
+ /* MII setup */
+ nic->mii.phy_id_mask = 0x1F;
+ nic->mii.reg_num_mask = 0x1F;
+ nic->mii.dev = nic->netdev;
+ nic->mii.mdio_read = mdio_read;
+ nic->mii.mdio_write = mdio_write;
+}
+
+static void e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+{
+ struct config *config = &cb->u.config;
+ u8 *c = (u8 *)config;
+
+ cb->command = cpu_to_le16(cb_config);
+
+ memset(config, 0, sizeof(struct config));
+
+ config->byte_count = 0x16; /* bytes in this struct */
+ config->rx_fifo_limit = 0x8; /* bytes in FIFO before DMA */
+ config->direct_rx_dma = 0x1; /* reserved */
+ config->standard_tcb = 0x1; /* 1=standard, 0=extended */
+ config->standard_stat_counter = 0x1; /* 1=standard, 0=extended */
+ config->rx_discard_short_frames = 0x1; /* 1=discard, 0=pass */
+ config->tx_underrun_retry = 0x3; /* # of underrun retries */
+ config->mii_mode = 0x1; /* 1=MII mode, 0=503 mode */
+ config->pad10 = 0x6;
+ config->no_source_addr_insertion = 0x1; /* 1=no, 0=yes */
+ config->preamble_length = 0x2; /* 0=1, 1=3, 2=7, 3=15 bytes */
+ config->ifs = 0x6; /* x16 = inter frame spacing */
+ config->ip_addr_hi = 0xF2; /* ARP IP filter - not used */
+ config->pad15_1 = 0x1;
+ config->pad15_2 = 0x1;
+ config->crs_or_cdt = 0x0; /* 0=CRS only, 1=CRS or CDT */
+ config->fc_delay_hi = 0x40; /* time delay for fc frame */
+ config->tx_padding = 0x1; /* 1=pad short frames */
+ config->fc_priority_threshold = 0x7; /* 7=priority fc disabled */
+ config->pad18 = 0x1;
+ config->full_duplex_pin = 0x1; /* 1=examine FDX# pin */
+ config->pad20_1 = 0x1F;
+ config->fc_priority_location = 0x1; /* 1=byte#31, 0=byte#19 */
+ config->pad21_1 = 0x5;
+
+ config->adaptive_ifs = nic->adaptive_ifs;
+ config->loopback = nic->loopback;
+
+ if(nic->mii.force_media && nic->mii.full_duplex)
+ config->full_duplex_force = 0x1; /* 1=force, 0=auto */
+
+ if(nic->flags & promiscuous || nic->loopback) {
+ config->rx_save_bad_frames = 0x1; /* 1=save, 0=discard */
+ config->rx_discard_short_frames = 0x0; /* 1=discard, 0=save */
+ config->promiscuous_mode = 0x1; /* 1=on, 0=off */
+ }
+
+ if(nic->flags & multicast_all)
+ config->multicast_all = 0x1; /* 1=accept, 0=no */
+
+ if(!(nic->flags & wol_magic))
+ config->magic_packet_disable = 0x1; /* 1=off, 0=on */
+
+ if(nic->mac >= mac_82558_D101_A4) {
+ config->fc_disable = 0x1; /* 1=Tx fc off, 0=Tx fc on */
+ config->mwi_enable = 0x1; /* 1=enable, 0=disable */
+ config->standard_tcb = 0x0; /* 1=standard, 0=extended */
+ config->rx_long_ok = 0x1; /* 1=VLANs ok, 0=standard */
+ if(nic->mac >= mac_82559_D101M)
+ config->tno_intr = 0x1; /* TCO stats enable */
+ else
+ config->standard_stat_counter = 0x0;
+ }
+
+ DPRINTK(HW, DEBUG, "[00-07]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]);
+ DPRINTK(HW, DEBUG, "[08-15]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ c[8], c[9], c[10], c[11], c[12], c[13], c[14], c[15]);
+ DPRINTK(HW, DEBUG, "[16-23]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ c[16], c[17], c[18], c[19], c[20], c[21], c[22], c[23]);
+}
+
+static void e100_setup_iaaddr(struct nic *nic, struct cb *cb,
+ struct sk_buff *skb)
+{
+ cb->command = cpu_to_le16(cb_iaaddr);
+ memcpy(cb->u.iaaddr, nic->netdev->dev_addr, ETH_ALEN);
+}
+
+static void e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+{
+ cb->command = cpu_to_le16(cb_dump);
+ cb->u.dump_buffer_addr = cpu_to_le32(nic->dma_addr +
+ offsetof(struct mem, dump_buf));
+}
+
+#define NCONFIG_AUTO_SWITCH 0x0080
+#define MII_NSC_CONG MII_RESV1
+#define NSC_CONG_ENABLE 0x0100
+#define NSC_CONG_TXREADY 0x0400
+#define ADVERTISE_FC_SUPPORTED 0x0400
+static int e100_phy_init(struct nic *nic)
+{
+ struct net_device *netdev = nic->netdev;
+ u32 addr;
+ u16 bmcr, stat, id_lo, id_hi, cong;
+
+ /* Discover phy addr by searching addrs in order {1,0,2,..., 31} */
+ for(addr = 0; addr < 32; addr++) {
+ nic->mii.phy_id = (addr == 0) ? 1 : (addr == 1) ? 0 : addr;
+ bmcr = mdio_read(netdev, nic->mii.phy_id, MII_BMCR);
+ stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
+ stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
+ if(!((bmcr == 0xFFFF) || ((stat == 0) && (bmcr == 0))))
+ break;
+ }
+ DPRINTK(HW, DEBUG, "phy_addr = %d\n", nic->mii.phy_id);
+ if(addr == 32)
+ return -EAGAIN;
+
+	/* Select the phy and isolate the rest */
+ for(addr = 0; addr < 32; addr++) {
+ if(addr != nic->mii.phy_id) {
+ mdio_write(netdev, addr, MII_BMCR, BMCR_ISOLATE);
+ } else {
+ bmcr = mdio_read(netdev, addr, MII_BMCR);
+ mdio_write(netdev, addr, MII_BMCR,
+ bmcr & ~BMCR_ISOLATE);
+ }
+ }
+
+ /* Get phy ID */
+ id_lo = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID1);
+ id_hi = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID2);
+ nic->phy = (u32)id_hi << 16 | (u32)id_lo;
+ DPRINTK(HW, DEBUG, "phy ID = 0x%08X\n", nic->phy);
+
+ /* Handle National tx phy */
+ if(nic->phy == phy_nsc_tx) {
+ /* Disable congestion control */
+ cong = mdio_read(netdev, nic->mii.phy_id, MII_NSC_CONG);
+ cong |= NSC_CONG_TXREADY;
+ cong &= ~NSC_CONG_ENABLE;
+ mdio_write(netdev, nic->mii.phy_id, MII_NSC_CONG, cong);
+ }
+
+ if(nic->mac >= mac_82550_D102)
+ /* enable/disable MDI/MDI-X auto-switching */
+ mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG,
+ nic->mii.force_media ? 0 : NCONFIG_AUTO_SWITCH);
+
+ return 0;
+}
+
+static int e100_hw_init(struct nic *nic)
+{
+ int err;
+
+ e100_hw_reset(nic);
+
+	DPRINTK(HW, DEBUG, "e100_hw_init\n");
+ if(!in_interrupt() && (err = e100_self_test(nic)))
+ return err;
+
+ if((err = e100_phy_init(nic)))
+ return err;
+ if((err = e100_exec_cmd(nic, cuc_load_base, 0)))
+ return err;
+ if((err = e100_exec_cmd(nic, ruc_load_base, 0)))
+ return err;
+ if((err = e100_exec_cb(nic, NULL, e100_configure)))
+ return err;
+ if((err = e100_exec_cb(nic, NULL, e100_setup_iaaddr)))
+ return err;
+ if((err = e100_exec_cmd(nic, cuc_dump_addr,
+ nic->dma_addr + offsetof(struct mem, stats))))
+ return err;
+ if((err = e100_exec_cmd(nic, cuc_dump_reset, 0)))
+ return err;
+
+ e100_disable_irq(nic);
+
+ return 0;
+}
+
+static void e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+{
+ struct net_device *netdev = nic->netdev;
+ struct dev_mc_list *list = netdev->mc_list;
+ u16 i, count = min(netdev->mc_count, E100_MAX_MULTICAST_ADDRS);
+
+ cb->command = cpu_to_le16(cb_multi);
+ cb->u.multi.count = cpu_to_le16(count * ETH_ALEN);
+ for(i = 0; list && i < count; i++, list = list->next)
+ memcpy(&cb->u.multi.addr[i*ETH_ALEN], &list->dmi_addr,
+ ETH_ALEN);
+}
+
+static void e100_set_multicast_list(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+
+ DPRINTK(HW, DEBUG, "mc_count=%d, flags=0x%04X\n",
+ netdev->mc_count, netdev->flags);
+
+ if(netdev->flags & IFF_PROMISC)
+ nic->flags |= promiscuous;
+ else
+ nic->flags &= ~promiscuous;
+
+ if(netdev->flags & IFF_ALLMULTI ||
+ netdev->mc_count > E100_MAX_MULTICAST_ADDRS)
+ nic->flags |= multicast_all;
+ else
+ nic->flags &= ~multicast_all;
+
+ e100_exec_cb(nic, NULL, e100_configure);
+ e100_exec_cb(nic, NULL, e100_multi);
+}
+
+static void e100_update_stats(struct nic *nic)
+{
+ struct net_device_stats *ns = &nic->net_stats;
+ struct stats *s = &nic->mem->stats;
+ u32 *complete = (nic->mac < mac_82558_D101_A4) ? &s->fc_xmt_pause :
+ (nic->mac < mac_82559_D101M) ? (u32 *)&s->xmt_tco_frames :
+ &s->complete;
+
+	/* Device's stats reporting may take several microseconds to
+	 * complete, so we're always waiting for results of the
+	 * previous command. */
+
+	if(*complete == cpu_to_le32(0x0000A007)) {
+ *complete = 0;
+ nic->tx_frames = le32_to_cpu(s->tx_good_frames);
+ nic->tx_collisions = le32_to_cpu(s->tx_total_collisions);
+ ns->tx_aborted_errors += le32_to_cpu(s->tx_max_collisions);
+ ns->tx_window_errors += le32_to_cpu(s->tx_late_collisions);
+ ns->tx_carrier_errors += le32_to_cpu(s->tx_lost_crs);
+ ns->tx_fifo_errors += le32_to_cpu(s->tx_underruns);
+ ns->collisions += nic->tx_collisions;
+ ns->tx_errors += le32_to_cpu(s->tx_max_collisions) +
+ le32_to_cpu(s->tx_lost_crs);
+ ns->rx_dropped += le32_to_cpu(s->rx_resource_errors);
+ ns->rx_length_errors += le32_to_cpu(s->rx_short_frame_errors);
+ ns->rx_over_errors += le32_to_cpu(s->rx_resource_errors);
+ ns->rx_crc_errors += le32_to_cpu(s->rx_crc_errors);
+ ns->rx_frame_errors += le32_to_cpu(s->rx_alignment_errors);
+ ns->rx_fifo_errors += le32_to_cpu(s->rx_overrun_errors);
+ ns->rx_errors += le32_to_cpu(s->rx_crc_errors) +
+ le32_to_cpu(s->rx_alignment_errors) +
+ le32_to_cpu(s->rx_short_frame_errors) +
+ le32_to_cpu(s->rx_cdt_errors);
+ nic->tx_deferred += le32_to_cpu(s->tx_deferred);
+ nic->tx_single_collisions +=
+ le32_to_cpu(s->tx_single_collisions);
+ nic->tx_multiple_collisions +=
+ le32_to_cpu(s->tx_multiple_collisions);
+ if(nic->mac >= mac_82558_D101_A4) {
+ nic->tx_fc_pause += le32_to_cpu(s->fc_xmt_pause);
+ nic->rx_fc_pause += le32_to_cpu(s->fc_rcv_pause);
+ nic->rx_fc_unsupported +=
+ le32_to_cpu(s->fc_rcv_unsupported);
+ if(nic->mac >= mac_82559_D101M) {
+ nic->tx_tco_frames +=
+ le16_to_cpu(s->xmt_tco_frames);
+ nic->rx_tco_frames +=
+ le16_to_cpu(s->rcv_tco_frames);
+ }
+ }
+ }
+
+ e100_exec_cmd(nic, cuc_dump_reset, 0);
+}
+
+static void e100_adjust_adaptive_ifs(struct nic *nic, int speed, int duplex)
+{
+ /* Adjust inter-frame-spacing (IFS) between two transmits if
+ * we're getting collisions on a half-duplex connection. */
+
+ if(duplex == DUPLEX_HALF) {
+ u32 prev = nic->adaptive_ifs;
+ u32 min_frames = (speed == SPEED_100) ? 1000 : 100;
+
+ if((nic->tx_frames / 32 < nic->tx_collisions) &&
+ (nic->tx_frames > min_frames)) {
+ if(nic->adaptive_ifs < 60)
+ nic->adaptive_ifs += 5;
+ } else if (nic->tx_frames < min_frames) {
+ if(nic->adaptive_ifs >= 5)
+ nic->adaptive_ifs -= 5;
+ }
+ if(nic->adaptive_ifs != prev)
+ e100_exec_cb(nic, NULL, e100_configure);
+ }
+}
+
+static void e100_watchdog(unsigned long data)
+{
+ struct nic *nic = (struct nic *)data;
+ struct ethtool_cmd cmd;
+
+ DPRINTK(TIMER, DEBUG, "right now = %ld\n", jiffies);
+
+ /* mii library handles link maintenance tasks */
+
+ mii_ethtool_gset(&nic->mii, &cmd);
+
+ if(mii_link_ok(&nic->mii) && !netif_carrier_ok(nic->netdev)) {
+ DPRINTK(LINK, INFO, "link up, %sMbps, %s-duplex\n",
+ cmd.speed == SPEED_100 ? "100" : "10",
+ cmd.duplex == DUPLEX_FULL ? "full" : "half");
+ } else if(!mii_link_ok(&nic->mii) && netif_carrier_ok(nic->netdev)) {
+ DPRINTK(LINK, INFO, "link down\n");
+ }
+
+ mii_check_link(&nic->mii);
+
+ /* Software generated interrupt to recover from (rare) Rx
+ * allocation failure */
+ writeb(irq_sw_gen, &nic->csr->scb.cmd_hi);
+ e100_write_flush(nic);
+
+ e100_update_stats(nic);
+ e100_adjust_adaptive_ifs(nic, cmd.speed, cmd.duplex);
+
+ if(nic->mac <= mac_82557_D100_C)
+		/* Issue a multicast command to work around a 557 lock-up */
+ e100_set_multicast_list(nic->netdev);
+
+ mod_timer(&nic->watchdog, jiffies + E100_WATCHDOG_PERIOD);
+}
+
+static inline void e100_xmit_prepare(struct nic *nic, struct cb *cb,
+ struct sk_buff *skb)
+{
+ cb->command = nic->tx_command;
+ cb->u.tcb.tbd_array = cb->dma_addr + offsetof(struct cb, u.tcb.tbd);
+ cb->u.tcb.tcb_byte_count = 0;
+ cb->u.tcb.threshold = nic->tx_threshold;
+ cb->u.tcb.tbd_count = 1;
+ cb->u.tcb.tbd.buf_addr = cpu_to_le32(pci_map_single(nic->pdev,
+ skb->data, skb->len, PCI_DMA_TODEVICE));
+ cb->u.tcb.tbd.size = cpu_to_le16(skb->len);
+}
+
+static int e100_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ int err = e100_exec_cb(nic, skb, e100_xmit_prepare);
+
+ switch(err) {
+ case -ENOSPC:
+ /* We queued the skb, but now we're out of space. */
+ netif_stop_queue(netdev);
+ break;
+ case -ENOMEM:
+ /* This is a hard error - log it. */
+ DPRINTK(TX_ERR, DEBUG, "Out of Tx resources, returning skb\n");
+ netif_stop_queue(netdev);
+ return 1;
+ }
+
+ netdev->trans_start = jiffies;
+ return 0;
+}
+
+static inline int e100_tx_clean(struct nic *nic)
+{
+ struct cb *cb;
+ int tx_cleaned = 0;
+
+ spin_lock(&nic->cb_lock);
+
+ DPRINTK(TX_DONE, DEBUG, "cb->status = 0x%04X\n",
+ nic->cb_to_clean->status);
+
+ /* Clean CBs marked complete */
+ for(cb = nic->cb_to_clean;
+ cb->status & cpu_to_le16(cb_complete);
+ cb = nic->cb_to_clean = cb->next) {
+ if(likely(cb->skb)) {
+ nic->net_stats.tx_packets++;
+ nic->net_stats.tx_bytes += cb->skb->len;
+
+ pci_unmap_single(nic->pdev,
+ le32_to_cpu(cb->u.tcb.tbd.buf_addr),
+ le16_to_cpu(cb->u.tcb.tbd.size),
+ PCI_DMA_TODEVICE);
+ dev_kfree_skb_any(cb->skb);
+ tx_cleaned = 1;
+ }
+ cb->status = 0;
+ nic->cbs_avail++;
+ }
+
+ spin_unlock(&nic->cb_lock);
+
+ /* Recover from running out of Tx resources in xmit_frame */
+ if(unlikely(tx_cleaned && netif_queue_stopped(nic->netdev)))
+ netif_wake_queue(nic->netdev);
+
+ return tx_cleaned;
+}
+
+static void e100_clean_cbs(struct nic *nic)
+{
+ if(nic->cbs) {
+ while(nic->cb_to_clean != nic->cb_to_use) {
+ struct cb *cb = nic->cb_to_clean;
+ if(cb->skb) {
+ pci_unmap_single(nic->pdev,
+ le32_to_cpu(cb->u.tcb.tbd.buf_addr),
+ le16_to_cpu(cb->u.tcb.tbd.size),
+ PCI_DMA_TODEVICE);
+ dev_kfree_skb(cb->skb);
+ }
+ nic->cb_to_clean = nic->cb_to_clean->next;
+ }
+ nic->cbs_avail = nic->params.cbs.count;
+ pci_free_consistent(nic->pdev,
+ sizeof(struct cb) * nic->params.cbs.count,
+ nic->cbs, nic->cbs_dma_addr);
+ nic->cbs = NULL;
+ nic->cbs_avail = 0;
+ }
+ nic->cuc_cmd = cuc_start;
+ nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean =
+ nic->cbs;
+}
+
+static int e100_alloc_cbs(struct nic *nic)
+{
+ struct cb *cb;
+ unsigned int i, count = nic->params.cbs.count;
+
+ nic->cuc_cmd = cuc_start;
+ nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = NULL;
+ nic->cbs_avail = 0;
+
+ nic->cbs = pci_alloc_consistent(nic->pdev,
+ sizeof(struct cb) * count, &nic->cbs_dma_addr);
+ if(!nic->cbs)
+ return -ENOMEM;
+
+ for(cb = nic->cbs, i = 0; i < count; cb++, i++) {
+ cb->next = (i + 1 < count) ? cb + 1 : nic->cbs;
+ cb->prev = (i == 0) ? nic->cbs + count - 1 : cb - 1;
+
+ cb->dma_addr = nic->cbs_dma_addr + i * sizeof(struct cb);
+ cb->link = cpu_to_le32(nic->cbs_dma_addr +
+ ((i+1) % count) * sizeof(struct cb));
+ }
+
+ nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = nic->cbs;
+ nic->cbs_avail = count;
+
+ return 0;
+}
+
+static inline void e100_start_receiver(struct nic *nic)
+{
+ /* (Re)start RU if suspended or idle and RFA is non-NULL */
+ if(!nic->ru_running && nic->rx_to_clean->skb) {
+ e100_exec_cmd(nic, ruc_start, nic->rx_to_clean->dma_addr);
+ nic->ru_running = 1;
+ }
+}
+
+#define RFD_BUF_LEN (sizeof(struct rfd) + VLAN_ETH_FRAME_LEN)
+static inline int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
+{
+ unsigned int rx_offset = 2; /* u32 align protocol headers */
+
+ if(!(rx->skb = dev_alloc_skb(RFD_BUF_LEN + rx_offset)))
+ return -ENOMEM;
+
+ /* Align, init, and map the RFA. */
+ rx->skb->dev = nic->netdev;
+ skb_reserve(rx->skb, rx_offset);
+ memcpy(rx->skb->data, &nic->blank_rfd, sizeof(struct rfd));
+ rx->dma_addr = pci_map_single(nic->pdev, rx->skb->data,
+ RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
+
+ /* Link the RFD to end of RFA by linking previous RFD to
+ * this one, and clearing EL bit of previous. */
+ if(rx->prev->skb) {
+ struct rfd *prev_rfd = (struct rfd *)rx->prev->skb->data;
+ put_unaligned(cpu_to_le32(rx->dma_addr),
+ (u32 *)&prev_rfd->link);
+ prev_rfd->command &= ~cpu_to_le16(cb_el);
+ pci_dma_sync_single(nic->pdev, rx->prev->dma_addr,
+ sizeof(struct rfd), PCI_DMA_TODEVICE);
+ }
+
+ return 0;
+}
+
+static inline int e100_rx_indicate(struct nic *nic, struct rx *rx,
+ unsigned int *work_done, unsigned int work_to_do)
+{
+ struct sk_buff *skb = rx->skb;
+ struct rfd *rfd = (struct rfd *)skb->data;
+ u16 rfd_status, actual_size;
+
+ if(unlikely(work_done && *work_done >= work_to_do))
+ return -EAGAIN;
+
+ /* Need to sync before taking a peek at cb_complete bit */
+ pci_dma_sync_single(nic->pdev, rx->dma_addr,
+ sizeof(struct rfd), PCI_DMA_FROMDEVICE);
+ rfd_status = le16_to_cpu(rfd->status);
+
+ DPRINTK(RX_STATUS, DEBUG, "status=0x%04X\n", rfd_status);
+
+ /* If data isn't ready, nothing to indicate */
+ if(unlikely(!(rfd_status & cb_complete)))
+ return -EAGAIN;
+
+ /* Get actual data size */
+ actual_size = le16_to_cpu(rfd->actual_size) & 0x3FFF;
+ if(unlikely(actual_size > RFD_BUF_LEN - sizeof(struct rfd)))
+ actual_size = RFD_BUF_LEN - sizeof(struct rfd);
+
+ /* Get data */
+ pci_dma_sync_single(nic->pdev, rx->dma_addr,
+ sizeof(struct rfd) + actual_size,
+ PCI_DMA_FROMDEVICE);
+ pci_unmap_single(nic->pdev, rx->dma_addr,
+ RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
+
+ /* Pull off the RFD and put the actual data (minus eth hdr) */
+ skb_reserve(skb, sizeof(struct rfd));
+ skb_put(skb, actual_size);
+ skb->protocol = eth_type_trans(skb, nic->netdev);
+
+ if(unlikely(!(rfd_status & cb_ok)) ||
+ actual_size > nic->netdev->mtu + VLAN_ETH_HLEN) {
+ /* Don't indicate if errors */
+ dev_kfree_skb_any(skb);
+ } else {
+ nic->net_stats.rx_packets++;
+ nic->net_stats.rx_bytes += actual_size;
+ nic->netdev->last_rx = jiffies;
+#ifdef CONFIG_E100_NAPI
+ netif_receive_skb(skb);
+#else
+ netif_rx(skb);
+#endif
+ if(work_done)
+ (*work_done)++;
+ }
+
+ rx->skb = NULL;
+
+ return 0;
+}
+
+static inline void e100_rx_clean(struct nic *nic, unsigned int *work_done,
+ unsigned int work_to_do)
+{
+ struct rx *rx;
+
+ /* Indicate newly arrived packets */
+ for(rx = nic->rx_to_clean; rx->skb; rx = nic->rx_to_clean = rx->next) {
+ if(e100_rx_indicate(nic, rx, work_done, work_to_do))
+ break; /* No more to clean */
+ }
+
+ /* Alloc new skbs to refill list */
+ for(rx = nic->rx_to_use; !rx->skb; rx = nic->rx_to_use = rx->next) {
+ if(unlikely(e100_rx_alloc_skb(nic, rx)))
+ break; /* Better luck next time (see watchdog) */
+ }
+
+ e100_start_receiver(nic);
+}
+
+static void e100_rx_clean_list(struct nic *nic)
+{
+ struct rx *rx;
+ unsigned int i, count = nic->params.rfds.count;
+
+ if(nic->rxs) {
+ for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
+ if(rx->skb) {
+ pci_unmap_single(nic->pdev, rx->dma_addr,
+ RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(rx->skb);
+ }
+ }
+ kfree(nic->rxs);
+ nic->rxs = NULL;
+ }
+
+ nic->rx_to_use = nic->rx_to_clean = NULL;
+ nic->ru_running = 0;
+}
+
+static int e100_rx_alloc_list(struct nic *nic)
+{
+ struct rx *rx;
+ unsigned int i, count = nic->params.rfds.count;
+
+ nic->rx_to_use = nic->rx_to_clean = NULL;
+
+ if(!(nic->rxs = kmalloc(sizeof(struct rx) * count, GFP_ATOMIC)))
+ return -ENOMEM;
+ memset(nic->rxs, 0, sizeof(struct rx) * count);
+
+ for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
+ rx->next = (i + 1 < count) ? rx + 1 : nic->rxs;
+ rx->prev = (i == 0) ? nic->rxs + count - 1 : rx - 1;
+ if(e100_rx_alloc_skb(nic, rx)) {
+ e100_rx_clean_list(nic);
+ return -ENOMEM;
+ }
+ }
+
+ nic->rx_to_use = nic->rx_to_clean = nic->rxs;
+
+ return 0;
+}
+
+static irqreturn_t e100_intr(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *netdev = dev_id;
+ struct nic *nic = netdev->priv;
+ u8 stat_ack = readb(&nic->csr->scb.stat_ack);
+
+ DPRINTK(INTR, DEBUG, "stat_ack = 0x%02X\n", stat_ack);
+
+ if(stat_ack == stat_ack_not_ours || /* Not our interrupt */
+ stat_ack == stat_ack_not_present) /* Hardware is ejected */
+ return IRQ_NONE;
+
+ /* Ack interrupt(s) */
+ writeb(stat_ack, &nic->csr->scb.stat_ack);
+
+ /* We hit Receive No Resource (RNR); restart RU after cleaning */
+ if(stat_ack & stat_ack_rnr)
+ nic->ru_running = 0;
+
+#ifdef CONFIG_E100_NAPI
+ e100_disable_irq(nic);
+ netif_rx_schedule(netdev);
+#else
+ if(stat_ack & stat_ack_rx)
+ e100_rx_clean(nic, NULL, 0);
+ if(stat_ack & stat_ack_tx)
+ e100_tx_clean(nic);
+#endif
+
+ return IRQ_HANDLED;
+}
+
+#ifdef CONFIG_E100_NAPI
+static int e100_poll(struct net_device *netdev, int *budget)
+{
+ struct nic *nic = netdev->priv;
+ unsigned int work_to_do = min(netdev->quota, *budget);
+ unsigned int work_done = 0;
+ int tx_cleaned;
+
+ e100_rx_clean(nic, &work_done, work_to_do);
+ tx_cleaned = e100_tx_clean(nic);
+
+ /* If no Rx and Tx cleanup work was done, exit polling mode. */
+ if((!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
+ netif_rx_complete(netdev);
+ e100_enable_irq(nic);
+ return 0;
+ }
+
+ *budget -= work_done;
+ netdev->quota -= work_done;
+
+ return 1;
+}
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void e100_netpoll(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ e100_disable_irq(nic);
+ e100_intr(nic->pdev->irq, netdev, NULL);
+ e100_enable_irq(nic);
+}
+#endif
+
+static struct net_device_stats *e100_get_stats(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ return &nic->net_stats;
+}
+
+static int e100_set_mac_address(struct net_device *netdev, void *p)
+{
+ struct nic *nic = netdev->priv;
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ e100_exec_cb(nic, NULL, e100_setup_iaaddr);
+
+ return 0;
+}
+
+static int e100_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ if(new_mtu < ETH_ZLEN || new_mtu > ETH_DATA_LEN)
+ return -EINVAL;
+ netdev->mtu = new_mtu;
+ return 0;
+}
+
+static int e100_asf(struct nic *nic)
+{
+ /* ASF can be enabled from eeprom */
+ return((nic->pdev->device >= 0x1050) && (nic->pdev->device <= 0x1055) &&
+ (nic->eeprom[eeprom_config_asf] & eeprom_asf) &&
+ !(nic->eeprom[eeprom_config_asf] & eeprom_gcl) &&
+ ((nic->eeprom[eeprom_smbus_addr] & 0xFF) != 0xFE));
+}
+
+static int e100_up(struct nic *nic)
+{
+ int err;
+
+ if((err = e100_rx_alloc_list(nic)))
+ return err;
+ if((err = e100_alloc_cbs(nic)))
+ goto err_rx_clean_list;
+ if((err = e100_hw_init(nic)))
+ goto err_clean_cbs;
+ e100_set_multicast_list(nic->netdev);
+ e100_start_receiver(nic);
+ netif_start_queue(nic->netdev);
+ mod_timer(&nic->watchdog, jiffies);
+ if((err = request_irq(nic->pdev->irq, e100_intr, SA_SHIRQ,
+ nic->netdev->name, nic->netdev)))
+ goto err_no_irq;
+ e100_enable_irq(nic);
+ return 0;
+
+err_no_irq:
+ del_timer_sync(&nic->watchdog);
+ netif_stop_queue(nic->netdev);
+err_clean_cbs:
+ e100_clean_cbs(nic);
+err_rx_clean_list:
+ e100_rx_clean_list(nic);
+ return err;
+}
+
+static void e100_down(struct nic *nic)
+{
+ e100_hw_reset(nic);
+ free_irq(nic->pdev->irq, nic->netdev);
+ del_timer_sync(&nic->watchdog);
+ netif_carrier_off(nic->netdev);
+ netif_stop_queue(nic->netdev);
+ e100_clean_cbs(nic);
+ e100_rx_clean_list(nic);
+}
+
+static void e100_tx_timeout(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+
+ DPRINTK(TX_ERR, DEBUG, "scb.status=0x%02X\n",
+ readb(&nic->csr->scb.status));
+ e100_down(nic);
+ e100_up(nic);
+}
+
+static int e100_loopback_test(struct nic *nic, enum loopback loopback_mode)
+{
+ int err;
+ struct sk_buff *skb;
+
+ /* Use driver resources to perform internal MAC or PHY
+ * loopback test. A single packet is prepared and transmitted
+ * in loopback mode, and the test passes if the received
+ * packet compares byte-for-byte to the transmitted packet. */
+
+ if((err = e100_rx_alloc_list(nic)))
+ return err;
+ if((err = e100_alloc_cbs(nic)))
+ goto err_clean_rx;
+
+ /* ICH PHY loopback is broken so do MAC loopback instead */
+ if(nic->flags & ich && loopback_mode == lb_phy)
+ loopback_mode = lb_mac;
+
+ nic->loopback = loopback_mode;
+ if((err = e100_hw_init(nic)))
+ goto err_loopback_none;
+
+ if(loopback_mode == lb_phy)
+ mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR,
+ BMCR_LOOPBACK);
+
+ e100_start_receiver(nic);
+
+ if(!(skb = dev_alloc_skb(ETH_DATA_LEN))) {
+ err = -ENOMEM;
+ goto err_loopback_none;
+ }
+ skb_put(skb, ETH_DATA_LEN);
+ memset(skb->data, 0xFF, ETH_DATA_LEN);
+ e100_xmit_frame(skb, nic->netdev);
+
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ / 100 + 1);
+
+ if(memcmp(nic->rx_to_clean->skb->data + sizeof(struct rfd),
+ skb->data, ETH_DATA_LEN))
+ err = -EAGAIN;
+
+err_loopback_none:
+ mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR, 0);
+ nic->loopback = lb_none;
+ e100_hw_init(nic);
+ e100_clean_cbs(nic);
+err_clean_rx:
+ e100_rx_clean_list(nic);
+ return err;
+}
+
+#define MII_LED_CONTROL 0x1B
+static void e100_blink_led(unsigned long data)
+{
+ struct nic *nic = (struct nic *)data;
+ enum led_state {
+ led_on = 0x01,
+ led_off = 0x04,
+ led_on_559 = 0x05,
+ led_on_557 = 0x07,
+ };
+
+ nic->leds = (nic->leds & led_on) ? led_off :
+ (nic->mac < mac_82559_D101M) ? led_on_557 : led_on_559;
+ mdio_write(nic->netdev, nic->mii.phy_id, MII_LED_CONTROL, nic->leds);
+ mod_timer(&nic->blink_timer, jiffies + HZ / 4);
+}
+
+static int e100_get_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
+{
+ struct nic *nic = netdev->priv;
+ return mii_ethtool_gset(&nic->mii, cmd);
+}
+
+static int e100_set_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
+{
+ struct nic *nic = netdev->priv;
+ int err;
+
+ mdio_write(netdev, nic->mii.phy_id, MII_BMCR, BMCR_RESET);
+ err = mii_ethtool_sset(&nic->mii, cmd);
+ e100_exec_cb(nic, NULL, e100_configure);
+
+ return err;
+}
+
+static void e100_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct nic *nic = netdev->priv;
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+ strcpy(info->fw_version, "N/A");
+ strcpy(info->bus_info, pci_name(nic->pdev));
+}
+
+static int e100_get_regs_len(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+#define E100_PHY_REGS 0x1C
+/* scb status word, the E100_PHY_REGS+1 MDIO registers, and the dump buffer */
+#define E100_REGS_LEN (2 + E100_PHY_REGS + \
+ sizeof(nic->mem->dump_buf) / sizeof(u32))
+ return E100_REGS_LEN * sizeof(u32);
+}
+
+static void e100_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *p)
+{
+ struct nic *nic = netdev->priv;
+ u32 *buff = p;
+ int i;
+
+ regs->version = (1 << 24) | nic->rev_id;
+ buff[0] = readb(&nic->csr->scb.cmd_hi) << 24 |
+ readb(&nic->csr->scb.cmd_lo) << 16 |
+ readw(&nic->csr->scb.status);
+ for(i = E100_PHY_REGS; i >= 0; i--)
+ buff[1 + E100_PHY_REGS - i] =
+ mdio_read(netdev, nic->mii.phy_id, i);
+ memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
+ e100_exec_cb(nic, NULL, e100_dump);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ / 100 + 1);
+ memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
+ sizeof(nic->mem->dump_buf));
+}
+
+static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct nic *nic = netdev->priv;
+ wol->supported = (nic->mac >= mac_82558_D101_A4) ? WAKE_MAGIC : 0;
+ wol->wolopts = (nic->flags & wol_magic) ? WAKE_MAGIC : 0;
+}
+
+static int e100_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct nic *nic = netdev->priv;
+
+ if(wol->wolopts != WAKE_MAGIC && wol->wolopts != 0)
+ return -EOPNOTSUPP;
+
+ if(wol->wolopts)
+ nic->flags |= wol_magic;
+ else
+ nic->flags &= ~wol_magic;
+
+ pci_enable_wake(nic->pdev, 0, nic->flags & (wol_magic | e100_asf(nic)));
+ e100_exec_cb(nic, NULL, e100_configure);
+
+ return 0;
+}
+
+static u32 e100_get_msglevel(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ return nic->msg_enable;
+}
+
+static void e100_set_msglevel(struct net_device *netdev, u32 value)
+{
+ struct nic *nic = netdev->priv;
+ nic->msg_enable = value;
+}
+
+static int e100_nway_reset(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ return mii_nway_restart(&nic->mii);
+}
+
+static u32 e100_get_link(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ return mii_link_ok(&nic->mii);
+}
+
+static int e100_get_eeprom_len(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ return nic->eeprom_wc << 1;
+}
+
+#define E100_EEPROM_MAGIC 0x1234
+static int e100_get_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct nic *nic = netdev->priv;
+
+ eeprom->magic = E100_EEPROM_MAGIC;
+ memcpy(bytes, &((u8 *)nic->eeprom)[eeprom->offset], eeprom->len);
+
+ return 0;
+}
+
+static int e100_set_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct nic *nic = netdev->priv;
+
+ if(eeprom->magic != E100_EEPROM_MAGIC)
+ return -EINVAL;
+ memcpy(&((u8 *)nic->eeprom)[eeprom->offset], bytes, eeprom->len);
+
+ return e100_eeprom_save(nic, eeprom->offset >> 1,
+ (eeprom->len >> 1) + 1);
+}
+
+static void e100_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct nic *nic = netdev->priv;
+ struct param_range *rfds = &nic->params.rfds;
+ struct param_range *cbs = &nic->params.cbs;
+
+ ring->rx_max_pending = rfds->max;
+ ring->tx_max_pending = cbs->max;
+ ring->rx_mini_max_pending = 0;
+ ring->rx_jumbo_max_pending = 0;
+ ring->rx_pending = rfds->count;
+ ring->tx_pending = cbs->count;
+ ring->rx_mini_pending = 0;
+ ring->rx_jumbo_pending = 0;
+}
+
+static int e100_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct nic *nic = netdev->priv;
+ struct param_range *rfds = &nic->params.rfds;
+ struct param_range *cbs = &nic->params.cbs;
+
+ if(netif_running(netdev))
+ e100_down(nic);
+ rfds->count = max(ring->rx_pending, rfds->min);
+ rfds->count = min(rfds->count, rfds->max);
+ cbs->count = max(ring->tx_pending, cbs->min);
+ cbs->count = min(cbs->count, cbs->max);
+ if(netif_running(netdev))
+ e100_up(nic);
+
+ return 0;
+}
+
+static const char e100_gstrings_test[][ETH_GSTRING_LEN] = {
+ "Link test (on/offline)",
+ "Eeprom test (on/offline)",
+ "Self test (offline)",
+ "Mac loopback (offline)",
+ "Phy loopback (offline)",
+};
+#define E100_TEST_LEN (sizeof(e100_gstrings_test) / ETH_GSTRING_LEN)
+
+static int e100_diag_test_count(struct net_device *netdev)
+{
+ return E100_TEST_LEN;
+}
+
+static void e100_diag_test(struct net_device *netdev,
+ struct ethtool_test *test, u64 *data)
+{
+ struct nic *nic = netdev->priv;
+ int i;
+
+ memset(data, 0, E100_TEST_LEN * sizeof(u64));
+ data[0] = !mii_link_ok(&nic->mii);
+ data[1] = e100_eeprom_load(nic);
+ if(test->flags & ETH_TEST_FL_OFFLINE) {
+ if(netif_running(netdev))
+ e100_down(nic);
+ data[2] = e100_self_test(nic);
+ data[3] = e100_loopback_test(nic, lb_mac);
+ data[4] = e100_loopback_test(nic, lb_phy);
+ if(netif_running(netdev))
+ e100_up(nic);
+ }
+ for(i = 0; i < E100_TEST_LEN; i++)
+ test->flags |= data[i] ? ETH_TEST_FL_FAILED : 0;
+}
+
+static int e100_phys_id(struct net_device *netdev, u32 data)
+{
+ struct nic *nic = netdev->priv;
+
+ if(!data || data > (u32)(MAX_SCHEDULE_TIMEOUT / HZ))
+ data = (u32)(MAX_SCHEDULE_TIMEOUT / HZ);
+ mod_timer(&nic->blink_timer, jiffies);
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(data * HZ);
+ del_timer_sync(&nic->blink_timer);
+ mdio_write(netdev, nic->mii.phy_id, MII_LED_CONTROL, 0);
+
+ return 0;
+}
+
+static const char e100_gstrings_stats[][ETH_GSTRING_LEN] = {
+ "rx_packets", "tx_packets", "rx_bytes", "tx_bytes", "rx_errors",
+ "tx_errors", "rx_dropped", "tx_dropped", "multicast", "collisions",
+ "rx_length_errors", "rx_over_errors", "rx_crc_errors",
+ "rx_frame_errors", "rx_fifo_errors", "rx_missed_errors",
+ "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors",
+ "tx_heartbeat_errors", "tx_window_errors",
+ /* device-specific stats */
+ "tx_deferred", "tx_single_collisions", "tx_multi_collisions",
+ "tx_flow_control_pause", "rx_flow_control_pause",
+ "rx_flow_control_unsupported", "tx_tco_packets", "rx_tco_packets",
+};
+#define E100_NET_STATS_LEN 21
+#define E100_STATS_LEN (sizeof(e100_gstrings_stats) / ETH_GSTRING_LEN)
+
+static int e100_get_stats_count(struct net_device *netdev)
+{
+ return E100_STATS_LEN;
+}
+
+static void e100_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct nic *nic = netdev->priv;
+ int i;
+
+ for(i = 0; i < E100_NET_STATS_LEN; i++)
+ data[i] = ((unsigned long *)&nic->net_stats)[i];
+
+ data[i++] = nic->tx_deferred;
+ data[i++] = nic->tx_single_collisions;
+ data[i++] = nic->tx_multiple_collisions;
+ data[i++] = nic->tx_fc_pause;
+ data[i++] = nic->rx_fc_pause;
+ data[i++] = nic->rx_fc_unsupported;
+ data[i++] = nic->tx_tco_frames;
+ data[i++] = nic->rx_tco_frames;
+}
+
+static void e100_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *e100_gstrings_test, sizeof(e100_gstrings_test));
+ break;
+ case ETH_SS_STATS:
+ memcpy(data, *e100_gstrings_stats, sizeof(e100_gstrings_stats));
+ break;
+ }
+}
+
+static struct ethtool_ops e100_ethtool_ops = {
+ .get_settings = e100_get_settings,
+ .set_settings = e100_set_settings,
+ .get_drvinfo = e100_get_drvinfo,
+ .get_regs_len = e100_get_regs_len,
+ .get_regs = e100_get_regs,
+ .get_wol = e100_get_wol,
+ .set_wol = e100_set_wol,
+ .get_msglevel = e100_get_msglevel,
+ .set_msglevel = e100_set_msglevel,
+ .nway_reset = e100_nway_reset,
+ .get_link = e100_get_link,
+ .get_eeprom_len = e100_get_eeprom_len,
+ .get_eeprom = e100_get_eeprom,
+ .set_eeprom = e100_set_eeprom,
+ .get_ringparam = e100_get_ringparam,
+ .set_ringparam = e100_set_ringparam,
+ .self_test_count = e100_diag_test_count,
+ .self_test = e100_diag_test,
+ .get_strings = e100_get_strings,
+ .phys_id = e100_phys_id,
+ .get_stats_count = e100_get_stats_count,
+ .get_ethtool_stats = e100_get_ethtool_stats,
+};
+
+static int e100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ struct nic *nic = netdev->priv;
+ struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr->ifr_data;
+
+ return generic_mii_ioctl(&nic->mii, mii, cmd, NULL);
+}
+
+static int e100_alloc(struct nic *nic)
+{
+ nic->mem = pci_alloc_consistent(nic->pdev, sizeof(struct mem),
+ &nic->dma_addr);
+ return nic->mem ? 0 : -ENOMEM;
+}
+
+static void e100_free(struct nic *nic)
+{
+ if(nic->mem) {
+ pci_free_consistent(nic->pdev, sizeof(struct mem),
+ nic->mem, nic->dma_addr);
+ nic->mem = NULL;
+ }
+}
+
+static int e100_open(struct net_device *netdev)
+{
+ struct nic *nic = netdev->priv;
+ int err = 0;
+
+ netif_carrier_off(netdev);
+ if((err = e100_up(nic)))
+ DPRINTK(IFUP, ERR, "Cannot open interface, aborting.\n");
+ return err;
+}
+
+static int e100_close(struct net_device *netdev)
+{
+ e100_down(netdev->priv);
+ return 0;
+}
+
+static int __devinit e100_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct nic *nic;
+ int err;
+
+ if(!(netdev = alloc_etherdev(sizeof(struct nic)))) {
+ if(((1 << debug) - 1) & NETIF_MSG_PROBE)
+ printk(KERN_ERR PFX "Etherdev alloc failed, abort.\n");
+ return -ENOMEM;
+ }
+
+ netdev->open = e100_open;
+ netdev->stop = e100_close;
+ netdev->hard_start_xmit = e100_xmit_frame;
+ netdev->get_stats = e100_get_stats;
+ netdev->set_multicast_list = e100_set_multicast_list;
+ netdev->set_mac_address = e100_set_mac_address;
+ netdev->change_mtu = e100_change_mtu;
+ netdev->do_ioctl = e100_do_ioctl;
+ SET_ETHTOOL_OPS(netdev, &e100_ethtool_ops);
+ netdev->tx_timeout = e100_tx_timeout;
+ netdev->watchdog_timeo = E100_WATCHDOG_PERIOD;
+#ifdef CONFIG_E100_NAPI
+ netdev->poll = e100_poll;
+ netdev->weight = E100_NAPI_WEIGHT;
+#endif
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ netdev->poll_controller = e100_netpoll;
+#endif
+
+ nic = netdev->priv;
+ nic->netdev = netdev;
+ nic->pdev = pdev;
+ nic->msg_enable = (1 << debug) - 1;
+ pci_set_drvdata(pdev, netdev);
+
+ if((err = pci_enable_device(pdev))) {
+ DPRINTK(PROBE, ERR, "Cannot enable PCI device, aborting.\n");
+ goto err_out_free_dev;
+ }
+
+ if(!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
+ DPRINTK(PROBE, ERR, "Cannot find proper PCI device "
+ "base address, aborting.\n");
+ err = -ENODEV;
+ goto err_out_disable_pdev;
+ }
+
+ if((err = pci_request_regions(pdev, DRV_NAME))) {
+ DPRINTK(PROBE, ERR, "Cannot obtain PCI resources, aborting.\n");
+ goto err_out_disable_pdev;
+ }
+
+ pci_set_master(pdev);
+
+ if((err = pci_set_dma_mask(pdev, 0xFFFFFFFFULL))) {
+ DPRINTK(PROBE, ERR, "No usable DMA configuration, aborting.\n");
+ goto err_out_free_res;
+ }
+
+ SET_MODULE_OWNER(netdev);
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+
+ nic->csr = ioremap(pci_resource_start(pdev, 0), sizeof(struct csr));
+ if(!nic->csr) {
+ DPRINTK(PROBE, ERR, "Cannot map device registers, aborting.\n");
+ err = -ENOMEM;
+ goto err_out_free_res;
+ }
+
+ if(ent->driver_data)
+ nic->flags |= ich;
+ else
+ nic->flags &= ~ich;
+
+ spin_lock_init(&nic->cb_lock);
+ spin_lock_init(&nic->cmd_lock);
+
+ init_timer(&nic->watchdog);
+ nic->watchdog.function = e100_watchdog;
+ nic->watchdog.data = (unsigned long)nic;
+ init_timer(&nic->blink_timer);
+ nic->blink_timer.function = e100_blink_led;
+ nic->blink_timer.data = (unsigned long)nic;
+
+ if((err = e100_alloc(nic))) {
+ DPRINTK(PROBE, ERR, "Cannot alloc driver memory, aborting.\n");
+ goto err_out_iounmap;
+ }
+
+ e100_get_defaults(nic);
+ e100_hw_reset(nic);
+ e100_phy_init(nic);
+
+ if((err = e100_eeprom_load(nic)))
+ goto err_out_free;
+ ((u16 *)netdev->dev_addr)[0] = le16_to_cpu(nic->eeprom[0]);
+ ((u16 *)netdev->dev_addr)[1] = le16_to_cpu(nic->eeprom[1]);
+ ((u16 *)netdev->dev_addr)[2] = le16_to_cpu(nic->eeprom[2]);
+ if(!is_valid_ether_addr(netdev->dev_addr)) {
+ DPRINTK(PROBE, ERR, "Invalid MAC address from "
+ "EEPROM, aborting.\n");
+ err = -EAGAIN;
+ goto err_out_free;
+ }
+
+ /* Wol magic packet can be enabled from eeprom */
+ if((nic->mac >= mac_82558_D101_A4) &&
+ (nic->eeprom[eeprom_id] & eeprom_id_wol))
+ nic->flags |= wol_magic;
+
+ pci_enable_wake(pdev, 0, nic->flags & (wol_magic | e100_asf(nic)));
+
+ if((err = register_netdev(netdev))) {
+ DPRINTK(PROBE, ERR, "Cannot register net device, aborting.\n");
+ goto err_out_free;
+ }
+
+ DPRINTK(PROBE, INFO, "addr 0x%lx, irq %d, "
+ "MAC addr %02X:%02X:%02X:%02X:%02X:%02X\n",
+ pci_resource_start(pdev, 0), pdev->irq,
+ netdev->dev_addr[0], netdev->dev_addr[1], netdev->dev_addr[2],
+ netdev->dev_addr[3], netdev->dev_addr[4], netdev->dev_addr[5]);
+
+ return 0;
+
+err_out_free:
+ e100_free(nic);
+err_out_iounmap:
+ iounmap(nic->csr);
+err_out_free_res:
+ pci_release_regions(pdev);
+err_out_disable_pdev:
+ pci_disable_device(pdev);
+err_out_free_dev:
+ pci_set_drvdata(pdev, NULL);
+ free_netdev(netdev);
+ return err;
+}
+
+static void __devexit e100_remove(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+
+ if(netdev) {
+ struct nic *nic = netdev->priv;
+ unregister_netdev(netdev);
+ e100_free(nic);
+ iounmap(nic->csr);
+ free_netdev(netdev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ }
+}
+
+#ifdef CONFIG_PM
+static int e100_suspend(struct pci_dev *pdev, u32 state)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct nic *nic = netdev->priv;
+
+ if(netif_running(netdev))
+ e100_down(nic);
+ e100_hw_reset(nic);
+ netif_device_detach(netdev);
+
+ pci_save_state(pdev, nic->pm_state);
+ pci_enable_wake(pdev, state, nic->flags & (wol_magic | e100_asf(nic)));
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, state);
+
+ return 0;
+}
+
+static int e100_resume(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct nic *nic = netdev->priv;
+
+ pci_set_power_state(pdev, 0);
+ pci_restore_state(pdev, nic->pm_state);
+ e100_hw_init(nic);
+
+ netif_device_attach(netdev);
+ if(netif_running(netdev))
+ e100_up(nic);
+
+ return 0;
+}
+#endif
+
+static struct pci_driver e100_driver = {
+ .name = DRV_NAME,
+ .id_table = e100_id_table,
+ .probe = e100_probe,
+ .remove = __devexit_p(e100_remove),
+#ifdef CONFIG_PM
+ .suspend = e100_suspend,
+ .resume = e100_resume,
+#endif
+};
+
+static int __init e100_init_module(void)
+{
+ if(((1 << debug) - 1) & NETIF_MSG_DRV) {
+ printk(KERN_INFO PFX "%s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
+ printk(KERN_INFO PFX "%s\n", DRV_COPYRIGHT);
+ }
+ return pci_module_init(&e100_driver);
+}
+
+static void __exit e100_cleanup_module(void)
+{
+ pci_unregister_driver(&e100_driver);
+}
+
+module_init(e100_init_module);
+module_exit(e100_cleanup_module);
+++ /dev/null
-
-"This software program is licensed subject to the GNU General Public License
-(GPL). Version 2, June 1991, available at
-<http://www.fsf.org/copyleft/gpl.html>"
-
-GNU General Public License
-
-Version 2, June 1991
-
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
-
-Everyone is permitted to copy and distribute verbatim copies of this license
-document, but changing it is not allowed.
-
-Preamble
-
-The licenses for most software are designed to take away your freedom to
-share and change it. By contrast, the GNU General Public License is intended
-to guarantee your freedom to share and change free software--to make sure
-the software is free for all its users. This General Public License applies
-to most of the Free Software Foundation's software and to any other program
-whose authors commit to using it. (Some other Free Software Foundation
-software is covered by the GNU Library General Public License instead.) You
-can apply it to your programs, too.
-
-When we speak of free software, we are referring to freedom, not price. Our
-General Public Licenses are designed to make sure that you have the freedom
-to distribute copies of free software (and charge for this service if you
-wish), that you receive source code or can get it if you want it, that you
-can change the software or use pieces of it in new free programs; and that
-you know you can do these things.
-
-To protect your rights, we need to make restrictions that forbid anyone to
-deny you these rights or to ask you to surrender the rights. These
-restrictions translate to certain responsibilities for you if you distribute
-copies of the software, or if you modify it.
-
-For example, if you distribute copies of such a program, whether gratis or
-for a fee, you must give the recipients all the rights that you have. You
-must make sure that they, too, receive or can get the source code. And you
-must show them these terms so they know their rights.
-
-We protect your rights with two steps: (1) copyright the software, and (2)
-offer you this license which gives you legal permission to copy, distribute
-and/or modify the software.
-
-Also, for each author's protection and ours, we want to make certain that
-everyone understands that there is no warranty for this free software. If
-the software is modified by someone else and passed on, we want its
-recipients to know that what they have is not the original, so that any
-problems introduced by others will not reflect on the original authors'
-reputations.
-
-Finally, any free program is threatened constantly by software patents. We
-wish to avoid the danger that redistributors of a free program will
-individually obtain patent licenses, in effect making the program
-proprietary. To prevent this, we have made it clear that any patent must be
-licensed for everyone's free use or not licensed at all.
-
-The precise terms and conditions for copying, distribution and modification
-follow.
-
-TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-0. This License applies to any program or other work which contains a notice
- placed by the copyright holder saying it may be distributed under the
- terms of this General Public License. The "Program", below, refers to any
- such program or work, and a "work based on the Program" means either the
- Program or any derivative work under copyright law: that is to say, a
- work containing the Program or a portion of it, either verbatim or with
- modifications and/or translated into another language. (Hereinafter,
- translation is included without limitation in the term "modification".)
- Each licensee is addressed as "you".
-
- Activities other than copying, distribution and modification are not
- covered by this License; they are outside its scope. The act of running
- the Program is not restricted, and the output from the Program is covered
- only if its contents constitute a work based on the Program (independent
- of having been made by running the Program). Whether that is true depends
- on what the Program does.
-
-1. You may copy and distribute verbatim copies of the Program's source code
- as you receive it, in any medium, provided that you conspicuously and
- appropriately publish on each copy an appropriate copyright notice and
- disclaimer of warranty; keep intact all the notices that refer to this
- License and to the absence of any warranty; and give any other recipients
- of the Program a copy of this License along with the Program.
-
- You may charge a fee for the physical act of transferring a copy, and you
- may at your option offer warranty protection in exchange for a fee.
-
-2. You may modify your copy or copies of the Program or any portion of it,
- thus forming a work based on the Program, and copy and distribute such
- modifications or work under the terms of Section 1 above, provided that
- you also meet all of these conditions:
-
- * a) You must cause the modified files to carry prominent notices stating
- that you changed the files and the date of any change.
-
- * b) You must cause any work that you distribute or publish, that in
- whole or in part contains or is derived from the Program or any part
- thereof, to be licensed as a whole at no charge to all third parties
- under the terms of this License.
-
- * c) If the modified program normally reads commands interactively when
- run, you must cause it, when started running for such interactive
- use in the most ordinary way, to print or display an announcement
- including an appropriate copyright notice and a notice that there is
- no warranty (or else, saying that you provide a warranty) and that
- users may redistribute the program under these conditions, and
- telling the user how to view a copy of this License. (Exception: if
- the Program itself is interactive but does not normally print such
- an announcement, your work based on the Program is not required to
- print an announcement.)
-
- These requirements apply to the modified work as a whole. If identifiable
- sections of that work are not derived from the Program, and can be
- reasonably considered independent and separate works in themselves, then
- this License, and its terms, do not apply to those sections when you
- distribute them as separate works. But when you distribute the same
- sections as part of a whole which is a work based on the Program, the
- distribution of the whole must be on the terms of this License, whose
- permissions for other licensees extend to the entire whole, and thus to
- each and every part regardless of who wrote it.
-
- Thus, it is not the intent of this section to claim rights or contest
- your rights to work written entirely by you; rather, the intent is to
- exercise the right to control the distribution of derivative or
- collective works based on the Program.
-
- In addition, mere aggregation of another work not based on the Program
- with the Program (or with a work based on the Program) on a volume of a
- storage or distribution medium does not bring the other work under the
- scope of this License.
-
-3. You may copy and distribute the Program (or a work based on it, under
- Section 2) in object code or executable form under the terms of Sections
- 1 and 2 above provided that you also do one of the following:
-
- * a) Accompany it with the complete corresponding machine-readable source
- code, which must be distributed under the terms of Sections 1 and 2
- above on a medium customarily used for software interchange; or,
-
- * b) Accompany it with a written offer, valid for at least three years,
- to give any third party, for a charge no more than your cost of
- physically performing source distribution, a complete machine-
- readable copy of the corresponding source code, to be distributed
- under the terms of Sections 1 and 2 above on a medium customarily
- used for software interchange; or,
-
- * c) Accompany it with the information you received as to the offer to
- distribute corresponding source code. (This alternative is allowed
- only for noncommercial distribution and only if you received the
- program in object code or executable form with such an offer, in
- accord with Subsection b above.)
-
- The source code for a work means the preferred form of the work for
- making modifications to it. For an executable work, complete source code
- means all the source code for all modules it contains, plus any
- associated interface definition files, plus the scripts used to control
- compilation and installation of the executable. However, as a special
- exception, the source code distributed need not include anything that is
- normally distributed (in either source or binary form) with the major
- components (compiler, kernel, and so on) of the operating system on which
- the executable runs, unless that component itself accompanies the
- executable.
-
- If distribution of executable or object code is made by offering access
- to copy from a designated place, then offering equivalent access to copy
- the source code from the same place counts as distribution of the source
- code, even though third parties are not compelled to copy the source
- along with the object code.
-
-4. You may not copy, modify, sublicense, or distribute the Program except as
- expressly provided under this License. Any attempt otherwise to copy,
- modify, sublicense or distribute the Program is void, and will
- automatically terminate your rights under this License. However, parties
- who have received copies, or rights, from you under this License will not
- have their licenses terminated so long as such parties remain in full
- compliance.
-
-5. You are not required to accept this License, since you have not signed
- it. However, nothing else grants you permission to modify or distribute
- the Program or its derivative works. These actions are prohibited by law
- if you do not accept this License. Therefore, by modifying or
- distributing the Program (or any work based on the Program), you
- indicate your acceptance of this License to do so, and all its terms and
- conditions for copying, distributing or modifying the Program or works
- based on it.
-
-6. Each time you redistribute the Program (or any work based on the
- Program), the recipient automatically receives a license from the
- original licensor to copy, distribute or modify the Program subject to
- these terms and conditions. You may not impose any further restrictions
- on the recipients' exercise of the rights granted herein. You are not
- responsible for enforcing compliance by third parties to this License.
-
-7. If, as a consequence of a court judgment or allegation of patent
- infringement or for any other reason (not limited to patent issues),
- conditions are imposed on you (whether by court order, agreement or
- otherwise) that contradict the conditions of this License, they do not
- excuse you from the conditions of this License. If you cannot distribute
- so as to satisfy simultaneously your obligations under this License and
- any other pertinent obligations, then as a consequence you may not
- distribute the Program at all. For example, if a patent license would
- not permit royalty-free redistribution of the Program by all those who
- receive copies directly or indirectly through you, then the only way you
- could satisfy both it and this License would be to refrain entirely from
- distribution of the Program.
-
- If any portion of this section is held invalid or unenforceable under any
- particular circumstance, the balance of the section is intended to apply
- and the section as a whole is intended to apply in other circumstances.
-
- It is not the purpose of this section to induce you to infringe any
- patents or other property right claims or to contest validity of any
- such claims; this section has the sole purpose of protecting the
- integrity of the free software distribution system, which is implemented
- by public license practices. Many people have made generous contributions
- to the wide range of software distributed through that system in
- reliance on consistent application of that system; it is up to the
- author/donor to decide if he or she is willing to distribute software
- through any other system and a licensee cannot impose that choice.
-
- This section is intended to make thoroughly clear what is believed to be
- a consequence of the rest of this License.
-
-8. If the distribution and/or use of the Program is restricted in certain
- countries either by patents or by copyrighted interfaces, the original
- copyright holder who places the Program under this License may add an
- explicit geographical distribution limitation excluding those countries,
- so that distribution is permitted only in or among countries not thus
- excluded. In such case, this License incorporates the limitation as if
- written in the body of this License.
-
-9. The Free Software Foundation may publish revised and/or new versions of
- the General Public License from time to time. Such new versions will be
- similar in spirit to the present version, but may differ in detail to
- address new problems or concerns.
-
- Each version is given a distinguishing version number. If the Program
- specifies a version number of this License which applies to it and "any
- later version", you have the option of following the terms and
- conditions either of that version or of any later version published by
- the Free Software Foundation. If the Program does not specify a version
- number of this License, you may choose any version ever published by the
- Free Software Foundation.
-
-10. If you wish to incorporate parts of the Program into other free programs
- whose distribution conditions are different, write to the author to ask
- for permission. For software which is copyrighted by the Free Software
- Foundation, write to the Free Software Foundation; we sometimes make
- exceptions for this. Our decision will be guided by the two goals of
- preserving the free status of all derivatives of our free software and
- of promoting the sharing and reuse of software generally.
-
- NO WARRANTY
-
-11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
- FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
- OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
- PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
- EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
- ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH
- YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
- NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
- WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
- REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
- DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
- DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM
- (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
- INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF
- THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR
- OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-
-END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-
-If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it free
-software which everyone can redistribute and change under these terms.
-
-To do so, attach the following notices to the program. It is safest to
-attach them to the start of each source file to most effectively convey the
-exclusion of warranty; and each file should have at least the "copyright"
-line and a pointer to where the full notice is found.
-
-one line to give the program's name and an idea of what it does.
-Copyright (C) yyyy name of author
-
-This program is free software; you can redistribute it and/or modify it
-under the terms of the GNU General Public License as published by the Free
-Software Foundation; either version 2 of the License, or (at your option)
-any later version.
-
-This program is distributed in the hope that it will be useful, but WITHOUT
-ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-more details.
-
-You should have received a copy of the GNU General Public License along with
-this program; if not, write to the Free Software Foundation, Inc., 59
-Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
-Also add information on how to contact you by electronic and paper mail.
-
-If the program is interactive, make it output a short notice like this when
-it starts in an interactive mode:
-
-Gnomovision version 69, Copyright (C) year name of author Gnomovision comes
-with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free
-software, and you are welcome to redistribute it under certain conditions;
-type 'show c' for details.
-
-The hypothetical commands 'show w' and 'show c' should show the appropriate
-parts of the General Public License. Of course, the commands you use may be
-called something other than 'show w' and 'show c'; they could even be
-mouse-clicks or menu items--whatever suits your program.
-
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary. Here is a sample; alter the names:
-
-Yoyodyne, Inc., hereby disclaims all copyright interest in the program
-'Gnomovision' (which makes passes at compilers) written by James Hacker.
-
-signature of Ty Coon, 1 April 1989
-Ty Coon, President of Vice
-
-This General Public License does not permit incorporating your program into
-proprietary programs. If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library. If this is what you want to do, use the GNU Library General Public
-License instead of this License.
+++ /dev/null
-#
-# Makefile for the Intel's E100 ethernet driver
-#
-
-obj-$(CONFIG_E100) += e100.o
-
-e100-objs := e100_main.o e100_config.o e100_phy.o \
- e100_eeprom.o e100_test.o
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#ifndef _E100_INC_
-#define _E100_INC_
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/init.h>
-#include <linux/mm.h>
-#include <linux/errno.h>
-#include <linux/ioport.h>
-#include <linux/pci.h>
-#include <linux/kernel.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/delay.h>
-#include <linux/timer.h>
-#include <linux/slab.h>
-#include <linux/interrupt.h>
-#include <linux/string.h>
-#include <linux/wait.h>
-#include <linux/reboot.h>
-#include <asm/io.h>
-#include <asm/unaligned.h>
-#include <asm/processor.h>
-#include <linux/ethtool.h>
-#include <linux/inetdevice.h>
-#include <linux/bitops.h>
-
-#include <linux/if.h>
-#include <asm/uaccess.h>
-#include <linux/ip.h>
-#include <linux/if_vlan.h>
-#include <linux/mii.h>
-
-#define E100_CABLE_UNKNOWN 0
-#define E100_CABLE_OK 1
-#define E100_CABLE_OPEN_NEAR 2 /* Open Circuit Near End */
-#define E100_CABLE_OPEN_FAR 3 /* Open Circuit Far End */
-#define E100_CABLE_SHORT_NEAR 4 /* Short Circuit Near End */
-#define E100_CABLE_SHORT_FAR 5 /* Short Circuit Far End */
-
-#define E100_REGS_LEN 2
-/*
- * Configure parameters for buffers per controller.
- * If the machine this is being used on is a faster machine (i.e. > 150MHz)
- * and is running on a 10 Mbps network, then more queueing of data occurs.
- * This may indicate that some of the numbers below should be adjusted.
- * Here are some typical numbers:
- * MAX_TCB 64
- * MAX_RFD 64
- * The default numbers work well on most systems tested, so no real
- * adjustments need to take place. Also, if the machine is connected
- * to a 100 Mbps network, the numbers above can be lowered from the
- * defaults, as considerably less data will be queued.
- */
-
-#define TX_FRAME_CNT 8 /* consecutive transmit frames per interrupt */
-/* TX_FRAME_CNT must be less than MAX_TCB */
-
-#define E100_DEFAULT_TCB 64
-#define E100_MIN_TCB (2*TX_FRAME_CNT + 3) /* make room for at least 2 interrupts */
-#define E100_MAX_TCB 1024
-
-#define E100_DEFAULT_RFD 64
-#define E100_MIN_RFD 8
-#define E100_MAX_RFD 1024
-
-#define E100_DEFAULT_XSUM true
-#define E100_DEFAULT_BER ZLOCK_MAX_ERRORS
-#define E100_DEFAULT_SPEED_DUPLEX 0
-#define E100_DEFAULT_FC 0
-#define E100_DEFAULT_IFS true
-#define E100_DEFAULT_UCODE true
-
-#define TX_THRSHLD 8
-
-/* IFS parameters */
-#define MIN_NUMBER_OF_TRANSMITS_100 1000
-#define MIN_NUMBER_OF_TRANSMITS_10 100
-
-#define E100_MAX_NIC 16
-
-#define E100_MAX_SCB_WAIT 100 /* Max udelays in wait_scb */
-#define E100_MAX_CU_IDLE_WAIT 50 /* Max udelays in wait_cus_idle */
-
-/* HWI feature related constant */
-#define HWI_REGISTER_GRANULARITY 80 /* register granularity = 80 Cm */
-#define HWI_NEAR_END_BOUNDARY 1000 /* Near end is defined as < 10 meters */
-
-/* CPUSAVER_BUNDLE_MAX: Sets the maximum number of frames that will be bundled.
- * In some situations, such as the TCP windowing algorithm, it may be
- * better to limit the growth of the bundle size than let it go as
- * high as it can, because that could cause too much added latency.
- * The default is six, because this is the number of packets in the
- * default TCP window size. A value of 1 would make CPUSaver indicate
- * an interrupt for every frame received. If you do not want to put
- * a limit on the bundle size, set this value to 0xFFFF.
- */
-#define E100_DEFAULT_CPUSAVER_BUNDLE_MAX 6
-#define E100_DEFAULT_CPUSAVER_INTERRUPT_DELAY 0x600
-#define E100_DEFAULT_BUNDLE_SMALL_FR false
-
-/* end of configurables */
-
-/* ====================================================================== */
-/* hw */
-/* ====================================================================== */
-
-/* timeout for command completion */
-#define E100_CMD_WAIT 100 /* iterations */
-
-struct driver_stats {
- struct net_device_stats net_stats;
-
- unsigned long tx_late_col;
- unsigned long tx_ok_defrd;
- unsigned long tx_one_retry;
- unsigned long tx_mt_one_retry;
- unsigned long rcv_cdt_frames;
- unsigned long xmt_fc_pkts;
- unsigned long rcv_fc_pkts;
- unsigned long rcv_fc_unsupported;
- unsigned long xmt_tco_pkts;
- unsigned long rcv_tco_pkts;
- unsigned long rx_intr_pkts;
-};
-
-/* TODO: kill me when we can do C99 */
-#define false (0)
-#define true (1)
-
-/* Changed for 82558 and 82559 enhancements */
-/* defines for 82558/9 flow control CSR values */
-#define DFLT_FC_THLD 0x00 /* Rx FIFO threshold of 0.5KB free */
-#define DFLT_FC_CMD 0x00 /* FC Command in CSR */
-
-/* ====================================================================== */
-/* equates */
-/* ====================================================================== */
-
-/*
- * These are general purpose defines
- */
-
-/* Bit Mask definitions */
-#define BIT_0 0x0001
-#define BIT_1 0x0002
-#define BIT_2 0x0004
-#define BIT_3 0x0008
-#define BIT_4 0x0010
-#define BIT_5 0x0020
-#define BIT_6 0x0040
-#define BIT_7 0x0080
-#define BIT_8 0x0100
-#define BIT_9 0x0200
-#define BIT_10 0x0400
-#define BIT_11 0x0800
-#define BIT_12 0x1000
-#define BIT_13 0x2000
-#define BIT_14 0x4000
-#define BIT_15 0x8000
-#define BIT_28 0x10000000
-
-#define BIT_0_2 0x0007
-#define BIT_0_3 0x000F
-#define BIT_0_4 0x001F
-#define BIT_0_5 0x003F
-#define BIT_0_6 0x007F
-#define BIT_0_7 0x00FF
-#define BIT_0_8 0x01FF
-#define BIT_0_13 0x3FFF
-#define BIT_0_15 0xFFFF
-#define BIT_1_2 0x0006
-#define BIT_1_3 0x000E
-#define BIT_2_5 0x003C
-#define BIT_3_4 0x0018
-#define BIT_4_5 0x0030
-#define BIT_4_6 0x0070
-#define BIT_4_7 0x00F0
-#define BIT_5_7 0x00E0
-#define BIT_5_12 0x1FE0
-#define BIT_5_15 0xFFE0
-#define BIT_6_7 0x00c0
-#define BIT_7_11 0x0F80
-#define BIT_8_10 0x0700
-#define BIT_9_13 0x3E00
-#define BIT_12_15 0xF000
-#define BIT_8_15 0xFF00
-
-#define BIT_16_20 0x001F0000
-#define BIT_21_25 0x03E00000
-#define BIT_26_27 0x0C000000
-
-/* Transmit Threshold related constants */
-#define DEFAULT_TX_PER_UNDERRUN 20000
-
-#define MAX_MULTICAST_ADDRS 64
-#define MAX_FILTER 16
-
-#define FULL_DUPLEX 2
-#define HALF_DUPLEX 1
-
-/*
- * These defines are specific to the 82557
- */
-
-/* E100 PORT functions -- lower 4 bits */
-#define PORT_SOFTWARE_RESET 0
-#define PORT_SELFTEST 1
-#define PORT_SELECTIVE_RESET 2
-#define PORT_DUMP 3
-
-/* SCB Status Word bit definitions */
-/* Interrupt status/ack fields */
-/* ER and FCP interrupts for 82558 masks */
-#define SCB_STATUS_ACK_MASK BIT_8_15 /* Status Mask */
-#define SCB_STATUS_ACK_CX BIT_15 /* CU Completed Action Cmd */
-#define SCB_STATUS_ACK_FR BIT_14 /* RU Received A Frame */
-#define SCB_STATUS_ACK_CNA BIT_13 /* CU Became Inactive (IDLE) */
-#define SCB_STATUS_ACK_RNR BIT_12 /* RU Became Not Ready */
-#define SCB_STATUS_ACK_MDI BIT_11 /* MDI read or write done */
-#define SCB_STATUS_ACK_SWI BIT_10 /* S/W generated interrupt */
-#define SCB_STATUS_ACK_ER BIT_9 /* Early Receive */
-#define SCB_STATUS_ACK_FCP BIT_8 /* Flow Control Pause */
-
-/*- CUS Fields */
-#define SCB_CUS_MASK (BIT_6 | BIT_7) /* CUS 2-bit Mask */
-#define SCB_CUS_IDLE 0 /* CU Idle */
-#define SCB_CUS_SUSPEND BIT_6 /* CU Suspended */
-#define SCB_CUS_ACTIVE BIT_7 /* CU Active */
-
-/*- RUS Fields */
-#define SCB_RUS_IDLE 0 /* RU Idle */
-#define SCB_RUS_MASK BIT_2_5 /* RUS 3-bit Mask */
-#define SCB_RUS_SUSPEND BIT_2 /* RU Suspended */
-#define SCB_RUS_NO_RESOURCES BIT_3 /* RU Out Of Resources */
-#define SCB_RUS_READY BIT_4 /* RU Ready */
-#define SCB_RUS_SUSP_NO_RBDS (BIT_2 | BIT_5) /* RU No More RBDs */
-#define SCB_RUS_NO_RBDS (BIT_3 | BIT_5) /* RU No More RBDs */
-#define SCB_RUS_READY_NO_RBDS (BIT_4 | BIT_5) /* RU Ready, No RBDs */
-
-/* SCB Command Word bit definitions */
-/*- CUC fields */
-/* Changing mask to 4 bits */
-#define SCB_CUC_MASK BIT_4_7 /* CUC 4-bit Mask */
-#define SCB_CUC_NOOP 0
-#define SCB_CUC_START BIT_4 /* CU Start */
-#define SCB_CUC_RESUME BIT_5 /* CU Resume */
-#define SCB_CUC_UNKNOWN BIT_7 /* CU unknown command */
-/* Changed for 82558 enhancements */
-#define SCB_CUC_STATIC_RESUME (BIT_5 | BIT_7) /* 82558/9 Static Resume */
-#define SCB_CUC_DUMP_ADDR BIT_6 /* CU Dump Counters Address */
-#define SCB_CUC_DUMP_STAT (BIT_4 | BIT_6) /* CU Dump stat. counters */
-#define SCB_CUC_LOAD_BASE (BIT_5 | BIT_6) /* Load the CU base */
-/* Below was defined as BIT_4_7 */
-#define SCB_CUC_DUMP_RST_STAT BIT_4_6 /* CU Dump & reset statistics cntrs */
-
-/*- RUC fields */
-#define SCB_RUC_MASK BIT_0_2 /* RUC 3-bit Mask */
-#define SCB_RUC_START BIT_0 /* RU Start */
-#define SCB_RUC_RESUME BIT_1 /* RU Resume */
-#define SCB_RUC_ABORT BIT_2 /* RU Abort */
-#define SCB_RUC_LOAD_HDS (BIT_0 | BIT_2) /* Load RFD Header Data Size */
-#define SCB_RUC_LOAD_BASE (BIT_1 | BIT_2) /* Load the RU base */
-#define SCB_RUC_RBD_RESUME BIT_0_2 /* RBD resume */
-
-/* Interrupt fields (assuming byte addressing) */
-#define SCB_INT_MASK BIT_0 /* Mask interrupts */
-#define SCB_SOFT_INT BIT_1 /* Generate a S/W interrupt */
-/* Specific Interrupt Mask Bits (upper byte of SCB Command word) */
-#define SCB_FCP_INT_MASK BIT_2 /* Flow Control Pause */
-#define SCB_ER_INT_MASK BIT_3 /* Early Receive */
-#define SCB_RNR_INT_MASK BIT_4 /* RU Not Ready */
-#define SCB_CNA_INT_MASK BIT_5 /* CU Not Active */
-#define SCB_FR_INT_MASK BIT_6 /* Frame Received */
-#define SCB_CX_INT_MASK BIT_7 /* CU eXecution w/ I-bit done */
-#define SCB_BACHELOR_INT_MASK BIT_2_7 /* 82558 interrupt mask bits */
-
-#define SCB_GCR2_EEPROM_ACCESS_SEMAPHORE BIT_7
-
-/* EEPROM bit definitions */
-/*- EEPROM control register bits */
-#define EEPROM_FLAG_ASF 0x8000
-#define EEPROM_FLAG_GCL 0x4000
-
-#define EN_TRNF 0x10 /* Enable turnoff */
-#define EEDO 0x08 /* EEPROM data out */
-#define EEDI 0x04 /* EEPROM data in (set for writing data) */
-#define EECS 0x02 /* EEPROM chip select (1=hi, 0=lo) */
-#define EESK 0x01 /* EEPROM shift clock (1=hi, 0=lo) */
-
-/*- EEPROM opcodes */
-#define EEPROM_READ_OPCODE 06
-#define EEPROM_WRITE_OPCODE 05
-#define EEPROM_ERASE_OPCODE 07
-#define EEPROM_EWEN_OPCODE 19 /* Erase/write enable */
-#define EEPROM_EWDS_OPCODE 16 /* Erase/write disable */
-
-/*- EEPROM data locations */
-#define EEPROM_NODE_ADDRESS_BYTE_0 0
-#define EEPROM_COMPATIBILITY_WORD 3
-#define EEPROM_PWA_NO 8
-#define EEPROM_ID_WORD 0x0A
-#define EEPROM_CONFIG_ASF 0x0D
-#define EEPROM_SMBUS_ADDR 0x90
-
-#define EEPROM_SUM 0xbaba
-
-// Zero Locking Algorithm definitions:
-#define ZLOCK_ZERO_MASK 0x00F0
-#define ZLOCK_MAX_READS 50
-#define ZLOCK_SET_ZERO 0x2010
-#define ZLOCK_MAX_SLEEP (300 * HZ)
-#define ZLOCK_MAX_ERRORS 300
-
-/* E100 Action Commands */
-#define CB_IA_ADDRESS 1
-#define CB_CONFIGURE 2
-#define CB_MULTICAST 3
-#define CB_TRANSMIT 4
-#define CB_LOAD_MICROCODE 5
-#define CB_LOAD_FILTER 8
-#define CB_MAX_NONTX_CMD 9
-#define CB_IPCB_TRANSMIT 9
-
-/* Pre-defined Filter Bits */
-#define CB_FILTER_EL 0x80000000
-#define CB_FILTER_FIX 0x40000000
-#define CB_FILTER_ARP 0x08000000
-#define CB_FILTER_IA_MATCH 0x02000000
-
-/* Command Block (CB) Field Definitions */
-/*- CB Command Word */
-#define CB_EL_BIT BIT_15 /* CB EL Bit */
-#define CB_S_BIT BIT_14 /* CB Suspend Bit */
-#define CB_I_BIT BIT_13 /* CB Interrupt Bit */
-#define CB_TX_SF_BIT BIT_3 /* TX CB Flexible Mode */
-#define CB_CMD_MASK BIT_0_3 /* CB 4-bit CMD Mask */
-#define CB_CID_DEFAULT (0x1f << 8) /* CB 5-bit CID (max value) */
-
-/*- CB Status Word */
-#define CB_STATUS_MASK BIT_12_15 /* CB Status Mask (4-bits) */
-#define CB_STATUS_COMPLETE BIT_15 /* CB Complete Bit */
-#define CB_STATUS_OK BIT_13 /* CB OK Bit */
-#define CB_STATUS_VLAN BIT_12 /* CB VLAN detected Bit */
-#define CB_STATUS_FAIL BIT_11 /* CB Fail (F) Bit */
-
-/*misc command bits */
-#define CB_TX_EOF_BIT BIT_15 /* TX CB/TBD EOF Bit */
-
-/* Config params */
-#define CB_CFIG_BYTE_COUNT 22 /* 22 config bytes */
-#define CB_CFIG_D102_BYTE_COUNT 10
-
-/* Receive Frame Descriptor Fields */
-
-/*- RFD Status Bits */
-#define RFD_RECEIVE_COLLISION BIT_0 /* Collision detected on Receive */
-#define RFD_IA_MATCH BIT_1 /* Indv Address Match Bit */
-#define RFD_RX_ERR BIT_4 /* RX_ERR pin on Phy was set */
-#define RFD_FRAME_TOO_SHORT BIT_7 /* Receive Frame Short */
-#define RFD_DMA_OVERRUN BIT_8 /* Receive DMA Overrun */
-#define RFD_NO_RESOURCES BIT_9 /* No Buffer Space */
-#define RFD_ALIGNMENT_ERROR BIT_10 /* Alignment Error */
-#define RFD_CRC_ERROR BIT_11 /* CRC Error */
-#define RFD_STATUS_OK BIT_13 /* RFD OK Bit */
-#define RFD_STATUS_COMPLETE BIT_15 /* RFD Complete Bit */
-
-/*- RFD Command Bits*/
-#define RFD_EL_BIT BIT_15 /* RFD EL Bit */
-#define RFD_S_BIT BIT_14 /* RFD Suspend Bit */
-#define RFD_H_BIT BIT_4 /* Header RFD Bit */
-#define RFD_SF_BIT BIT_3 /* RFD Flexible Mode */
-
-/*- RFD misc bits*/
-#define RFD_EOF_BIT BIT_15 /* RFD End-Of-Frame Bit */
-#define RFD_F_BIT BIT_14 /* RFD Buffer Fetch Bit */
-#define RFD_ACT_COUNT_MASK BIT_0_13 /* RFD Actual Count Mask */
-
-/* Receive Buffer Descriptor Fields*/
-#define RBD_EOF_BIT BIT_15 /* RBD End-Of-Frame Bit */
-#define RBD_F_BIT BIT_14 /* RBD Buffer Fetch Bit */
-#define RBD_ACT_COUNT_MASK BIT_0_13 /* RBD Actual Count Mask */
-
-#define SIZE_FIELD_MASK BIT_0_13 /* Size of the associated buffer */
-#define RBD_EL_BIT BIT_15 /* RBD EL Bit */
-
-/* Self Test Results*/
-#define CB_SELFTEST_FAIL_BIT BIT_12
-#define CB_SELFTEST_DIAG_BIT BIT_5
-#define CB_SELFTEST_REGISTER_BIT BIT_3
-#define CB_SELFTEST_ROM_BIT BIT_2
-
-#define CB_SELFTEST_ERROR_MASK ( \
- CB_SELFTEST_FAIL_BIT | CB_SELFTEST_DIAG_BIT | \
- CB_SELFTEST_REGISTER_BIT | CB_SELFTEST_ROM_BIT)
-
-/* adapter vendor & device ids */
-#define PCI_OHIO_BOARD 0x10f0 /* subdevice ID, Ohio dual port nic */
-
-/* Values for PCI_REV_ID_REGISTER values */
-#define D101A4_REV_ID 4 /* 82558 A4 stepping */
-#define D101B0_REV_ID 5 /* 82558 B0 stepping */
-#define D101MA_REV_ID 8 /* 82559 A0 stepping */
-#define D101S_REV_ID 9 /* 82559S A-step */
-#define D102_REV_ID 12
-#define D102C_REV_ID 13 /* 82550 step C */
-#define D102E_REV_ID 15
-
-/* ############Start of 82555 specific defines################## */
-
-#define PHY_82555_LED_SWITCH_CONTROL 0x1b /* 82555 led switch control register */
-
-/* 82555 led switch control reg. opcodes */
-#define PHY_82555_LED_NORMAL_CONTROL 0 // control back to the 8255X
-#define PHY_82555_LED_DRIVER_CONTROL BIT_2 // the driver is in control
-#define PHY_82555_LED_OFF BIT_2 // activity LED is off
-#define PHY_82555_LED_ON_559 (BIT_0 | BIT_2) // activity LED is on for 559 and later
-#define PHY_82555_LED_ON_PRE_559 (BIT_0 | BIT_1 | BIT_2) // activity LED is on for 558 and before
-
-// Describe the state of the phy led.
-// needed for the function : 'e100_blink_timer'
-enum led_state_e {
- LED_OFF = 0,
- LED_ON,
-};
-
-/* ############End of 82555 specific defines##################### */
-
-#define RFD_PARSE_BIT BIT_3
-#define RFD_TCP_PACKET 0x00
-#define RFD_UDP_PACKET 0x01
-#define TCPUDP_CHECKSUM_BIT_VALID BIT_4
-#define TCPUDP_CHECKSUM_VALID BIT_5
-#define CHECKSUM_PROTOCOL_MASK 0x03
-
-#define VLAN_SIZE 4
-#define CHKSUM_SIZE 2
-#define RFD_DATA_SIZE (ETH_FRAME_LEN + CHKSUM_SIZE + VLAN_SIZE)
-
-/* Bits for bdp->flags */
-#define DF_LINK_FC_CAP 0x00000001 /* Link is flow control capable */
-#define DF_CSUM_OFFLOAD 0x00000002
-#define DF_UCODE_LOADED 0x00000004
-#define USE_IPCB 0x00000008 /* set if using ipcb for transmits */
-#define IS_BACHELOR 0x00000010 /* set if 82558 or newer board */
-#define IS_ICH 0x00000020
-#define DF_SPEED_FORCED 0x00000040 /* set if speed is forced */
-#define LED_IS_ON 0x00000080 /* LED is turned ON by the driver */
-#define DF_LINK_FC_TX_ONLY 0x00000100 /* Received PAUSE frames are honored*/
-
-typedef struct net_device_stats net_dev_stats_t;
-
-/* needed macros */
-/* These macros use the bdp pointer. If you use them it better be defined */
-#define PREV_TCB_USED(X) ((X).tail ? (X).tail - 1 : bdp->params.TxDescriptors - 1)
-#define NEXT_TCB_TOUSE(X) ((((X) + 1) >= bdp->params.TxDescriptors) ? 0 : (X) + 1)
-#define TCB_TO_USE(X) ((X).tail)
-#define TCBS_AVAIL(X) (NEXT_TCB_TOUSE( NEXT_TCB_TOUSE((X).tail)) != (X).head)
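The ring-index macros above can be illustrated with a standalone sketch (this is not driver code; the hypothetical `NUM_TCBS` constant stands in for `bdp->params.TxDescriptors`):

```c
#include <assert.h>

#define NUM_TCBS 8 /* stand-in for bdp->params.TxDescriptors */

/* advance an index around the TCB ring, wrapping at NUM_TCBS
 * (mirrors NEXT_TCB_TOUSE) */
static unsigned int next_tcb(unsigned int i)
{
	return (i + 1 >= NUM_TCBS) ? 0 : i + 1;
}

/* index of the previously used TCB, wrapping below zero
 * (mirrors PREV_TCB_USED) */
static unsigned int prev_tcb(unsigned int i)
{
	return i ? i - 1 : NUM_TCBS - 1;
}
```

Both directions wrap, so the ring can be walked forever without bounds checks at the call sites.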
-
-#define RFD_POINTER(skb,bdp) ((rfd_t *) (((unsigned char *)((skb)->data))-((bdp)->rfd_size)))
-#define SKB_RFD_STATUS(skb,bdp) ((RFD_POINTER((skb),(bdp)))->rfd_header.cb_status)
-
-/* ====================================================================== */
-/* 82557 */
-/* ====================================================================== */
-
-/* Changed for 82558 enhancement */
-typedef struct _d101_scb_ext_t {
- u32 scb_rx_dma_cnt; /* Rx DMA byte count */
- u8 scb_early_rx_int; /* Early Rx DMA byte count */
- u8 scb_fc_thld; /* Flow Control threshold */
- u8 scb_fc_xon_xoff; /* Flow Control XON/XOFF values */
- u8 scb_pmdr; /* Power Mgmt. Driver Reg */
-} d101_scb_ext __attribute__ ((__packed__));
-
-/* Changed for 82559 enhancement */
-typedef struct _d101m_scb_ext_t {
- u32 scb_rx_dma_cnt; /* Rx DMA byte count */
- u8 scb_early_rx_int; /* Early Rx DMA byte count */
- u8 scb_fc_thld; /* Flow Control threshold */
- u8 scb_fc_xon_xoff; /* Flow Control XON/XOFF values */
- u8 scb_pmdr; /* Power Mgmt. Driver Reg */
- u8 scb_gen_ctrl; /* General Control */
- u8 scb_gen_stat; /* General Status */
- u16 scb_reserved; /* Reserved */
- u32 scb_function_event; /* Cardbus Function Event */
- u32 scb_function_event_mask; /* Cardbus Function Mask */
- u32 scb_function_present_state; /* Cardbus Function state */
- u32 scb_force_event; /* Cardbus Force Event */
-} d101m_scb_ext __attribute__ ((__packed__));
-
-/* Changed for 82550 enhancement */
-typedef struct _d102_scb_ext_t {
- u32 scb_rx_dma_cnt; /* Rx DMA byte count */
- u8 scb_early_rx_int; /* Early Rx DMA byte count */
- u8 scb_fc_thld; /* Flow Control threshold */
- u8 scb_fc_xon_xoff; /* Flow Control XON/XOFF values */
- u8 scb_pmdr; /* Power Mgmt. Driver Reg */
- u8 scb_gen_ctrl; /* General Control */
- u8 scb_gen_stat; /* General Status */
- u8 scb_gen_ctrl2;
- u8 scb_reserved; /* Reserved */
- u32 scb_scheduling_reg;
- u32 scb_reserved2;
- u32 scb_function_event; /* Cardbus Function Event */
- u32 scb_function_event_mask; /* Cardbus Function Mask */
- u32 scb_function_present_state; /* Cardbus Function state */
- u32 scb_force_event; /* Cardbus Force Event */
-} d102_scb_ext __attribute__ ((__packed__));
-
-/*
- * 82557 status control block. This will be memory mapped and will hang
- * off the bdp. This is the brain of the device.
- */
-typedef struct _scb_t {
- u16 scb_status; /* SCB Status register */
- u8 scb_cmd_low; /* SCB Command register (low byte) */
- u8 scb_cmd_hi; /* SCB Command register (high byte) */
- u32 scb_gen_ptr; /* SCB General pointer */
- u32 scb_port; /* PORT register */
- u16 scb_flsh_cntrl; /* Flash Control register */
- u16 scb_eprm_cntrl; /* EEPROM control register */
- u32 scb_mdi_cntrl; /* MDI Control Register */
- /* Changed for 82558 enhancement */
- union {
- u32 scb_rx_dma_cnt; /* Rx DMA byte count */
- d101_scb_ext d101_scb; /* 82558/9 specific fields */
- d101m_scb_ext d101m_scb; /* 82559 specific fields */
- d102_scb_ext d102_scb;
- } scb_ext;
-} scb_t __attribute__ ((__packed__));
-
-/* Self test
- * This is used to dump results of the self test
- */
-typedef struct _self_test_t {
- u32 st_sign; /* Self Test Signature */
- u32 st_result; /* Self Test Results */
-} self_test_t __attribute__ ((__packed__));
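The self-test result bits defined earlier combine into `CB_SELFTEST_ERROR_MASK`; checking a dumped `st_result` against it is a one-line test, sketched here as standalone code (the mask values are restated from this header; `selftest_failed` is an illustrative helper, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* self-test failure bits, as defined earlier in this header */
#define CB_SELFTEST_FAIL_BIT     (1u << 12)
#define CB_SELFTEST_DIAG_BIT     (1u << 5)
#define CB_SELFTEST_REGISTER_BIT (1u << 3)
#define CB_SELFTEST_ROM_BIT      (1u << 2)
#define CB_SELFTEST_ERROR_MASK ( \
	CB_SELFTEST_FAIL_BIT | CB_SELFTEST_DIAG_BIT | \
	CB_SELFTEST_REGISTER_BIT | CB_SELFTEST_ROM_BIT)

/* nonzero if the dumped st_result word reports any failure */
static int selftest_failed(uint32_t st_result)
{
	return (st_result & CB_SELFTEST_ERROR_MASK) != 0;
}
```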
-
-/*
- * Statistical Counters
- */
-/* 82557 counters */
-typedef struct _basic_cntr_t {
- u32 xmt_gd_frames; /* Good frames transmitted */
- u32 xmt_max_coll; /* Fatal frames -- had max collisions */
- u32 xmt_late_coll; /* Fatal frames -- had a late coll. */
- u32 xmt_uruns; /* Xmit underruns (fatal or re-transmit) */
- u32 xmt_lost_crs; /* Frames transmitted without CRS */
- u32 xmt_deferred; /* Deferred transmits */
- u32 xmt_sngl_coll; /* Transmits that had 1 and only 1 coll. */
- u32 xmt_mlt_coll; /* Transmits that had multiple coll. */
- u32 xmt_ttl_coll; /* Transmits that had 1+ collisions. */
- u32 rcv_gd_frames; /* Good frames received */
- u32 rcv_crc_errs; /* Aligned frames that had a CRC error */
- u32 rcv_algn_errs; /* Receives that had alignment errors */
- u32 rcv_rsrc_err; /* Good frames dropped due to no resources */
- u32 rcv_oruns; /* Overrun errors - bus was busy */
- u32 rcv_err_coll; /* Received frms. that encountered coll. */
- u32 rcv_shrt_frames; /* Received frames that were too short */
-} basic_cntr_t;
-
-/* 82558 extended statistic counters */
-typedef struct _ext_cntr_t {
- u32 xmt_fc_frames;
- u32 rcv_fc_frames;
- u32 rcv_fc_unsupported;
-} ext_cntr_t;
-
-/* 82559 TCO statistic counters */
-typedef struct _tco_cntr_t {
- u16 xmt_tco_frames;
- u16 rcv_tco_frames;
-} tco_cntr_t;
-
-/* Structures to access the physical dump area */
-/* Use one of these types, according to the statistical counters mode,
- to cast the pointer to the physical dump area and access the cmd_complete
- DWORD. */
-
-/* 557-mode : only basic counters + cmd_complete */
-typedef struct _err_cntr_557_t {
- basic_cntr_t basic_stats;
- u32 cmd_complete;
-} err_cntr_557_t;
-
-/* 558-mode : basic + extended counters + cmd_complete */
-typedef struct _err_cntr_558_t {
- basic_cntr_t basic_stats;
- ext_cntr_t extended_stats;
- u32 cmd_complete;
-} err_cntr_558_t;
-
-/* 559-mode : basic + extended + TCO counters + cmd_complete */
-typedef struct _err_cntr_559_t {
- basic_cntr_t basic_stats;
- ext_cntr_t extended_stats;
- tco_cntr_t tco_stats;
- u32 cmd_complete;
-} err_cntr_559_t;
-
-/* This typedef defines the struct needed to hold the largest number of counters */
-typedef err_cntr_559_t max_counters_t;
-
-/* Different statistical-counters mode the controller may be in */
-typedef enum _stat_mode_t {
- E100_BASIC_STATS = 0, /* 82557 stats : 16 counters / 16 dw */
- E100_EXTENDED_STATS, /* 82558 stats : 19 counters / 19 dw */
- E100_TCO_STATS /* 82559 stats : 21 counters / 20 dw */
-} stat_mode_t;
-
-/* dump statistical counters complete codes */
-#define DUMP_STAT_COMPLETED 0xA005
-#define DUMP_RST_STAT_COMPLETED 0xA007
-
-/* Command Block (CB) Generic Header Structure*/
-typedef struct _cb_header_t {
- u16 cb_status; /* Command Block Status */
- u16 cb_cmd; /* Command Block Command */
- u32 cb_lnk_ptr; /* Link To Next CB */
-} cb_header_t __attribute__ ((__packed__));
-
-/* Individual Address Command Block (IA_CB) */
-typedef struct _ia_cb_t {
- cb_header_t ia_cb_hdr;
- u8 ia_addr[ETH_ALEN];
-} ia_cb_t __attribute__ ((__packed__));
-
-/* Configure Command Block (CONFIG_CB)*/
-typedef struct _config_cb_t {
- cb_header_t cfg_cbhdr;
- u8 cfg_byte[CB_CFIG_BYTE_COUNT + CB_CFIG_D102_BYTE_COUNT];
-} config_cb_t __attribute__ ((__packed__));
-
-/* MultiCast Command Block (MULTICAST_CB)*/
-typedef struct _multicast_cb_t {
- cb_header_t mc_cbhdr;
- u16 mc_count; /* Number of multicast addresses */
- u8 mc_addr[(ETH_ALEN * MAX_MULTICAST_ADDRS)];
-} mltcst_cb_t __attribute__ ((__packed__));
-
-#define UCODE_MAX_DWORDS 134
-/* Load Microcode Command Block (LOAD_UCODE_CB)*/
-typedef struct _load_ucode_cb_t {
- cb_header_t load_ucode_cbhdr;
- u32 ucode_dword[UCODE_MAX_DWORDS];
-} load_ucode_cb_t __attribute__ ((__packed__));
-
-/* Load Programmable Filter Data*/
-typedef struct _filter_cb_t {
- cb_header_t filter_cb_hdr;
- u32 filter_data[MAX_FILTER];
-} filter_cb_t __attribute__ ((__packed__));
-
-/* NON_TRANSMIT_CB -- Generic Non-Transmit Command Block
- */
-typedef struct _nxmit_cb_t {
- union {
- config_cb_t config;
- ia_cb_t setup;
- load_ucode_cb_t load_ucode;
- mltcst_cb_t multicast;
- filter_cb_t filter;
- } ntcb;
-} nxmit_cb_t __attribute__ ((__packed__));
-
-/*Block for queuing for postponed execution of the non-transmit commands*/
-typedef struct _nxmit_cb_entry_t {
- struct list_head list_elem;
- nxmit_cb_t *non_tx_cmd;
- dma_addr_t dma_addr;
- unsigned long expiration_time;
-} nxmit_cb_entry_t;
-
-/* States for postponed non tx commands execution */
-typedef enum _non_tx_cmd_state_t {
- E100_NON_TX_IDLE = 0, /* No queued NON-TX commands */
- E100_WAIT_TX_FINISH, /* Wait for completion of the TX activities */
- E100_WAIT_NON_TX_FINISH /* Wait for completion of the non TX command */
-} non_tx_cmd_state_t;
-
-/* some defines for the ipcb */
-#define IPCB_IP_CHECKSUM_ENABLE BIT_4
-#define IPCB_TCPUDP_CHECKSUM_ENABLE BIT_5
-#define IPCB_TCP_PACKET BIT_6
-#define IPCB_LARGESEND_ENABLE BIT_7
-#define IPCB_HARDWAREPARSING_ENABLE BIT_0
-#define IPCB_INSERTVLAN_ENABLE BIT_1
-#define IPCB_IP_ACTIVATION_DEFAULT IPCB_HARDWAREPARSING_ENABLE
-
-/* Transmit Buffer Descriptor (TBD)*/
-typedef struct _tbd_t {
- u32 tbd_buf_addr; /* Physical Transmit Buffer Address */
- u16 tbd_buf_cnt; /* Actual Count Of Bytes */
- u16 padd;
-} tbd_t __attribute__ ((__packed__));
-
-/* d102 specific fields */
-typedef struct _tcb_ipcb_t {
- u16 schedule_low;
- u8 ip_schedule;
- u8 ip_activation_high;
- u16 vlan;
- u8 ip_header_offset;
- u8 tcp_header_offset;
- union {
- u32 sec_rec_phys_addr;
- u32 tbd_zero_address;
- } tbd_sec_addr;
- union {
- u16 sec_rec_size;
- u16 tbd_zero_size;
- } tbd_sec_size;
- u16 total_tcp_payload;
-} tcb_ipcb_t __attribute__ ((__packed__));
-
-#define E100_TBD_ARRAY_SIZE (2+MAX_SKB_FRAGS)
-
-/* Transmit Command Block (TCB)*/
-struct _tcb_t {
- cb_header_t tcb_hdr;
- u32 tcb_tbd_ptr; /* TBD address */
- u16 tcb_cnt; /* Data Bytes In TCB past header */
- u8 tcb_thrshld; /* TX Threshold for FIFO Extender */
- u8 tcb_tbd_num;
-
- union {
- tcb_ipcb_t ipcb; /* d102 ipcb fields */
- tbd_t tbd_array[E100_TBD_ARRAY_SIZE];
- } tcbu;
-
- /* From here onward we can dump anything we want as long as the
- * size of the total structure is a multiple of a paragraph
- * boundary (i.e. 16-byte aligned).
- */
- tbd_t *tbd_ptr;
-
- u32 tcb_tbd_dflt_ptr; /* TBD address for non-segmented packet */
- u32 tcb_tbd_expand_ptr; /* TBD address for segmented packet */
-
- struct sk_buff *tcb_skb; /* the associated socket buffer */
- dma_addr_t tcb_phys; /* phys addr of the TCB */
-} __attribute__ ((__packed__));
-
-#define _TCB_T_
-typedef struct _tcb_t tcb_t;
-
-/* Receive Frame Descriptor (RFD) - will be using the simple model*/
-struct _rfd_t {
- /* 8255x */
- cb_header_t rfd_header;
- u32 rfd_rbd_ptr; /* Receive Buffer Descriptor Addr */
- u16 rfd_act_cnt; /* Number Of Bytes Received */
- u16 rfd_sz; /* Number Of Bytes In RFD */
- /* D102 aka Gamla */
- u16 vlanid;
- u8 rcvparserstatus;
- u8 reserved;
- u16 securitystatus;
- u8 checksumstatus;
- u8 zerocopystatus;
- u8 pad[8]; /* data should be 16 byte aligned */
- u8 data[RFD_DATA_SIZE];
-
-} __attribute__ ((__packed__));
-
-#define _RFD_T_
-typedef struct _rfd_t rfd_t;
-
-/* Receive Buffer Descriptor (RBD)*/
-typedef struct _rbd_t {
- u16 rbd_act_cnt; /* Number Of Bytes Received */
- u16 rbd_filler;
- u32 rbd_lnk_addr; /* Link To Next RBD */
- u32 rbd_rcb_addr; /* Receive Buffer Address */
- u16 rbd_sz; /* Receive Buffer Size */
- u16 rbd_filler1;
-} rbd_t __attribute__ ((__packed__));
-
-/*
- * This structure is used to maintain a FIFO access to a resource that is
- * maintained as a circular queue. The resource to be maintained is pointed
- * to by the "data" field in the structure below. In this driver the TCBs,
- * TBDs and RFDs are maintained as circular queues and are managed through
- * this structure.
- */
-typedef struct _buf_pool_t {
- unsigned int head; /* index to first used resource */
- unsigned int tail; /* index to last used resource */
- void *data; /* points to resource pool */
-} buf_pool_t;
-
-/*Rx skb holding structure*/
-struct rx_list_elem {
- struct list_head list_elem;
- dma_addr_t dma_addr;
- struct sk_buff *skb;
-};
-
-enum next_cu_cmd_e { RESUME_NO_WAIT = 0, RESUME_WAIT, START_WAIT };
-enum zlock_state_e { ZLOCK_INITIAL, ZLOCK_READING, ZLOCK_SLEEPING };
-enum tx_queue_stop_type { LONG_STOP = 0, SHORT_STOP };
-
-/* 64 bit aligned size */
-#define E100_SIZE_64A(X) ((sizeof(X) + 7) & ~0x7)
-
-typedef struct _bd_dma_able_t {
- char selftest[E100_SIZE_64A(self_test_t)];
- char stats_counters[E100_SIZE_64A(max_counters_t)];
-} bd_dma_able_t;
-
-/* bit masks for bool parameters */
-#define PRM_XSUMRX 0x00000001
-#define PRM_UCODE 0x00000002
-#define PRM_FC 0x00000004
-#define PRM_IFS 0x00000008
-#define PRM_BUNDLE_SMALL 0x00000010
-
-struct cfg_params {
- int e100_speed_duplex;
- int RxDescriptors;
- int TxDescriptors;
- int IntDelay;
- int BundleMax;
- int ber;
- u32 b_params;
-};
-struct ethtool_lpbk_data{
- dma_addr_t dma_handle;
- tcb_t *tcb;
- rfd_t *rfd;
-
-};
-
-struct e100_private {
- struct vlan_group *vlgrp;
- u32 flags; /* board management flags */
- u32 tx_per_underrun; /* number of good tx frames per underrun */
- unsigned int tx_count; /* count of tx frames, so we can request an interrupt */
- u8 tx_thld; /* stores transmit threshold */
- u16 eeprom_size;
- u32 pwa_no; /* PWA: xxxxxx-0xx */
- u8 perm_node_address[ETH_ALEN];
- struct list_head active_rx_list; /* list of rx buffers */
- struct list_head rx_struct_pool; /* pool of rx buffer struct headers */
- u16 rfd_size; /* size of the adapter's RFD struct */
-	int skb_req;		/* number of skbs needed by the adapter */
- u8 intr_mask; /* mask for interrupt status */
-
- void *dma_able; /* dma allocated structs */
- dma_addr_t dma_able_phys;
- self_test_t *selftest; /* pointer to self test area */
- dma_addr_t selftest_phys; /* phys addr of selftest */
- max_counters_t *stats_counters; /* pointer to stats table */
- dma_addr_t stat_cnt_phys; /* phys addr of stat counter area */
-
- stat_mode_t stat_mode; /* statistics mode: extended, TCO, basic */
- scb_t *scb; /* memory mapped ptr to 82557 scb */
-
- tcb_t *last_tcb; /* pointer to last tcb sent */
- buf_pool_t tcb_pool; /* adapter's TCB array */
- dma_addr_t tcb_phys; /* phys addr of start of TCBs */
-
- u16 cur_line_speed;
- u16 cur_dplx_mode;
-
- struct net_device *device;
- struct pci_dev *pdev;
- struct driver_stats drv_stats;
-
- u8 rev_id; /* adapter PCI revision ID */
-
- unsigned int phy_addr; /* address of PHY component */
- unsigned int PhyId; /* ID of PHY component */
- unsigned int PhyState; /* state for the fix squelch algorithm */
- unsigned int PhyDelay; /* delay for the fix squelch algorithm */
-
-	/* Lock definitions for the driver */
- spinlock_t bd_lock; /* board lock */
- spinlock_t bd_non_tx_lock; /* Non transmit command lock */
- spinlock_t config_lock; /* config block lock */
- spinlock_t mdi_access_lock; /* mdi lock */
-
- struct timer_list watchdog_timer; /* watchdog timer id */
-
- /* non-tx commands parameters */
- struct timer_list nontx_timer_id; /* non-tx timer id */
- struct list_head non_tx_cmd_list;
- non_tx_cmd_state_t non_tx_command_state;
- nxmit_cb_entry_t *same_cmd_entry[CB_MAX_NONTX_CMD];
-
- enum next_cu_cmd_e next_cu_cmd;
-
- /* Zero Locking Algorithm data members */
- enum zlock_state_e zlock_state;
- u8 zlock_read_data[16]; /* number of times each value 0-15 was read */
- u16 zlock_read_cnt; /* counts number of reads */
- ulong zlock_sleep_cnt; /* keeps track of "sleep" time */
-
- u8 config[CB_CFIG_BYTE_COUNT + CB_CFIG_D102_BYTE_COUNT];
-
- /* IFS params */
- u8 ifs_state;
- u8 ifs_value;
-
- struct cfg_params params; /* adapter's command line parameters */
-
- u32 speed_duplex_caps; /* adapter's speed/duplex capabilities */
-
- /* WOL params for ethtool */
- u32 wolsupported;
- u32 wolopts;
- u16 ip_lbytes;
- struct ethtool_lpbk_data loopback;
- struct timer_list blink_timer; /* led blink timer id */
-
-#ifdef CONFIG_PM
- u32 pci_state[16];
-#endif
-#ifdef E100_CU_DEBUG
- u8 last_cmd;
- u8 last_sub_cmd;
-#endif
-};
-
-#define E100_AUTONEG 0
-#define E100_SPEED_10_HALF 1
-#define E100_SPEED_10_FULL 2
-#define E100_SPEED_100_HALF 3
-#define E100_SPEED_100_FULL 4
-
-/********* function prototypes *************/
-extern int e100_open(struct net_device *);
-extern int e100_close(struct net_device *);
-extern void e100_isolate_driver(struct e100_private *bdp);
-extern unsigned char e100_hw_init(struct e100_private *);
-extern void e100_sw_reset(struct e100_private *bdp, u32 reset_cmd);
-extern u8 e100_start_cu(struct e100_private *bdp, tcb_t *tcb);
-extern void e100_free_non_tx_cmd(struct e100_private *bdp,
- nxmit_cb_entry_t *non_tx_cmd);
-extern nxmit_cb_entry_t *e100_alloc_non_tx_cmd(struct e100_private *bdp);
-extern unsigned char e100_exec_non_cu_cmd(struct e100_private *bdp,
- nxmit_cb_entry_t *cmd);
-extern unsigned char e100_selftest(struct e100_private *bdp, u32 *st_timeout,
- u32 *st_result);
-extern unsigned char e100_get_link_state(struct e100_private *bdp);
-extern unsigned char e100_wait_scb(struct e100_private *bdp);
-
-extern void e100_deisolate_driver(struct e100_private *bdp, u8 full_reset);
-extern unsigned char e100_configure_device(struct e100_private *bdp);
-#ifdef E100_CU_DEBUG
-extern unsigned char e100_cu_unknown_state(struct e100_private *bdp);
-#endif
-
-#define ROM_TEST_FAIL 0x01
-#define REGISTER_TEST_FAIL 0x02
-#define SELF_TEST_FAIL 0x04
-#define TEST_TIMEOUT 0x08
-
-enum test_offsets {
- test_link,
- test_eeprom,
- test_self_test,
- test_loopback_mac,
- test_loopback_phy,
- cable_diag,
- max_test_res, /* must be last */
-};
-
-#endif
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-/**********************************************************************
-* *
-* INTEL CORPORATION *
-* *
-* This software is supplied under the terms of the license included *
-* above. All use of this driver must be in accordance with the terms *
-* of that license. *
-* *
-* Module Name: e100_config.c *
-* *
-* Abstract: Functions for configuring the network adapter. *
-* *
-* Environment: This file is intended to be specific to the Linux *
-* operating system. *
-* *
-**********************************************************************/
-#include "e100_config.h"
-
-static void e100_config_long_rx(struct e100_private *bdp, unsigned char enable);
-
-static const u8 def_config[] = {
- CB_CFIG_BYTE_COUNT,
- 0x08, 0x00, 0x00, 0x00, 0x00, 0x32, 0x07, 0x01,
- 0x00, 0x2e, 0x00, 0x60, 0x00, 0xf2, 0xc8, 0x00,
- 0x40, 0xf2, 0x80, 0x3f, 0x05
-};
-
-/**
- * e100_config_init_82557 - config the 82557 adapter
- * @bdp: adapter's private data struct
- *
- * This routine will initialize the 82557 configure block.
- * All other init functions will only set values that are
- * different from the 82557 default.
- */
-void
-e100_config_init_82557(struct e100_private *bdp)
-{
- /* initialize config block */
- memcpy(bdp->config, def_config, sizeof (def_config));
- bdp->config[0] = CB_CFIG_BYTE_COUNT; /* just in case */
-
- e100_config_ifs(bdp);
-
- /*
- * Enable extended statistical counters (82558 and up) and TCO counters
- * (82559 and up) and set the statistical counters' mode in bdp
- *
- * stat. mode | TCO stat. bit (2) | Extended stat. bit (5)
- * ------------------------------------------------------------------
- * Basic (557) | 0 | 1
- * ------------------------------------------------------------------
- * Extended (558) | 0 | 0
- * ------------------------------------------------------------------
- * TCO (559) | 1 | 1
- * ------------------------------------------------------------------
- * Reserved | 1 | 0
- * ------------------------------------------------------------------
- */
- bdp->config[6] &= ~CB_CFIG_TCO_STAT;
- bdp->config[6] |= CB_CFIG_EXT_STAT_DIS;
- bdp->stat_mode = E100_BASIC_STATS;
-
- /* Setup for MII or 503 operation. The CRS+CDT bit should only be set */
- /* when operating in 503 mode. */
- if (bdp->phy_addr == 32) {
- bdp->config[8] &= ~CB_CFIG_503_MII;
- bdp->config[15] |= CB_CFIG_CRS_OR_CDT;
- } else {
- bdp->config[8] |= CB_CFIG_503_MII;
- bdp->config[15] &= ~CB_CFIG_CRS_OR_CDT;
- }
-
- e100_config_fc(bdp);
- e100_config_force_dplx(bdp);
- e100_config_promisc(bdp, false);
- e100_config_mulcast_enbl(bdp, false);
-}
-
-static void
-e100_config_init_82558(struct e100_private *bdp)
-{
-	/* MWI enable. This should be turned on only if the adapter is an
-	 * 82558/9 and if the PCI command reg. has enabled the MWI bit. */
- bdp->config[3] |= CB_CFIG_MWI_EN;
-
- bdp->config[6] &= ~CB_CFIG_EXT_TCB_DIS;
-
- if (bdp->rev_id >= D101MA_REV_ID) {
- /* this is 82559 and up - enable TCO counters */
- bdp->config[6] |= CB_CFIG_TCO_STAT;
- bdp->config[6] |= CB_CFIG_EXT_STAT_DIS;
- bdp->stat_mode = E100_TCO_STATS;
-
- if ((bdp->rev_id < D102_REV_ID) &&
- (bdp->params.b_params & PRM_XSUMRX) &&
- (bdp->pdev->device != 0x1209)) {
-
- bdp->flags |= DF_CSUM_OFFLOAD;
- bdp->config[9] |= 1;
- }
- } else {
- /* this is 82558 */
- bdp->config[6] &= ~CB_CFIG_TCO_STAT;
- bdp->config[6] &= ~CB_CFIG_EXT_STAT_DIS;
- bdp->stat_mode = E100_EXTENDED_STATS;
- }
-
- e100_config_long_rx(bdp, true);
-}
-
-static void
-e100_config_init_82550(struct e100_private *bdp)
-{
- /* The D102 chip allows for 32 config bytes. This value is
- * supposed to be in Byte 0. Just add the extra bytes to
- * what was already setup in the block. */
- bdp->config[0] += CB_CFIG_D102_BYTE_COUNT;
-
- /* now we need to enable the extended RFD. When this is
-	 * enabled, the immediate receive data buffer starts at offset
- * 32 from the RFD base address, instead of at offset 16. */
- bdp->config[7] |= CB_CFIG_EXTENDED_RFD;
-
- /* put the chip into D102 receive mode. This is necessary
- * for any parsing and offloading features. */
- bdp->config[22] = CB_CFIG_RECEIVE_GAMLA_MODE;
-
- /* set the flag if checksum offloading was enabled */
- if (bdp->params.b_params & PRM_XSUMRX) {
- bdp->flags |= DF_CSUM_OFFLOAD;
- }
-}
-
-/* Initialize the adapter's configure block */
-void
-e100_config_init(struct e100_private *bdp)
-{
- e100_config_init_82557(bdp);
-
- if (bdp->flags & IS_BACHELOR)
- e100_config_init_82558(bdp);
-
- if (bdp->rev_id >= D102_REV_ID)
- e100_config_init_82550(bdp);
-}
-
-/**
- * e100_force_config - force a configure command
- * @bdp: adapter's private data struct
- *
- * This routine will force a configure command to the adapter.
- * The command will be executed in polled mode as interrupts
- * are _disabled_ at this time.
- *
- * Returns:
- * true: if the configure command was successfully issued and completed
- * false: otherwise
- */
-unsigned char
-e100_force_config(struct e100_private *bdp)
-{
- spin_lock_bh(&(bdp->config_lock));
-
- bdp->config[0] = CB_CFIG_BYTE_COUNT;
- if (bdp->rev_id >= D102_REV_ID) {
- /* The D102 chip allows for 32 config bytes. This value is
- supposed to be in Byte 0. Just add the extra bytes to
- what was already setup in the block. */
- bdp->config[0] += CB_CFIG_D102_BYTE_COUNT;
- }
-
- spin_unlock_bh(&(bdp->config_lock));
-
-	// Although we call config outside the lock, there is no race
-	// condition because the config byte count is already at its maximum value
- return e100_config(bdp);
-}
-
-/**
- * e100_config - issue a configure command
- * @bdp: adapter's private data struct
- *
- * This routine will issue a configure command to the 82557.
- * This command will be executed in polled mode as interrupts
- * are _disabled_ at this time.
- *
- * Returns:
- * true: if the configure command was successfully issued and completed
- * false: otherwise
- */
-unsigned char
-e100_config(struct e100_private *bdp)
-{
- cb_header_t *pntcb_hdr;
- unsigned char res = true;
- nxmit_cb_entry_t *cmd;
-
- if (bdp->config[0] == 0) {
- goto exit;
- }
-
- if ((cmd = e100_alloc_non_tx_cmd(bdp)) == NULL) {
- res = false;
- goto exit;
- }
-
- pntcb_hdr = (cb_header_t *) cmd->non_tx_cmd;
- pntcb_hdr->cb_cmd = __constant_cpu_to_le16(CB_CONFIGURE);
-
- spin_lock_bh(&bdp->config_lock);
-
- if (bdp->config[0] < CB_CFIG_MIN_PARAMS) {
- bdp->config[0] = CB_CFIG_MIN_PARAMS;
- }
-
- /* Copy the device's config block to the device's memory */
- memcpy(cmd->non_tx_cmd->ntcb.config.cfg_byte, bdp->config,
- bdp->config[0]);
- /* reset number of bytes to config next time */
- bdp->config[0] = 0;
-
- spin_unlock_bh(&bdp->config_lock);
-
- res = e100_exec_non_cu_cmd(bdp, cmd);
-
-exit:
- if (netif_running(bdp->device))
- netif_wake_queue(bdp->device);
- return res;
-}
-
-/**
- * e100_config_fc - config flow-control state
- * @bdp: adapter's private data struct
- *
- * This routine will enable or disable flow control support in the adapter's
- * config block. Flow control is enabled only if requested via the command
- * line option, and if the link is flow-control capable (both us and the link
- * partner). However, if the link partner is capable of autoneg but not of
- * flow control, received PAUSE frames are still honored.
- */
-void
-e100_config_fc(struct e100_private *bdp)
-{
- unsigned char enable = false;
- /* 82557 doesn't support fc. Don't touch this option */
- if (!(bdp->flags & IS_BACHELOR))
- return;
-
- /* Enable fc if requested and if the link supports it */
- if ((bdp->params.b_params & PRM_FC) && (bdp->flags &
- (DF_LINK_FC_CAP | DF_LINK_FC_TX_ONLY))) {
- enable = true;
- }
-
- spin_lock_bh(&(bdp->config_lock));
-
- if (enable) {
- if (bdp->flags & DF_LINK_FC_TX_ONLY) {
- /* If link partner is capable of autoneg, but */
- /* not capable of flow control, Received PAUSE */
- /* frames are still honored, i.e., */
- /* transmitted frames would be paused by */
- /* incoming PAUSE frames */
- bdp->config[16] = DFLT_NO_FC_DELAY_LSB;
- bdp->config[17] = DFLT_NO_FC_DELAY_MSB;
- bdp->config[19] &= ~(CB_CFIG_FC_RESTOP | CB_CFIG_FC_RESTART);
- bdp->config[19] |= CB_CFIG_FC_REJECT;
- bdp->config[19] &= ~CB_CFIG_TX_FC_DIS;
- } else {
- bdp->config[16] = DFLT_FC_DELAY_LSB;
- bdp->config[17] = DFLT_FC_DELAY_MSB;
- bdp->config[19] |= CB_CFIG_FC_OPTS;
- bdp->config[19] &= ~CB_CFIG_TX_FC_DIS;
- }
- } else {
- bdp->config[16] = DFLT_NO_FC_DELAY_LSB;
- bdp->config[17] = DFLT_NO_FC_DELAY_MSB;
- bdp->config[19] &= ~CB_CFIG_FC_OPTS;
- bdp->config[19] |= CB_CFIG_TX_FC_DIS;
- }
- E100_CONFIG(bdp, 19);
- spin_unlock_bh(&(bdp->config_lock));
-
- return;
-}
-
-/**
- * e100_config_promisc - configure promiscuous mode
- * @bdp: adapter's private data struct
- * @enable: should we enable this option or not
- *
- * This routine will enable or disable promiscuous mode
- * in the adapter's config block.
- */
-void
-e100_config_promisc(struct e100_private *bdp, unsigned char enable)
-{
- spin_lock_bh(&(bdp->config_lock));
-
- /* if in promiscuous mode, save bad frames */
- if (enable) {
-
- if (!(bdp->config[6] & CB_CFIG_SAVE_BAD_FRAMES)) {
- bdp->config[6] |= CB_CFIG_SAVE_BAD_FRAMES;
- E100_CONFIG(bdp, 6);
- }
-
- if (bdp->config[7] & (u8) BIT_0) {
- bdp->config[7] &= (u8) (~BIT_0);
- E100_CONFIG(bdp, 7);
- }
-
- if (!(bdp->config[15] & CB_CFIG_PROMISCUOUS)) {
- bdp->config[15] |= CB_CFIG_PROMISCUOUS;
- E100_CONFIG(bdp, 15);
- }
-
- } else { /* not in promiscuous mode */
-
- if (bdp->config[6] & CB_CFIG_SAVE_BAD_FRAMES) {
- bdp->config[6] &= ~CB_CFIG_SAVE_BAD_FRAMES;
- E100_CONFIG(bdp, 6);
- }
-
- if (!(bdp->config[7] & (u8) BIT_0)) {
- bdp->config[7] |= (u8) (BIT_0);
- E100_CONFIG(bdp, 7);
- }
-
- if (bdp->config[15] & CB_CFIG_PROMISCUOUS) {
- bdp->config[15] &= ~CB_CFIG_PROMISCUOUS;
- E100_CONFIG(bdp, 15);
- }
- }
-
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-/**
- * e100_config_mulcast_enbl - configure allmulti mode
- * @bdp: adapter's private data struct
- * @enable: should we enable this option or not
- *
- * This routine will enable or disable reception of all multicast packets
- * in the adapter's config block.
- */
-void
-e100_config_mulcast_enbl(struct e100_private *bdp, unsigned char enable)
-{
- spin_lock_bh(&(bdp->config_lock));
-
-	/* this flag is used to enable receiving all multicast packets */
- if (enable) {
- if (!(bdp->config[21] & CB_CFIG_MULTICAST_ALL)) {
- bdp->config[21] |= CB_CFIG_MULTICAST_ALL;
- E100_CONFIG(bdp, 21);
- }
-
- } else {
- if (bdp->config[21] & CB_CFIG_MULTICAST_ALL) {
- bdp->config[21] &= ~CB_CFIG_MULTICAST_ALL;
- E100_CONFIG(bdp, 21);
- }
- }
-
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-/**
- * e100_config_ifs - configure the IFS parameter
- * @bdp: adapter's private data struct
- *
- * This routine will configure the adaptive IFS value
- * in the adapter's config block. IFS values are only
- * relevant in half duplex, so set to 0 in full duplex.
- */
-void
-e100_config_ifs(struct e100_private *bdp)
-{
- u8 value = 0;
-
- spin_lock_bh(&(bdp->config_lock));
-
-	/* The IFS value only needs to be specified in half-duplex mode */
- if (bdp->cur_dplx_mode == HALF_DUPLEX) {
- value = (u8) bdp->ifs_value;
- }
-
- if (bdp->config[2] != value) {
- bdp->config[2] = value;
- E100_CONFIG(bdp, 2);
- }
-
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-/**
- * e100_config_force_dplx - configure the forced full duplex mode
- * @bdp: adapter's private data struct
- *
- * This routine will enable or disable force full duplex
- * in the adapter's config block. If the PHY is 503, and
- * the duplex is full, consider the adapter forced.
- */
-void
-e100_config_force_dplx(struct e100_private *bdp)
-{
- spin_lock_bh(&(bdp->config_lock));
-
- /* We must force full duplex on if we are using PHY 0, and we are */
- /* supposed to run in FDX mode. We do this because the e100 has only */
- /* one FDX# input pin, and that pin will be connected to PHY 1. */
-	/* The 'if' condition below was changed to fix a performance problem
-	 * at 10 Mbps full duplex: the PHY was being forced to full duplex
-	 * while the MAC was not, because cur_dplx_mode was not being set to
-	 * 2 by SetupPhy. The condition was initially:
-	 *   if ((bdp->phy_addr == 0) && (bdp->cur_dplx_mode == 2))
-	 * It has been changed so that the MAC is forced to full duplex
-	 * simply if the user has forced full duplex. */
- /* The rest of the fix is in the PhyDetect code. */
- if ((bdp->params.e100_speed_duplex == E100_SPEED_10_FULL) ||
- (bdp->params.e100_speed_duplex == E100_SPEED_100_FULL) ||
- ((bdp->phy_addr == 32) && (bdp->cur_dplx_mode == FULL_DUPLEX))) {
- if (!(bdp->config[19] & (u8) CB_CFIG_FORCE_FDX)) {
- bdp->config[19] |= (u8) CB_CFIG_FORCE_FDX;
- E100_CONFIG(bdp, 19);
- }
-
- } else {
- if (bdp->config[19] & (u8) CB_CFIG_FORCE_FDX) {
- bdp->config[19] &= (u8) (~CB_CFIG_FORCE_FDX);
- E100_CONFIG(bdp, 19);
- }
- }
-
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-/**
- * e100_config_long_rx
- * @bdp: adapter's private data struct
- * @enable: should we enable this option or not
- *
- * This routine will enable or disable reception of larger packets.
- * This is needed by VLAN implementations.
- */
-static void
-e100_config_long_rx(struct e100_private *bdp, unsigned char enable)
-{
- if (enable) {
- if (!(bdp->config[18] & CB_CFIG_LONG_RX_OK)) {
- bdp->config[18] |= CB_CFIG_LONG_RX_OK;
- E100_CONFIG(bdp, 18);
- }
-
- } else {
- if ((bdp->config[18] & CB_CFIG_LONG_RX_OK)) {
- bdp->config[18] &= ~CB_CFIG_LONG_RX_OK;
- E100_CONFIG(bdp, 18);
- }
- }
-}
-
-/**
- * e100_config_wol
- * @bdp: adapter's private data struct
- *
- * This sets configuration options for PHY and Magic Packet WoL
- */
-void
-e100_config_wol(struct e100_private *bdp)
-{
- spin_lock_bh(&(bdp->config_lock));
-
- if (bdp->wolopts & WAKE_PHY) {
- bdp->config[9] |= CB_LINK_STATUS_WOL;
- }
- else {
- /* Disable PHY WoL */
- bdp->config[9] &= ~CB_LINK_STATUS_WOL;
- }
-
- if (bdp->wolopts & WAKE_MAGIC) {
- bdp->config[19] &= ~CB_DISABLE_MAGPAK_WAKE;
- }
- else {
- /* Disable Magic Packet WoL */
- bdp->config[19] |= CB_DISABLE_MAGPAK_WAKE;
- }
-
- E100_CONFIG(bdp, 19);
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-void
-e100_config_vlan_drop(struct e100_private *bdp, unsigned char enable)
-{
- spin_lock_bh(&(bdp->config_lock));
- if (enable) {
- if (!(bdp->config[22] & CB_CFIG_VLAN_DROP_ENABLE)) {
- bdp->config[22] |= CB_CFIG_VLAN_DROP_ENABLE;
- E100_CONFIG(bdp, 22);
- }
-
- } else {
- if ((bdp->config[22] & CB_CFIG_VLAN_DROP_ENABLE)) {
- bdp->config[22] &= ~CB_CFIG_VLAN_DROP_ENABLE;
- E100_CONFIG(bdp, 22);
- }
- }
- spin_unlock_bh(&(bdp->config_lock));
-}
-
-/**
- * e100_config_loopback_mode
- * @bdp: adapter's private data struct
- * @mode: loopback mode(phy/mac/none)
- *
- */
-unsigned char
-e100_config_loopback_mode(struct e100_private *bdp, u8 mode)
-{
- unsigned char bc_changed = false;
- u8 config_byte;
-
- spin_lock_bh(&(bdp->config_lock));
-
- switch (mode) {
- case NO_LOOPBACK:
- config_byte = CB_CFIG_LOOPBACK_NORMAL;
- break;
- case MAC_LOOPBACK:
- config_byte = CB_CFIG_LOOPBACK_INTERNAL;
- break;
- case PHY_LOOPBACK:
- config_byte = CB_CFIG_LOOPBACK_EXTERNAL;
- break;
- default:
- printk(KERN_NOTICE "e100: e100_config_loopback_mode: "
- "Invalid argument 'mode': %d\n", mode);
- goto exit;
- }
-
- if ((bdp->config[10] & CB_CFIG_LOOPBACK_MODE) != config_byte) {
-
- bdp->config[10] &= (~CB_CFIG_LOOPBACK_MODE);
- bdp->config[10] |= config_byte;
- E100_CONFIG(bdp, 10);
- bc_changed = true;
- }
-
-exit:
- spin_unlock_bh(&(bdp->config_lock));
- return bc_changed;
-}
-unsigned char
-e100_config_tcb_ext_enable(struct e100_private *bdp, unsigned char enable)
-{
- unsigned char bc_changed = false;
-
- spin_lock_bh(&(bdp->config_lock));
-
- if (enable) {
- if (bdp->config[6] & CB_CFIG_EXT_TCB_DIS) {
-
- bdp->config[6] &= (~CB_CFIG_EXT_TCB_DIS);
- E100_CONFIG(bdp, 6);
- bc_changed = true;
- }
-
- } else {
- if (!(bdp->config[6] & CB_CFIG_EXT_TCB_DIS)) {
-
- bdp->config[6] |= CB_CFIG_EXT_TCB_DIS;
- E100_CONFIG(bdp, 6);
- bc_changed = true;
- }
- }
- spin_unlock_bh(&(bdp->config_lock));
-
- return bc_changed;
-}
-unsigned char
-e100_config_dynamic_tbd(struct e100_private *bdp, unsigned char enable)
-{
- unsigned char bc_changed = false;
-
- spin_lock_bh(&(bdp->config_lock));
-
- if (enable) {
- if (!(bdp->config[7] & CB_CFIG_DYNTBD_EN)) {
-
- bdp->config[7] |= CB_CFIG_DYNTBD_EN;
- E100_CONFIG(bdp, 7);
- bc_changed = true;
- }
-
- } else {
- if (bdp->config[7] & CB_CFIG_DYNTBD_EN) {
-
- bdp->config[7] &= (~CB_CFIG_DYNTBD_EN);
- E100_CONFIG(bdp, 7);
- bc_changed = true;
- }
- }
- spin_unlock_bh(&(bdp->config_lock));
-
- return bc_changed;
-}
-
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#ifndef _E100_CONFIG_INC_
-#define _E100_CONFIG_INC_
-
-#include "e100.h"
-
-#define E100_CONFIG(bdp, X) ((bdp)->config[0] = max_t(u8, (bdp)->config[0], (X)+1))
-
-#define CB_CFIG_MIN_PARAMS 8
-
-/* byte 0 bit definitions*/
-#define CB_CFIG_BYTE_COUNT_MASK BIT_0_5 /* Byte count occupies bit 5-0 */
-
-/* byte 1 bit definitions*/
-#define CB_CFIG_RXFIFO_LIMIT_MASK BIT_0_4 /* RxFifo limit mask */
-#define CB_CFIG_TXFIFO_LIMIT_MASK BIT_4_7 /* TxFifo limit mask */
-
-/* byte 2 bit definitions -- ADAPTIVE_IFS*/
-
-/* word 3 bit definitions -- RESERVED*/
-/* Changed for 82558 enhancements */
-/* byte 3 bit definitions */
-#define CB_CFIG_MWI_EN BIT_0 /* Enable MWI on PCI bus */
-#define CB_CFIG_TYPE_EN BIT_1 /* Type Enable */
-#define CB_CFIG_READAL_EN BIT_2 /* Enable Read Align */
-#define CB_CFIG_TERMCL_EN BIT_3 /* Cache line write */
-
-/* byte 4 bit definitions*/
-#define CB_CFIG_RX_MIN_DMA_MASK BIT_0_6 /* Rx minimum DMA count mask */
-
-/* byte 5 bit definitions*/
-#define CB_CFIG_TX_MIN_DMA_MASK BIT_0_6 /* Tx minimum DMA count mask */
-#define CB_CFIG_DMBC_EN BIT_7 /* Enable Tx/Rx min. DMA counts */
-
-/* Changed for 82558 enhancements */
-/* byte 6 bit definitions*/
-#define CB_CFIG_LATE_SCB BIT_0 /* Update SCB After New Tx Start */
-#define CB_CFIG_DIRECT_DMA_DIS BIT_1 /* Direct DMA mode */
-#define CB_CFIG_TNO_INT BIT_2 /* Tx Not OK Interrupt */
-#define CB_CFIG_TCO_STAT BIT_2 /* TCO statistics in 559 and above */
-#define CB_CFIG_CI_INT BIT_3 /* Command Complete Interrupt */
-#define CB_CFIG_EXT_TCB_DIS BIT_4 /* Extended TCB */
-#define CB_CFIG_EXT_STAT_DIS BIT_5 /* Extended Stats */
-#define CB_CFIG_SAVE_BAD_FRAMES BIT_7 /* Save Bad Frames Enabled */
-
-/* byte 7 bit definitions*/
-#define CB_CFIG_DISC_SHORT_FRAMES BIT_0 /* Discard Short Frames */
-#define CB_CFIG_DYNTBD_EN BIT_7 /* Enable dynamic TBD */
-/* Enable extended RFD's on D102 */
-#define CB_CFIG_EXTENDED_RFD BIT_5
-
-/* byte 8 bit definitions*/
-#define CB_CFIG_503_MII BIT_0 /* 503 vs. MII mode */
-
-/* byte 9 bit definitions -- pre-defined all zeros*/
-#define CB_LINK_STATUS_WOL BIT_5
-
-/* byte 10 bit definitions*/
-#define CB_CFIG_NO_SRCADR BIT_3 /* No Source Address Insertion */
-#define CB_CFIG_PREAMBLE_LEN BIT_4_5 /* Preamble Length */
-#define CB_CFIG_LOOPBACK_MODE BIT_6_7 /* Loopback Mode */
-#define CB_CFIG_LOOPBACK_NORMAL 0
-#define CB_CFIG_LOOPBACK_INTERNAL BIT_6
-#define CB_CFIG_LOOPBACK_EXTERNAL BIT_6_7
-
-/* byte 11 bit definitions*/
-#define CB_CFIG_LINEAR_PRIORITY BIT_0_2 /* Linear Priority */
-
-/* byte 12 bit definitions*/
-#define CB_CFIG_LINEAR_PRI_MODE BIT_0 /* Linear Priority mode */
-#define CB_CFIG_IFS_MASK BIT_4_7 /* Interframe Spacing mask */
-
-/* byte 13 bit definitions -- pre-defined all zeros*/
-
-/* byte 14 bit definitions -- pre-defined 0xf2*/
-
-/* byte 15 bit definitions*/
-#define CB_CFIG_PROMISCUOUS BIT_0 /* Promiscuous Mode Enable */
-#define CB_CFIG_BROADCAST_DIS BIT_1 /* Broadcast Mode Disable */
-#define CB_CFIG_CRS_OR_CDT BIT_7 /* CRS Or CDT */
-
-/* byte 16 bit definitions -- pre-defined all zeros*/
-#define DFLT_FC_DELAY_LSB 0x1f /* Delay for outgoing Pause frames */
-#define DFLT_NO_FC_DELAY_LSB 0x00 /* no flow control default value */
-
-/* byte 17 bit definitions -- pre-defined 0x40*/
-#define DFLT_FC_DELAY_MSB 0x01 /* Delay for outgoing Pause frames */
-#define DFLT_NO_FC_DELAY_MSB 0x40 /* no flow control default value */
-
-/* byte 18 bit definitions*/
-#define CB_CFIG_STRIPPING BIT_0 /* Stripping Disabled */
-#define CB_CFIG_PADDING BIT_1 /* Padding Disabled */
-#define CB_CFIG_CRC_IN_MEM BIT_2 /* Transfer CRC To Memory */
-
-/* byte 19 bit definitions*/
-#define CB_CFIG_TX_ADDR_WAKE BIT_0 /* Address Wakeup */
-#define CB_DISABLE_MAGPAK_WAKE BIT_1 /* Magic Packet Wakeup disable */
-/* Changed TX_FC_EN to TX_FC_DIS because 0 enables, 1 disables. Jul 8, 1999 */
-#define CB_CFIG_TX_FC_DIS BIT_2 /* Tx Flow Control Disable */
-#define CB_CFIG_FC_RESTOP BIT_3 /* Rx Flow Control Restop */
-#define CB_CFIG_FC_RESTART BIT_4 /* Rx Flow Control Restart */
-#define CB_CFIG_FC_REJECT BIT_5 /* Rx Flow Control Reject */
-#define CB_CFIG_FC_OPTS (CB_CFIG_FC_RESTOP | CB_CFIG_FC_RESTART | CB_CFIG_FC_REJECT)
-
-/* end 82558/9 specifics */
-
-#define CB_CFIG_FORCE_FDX BIT_6 /* Force Full Duplex */
-#define CB_CFIG_FDX_ENABLE BIT_7 /* Full Duplex Enabled */
-
-/* byte 20 bit definitions*/
-#define CB_CFIG_MULTI_IA BIT_6 /* Multiple IA Addr */
-
-/* byte 21 bit definitions*/
-#define CB_CFIG_MULTICAST_ALL BIT_3 /* Multicast All */
-
-/* byte 22 bit defines */
-#define CB_CFIG_RECEIVE_GAMLA_MODE BIT_0 /* D102 receive mode */
-#define CB_CFIG_VLAN_DROP_ENABLE BIT_1 /* vlan stripping */
-
-#define CB_CFIG_LONG_RX_OK BIT_3
-
-#define NO_LOOPBACK 0
-#define MAC_LOOPBACK 0x01
-#define PHY_LOOPBACK 0x02
-
-/* function prototypes */
-extern void e100_config_init(struct e100_private *bdp);
-extern void e100_config_init_82557(struct e100_private *bdp);
-extern unsigned char e100_force_config(struct e100_private *bdp);
-extern unsigned char e100_config(struct e100_private *bdp);
-extern void e100_config_fc(struct e100_private *bdp);
-extern void e100_config_promisc(struct e100_private *bdp, unsigned char enable);
-extern void e100_config_brdcast_dsbl(struct e100_private *bdp);
-extern void e100_config_mulcast_enbl(struct e100_private *bdp,
- unsigned char enable);
-extern void e100_config_ifs(struct e100_private *bdp);
-extern void e100_config_force_dplx(struct e100_private *bdp);
-extern u8 e100_config_loopback_mode(struct e100_private *bdp, u8 mode);
-extern u8 e100_config_dynamic_tbd(struct e100_private *bdp, u8 enable);
-extern u8 e100_config_tcb_ext_enable(struct e100_private *bdp, u8 enable);
-extern void e100_config_vlan_drop(struct e100_private *bdp, unsigned char enable);
-#endif /* _E100_CONFIG_INC_ */
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-/**********************************************************************
-* *
-* INTEL CORPORATION *
-* *
-* This software is supplied under the terms of the license included *
-* above. All use of this driver must be in accordance with the terms *
-* of that license. *
-* *
-* Module Name: e100_eeprom.c *
-* *
-* Abstract: This module contains routines to read and write to a *
-* serial EEPROM *
-* *
-* Environment: This file is intended to be specific to the Linux *
-* operating system. *
-* *
-**********************************************************************/
-#include "e100.h"
-
-#define CSR_EEPROM_CONTROL_FIELD(bdp) ((bdp)->scb->scb_eprm_cntrl)
-
-#define CSR_GENERAL_CONTROL2_FIELD(bdp) \
- ((bdp)->scb->scb_ext.d102_scb.scb_gen_ctrl2)
-
-#define EEPROM_STALL_TIME 4
-#define EEPROM_CHECKSUM ((u16) 0xBABA)
-#define EEPROM_MAX_WORD_SIZE 256
-
-void e100_eeprom_cleanup(struct e100_private *adapter);
-u16 e100_eeprom_calculate_chksum(struct e100_private *adapter);
-static void e100_eeprom_write_word(struct e100_private *adapter, u16 reg,
- u16 data);
-void e100_eeprom_write_block(struct e100_private *adapter, u16 start, u16 *data,
- u16 size);
-u16 e100_eeprom_size(struct e100_private *adapter);
-u16 e100_eeprom_read(struct e100_private *adapter, u16 reg);
-
-static void shift_out_bits(struct e100_private *adapter, u16 data, u16 count);
-static u16 shift_in_bits(struct e100_private *adapter);
-static void raise_clock(struct e100_private *adapter, u16 *x);
-static void lower_clock(struct e100_private *adapter, u16 *x);
-static u16 eeprom_wait_cmd_done(struct e100_private *adapter);
-static void eeprom_stand_by(struct e100_private *adapter);
-
-//----------------------------------------------------------------------------------------
-// Procedure: eeprom_set_semaphore
-//
-// Description: This function sets (writes 1) the Gamla EEPROM semaphore bit (bit 23, word 0x1C in the CSR).
-//
-// Arguments:
-// Adapter - Adapter context
-//
-// Returns: true if success
-// else return false
-//
-//----------------------------------------------------------------------------------------
-
-inline u8
-eeprom_set_semaphore(struct e100_private *adapter)
-{
- u16 data = 0;
- unsigned long expiration_time = jiffies + HZ / 100 + 1;
-
- do {
- // Get current value of General Control 2
- data = readb(&CSR_GENERAL_CONTROL2_FIELD(adapter));
-
- // Set bit 23 word 0x1C in the CSR.
- data |= SCB_GCR2_EEPROM_ACCESS_SEMAPHORE;
- writeb(data, &CSR_GENERAL_CONTROL2_FIELD(adapter));
-
- // Check to see if this bit set or not.
- data = readb(&CSR_GENERAL_CONTROL2_FIELD(adapter));
-
- if (data & SCB_GCR2_EEPROM_ACCESS_SEMAPHORE) {
- return true;
- }
-
- if (time_before(jiffies, expiration_time))
- yield();
- else
- return false;
-
- } while (true);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: eeprom_reset_semaphore
-//
-// Description: This function resets (writes 0) the Gamla EEPROM semaphore bit
-// (bit 23, word 0x1C in the CSR).
-//
-// Arguments: struct e100_private * adapter - Adapter context
-//----------------------------------------------------------------------------------------
-
-inline void
-eeprom_reset_semaphore(struct e100_private *adapter)
-{
- u16 data = 0;
-
- data = readb(&CSR_GENERAL_CONTROL2_FIELD(adapter));
- data &= ~(SCB_GCR2_EEPROM_ACCESS_SEMAPHORE);
- writeb(data, &CSR_GENERAL_CONTROL2_FIELD(adapter));
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: e100_eeprom_size
-//
-// Description: This routine determines the size of the EEPROM. This value should be
-// checked for validity, i.e. that it is neither too big nor too small. The size returned
-// is then passed to the read/write functions.
-//
-// Returns:
-// Size of the eeprom, or zero if an error occurred
-//----------------------------------------------------------------------------------------
-u16
-e100_eeprom_size(struct e100_private *adapter)
-{
- u16 x, size = 1; // must be one to accumulate a product
-
- // if we've already stored this data, read from memory
- if (adapter->eeprom_size) {
- return adapter->eeprom_size;
- }
- // otherwise, read from the eeprom
- // Set EEPROM semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- if (!eeprom_set_semaphore(adapter))
- return 0;
- }
- // enable the eeprom by setting EECS.
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- x &= ~(EEDI | EEDO | EESK);
- x |= EECS;
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
-
- // write the read opcode
- shift_out_bits(adapter, EEPROM_READ_OPCODE, 3);
-
- // experiment to discover the size of the eeprom. request register zero
- // and wait for the eeprom to tell us it has accepted the entire address.
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- do {
- size *= 2; // each bit of address doubles eeprom size
- x |= EEDO; // set bit to detect "dummy zero"
- x &= ~EEDI; // address consists of all zeros
-
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status));
- udelay(EEPROM_STALL_TIME);
- raise_clock(adapter, &x);
- lower_clock(adapter, &x);
-
- // check for "dummy zero"
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- if (size > EEPROM_MAX_WORD_SIZE) {
- size = 0;
- break;
- }
- } while (x & EEDO);
-
- // read in the value requested
- (void) shift_in_bits(adapter);
- e100_eeprom_cleanup(adapter);
-
- // Clear EEPROM Semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- eeprom_reset_semaphore(adapter);
- }
-
- return size;
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: eeprom_address_size
-//
-// Description: determines the number of bits in an address for the eeprom;
-// acceptable values are 64, 128, and 256
-// Arguments: size of the eeprom
-// Returns: bits in an address for that size eeprom
-//----------------------------------------------------------------------------------------
-
-static inline int
-eeprom_address_size(u16 size)
-{
- int isize = size;
-
- return (ffs(isize) - 1);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: e100_eeprom_read
-//
-// Description: This routine serially reads one word out of the EEPROM.
-//
-// Arguments:
-// adapter - our adapter context
-// reg - EEPROM word to read.
-//
-// Returns:
-// Contents of EEPROM word (reg).
-//----------------------------------------------------------------------------------------
-
-u16
-e100_eeprom_read(struct e100_private *adapter, u16 reg)
-{
- u16 x, data, bits;
-
- // Set EEPROM semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- if (!eeprom_set_semaphore(adapter))
- return 0;
- }
- // eeprom size is initialized to zero
- if (!adapter->eeprom_size)
- adapter->eeprom_size = e100_eeprom_size(adapter);
-
- bits = eeprom_address_size(adapter->eeprom_size);
-
- // select EEPROM, reset bits, set EECS
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
-
- x &= ~(EEDI | EEDO | EESK);
- x |= EECS;
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
-
- // write the read opcode and register number in that order
- // The opcode is 3bits in length, reg is 'bits' bits long
- shift_out_bits(adapter, EEPROM_READ_OPCODE, 3);
- shift_out_bits(adapter, reg, bits);
-
- // Now read the data (16 bits) in from the selected EEPROM word
- data = shift_in_bits(adapter);
-
- e100_eeprom_cleanup(adapter);
-
- // Clear EEPROM Semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- eeprom_reset_semaphore(adapter);
- }
-
- return data;
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: shift_out_bits
-//
-// Description: This routine shifts data bits out to the EEPROM.
-//
-// Arguments:
-// data - data to send to the EEPROM.
-// count - number of data bits to shift out.
-//
-// Returns: (none)
-//----------------------------------------------------------------------------------------
-
-static void
-shift_out_bits(struct e100_private *adapter, u16 data, u16 count)
-{
- u16 x, mask;
-
- mask = 1 << (count - 1);
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- x &= ~(EEDO | EEDI);
-
- do {
- x &= ~EEDI;
- if (data & mask)
- x |= EEDI;
-
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
- raise_clock(adapter, &x);
- lower_clock(adapter, &x);
- mask = mask >> 1;
- } while (mask);
-
- x &= ~EEDI;
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: raise_clock
-//
-// Description: This routine raises the EEPROM's clock input (EESK)
-//
-// Arguments:
-// x - Ptr to the EEPROM control register's current value
-//
-// Returns: (none)
-//----------------------------------------------------------------------------------------
-
-void
-raise_clock(struct e100_private *adapter, u16 *x)
-{
- *x = *x | EESK;
- writew(*x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: lower_clock
-//
-// Description: This routine lowers the EEPROM's clock input (EESK)
-//
-// Arguments:
-// x - Ptr to the EEPROM control register's current value
-//
-// Returns: (none)
-//----------------------------------------------------------------------------------------
-
-void
-lower_clock(struct e100_private *adapter, u16 *x)
-{
- *x = *x & ~EESK;
- writew(*x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: shift_in_bits
-//
-// Description: This routine shifts data bits in from the EEPROM.
-//
-// Arguments:
-//
-// Returns:
-// The contents of that particular EEPROM word
-//----------------------------------------------------------------------------------------
-
-static u16
-shift_in_bits(struct e100_private *adapter)
-{
- u16 x, d, i;
-
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- x &= ~(EEDO | EEDI);
- d = 0;
-
- for (i = 0; i < 16; i++) {
- d <<= 1;
- raise_clock(adapter, &x);
-
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
-
- x &= ~EEDI;
- if (x & EEDO)
- d |= 1;
-
- lower_clock(adapter, &x);
- }
-
- return d;
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: e100_eeprom_cleanup
-//
-// Description: This routine returns the EEPROM to an idle state
-//----------------------------------------------------------------------------------------
-
-void
-e100_eeprom_cleanup(struct e100_private *adapter)
-{
- u16 x;
-
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
-
- x &= ~(EECS | EEDI);
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
-
- raise_clock(adapter, &x);
- lower_clock(adapter, &x);
-}
-
-//**********************************************************************************
-// Procedure: e100_eeprom_update_chksum
-//
-// Description: Calculates the checksum and writes it to the EEPROM.
-// It calculates the checksum according to the formula:
-// Checksum = 0xBABA - (sum of first 63 words).
-//
-//-----------------------------------------------------------------------------------
-u16
-e100_eeprom_calculate_chksum(struct e100_private *adapter)
-{
- u16 idx, xsum_index, checksum = 0;
-
- // eeprom size is initialized to zero
- if (!adapter->eeprom_size)
- adapter->eeprom_size = e100_eeprom_size(adapter);
-
- xsum_index = adapter->eeprom_size - 1;
- for (idx = 0; idx < xsum_index; idx++)
- checksum += e100_eeprom_read(adapter, idx);
-
- checksum = EEPROM_CHECKSUM - checksum;
- return checksum;
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: e100_eeprom_write_word
-//
-// Description: This routine writes a word to a specific EEPROM location without
-// taking the EEPROM semaphore or updating the checksum.
-// Use e100_eeprom_write_block for the EEPROM update
-// Arguments: reg - The EEPROM word that we are going to write to.
-// data - The data (word) that we are going to write to the EEPROM.
-//----------------------------------------------------------------------------------------
-static void
-e100_eeprom_write_word(struct e100_private *adapter, u16 reg, u16 data)
-{
- u16 x;
- u16 bits;
-
- bits = eeprom_address_size(adapter->eeprom_size);
-
- /* select EEPROM, mask off ASIC and reset bits, set EECS */
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- x &= ~(EEDI | EEDO | EESK);
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
- x |= EECS;
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
-
- shift_out_bits(adapter, EEPROM_EWEN_OPCODE, 5);
- shift_out_bits(adapter, reg, (u16) (bits - 2));
- if (!eeprom_wait_cmd_done(adapter))
- return;
-
-	/* send the write opcode and the new word to the EEPROM */
- shift_out_bits(adapter, EEPROM_WRITE_OPCODE, 3);
-
- /* select which word in the EEPROM that we are writing to */
- shift_out_bits(adapter, reg, bits);
-
- /* write the data to the selected EEPROM word */
- shift_out_bits(adapter, data, 16);
- if (!eeprom_wait_cmd_done(adapter))
- return;
-
- shift_out_bits(adapter, EEPROM_EWDS_OPCODE, 5);
- shift_out_bits(adapter, reg, (u16) (bits - 2));
- if (!eeprom_wait_cmd_done(adapter))
- return;
-
- e100_eeprom_cleanup(adapter);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: e100_eeprom_write_block
-//
-// Description: This routine writes a block of words starting from specified EEPROM
-// location and updates checksum
-// Arguments: start - The first EEPROM word to write to.
-//            data - Pointer to the words to write.
-//            size - Number of words to write.
-//----------------------------------------------------------------------------------------
-void
-e100_eeprom_write_block(struct e100_private *adapter, u16 start, u16 *data,
- u16 size)
-{
- u16 checksum;
- u16 i;
-
- if (!adapter->eeprom_size)
- adapter->eeprom_size = e100_eeprom_size(adapter);
-
- // Set EEPROM semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- if (!eeprom_set_semaphore(adapter))
- return;
- }
-
- for (i = 0; i < size; i++) {
- e100_eeprom_write_word(adapter, start + i, data[i]);
- }
- //Update checksum
- checksum = e100_eeprom_calculate_chksum(adapter);
- e100_eeprom_write_word(adapter, (adapter->eeprom_size - 1), checksum);
-
- // Clear EEPROM Semaphore.
- if (adapter->rev_id >= D102_REV_ID) {
- eeprom_reset_semaphore(adapter);
- }
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: eeprom_wait_cmd_done
-//
-// Description: This routine waits for the EEPROM to finish its command.
-// Specifically, it waits for EEDO (data out) to go high.
-// Returns: true - If the command finished
-// false - If the command never finished (EEDO stayed low)
-//----------------------------------------------------------------------------------------
-static u16
-eeprom_wait_cmd_done(struct e100_private *adapter)
-{
- u16 x;
- unsigned long expiration_time = jiffies + HZ / 100 + 1;
-
- eeprom_stand_by(adapter);
-
- do {
- rmb();
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- if (x & EEDO)
- return true;
- if (time_before(jiffies, expiration_time))
- yield();
- else
- return false;
- } while (true);
-}
-
-//----------------------------------------------------------------------------------------
-// Procedure: eeprom_stand_by
-//
-// Description: This routine lowers the EEPROM chip select (EECS) for a few microseconds.
-//----------------------------------------------------------------------------------------
-static void
-eeprom_stand_by(struct e100_private *adapter)
-{
- u16 x;
-
- x = readw(&CSR_EEPROM_CONTROL_FIELD(adapter));
- x &= ~(EECS | EESK);
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
- x |= EECS;
- writew(x, &CSR_EEPROM_CONTROL_FIELD(adapter));
- readw(&(adapter->scb->scb_status)); /* flush command to card */
- udelay(EEPROM_STALL_TIME);
-}
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-/**********************************************************************
-* *
-* INTEL CORPORATION *
-* *
-* This software is supplied under the terms of the license included *
-* above. All use of this driver must be in accordance with the terms *
-* of that license. *
-* *
-* Module Name: e100_main.c *
-* *
-* Abstract: Functions for the driver entry points like load, *
-* unload, open and close. All board specific calls made *
-* by the network interface section of the driver. *
-* *
-* Environment: This file is intended to be specific to the Linux *
-* operating system. *
-* *
-**********************************************************************/
-
-/* Change Log
- *
- * 2.3.36 11/13/03
- * o Moved to 2.6 APIs: pci_name() and free_netdev().
- * o Removed some __devinit from some functions that shouldn't be marked
- * as such (Anton Blanchard [anton@samba.org]).
- *
- * 2.3.33 10/21/03
- * o Bug fix (Bugzilla 97908): Loading e100 was causing crash on Itanium2
- * with HP chipset
- * o Bug fix (Bugzilla 101583): e100 can't pass traffic with ipv6
- * o Bug fix (Bugzilla 101360): PRO/10+ can't pass traffic
- *
- * 2.3.27 08/08/03
- */
-
-#include <linux/config.h>
-#include <net/checksum.h>
-#include <linux/tcp.h>
-#include <linux/udp.h>
-#include "e100.h"
-#include "e100_ucode.h"
-#include "e100_config.h"
-#include "e100_phy.h"
-
-extern void e100_force_speed_duplex_to_phy(struct e100_private *bdp);
-
-static char e100_gstrings_stats[][ETH_GSTRING_LEN] = {
- "rx_packets", "tx_packets", "rx_bytes", "tx_bytes", "rx_errors",
- "tx_errors", "rx_dropped", "tx_dropped", "multicast", "collisions",
- "rx_length_errors", "rx_over_errors", "rx_crc_errors",
- "rx_frame_errors", "rx_fifo_errors", "rx_missed_errors",
- "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors",
- "tx_heartbeat_errors", "tx_window_errors",
-};
-#define E100_STATS_LEN sizeof(e100_gstrings_stats) / ETH_GSTRING_LEN
-
-static int e100_do_ethtool_ioctl(struct net_device *, struct ifreq *);
-static void e100_get_speed_duplex_caps(struct e100_private *);
-static int e100_ethtool_get_settings(struct net_device *, struct ifreq *);
-static int e100_ethtool_set_settings(struct net_device *, struct ifreq *);
-
-static int e100_ethtool_get_drvinfo(struct net_device *, struct ifreq *);
-static int e100_ethtool_eeprom(struct net_device *, struct ifreq *);
-
-#define E100_EEPROM_MAGIC 0x1234
-static int e100_ethtool_glink(struct net_device *, struct ifreq *);
-static int e100_ethtool_gregs(struct net_device *, struct ifreq *);
-static int e100_ethtool_nway_rst(struct net_device *, struct ifreq *);
-static int e100_ethtool_wol(struct net_device *, struct ifreq *);
-#ifdef CONFIG_PM
-static unsigned char e100_setup_filter(struct e100_private *bdp);
-static void e100_do_wol(struct pci_dev *pcid, struct e100_private *bdp);
-#endif
-static u16 e100_get_ip_lbytes(struct net_device *dev);
-extern void e100_config_wol(struct e100_private *bdp);
-extern u32 e100_run_diag(struct net_device *dev, u64 *test_info, u32 flags);
-static int e100_ethtool_test(struct net_device *, struct ifreq *);
-static int e100_ethtool_gstrings(struct net_device *, struct ifreq *);
-static char test_strings[][ETH_GSTRING_LEN] = {
- "Link test (on/offline)",
- "Eeprom test (on/offline)",
- "Self test (offline)",
- "Mac loopback (offline)",
- "Phy loopback (offline)",
- "Cable diagnostic (offline)"
-};
-
-static int e100_ethtool_led_blink(struct net_device *, struct ifreq *);
-
-static int e100_mii_ioctl(struct net_device *, struct ifreq *, int);
-
-static unsigned char e100_delayed_exec_non_cu_cmd(struct e100_private *,
- nxmit_cb_entry_t *);
-static void e100_free_nontx_list(struct e100_private *);
-static void e100_non_tx_background(unsigned long);
-static inline void e100_tx_skb_free(struct e100_private *bdp, tcb_t *tcb);
-/* Global Data structures and variables */
-char e100_copyright[] = "Copyright (c) 2003 Intel Corporation";
-char e100_driver_version[]="2.3.36-k1";
-const char *e100_full_driver_name = "Intel(R) PRO/100 Network Driver";
-char e100_short_driver_name[] = "e100";
-static int e100nics = 0;
-static void e100_vlan_rx_register(struct net_device *netdev, struct vlan_group
- *grp);
-static void e100_vlan_rx_add_vid(struct net_device *netdev, u16 vid);
-static void e100_vlan_rx_kill_vid(struct net_device *netdev, u16 vid);
-
-#ifdef CONFIG_PM
-static int e100_notify_reboot(struct notifier_block *, unsigned long event, void *ptr);
-static int e100_suspend(struct pci_dev *pcid, u32 state);
-static int e100_resume(struct pci_dev *pcid);
-static unsigned char e100_asf_enabled(struct e100_private *bdp);
-struct notifier_block e100_notifier_reboot = {
- .notifier_call = e100_notify_reboot,
- .next = NULL,
- .priority = 0
-};
-#endif
-
-/*********************************************************************/
-/*! This is a GCC extension to ANSI C.
- * See the item "Labeled Elements in Initializers" in the section
- * "Extensions to the C Language Family" of the GCC documentation.
- *********************************************************************/
-#define E100_PARAM_INIT { [0 ... E100_MAX_NIC] = -1 }
-
-/* All parameters are treated the same, as an integer array of values.
- * This macro just reduces the need to repeat the same declaration code
- * over and over (plus this helps to avoid typo bugs).
- */
-#define E100_PARAM(X, S) \
- static const int X[E100_MAX_NIC + 1] = E100_PARAM_INIT; \
- MODULE_PARM(X, "1-" __MODULE_STRING(E100_MAX_NIC) "i"); \
- MODULE_PARM_DESC(X, S);
-
-/* ====================================================================== */
-static u8 e100_D101M_checksum(struct e100_private *, struct sk_buff *);
-static u8 e100_D102_check_checksum(rfd_t *);
-static int e100_ioctl(struct net_device *, struct ifreq *, int);
-static int e100_change_mtu(struct net_device *, int);
-static int e100_xmit_frame(struct sk_buff *, struct net_device *);
-static unsigned char e100_init(struct e100_private *);
-static int e100_set_mac(struct net_device *, void *);
-struct net_device_stats *e100_get_stats(struct net_device *);
-
-static irqreturn_t e100intr(int, void *, struct pt_regs *);
-static void e100_print_brd_conf(struct e100_private *);
-static void e100_set_multi(struct net_device *);
-
-static u8 e100_pci_setup(struct pci_dev *, struct e100_private *);
-static u8 e100_sw_init(struct e100_private *);
-static void e100_tco_workaround(struct e100_private *);
-static unsigned char e100_alloc_space(struct e100_private *);
-static void e100_dealloc_space(struct e100_private *);
-static int e100_alloc_tcb_pool(struct e100_private *);
-static void e100_setup_tcb_pool(tcb_t *, unsigned int, struct e100_private *);
-static void e100_free_tcb_pool(struct e100_private *);
-static int e100_alloc_rfd_pool(struct e100_private *);
-static void e100_free_rfd_pool(struct e100_private *);
-
-static void e100_rd_eaddr(struct e100_private *);
-static void e100_rd_pwa_no(struct e100_private *);
-extern u16 e100_eeprom_read(struct e100_private *, u16);
-extern void e100_eeprom_write_block(struct e100_private *, u16, u16 *, u16);
-extern u16 e100_eeprom_size(struct e100_private *);
-u16 e100_eeprom_calculate_chksum(struct e100_private *adapter);
-
-static unsigned char e100_clr_cntrs(struct e100_private *);
-static unsigned char e100_load_microcode(struct e100_private *);
-static unsigned char e100_setup_iaaddr(struct e100_private *, u8 *);
-static unsigned char e100_update_stats(struct e100_private *bdp);
-
-static void e100_start_ru(struct e100_private *);
-static void e100_dump_stats_cntrs(struct e100_private *);
-
-static void e100_check_options(int board, struct e100_private *bdp);
-static void e100_set_int_option(int *, int, int, int, int, char *);
-static void e100_set_bool_option(struct e100_private *bdp, int, u32, int,
- char *);
-unsigned char e100_wait_exec_cmplx(struct e100_private *, u32, u8, u8);
-void e100_exec_cmplx(struct e100_private *, u32, u8);
-
-/**
- * e100_get_rx_struct - retrieve a cell to hold an skb from the pool
- * @bdp: adapter's private data struct
- *
- * Returns the new cell to hold sk_buff or %NULL.
- */
-static inline struct rx_list_elem *
-e100_get_rx_struct(struct e100_private *bdp)
-{
- struct rx_list_elem *rx_struct = NULL;
-
- if (!list_empty(&(bdp->rx_struct_pool))) {
- rx_struct = list_entry(bdp->rx_struct_pool.next,
- struct rx_list_elem, list_elem);
- list_del(&(rx_struct->list_elem));
- }
-
- return rx_struct;
-}
-
-/**
- * e100_alloc_skb - allocate an skb for the adapter
- * @bdp: adapter's private data struct
- *
- * Allocates an skb with enough room for the rfd and data, and reserves non-data space.
- * Returns the new cell with sk_buff or %NULL.
- */
-static inline struct rx_list_elem *
-e100_alloc_skb(struct e100_private *bdp)
-{
- struct sk_buff *new_skb;
- u32 skb_size = sizeof (rfd_t);
- struct rx_list_elem *rx_struct;
-
- new_skb = (struct sk_buff *) dev_alloc_skb(skb_size);
- if (new_skb) {
- /* The IP data should be
- DWORD aligned. since the ethernet header is 14 bytes long,
- we need to reserve 2 extra bytes so that the TCP/IP headers
- will be DWORD aligned. */
- skb_reserve(new_skb, 2);
- if ((rx_struct = e100_get_rx_struct(bdp)) == NULL)
- goto err;
- rx_struct->skb = new_skb;
- rx_struct->dma_addr = pci_map_single(bdp->pdev, new_skb->data,
- sizeof (rfd_t),
- PCI_DMA_FROMDEVICE);
- if (!rx_struct->dma_addr)
- goto err;
- skb_reserve(new_skb, bdp->rfd_size);
- return rx_struct;
- } else {
- return NULL;
- }
-
-err:
- dev_kfree_skb_irq(new_skb);
- return NULL;
-}
-
-/**
- * e100_add_skb_to_end - add an skb to the end of our rfd list
- * @bdp: adapter's private data struct
- * @rx_struct: rx_list_elem with the new skb
- *
- * Adds a newly allocated skb to the end of our rfd list.
- */
-inline void
-e100_add_skb_to_end(struct e100_private *bdp, struct rx_list_elem *rx_struct)
-{
- rfd_t *rfdn; /* The new rfd */
- rfd_t *rfd; /* The old rfd */
- struct rx_list_elem *rx_struct_last;
-
- (rx_struct->skb)->dev = bdp->device;
- rfdn = RFD_POINTER(rx_struct->skb, bdp);
- rfdn->rfd_header.cb_status = 0;
- rfdn->rfd_header.cb_cmd = __constant_cpu_to_le16(RFD_EL_BIT);
- rfdn->rfd_act_cnt = 0;
- rfdn->rfd_sz = __constant_cpu_to_le16(RFD_DATA_SIZE);
-
- pci_dma_sync_single(bdp->pdev, rx_struct->dma_addr, bdp->rfd_size,
- PCI_DMA_TODEVICE);
-
- if (!list_empty(&(bdp->active_rx_list))) {
- rx_struct_last = list_entry(bdp->active_rx_list.prev,
- struct rx_list_elem, list_elem);
- rfd = RFD_POINTER(rx_struct_last->skb, bdp);
- pci_dma_sync_single(bdp->pdev, rx_struct_last->dma_addr,
- 4, PCI_DMA_FROMDEVICE);
- put_unaligned(cpu_to_le32(rx_struct->dma_addr),
- ((u32 *) (&(rfd->rfd_header.cb_lnk_ptr))));
-
- pci_dma_sync_single(bdp->pdev, rx_struct_last->dma_addr,
- 8, PCI_DMA_TODEVICE);
- rfd->rfd_header.cb_cmd &=
- __constant_cpu_to_le16((u16) ~RFD_EL_BIT);
-
- pci_dma_sync_single(bdp->pdev, rx_struct_last->dma_addr,
- 4, PCI_DMA_TODEVICE);
- }
-
- list_add_tail(&(rx_struct->list_elem), &(bdp->active_rx_list));
-}
-
-static inline void
-e100_alloc_skbs(struct e100_private *bdp)
-{
- for (; bdp->skb_req > 0; bdp->skb_req--) {
- struct rx_list_elem *rx_struct;
-
- if ((rx_struct = e100_alloc_skb(bdp)) == NULL)
- return;
-
- e100_add_skb_to_end(bdp, rx_struct);
- }
-}
-
-void e100_tx_srv(struct e100_private *);
-u32 e100_rx_srv(struct e100_private *);
-
-void e100_watchdog(struct net_device *);
-void e100_refresh_txthld(struct e100_private *);
-void e100_manage_adaptive_ifs(struct e100_private *);
-void e100_clear_pools(struct e100_private *);
-static void e100_clear_structs(struct net_device *);
-static inline tcb_t *e100_prepare_xmit_buff(struct e100_private *,
- struct sk_buff *);
-static void e100_set_multi_exec(struct net_device *dev);
-
-MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
-MODULE_DESCRIPTION("Intel(R) PRO/100 Network Driver");
-MODULE_LICENSE("GPL");
-
-E100_PARAM(TxDescriptors, "Number of transmit descriptors");
-E100_PARAM(RxDescriptors, "Number of receive descriptors");
-E100_PARAM(XsumRX, "Disable or enable Receive Checksum offload");
-E100_PARAM(e100_speed_duplex, "Speed and Duplex settings");
-E100_PARAM(ucode, "Disable or enable microcode loading");
-E100_PARAM(ber, "Value for the BER correction algorithm");
-E100_PARAM(flow_control, "Disable or enable Ethernet PAUSE frames processing");
-E100_PARAM(IntDelay, "Value for CPU saver's interrupt delay");
-E100_PARAM(BundleSmallFr, "Disable or enable interrupt bundling of small frames");
-E100_PARAM(BundleMax, "Maximum number for CPU saver's packet bundling");
-E100_PARAM(IFS, "Disable or enable the adaptive IFS algorithm");
-
-/**
- * e100_exec_cmd - issue a command
- * @bdp: adapter's private data struct
- * @cmd_low: the command that is to be issued
- *
- * This general routine will issue a command to the e100.
- */
-static inline void
-e100_exec_cmd(struct e100_private *bdp, u8 cmd_low)
-{
- writeb(cmd_low, &(bdp->scb->scb_cmd_low));
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
-}
-
-/**
- * e100_wait_scb - wait for SCB to clear
- * @bdp: adapter's private data struct
- *
- * This routine checks to see if the e100 has accepted a command.
- * It does so by checking the command field in the SCB, which will
- * be zeroed by the e100 upon accepting a command. The loop waits
- * for up to 1 millisecond for command acceptance.
- *
- * Returns:
- * true if the SCB cleared within 1 millisecond.
- * false if it didn't clear within 1 millisecond
- */
-unsigned char
-e100_wait_scb(struct e100_private *bdp)
-{
- int i;
-
- /* loop on the scb for a few times */
- for (i = 0; i < 100; i++) {
- if (!readb(&bdp->scb->scb_cmd_low))
- return true;
- cpu_relax();
- }
-
- /* it didn't work. do it the slow way using udelay()s */
- for (i = 0; i < E100_MAX_SCB_WAIT; i++) {
- if (!readb(&bdp->scb->scb_cmd_low))
- return true;
- cpu_relax();
- udelay(1);
- }
-
- return false;
-}
-
-/**
- * e100_wait_exec_simple - issue a command
- * @bdp: adapter's private data struct
- * @scb_cmd_low: the command that is to be issued
- *
- * This general routine will issue a command to the e100 after waiting for
- * the previous command to finish.
- *
- * Returns:
- * true if the command was issued to the chip successfully
- * false if the command was not issued to the chip
- */
-inline unsigned char
-e100_wait_exec_simple(struct e100_private *bdp, u8 scb_cmd_low)
-{
- if (!e100_wait_scb(bdp)) {
- printk(KERN_DEBUG "e100: %s: e100_wait_exec_simple: failed\n",
- bdp->device->name);
-#ifdef E100_CU_DEBUG
- printk(KERN_ERR "e100: %s: Last command (%x/%x) "
- "timeout\n", bdp->device->name,
- bdp->last_cmd, bdp->last_sub_cmd);
- printk(KERN_ERR "e100: %s: Current simple command (%x) "
- "can't be executed\n",
- bdp->device->name, scb_cmd_low);
-#endif
- return false;
- }
- e100_exec_cmd(bdp, scb_cmd_low);
-#ifdef E100_CU_DEBUG
- bdp->last_cmd = scb_cmd_low;
- bdp->last_sub_cmd = 0;
-#endif
- return true;
-}
-
-void
-e100_exec_cmplx(struct e100_private *bdp, u32 phys_addr, u8 cmd)
-{
- writel(phys_addr, &(bdp->scb->scb_gen_ptr));
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
- e100_exec_cmd(bdp, cmd);
-}
-
-unsigned char
-e100_wait_exec_cmplx(struct e100_private *bdp, u32 phys_addr, u8 cmd, u8 sub_cmd)
-{
- if (!e100_wait_scb(bdp)) {
-#ifdef E100_CU_DEBUG
- printk(KERN_ERR "e100: %s: Last command (%x/%x) "
- "timeout\n", bdp->device->name,
- bdp->last_cmd, bdp->last_sub_cmd);
- printk(KERN_ERR "e100: %s: Current complex command "
- "(%x/%x) can't be executed\n",
- bdp->device->name, cmd, sub_cmd);
-#endif
- return false;
- }
- e100_exec_cmplx(bdp, phys_addr, cmd);
-#ifdef E100_CU_DEBUG
- bdp->last_cmd = cmd;
- bdp->last_sub_cmd = sub_cmd;
-#endif
- return true;
-}
-
-inline u8
-e100_wait_cus_idle(struct e100_private *bdp)
-{
- int i;
-
- /* loop on the scb for a few times */
- for (i = 0; i < 100; i++) {
- if (((readw(&(bdp->scb->scb_status)) & SCB_CUS_MASK) !=
- SCB_CUS_ACTIVE)) {
- return true;
- }
- cpu_relax();
- }
-
- for (i = 0; i < E100_MAX_CU_IDLE_WAIT; i++) {
- if (((readw(&(bdp->scb->scb_status)) & SCB_CUS_MASK) !=
- SCB_CUS_ACTIVE)) {
- return true;
- }
- cpu_relax();
- udelay(1);
- }
-
- return false;
-}
-
-/**
- * e100_disable_clear_intr - disable and clear/ack interrupts
- * @bdp: adapter's private data struct
- *
- * This routine disables interrupts at the hardware by setting
- * the M (mask) bit in the adapter's CSR SCB command word.
- * It also clears and acknowledges any pending interrupts.
- */
-static inline void
-e100_disable_clear_intr(struct e100_private *bdp)
-{
- u16 intr_status;
- /* Disable interrupts on our PCI board by setting the mask bit */
- writeb(SCB_INT_MASK, &bdp->scb->scb_cmd_hi);
- intr_status = readw(&bdp->scb->scb_status);
- /* ack and clear intrs */
- writew(intr_status, &bdp->scb->scb_status);
- readw(&bdp->scb->scb_status);
-}
-
-/**
- * e100_set_intr_mask - set interrupts
- * @bdp: adapter's private data struct
- *
- * This routine enables interrupts at the hardware by clearing
- * the M (mask) bit in the adapter's CSR SCB command word.
- */
-static inline void
-e100_set_intr_mask(struct e100_private *bdp)
-{
- writeb(bdp->intr_mask, &bdp->scb->scb_cmd_hi);
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
-}
-
-static inline void
-e100_trigger_SWI(struct e100_private *bdp)
-{
- /* Trigger interrupt on our PCI board by asserting SWI bit */
- writeb(SCB_SOFT_INT, &bdp->scb->scb_cmd_hi);
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
-}
-
-static int
-e100_found1(struct pci_dev *pcid, const struct pci_device_id *ent)
-{
- static int first_time = true;
- struct net_device *dev = NULL;
- struct e100_private *bdp = NULL;
- int rc = 0;
- u16 cal_checksum, read_checksum;
-
- dev = alloc_etherdev(sizeof (struct e100_private));
- if (dev == NULL) {
- printk(KERN_ERR "e100: Not able to alloc etherdev struct\n");
- rc = -ENODEV;
- goto out;
- }
-
- SET_MODULE_OWNER(dev);
-
- if (first_time) {
- first_time = false;
- printk(KERN_NOTICE "%s - version %s\n",
- e100_full_driver_name, e100_driver_version);
- printk(KERN_NOTICE "%s\n", e100_copyright);
- printk(KERN_NOTICE "\n");
- }
-
- bdp = dev->priv;
- bdp->pdev = pcid;
- bdp->device = dev;
-
- pci_set_drvdata(pcid, dev);
- SET_NETDEV_DEV(dev, &pcid->dev);
-
- bdp->flags = 0;
- bdp->ifs_state = 0;
- bdp->ifs_value = 0;
- bdp->scb = 0;
-
- init_timer(&bdp->nontx_timer_id);
- bdp->nontx_timer_id.data = (unsigned long) bdp;
- bdp->nontx_timer_id.function = (void *) &e100_non_tx_background;
- INIT_LIST_HEAD(&(bdp->non_tx_cmd_list));
- bdp->non_tx_command_state = E100_NON_TX_IDLE;
-
- init_timer(&bdp->watchdog_timer);
- bdp->watchdog_timer.data = (unsigned long) dev;
- bdp->watchdog_timer.function = (void *) &e100_watchdog;
-
- if ((rc = e100_pci_setup(pcid, bdp)) != 0) {
- goto err_dev;
- }
-
- if ((rc = e100_alloc_space(bdp)) != 0) {
- goto err_pci;
- }
-
- if (((bdp->pdev->device > 0x1030)
- && (bdp->pdev->device < 0x103F))
- || ((bdp->pdev->device >= 0x1050)
- && (bdp->pdev->device <= 0x1057))
- || (bdp->pdev->device == 0x2449)
- || (bdp->pdev->device == 0x2459)
- || (bdp->pdev->device == 0x245D)) {
- bdp->rev_id = D101MA_REV_ID; /* workaround for ICH3 */
- bdp->flags |= IS_ICH;
- }
-
- if (bdp->rev_id == 0xff)
- bdp->rev_id = 1;
-
- if ((u8) bdp->rev_id >= D101A4_REV_ID)
- bdp->flags |= IS_BACHELOR;
-
- if ((u8) bdp->rev_id >= D102_REV_ID) {
- bdp->flags |= USE_IPCB;
- bdp->rfd_size = 32;
- } else {
- bdp->rfd_size = 16;
- }
-
- dev->vlan_rx_register = e100_vlan_rx_register;
- dev->vlan_rx_add_vid = e100_vlan_rx_add_vid;
- dev->vlan_rx_kill_vid = e100_vlan_rx_kill_vid;
- dev->irq = pcid->irq;
- dev->open = &e100_open;
- dev->hard_start_xmit = &e100_xmit_frame;
- dev->stop = &e100_close;
- dev->change_mtu = &e100_change_mtu;
- dev->get_stats = &e100_get_stats;
- dev->set_multicast_list = &e100_set_multi;
- dev->set_mac_address = &e100_set_mac;
- dev->do_ioctl = &e100_ioctl;
-
- if (bdp->flags & USE_IPCB)
- dev->features = NETIF_F_SG | NETIF_F_IP_CSUM |
- NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
-
- if ((rc = register_netdev(dev)) != 0) {
- goto err_dealloc;
- }
-
- e100_check_options(e100nics, bdp);
-
- if (!e100_init(bdp)) {
- printk(KERN_ERR "e100: Failed to initialize, instance #%d\n",
- e100nics);
- rc = -ENODEV;
- goto err_unregister_netdev;
- }
-
- /* Check if checksum is valid */
- cal_checksum = e100_eeprom_calculate_chksum(bdp);
- read_checksum = e100_eeprom_read(bdp, (bdp->eeprom_size - 1));
- if (cal_checksum != read_checksum) {
- printk(KERN_ERR "e100: Corrupted EEPROM on instance #%d\n",
- e100nics);
- rc = -ENODEV;
- goto err_unregister_netdev;
- }
-
- e100nics++;
-
- e100_get_speed_duplex_caps(bdp);
-
- printk(KERN_NOTICE
- "e100: %s: %s\n",
- bdp->device->name, "Intel(R) PRO/100 Network Connection");
- e100_print_brd_conf(bdp);
-
- bdp->wolsupported = 0;
- bdp->wolopts = 0;
- if (bdp->rev_id >= D101A4_REV_ID)
- bdp->wolsupported = WAKE_PHY | WAKE_MAGIC;
- if (bdp->rev_id >= D101MA_REV_ID)
- bdp->wolsupported |= WAKE_UCAST | WAKE_ARP;
-
- /* Check if WoL is enabled on EEPROM */
- if (e100_eeprom_read(bdp, EEPROM_ID_WORD) & BIT_5) {
- /* Magic Packet WoL is enabled on device by default */
- /* if EEPROM WoL bit is TRUE */
- bdp->wolopts = WAKE_MAGIC;
- }
-
- printk(KERN_NOTICE "\n");
-
- goto out;
-
-err_unregister_netdev:
- unregister_netdev(dev);
-err_dealloc:
- e100_dealloc_space(bdp);
-err_pci:
- iounmap(bdp->scb);
- pci_release_regions(pcid);
- pci_disable_device(pcid);
-err_dev:
- pci_set_drvdata(pcid, NULL);
- free_netdev(dev);
-out:
- return rc;
-}
-
-/**
- * e100_clear_structs - free resources
- * @dev: adapter's net_device struct
- *
- * Free all device specific structs, unmap i/o address, etc.
- */
-static void __devexit
-e100_clear_structs(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
-
- iounmap(bdp->scb);
- pci_release_regions(bdp->pdev);
- pci_disable_device(bdp->pdev);
-
- e100_dealloc_space(bdp);
- pci_set_drvdata(bdp->pdev, NULL);
- free_netdev(dev);
-}
-
-static void __devexit
-e100_remove1(struct pci_dev *pcid)
-{
- struct net_device *dev;
- struct e100_private *bdp;
-
- if (!(dev = (struct net_device *) pci_get_drvdata(pcid)))
- return;
-
- bdp = dev->priv;
-
- unregister_netdev(dev);
-
- e100_sw_reset(bdp, PORT_SELECTIVE_RESET);
-
- if (bdp->non_tx_command_state != E100_NON_TX_IDLE) {
- del_timer_sync(&bdp->nontx_timer_id);
- e100_free_nontx_list(bdp);
- bdp->non_tx_command_state = E100_NON_TX_IDLE;
- }
-
- e100_clear_structs(dev);
-
- --e100nics;
-}
-
-static struct pci_device_id e100_id_table[] = {
- {0x8086, 0x1229, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x2449, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1059, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1209, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1029, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1030, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1031, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1032, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1033, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1034, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1038, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1039, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x103A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x103B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x103C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x103D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x103E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1052, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x1055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x2459, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0x8086, 0x245D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
- {0,} /* This has to be the last entry*/
-};
-MODULE_DEVICE_TABLE(pci, e100_id_table);
-
-static struct pci_driver e100_driver = {
- .name = "e100",
- .id_table = e100_id_table,
- .probe = e100_found1,
- .remove = __devexit_p(e100_remove1),
-#ifdef CONFIG_PM
- .suspend = e100_suspend,
- .resume = e100_resume,
-#endif
-};
-
-static int __init
-e100_init_module(void)
-{
- int ret;
- ret = pci_module_init(&e100_driver);
-
- if(ret >= 0) {
-#ifdef CONFIG_PM
- register_reboot_notifier(&e100_notifier_reboot);
-#endif
- }
-
- return ret;
-}
-
-static void __exit
-e100_cleanup_module(void)
-{
-#ifdef CONFIG_PM
- unregister_reboot_notifier(&e100_notifier_reboot);
-#endif
-
- pci_unregister_driver(&e100_driver);
-}
-
-module_init(e100_init_module);
-module_exit(e100_cleanup_module);
-
-/**
- * e100_check_options - check command line options
- * @board: board number
- * @bdp: adapter's private data struct
- *
- * This routine does range checking on command-line options
- */
-void
-e100_check_options(int board, struct e100_private *bdp)
-{
- if (board >= E100_MAX_NIC) {
- printk(KERN_NOTICE
- "e100: No configuration available for board #%d\n",
- board);
- printk(KERN_NOTICE "e100: Using defaults for all values\n");
- board = E100_MAX_NIC;
- }
-
- e100_set_int_option(&(bdp->params.TxDescriptors), TxDescriptors[board],
- E100_MIN_TCB, E100_MAX_TCB, E100_DEFAULT_TCB,
- "TxDescriptor count");
-
- e100_set_int_option(&(bdp->params.RxDescriptors), RxDescriptors[board],
- E100_MIN_RFD, E100_MAX_RFD, E100_DEFAULT_RFD,
- "RxDescriptor count");
-
- e100_set_int_option(&(bdp->params.e100_speed_duplex),
- e100_speed_duplex[board], 0, 4,
- E100_DEFAULT_SPEED_DUPLEX, "speed/duplex mode");
-
- e100_set_int_option(&(bdp->params.ber), ber[board], 0, ZLOCK_MAX_ERRORS,
- E100_DEFAULT_BER, "Bit Error Rate count");
-
- e100_set_bool_option(bdp, XsumRX[board], PRM_XSUMRX, E100_DEFAULT_XSUM,
- "XsumRX value");
-
-	/* The default ucode value depends on the controller revision */
- if (bdp->rev_id >= D101MA_REV_ID) {
- e100_set_bool_option(bdp, ucode[board], PRM_UCODE,
- E100_DEFAULT_UCODE, "ucode value");
- } else {
- e100_set_bool_option(bdp, ucode[board], PRM_UCODE, false,
- "ucode value");
- }
-
- e100_set_bool_option(bdp, flow_control[board], PRM_FC, E100_DEFAULT_FC,
- "flow control value");
-
- e100_set_bool_option(bdp, IFS[board], PRM_IFS, E100_DEFAULT_IFS,
- "IFS value");
-
- e100_set_bool_option(bdp, BundleSmallFr[board], PRM_BUNDLE_SMALL,
- E100_DEFAULT_BUNDLE_SMALL_FR,
- "CPU saver bundle small frames value");
-
- e100_set_int_option(&(bdp->params.IntDelay), IntDelay[board], 0x0,
- 0xFFFF, E100_DEFAULT_CPUSAVER_INTERRUPT_DELAY,
- "CPU saver interrupt delay value");
-
- e100_set_int_option(&(bdp->params.BundleMax), BundleMax[board], 0x1,
- 0xFFFF, E100_DEFAULT_CPUSAVER_BUNDLE_MAX,
- "CPU saver bundle max value");
-
-}
-
-/**
- * e100_set_int_option - check and set an integer option
- * @option: a pointer to the relevant option field
- * @val: the value specified
- * @min: the minimum valid value
- * @max: the maximum valid value
- * @default_val: the default value
- * @name: the name of the option
- *
- * This routine does range checking on a command-line option.
- * If the option's value is '-1' use the specified default.
- * Otherwise, if the value is invalid, change it to the default.
- */
-void
-e100_set_int_option(int *option, int val, int min, int max, int default_val,
- char *name)
-{
- if (val == -1) { /* no value specified. use default */
- *option = default_val;
-
- } else if ((val < min) || (val > max)) {
- printk(KERN_NOTICE
- "e100: Invalid %s specified (%i). "
- "Valid range is %i-%i\n",
- name, val, min, max);
- printk(KERN_NOTICE "e100: Using default %s of %i\n", name,
- default_val);
- *option = default_val;
- } else {
- printk(KERN_INFO "e100: Using specified %s of %i\n", name, val);
- *option = val;
- }
-}
-
-/**
- * e100_set_bool_option - check and set a boolean option
- * @bdp: adapter's private data struct
- * @val: the value specified
- * @mask: the mask for the relevant option
- * @default_val: the default value
- * @name: the name of the option
- *
- * This routine checks a boolean command-line option.
- * If the option's value is '-1' use the specified default.
- * Otherwise, if the value is invalid (not 0 or 1),
- * change it to the default.
- */
-void
-e100_set_bool_option(struct e100_private *bdp, int val, u32 mask,
- int default_val, char *name)
-{
- if (val == -1) {
- if (default_val)
- bdp->params.b_params |= mask;
-
- } else if ((val != true) && (val != false)) {
- printk(KERN_NOTICE
- "e100: Invalid %s specified (%i). "
- "Valid values are %i/%i\n",
- name, val, false, true);
- printk(KERN_NOTICE "e100: Using default %s of %i\n", name,
- default_val);
-
- if (default_val)
- bdp->params.b_params |= mask;
- } else {
- printk(KERN_INFO "e100: Using specified %s of %i\n", name, val);
- if (val)
- bdp->params.b_params |= mask;
- }
-}
-
-int
-e100_open(struct net_device *dev)
-{
- struct e100_private *bdp;
- int rc = 0;
-
- bdp = dev->priv;
-
- /* setup the tcb pool */
- if (!e100_alloc_tcb_pool(bdp)) {
- rc = -ENOMEM;
- goto err_exit;
- }
- bdp->last_tcb = NULL;
-
- bdp->tcb_pool.head = 0;
- bdp->tcb_pool.tail = 1;
-
- e100_setup_tcb_pool((tcb_t *) bdp->tcb_pool.data,
- bdp->params.TxDescriptors, bdp);
-
- if (!e100_alloc_rfd_pool(bdp)) {
- rc = -ENOMEM;
- goto err_exit;
- }
-
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_CUC_LOAD_BASE, 0)) {
- rc = -EAGAIN;
- goto err_exit;
- }
-
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_RUC_LOAD_BASE, 0)) {
- rc = -EAGAIN;
- goto err_exit;
- }
-
- mod_timer(&(bdp->watchdog_timer), jiffies + (2 * HZ));
-
- if (dev->flags & IFF_UP)
- /* Otherwise process may sleep forever */
- netif_wake_queue(dev);
- else
- netif_start_queue(dev);
-
- e100_start_ru(bdp);
- if ((rc = request_irq(dev->irq, &e100intr, SA_SHIRQ,
- dev->name, dev)) != 0) {
- del_timer_sync(&bdp->watchdog_timer);
- goto err_exit;
- }
- bdp->intr_mask = 0;
- e100_set_intr_mask(bdp);
-
- e100_force_config(bdp);
-
- goto exit;
-
-err_exit:
- e100_clear_pools(bdp);
-exit:
- return rc;
-}
-
-int
-e100_close(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
-
- e100_disable_clear_intr(bdp);
- free_irq(dev->irq, dev);
- bdp->intr_mask = SCB_INT_MASK;
- e100_isolate_driver(bdp);
-
- netif_carrier_off(bdp->device);
- bdp->cur_line_speed = 0;
- bdp->cur_dplx_mode = 0;
- e100_clear_pools(bdp);
-
- return 0;
-}
-
-static int
-e100_change_mtu(struct net_device *dev, int new_mtu)
-{
- if ((new_mtu < 68) || (new_mtu > (ETH_DATA_LEN + VLAN_SIZE)))
- return -EINVAL;
-
- dev->mtu = new_mtu;
- return 0;
-}
-
-static int
-e100_xmit_frame(struct sk_buff *skb, struct net_device *dev)
-{
- int rc = 0;
- int notify_stop = false;
- struct e100_private *bdp = dev->priv;
-
- if (!spin_trylock(&bdp->bd_non_tx_lock)) {
- notify_stop = true;
- rc = 1;
- goto exit2;
- }
-
- /* tcb list may be empty temporarily during releasing resources */
- if (!TCBS_AVAIL(bdp->tcb_pool) || (bdp->tcb_phys == 0) ||
- (bdp->non_tx_command_state != E100_NON_TX_IDLE)) {
- notify_stop = true;
- rc = 1;
- goto exit1;
- }
-
- bdp->drv_stats.net_stats.tx_bytes += skb->len;
-
- e100_prepare_xmit_buff(bdp, skb);
-
- dev->trans_start = jiffies;
-
-exit1:
- spin_unlock(&bdp->bd_non_tx_lock);
-exit2:
- if (notify_stop) {
- netif_stop_queue(dev);
- }
-
- return rc;
-}
-
-/**
- * e100_get_stats - get driver statistics
- * @dev: adapter's net_device struct
- *
- * This routine is called when the OS wants the adapter's stats returned.
- * It returns the address of the net_device_stats structure for the device.
- * If the statistics are currently being updated, then they might be incorrect
- * for a short while. However, since this cannot actually cause damage, no
- * locking is used.
- */
-struct net_device_stats *
-e100_get_stats(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
-
- bdp->drv_stats.net_stats.tx_errors =
- bdp->drv_stats.net_stats.tx_carrier_errors +
- bdp->drv_stats.net_stats.tx_aborted_errors;
-
- bdp->drv_stats.net_stats.rx_errors =
- bdp->drv_stats.net_stats.rx_crc_errors +
- bdp->drv_stats.net_stats.rx_frame_errors +
- bdp->drv_stats.net_stats.rx_length_errors +
- bdp->drv_stats.rcv_cdt_frames;
-
- return &(bdp->drv_stats.net_stats);
-}
-
-/**
- * e100_set_mac - set the MAC address
- * @dev: adapter's net_device struct
- * @addr: the new address
- *
- * This routine sets the Ethernet address of the board.
- * Returns:
- * 0 - if successful
- * -1 - otherwise
- */
-static int
-e100_set_mac(struct net_device *dev, void *addr)
-{
- struct e100_private *bdp;
- int rc = -1;
- struct sockaddr *p_sockaddr = (struct sockaddr *) addr;
-
- if (!is_valid_ether_addr(p_sockaddr->sa_data))
- return -EADDRNOTAVAIL;
- bdp = dev->priv;
-
- if (e100_setup_iaaddr(bdp, (u8 *) (p_sockaddr->sa_data))) {
- memcpy(&(dev->dev_addr[0]), p_sockaddr->sa_data, ETH_ALEN);
- rc = 0;
- }
-
- return rc;
-}
-
-static void
-e100_set_multi_exec(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
- mltcst_cb_t *mcast_buff;
- cb_header_t *cb_hdr;
- struct dev_mc_list *mc_list;
- unsigned int i;
- nxmit_cb_entry_t *cmd = e100_alloc_non_tx_cmd(bdp);
-
- if (cmd != NULL) {
- mcast_buff = &((cmd->non_tx_cmd)->ntcb.multicast);
- cb_hdr = &((cmd->non_tx_cmd)->ntcb.multicast.mc_cbhdr);
- } else {
- return;
- }
-
- /* initialize the multi cast command */
- cb_hdr->cb_cmd = __constant_cpu_to_le16(CB_MULTICAST);
-
- /* now fill in the rest of the multicast command */
- *(u16 *) (&(mcast_buff->mc_count)) = cpu_to_le16(dev->mc_count * 6);
- for (i = 0, mc_list = dev->mc_list;
- (i < dev->mc_count) && (i < MAX_MULTICAST_ADDRS);
- i++, mc_list = mc_list->next) {
- /* copy into the command */
- memcpy(&(mcast_buff->mc_addr[i * ETH_ALEN]),
- (u8 *) &(mc_list->dmi_addr), ETH_ALEN);
- }
-
- if (!e100_exec_non_cu_cmd(bdp, cmd)) {
- printk(KERN_WARNING "e100: %s: Multicast setup failed\n",
- dev->name);
- }
-}
-
-/**
- * e100_set_multi - set multicast status
- * @dev: adapter's net_device struct
- *
- * This routine is called to add or remove multicast addresses, and/or to
- * change the adapter's promiscuous state.
- */
-static void
-e100_set_multi(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
- unsigned char promisc_enbl;
- unsigned char mulcast_enbl;
-
- promisc_enbl = ((dev->flags & IFF_PROMISC) == IFF_PROMISC);
- mulcast_enbl = ((dev->flags & IFF_ALLMULTI) ||
- (dev->mc_count > MAX_MULTICAST_ADDRS));
-
- e100_config_promisc(bdp, promisc_enbl);
- e100_config_mulcast_enbl(bdp, mulcast_enbl);
-
- /* reconfigure the chip if something has changed in its config space */
- e100_config(bdp);
-
- if (promisc_enbl || mulcast_enbl) {
- return; /* no need for Multicast Cmd */
- }
-
- /* get the multicast CB */
- e100_set_multi_exec(dev);
-}
-
-static int
-e100_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
-
- switch (cmd) {
-
- case SIOCETHTOOL:
- return e100_do_ethtool_ioctl(dev, ifr);
- break;
-
- case SIOCGMIIPHY: /* Get address of MII PHY in use. */
- case SIOCGMIIREG: /* Read MII PHY register. */
- case SIOCSMIIREG: /* Write to MII PHY register. */
- return e100_mii_ioctl(dev, ifr, cmd);
- break;
-
- default:
- return -EOPNOTSUPP;
- }
- return 0;
-
-}
-
-/**
- * e100_init - initialize the adapter
- * @bdp: adapter's private data struct
- *
- * This routine is called when this driver is loaded. This is the initialization
- * routine which allocates memory, configures the adapter and determines the
- * system resources.
- *
- * Returns:
- * true: if successful
- * false: otherwise
- */
-static unsigned char
-e100_init(struct e100_private *bdp)
-{
- u32 st_timeout = 0;
- u32 st_result = 0;
- e100_sw_init(bdp);
-
- if (!e100_selftest(bdp, &st_timeout, &st_result)) {
- if (st_timeout) {
- printk(KERN_ERR "e100: selftest timeout\n");
- } else {
- printk(KERN_ERR "e100: selftest failed. Results: %x\n",
- st_result);
- }
- return false;
- }
- else
- printk(KERN_DEBUG "e100: selftest OK.\n");
-
- /* read the MAC address from the eprom */
- e100_rd_eaddr(bdp);
- if (!is_valid_ether_addr(bdp->device->dev_addr)) {
- printk(KERN_ERR "e100: Invalid Ethernet address\n");
- return false;
- }
- /* read NIC's part number */
- e100_rd_pwa_no(bdp);
-
- if (!e100_hw_init(bdp))
- return false;
- /* Interrupts are enabled after device reset */
- e100_disable_clear_intr(bdp);
-
- return true;
-}
-
-/**
- * e100_sw_init - initialize software structs
- * @bdp: adapter's private data struct
- *
- * This routine initializes all software structures. Sets up the
- * circular structures for the RFD's & TCB's. Allocates the per board
- * structure for storing adapter information. The CSR is also memory
- * mapped in this routine.
- *
- * Returns :
- * true: if S/W was successfully initialized
- * false: otherwise
- */
-static unsigned char
-e100_sw_init(struct e100_private *bdp)
-{
- bdp->next_cu_cmd = START_WAIT; // init the next cu state
-
- /*
- * Set the value for # of good xmits per underrun. the value assigned
- * here is an intelligent suggested default. Nothing magical about it.
- */
- bdp->tx_per_underrun = DEFAULT_TX_PER_UNDERRUN;
-
- /* get the default transmit threshold value */
- bdp->tx_thld = TX_THRSHLD;
-
- /* get the EPROM size */
- bdp->eeprom_size = e100_eeprom_size(bdp);
-
- /* Initialize our spinlocks */
- spin_lock_init(&(bdp->bd_lock));
- spin_lock_init(&(bdp->bd_non_tx_lock));
- spin_lock_init(&(bdp->config_lock));
- spin_lock_init(&(bdp->mdi_access_lock));
- /* Initialize configuration data */
- e100_config_init(bdp);
-
- return 1;
-}
-
-static void
-e100_tco_workaround(struct e100_private *bdp)
-{
- int i;
-
- /* Do software reset */
- e100_sw_reset(bdp, PORT_SOFTWARE_RESET);
-
- /* Do a dummy LOAD CU BASE command. */
- /* This gets us out of pre-driver to post-driver. */
- e100_exec_cmplx(bdp, 0, SCB_CUC_LOAD_BASE);
-
- /* Wait 20 msec for reset to take effect */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 50 + 1);
-
- /* disable interrupts since they are enabled */
- /* after device reset */
- e100_disable_clear_intr(bdp);
-
- /* Wait for command to be cleared up to 1 sec */
- for (i=0; i<100; i++) {
- if (!readb(&bdp->scb->scb_cmd_low))
- break;
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 100 + 1);
- }
-
- /* Wait for TCO request bit in PMDR register to be clear */
- for (i=0; i<50; i++) {
- if (!(readb(&bdp->scb->scb_ext.d101m_scb.scb_pmdr) & BIT_1))
- break;
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 100 + 1);
- }
-}
-
-/**
- * e100_hw_init - initialize the hardware
- * @bdp: adapter's private data struct
- *
- * This routine performs a reset on the adapter, and configures the adapter.
- * This includes configuring the 82557 LAN controller, validating and setting
- * the node address, detecting and configuring the Phy chip on the adapter,
- * and initializing all of the on chip counters.
- *
- * Returns:
- * true - If the adapter was initialized
- * false - If the adapter failed initialization
- */
-unsigned char
-e100_hw_init(struct e100_private *bdp)
-{
- if (!e100_phy_init(bdp))
- goto err;
-
- e100_sw_reset(bdp, PORT_SELECTIVE_RESET);
-
- /* Only 82559 or above needs TCO workaround */
- if (bdp->rev_id >= D101MA_REV_ID)
- e100_tco_workaround(bdp);
-
- /* Load the CU BASE (set to 0, because we use linear mode) */
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_CUC_LOAD_BASE, 0))
- goto err;
-
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_RUC_LOAD_BASE, 0))
- goto err;
-
- /* Load interrupt microcode */
- if (e100_load_microcode(bdp)) {
- bdp->flags |= DF_UCODE_LOADED;
- }
-
- if ((u8) bdp->rev_id < D101A4_REV_ID)
- e100_config_init_82557(bdp);
-
- if (!e100_config(bdp))
- goto err;
-
- if (!e100_setup_iaaddr(bdp, bdp->device->dev_addr))
- goto err;
-
- /* Clear the internal counters */
- if (!e100_clr_cntrs(bdp))
- goto err;
-
- /* Change for 82558 enhancement */
- /* If 82558/9 and if the user has enabled flow control, set up the
- * Flow Control Reg. in the CSR */
- if ((bdp->flags & IS_BACHELOR)
- && (bdp->params.b_params & PRM_FC)) {
- writeb(DFLT_FC_THLD, &bdp->scb->scb_ext.d101_scb.scb_fc_thld);
- writeb(DFLT_FC_CMD,
- &bdp->scb->scb_ext.d101_scb.scb_fc_xon_xoff);
- }
-
- return true;
-err:
- printk(KERN_ERR "e100: hw init failed\n");
- return false;
-}
-
-/**
- * e100_setup_tcb_pool - setup TCB circular list
- * @head: Pointer to head of the allocated TCBs
- * @qlen: Number of elements in the queue
- * @bdp: adapter's private data struct
- *
- * This routine arranges the contiguously allocated TCBs in a circular list.
- * It also does the one-time initialization of the TCBs.
- */
-static void
-e100_setup_tcb_pool(tcb_t *head, unsigned int qlen, struct e100_private *bdp)
-{
- int ele_no;
- tcb_t *pcurr_tcb; /* point to current tcb */
- u32 next_phys; /* the next phys addr */
- u16 txcommand = CB_S_BIT | CB_TX_SF_BIT;
-
- bdp->tx_count = 0;
- if (bdp->flags & USE_IPCB) {
- txcommand |= CB_IPCB_TRANSMIT | CB_CID_DEFAULT;
- } else if (bdp->flags & IS_BACHELOR) {
- txcommand |= CB_TRANSMIT | CB_CID_DEFAULT;
- } else {
- txcommand |= CB_TRANSMIT;
- }
-
- for (ele_no = 0, next_phys = bdp->tcb_phys, pcurr_tcb = head;
- ele_no < qlen; ele_no++, pcurr_tcb++) {
-
- /* set the phys addr for this TCB, next_phys has not incr. yet */
- pcurr_tcb->tcb_phys = next_phys;
- next_phys += sizeof (tcb_t);
-
- /* set the link to next tcb */
- if (ele_no == (qlen - 1))
- pcurr_tcb->tcb_hdr.cb_lnk_ptr =
- cpu_to_le32(bdp->tcb_phys);
- else
- pcurr_tcb->tcb_hdr.cb_lnk_ptr = cpu_to_le32(next_phys);
-
- pcurr_tcb->tcb_hdr.cb_status = 0;
- pcurr_tcb->tcb_hdr.cb_cmd = cpu_to_le16(txcommand);
- pcurr_tcb->tcb_cnt = 0;
- pcurr_tcb->tcb_thrshld = bdp->tx_thld;
- if (ele_no < 2) {
- pcurr_tcb->tcb_hdr.cb_status =
- cpu_to_le16(CB_STATUS_COMPLETE);
- }
- pcurr_tcb->tcb_tbd_num = 1;
-
- if (bdp->flags & IS_BACHELOR) {
- pcurr_tcb->tcb_tbd_ptr =
- __constant_cpu_to_le32(0xFFFFFFFF);
- } else {
- pcurr_tcb->tcb_tbd_ptr =
- cpu_to_le32(pcurr_tcb->tcb_phys + 0x10);
- }
-
- if (bdp->flags & IS_BACHELOR) {
- pcurr_tcb->tcb_tbd_expand_ptr =
- cpu_to_le32(pcurr_tcb->tcb_phys + 0x20);
- } else {
- pcurr_tcb->tcb_tbd_expand_ptr =
- cpu_to_le32(pcurr_tcb->tcb_phys + 0x10);
- }
- pcurr_tcb->tcb_tbd_dflt_ptr = pcurr_tcb->tcb_tbd_ptr;
-
- if (bdp->flags & USE_IPCB) {
- pcurr_tcb->tbd_ptr = &(pcurr_tcb->tcbu.tbd_array[1]);
- pcurr_tcb->tcbu.ipcb.ip_activation_high =
- IPCB_IP_ACTIVATION_DEFAULT;
- pcurr_tcb->tcbu.ipcb.vlan = 0;
- } else {
- pcurr_tcb->tbd_ptr = &(pcurr_tcb->tcbu.tbd_array[0]);
- }
-
- pcurr_tcb->tcb_skb = NULL;
- }
-
- mb();
-}
-
-/***************************************************************************/
-/***************************************************************************/
-/* Memory Management Routines */
-/***************************************************************************/
-
-/**
- * e100_alloc_space - allocate private driver data
- * @bdp: adapter's private data struct
- *
- * This routine allocates memory for the driver. Memory allocated is for the
- * selftest and statistics structures.
- *
- * Returns:
- * 0: if the operation was successful
- * %-ENOMEM: if memory allocation failed
- */
-unsigned char
-e100_alloc_space(struct e100_private *bdp)
-{
- unsigned long off;
-
- /* allocate all the dma-able structures in one call:
- * selftest results, adapter stats, and non-tx cb commands */
- if (!(bdp->dma_able =
- pci_alloc_consistent(bdp->pdev, sizeof (bd_dma_able_t),
- &(bdp->dma_able_phys)))) {
- goto err;
- }
-
- /* now assign the various pointers into the struct we've just allocated */
- off = offsetof(bd_dma_able_t, selftest);
-
- bdp->selftest = (self_test_t *) (bdp->dma_able + off);
- bdp->selftest_phys = bdp->dma_able_phys + off;
-
- off = offsetof(bd_dma_able_t, stats_counters);
-
- bdp->stats_counters = (max_counters_t *) (bdp->dma_able + off);
- bdp->stat_cnt_phys = bdp->dma_able_phys + off;
-
- return 0;
-
-err:
- printk(KERN_ERR
- "e100: Failed to allocate memory\n");
- return -ENOMEM;
-}
-
-/**
- * e100_alloc_tcb_pool - allocate TCB circular list
- * @bdp: adapter's private data struct
- *
- * This routine allocates memory for the circular list of transmit descriptors.
- *
- * Returns:
- * 0: if allocation has failed.
- * 1: Otherwise.
- */
-int
-e100_alloc_tcb_pool(struct e100_private *bdp)
-{
- int stcb = sizeof (tcb_t) * bdp->params.TxDescriptors;
-
- /* allocate space for the TCBs */
- if (!(bdp->tcb_pool.data =
- pci_alloc_consistent(bdp->pdev, stcb, &bdp->tcb_phys)))
- return 0;
-
- memset(bdp->tcb_pool.data, 0x00, stcb);
-
- return 1;
-}
-
-void
-e100_free_tcb_pool(struct e100_private *bdp)
-{
- tcb_t *tcb;
- int i;
- /* Return tx skbs */
- for (i = 0; i < bdp->params.TxDescriptors; i++) {
- tcb = bdp->tcb_pool.data;
- tcb += bdp->tcb_pool.head;
- e100_tx_skb_free(bdp, tcb);
- if (NEXT_TCB_TOUSE(bdp->tcb_pool.head) == bdp->tcb_pool.tail)
- break;
- bdp->tcb_pool.head = NEXT_TCB_TOUSE(bdp->tcb_pool.head);
- }
- pci_free_consistent(bdp->pdev,
- sizeof (tcb_t) * bdp->params.TxDescriptors,
- bdp->tcb_pool.data, bdp->tcb_phys);
- bdp->tcb_pool.head = 0;
- bdp->tcb_pool.tail = 1;
- bdp->tcb_phys = 0;
-}
-
-static void
-e100_dealloc_space(struct e100_private *bdp)
-{
- if (bdp->dma_able) {
- pci_free_consistent(bdp->pdev, sizeof (bd_dma_able_t),
- bdp->dma_able, bdp->dma_able_phys);
- }
-
- bdp->selftest_phys = 0;
- bdp->stat_cnt_phys = 0;
- bdp->dma_able_phys = 0;
- bdp->dma_able = 0;
-}
-
-static void
-e100_free_rfd_pool(struct e100_private *bdp)
-{
- struct rx_list_elem *rx_struct;
-
- while (!list_empty(&(bdp->active_rx_list))) {
-
- rx_struct = list_entry(bdp->active_rx_list.next,
- struct rx_list_elem, list_elem);
- list_del(&(rx_struct->list_elem));
- pci_unmap_single(bdp->pdev, rx_struct->dma_addr,
- sizeof (rfd_t), PCI_DMA_TODEVICE);
- dev_kfree_skb(rx_struct->skb);
- kfree(rx_struct);
- }
-
- while (!list_empty(&(bdp->rx_struct_pool))) {
- rx_struct = list_entry(bdp->rx_struct_pool.next,
- struct rx_list_elem, list_elem);
- list_del(&(rx_struct->list_elem));
- kfree(rx_struct);
- }
-}
-
-/**
- * e100_alloc_rfd_pool - allocate RFDs
- * @bdp: adapter's private data struct
- *
- * Allocates the initial pool of skbs, each holding both an RFD and its
- * data, and returns nonzero if the active rx list is non-empty.
- */
-static int
-e100_alloc_rfd_pool(struct e100_private *bdp)
-{
- struct rx_list_elem *rx_struct;
- int i;
-
- INIT_LIST_HEAD(&(bdp->active_rx_list));
- INIT_LIST_HEAD(&(bdp->rx_struct_pool));
- bdp->skb_req = bdp->params.RxDescriptors;
- for (i = 0; i < bdp->skb_req; i++) {
- rx_struct = kmalloc(sizeof (struct rx_list_elem), GFP_ATOMIC);
- list_add(&(rx_struct->list_elem), &(bdp->rx_struct_pool));
- }
- e100_alloc_skbs(bdp);
- return !list_empty(&(bdp->active_rx_list));
-
-}
-
-void
-e100_clear_pools(struct e100_private *bdp)
-{
- bdp->last_tcb = NULL;
- e100_free_rfd_pool(bdp);
- e100_free_tcb_pool(bdp);
-}
-
-/*****************************************************************************/
-/*****************************************************************************/
-/* Run Time Functions */
-/*****************************************************************************/
-
-/**
- * e100_watchdog
- * @dev: adapter's net_device struct
- *
- * This routine runs every 2 seconds and updates our statistics and link state,
- * and refreshes the txthld value.
- */
-void
-e100_watchdog(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
-
-#ifdef E100_CU_DEBUG
- if (e100_cu_unknown_state(bdp)) {
- printk(KERN_ERR "e100: %s: CU unknown state in e100_watchdog\n",
- dev->name);
- }
-#endif
- if (!netif_running(dev)) {
- return;
- }
-
- /* check if link state has changed */
- if (e100_phy_check(bdp)) {
- if (netif_carrier_ok(dev)) {
- printk(KERN_ERR
- "e100: %s NIC Link is Up %d Mbps %s duplex\n",
- bdp->device->name, bdp->cur_line_speed,
- (bdp->cur_dplx_mode == HALF_DUPLEX) ?
- "Half" : "Full");
-
- e100_config_fc(bdp);
- e100_config(bdp);
-
- } else {
- printk(KERN_ERR "e100: %s NIC Link is Down\n",
- bdp->device->name);
- }
- }
-
- // toggle the tx queue according to link status
- // this also resolves a race condition between tx & non-cu cmd flows
- if (netif_carrier_ok(dev)) {
- if (netif_running(dev))
- netif_wake_queue(dev);
- } else {
- if (netif_running(dev))
- netif_stop_queue(dev);
- /* When changing to non-autoneg, device may lose */
- /* link with some switches. e100 will try to */
- /* recover link by sending command to PHY layer */
- if (bdp->params.e100_speed_duplex != E100_AUTONEG)
- e100_force_speed_duplex_to_phy(bdp);
- }
-
- rmb();
-
- if (e100_update_stats(bdp)) {
-
- /* Check if a change in the IFS parameter is needed,
- and configure the device accordingly */
- if (bdp->params.b_params & PRM_IFS)
- e100_manage_adaptive_ifs(bdp);
-
- /* Now adjust our dynamic tx threshold value */
- e100_refresh_txthld(bdp);
-
- /* Now if we are on a 557 and we haven't received any frames then we
- * should issue a multicast command to reset the RU */
- if (bdp->rev_id < D101A4_REV_ID) {
- if (!(bdp->stats_counters->basic_stats.rcv_gd_frames)) {
- e100_set_multi(dev);
- }
- }
- }
- /* Issue command to dump statistics from device. */
- /* Check for command completion on next watchdog timer. */
- e100_dump_stats_cntrs(bdp);
-
- mb();
-
- /* relaunch watchdog timer in 2 sec */
- mod_timer(&(bdp->watchdog_timer), jiffies + (2 * HZ));
-
- if (list_empty(&bdp->active_rx_list))
- e100_trigger_SWI(bdp);
-}
-
-/**
- * e100_manage_adaptive_ifs
- * @bdp: adapter's private data struct
- *
- * This routine manages the adaptive Inter-Frame Spacing algorithm
- * using a state machine.
- */
-void
-e100_manage_adaptive_ifs(struct e100_private *bdp)
-{
- static u16 state_table[9][4] = { // rows are states
- {2, 0, 0, 0}, // state0 // column0: next state if increasing
- {2, 0, 5, 30}, // state1 // column1: next state if decreasing
- {5, 1, 5, 30}, // state2 // column2: IFS value for 100 mbit
- {5, 3, 0, 0}, // state3 // column3: IFS value for 10 mbit
- {5, 3, 10, 60}, // state4
- {8, 4, 10, 60}, // state5
- {8, 6, 0, 0}, // state6
- {8, 6, 20, 60}, // state7
- {8, 7, 20, 60} // state8
- };
-
- u32 transmits =
- le32_to_cpu(bdp->stats_counters->basic_stats.xmt_gd_frames);
- u32 collisions =
- le32_to_cpu(bdp->stats_counters->basic_stats.xmt_ttl_coll);
- u32 state = bdp->ifs_state;
- u32 old_value = bdp->ifs_value;
- int next_col;
- u32 min_transmits;
-
- if (bdp->cur_dplx_mode == FULL_DUPLEX) {
- bdp->ifs_state = 0;
- bdp->ifs_value = 0;
-
- } else { /* Half Duplex */
- /* Set speed specific parameters */
- if (bdp->cur_line_speed == 100) {
- next_col = 2;
- min_transmits = MIN_NUMBER_OF_TRANSMITS_100;
-
- } else { /* 10 Mbps */
- next_col = 3;
- min_transmits = MIN_NUMBER_OF_TRANSMITS_10;
- }
-
- if ((transmits / 32 < collisions)
- && (transmits > min_transmits)) {
- state = state_table[state][0]; /* increment */
-
- } else if (transmits < min_transmits) {
- state = state_table[state][1]; /* decrement */
- }
-
- bdp->ifs_value = state_table[state][next_col];
- bdp->ifs_state = state;
- }
-
- /* If the IFS value has changed, configure the device */
- if (bdp->ifs_value != old_value) {
- e100_config_ifs(bdp);
- e100_config(bdp);
- }
-}
-
-/**
- * e100intr - interrupt handler
- * @irq: the IRQ number
- * @dev_inst: the net_device struct
- * @regs: registers (unused)
- *
- * This routine is the ISR for the e100 board. It services
- * the RX & TX queues & starts the RU if it has stopped due
- * to no resources.
- */
-irqreturn_t
-e100intr(int irq, void *dev_inst, struct pt_regs *regs)
-{
- struct net_device *dev;
- struct e100_private *bdp;
- u16 intr_status;
-
- dev = dev_inst;
- bdp = dev->priv;
-
- intr_status = readw(&bdp->scb->scb_status);
- /* If not my interrupt, just return */
- if (!(intr_status & SCB_STATUS_ACK_MASK) || (intr_status == 0xffff)) {
- return IRQ_NONE;
- }
-
- /* disable and ack intr */
- e100_disable_clear_intr(bdp);
-
- /* the device is closed, don't continue or else bad things may happen. */
- if (!netif_running(dev)) {
- e100_set_intr_mask(bdp);
- return IRQ_NONE;
- }
-
- /* SWI intr (triggered by watchdog) is signal to allocate new skb buffers */
- if (intr_status & SCB_STATUS_ACK_SWI) {
- e100_alloc_skbs(bdp);
- }
-
- /* do recv work if any */
- if (intr_status &
- (SCB_STATUS_ACK_FR | SCB_STATUS_ACK_RNR | SCB_STATUS_ACK_SWI))
- bdp->drv_stats.rx_intr_pkts += e100_rx_srv(bdp);
-
- /* clean up after tx'ed packets */
- if (intr_status & (SCB_STATUS_ACK_CNA | SCB_STATUS_ACK_CX))
- e100_tx_srv(bdp);
-
- e100_set_intr_mask(bdp);
- return IRQ_HANDLED;
-}
-
-/**
- * e100_tx_skb_free - free TX skbs resources
- * @bdp: adapter's private data struct
- * @tcb: associated tcb of the freed skb
- *
- * This routine frees resources of TX skbs.
- */
-static inline void
-e100_tx_skb_free(struct e100_private *bdp, tcb_t *tcb)
-{
- if (tcb->tcb_skb) {
- int i;
- tbd_t *tbd_arr = tcb->tbd_ptr;
- int frags = skb_shinfo(tcb->tcb_skb)->nr_frags;
-
- for (i = 0; i <= frags; i++, tbd_arr++) {
- pci_unmap_single(bdp->pdev,
- le32_to_cpu(tbd_arr->tbd_buf_addr),
- le16_to_cpu(tbd_arr->tbd_buf_cnt),
- PCI_DMA_TODEVICE);
- }
- dev_kfree_skb_irq(tcb->tcb_skb);
- tcb->tcb_skb = NULL;
- }
-}
-
-/**
- * e100_tx_srv - service TX queues
- * @bdp: adapter's private data struct
- *
- * This routine services the TX queues. It reclaims the TCB's & TBD's & other
- * resources used during the transmit of this buffer. It is called from the ISR.
- * We don't need a tx_lock since we always access buffers which were already
- * prepared.
- */
-void
-e100_tx_srv(struct e100_private *bdp)
-{
- tcb_t *tcb;
- int i;
-
- /* go over at most TxDescriptors buffers */
- for (i = 0; i < bdp->params.TxDescriptors; i++) {
- tcb = bdp->tcb_pool.data;
- tcb += bdp->tcb_pool.head;
-
- rmb();
-
- /* if the buffer at 'head' is not complete, break */
- if (!(tcb->tcb_hdr.cb_status &
- __constant_cpu_to_le16(CB_STATUS_COMPLETE)))
- break;
-
- /* service next buffer, clear the out of resource condition */
- e100_tx_skb_free(bdp, tcb);
-
- if (netif_running(bdp->device))
- netif_wake_queue(bdp->device);
-
- /* if we've caught up with 'tail', break */
- if (NEXT_TCB_TOUSE(bdp->tcb_pool.head) == bdp->tcb_pool.tail) {
- break;
- }
-
- bdp->tcb_pool.head = NEXT_TCB_TOUSE(bdp->tcb_pool.head);
- }
-}
-
-/**
- * e100_rx_srv - service RX queue
- * @bdp: adapter's private data struct
- *
- * This routine processes the RX interrupt & services the RX queues.
- * For each successful RFD, it allocates a new msg block, links that
- * into the RFD list, and sends the old msg upstream.
- * The new RFD is then put at the end of the free list of RFD's.
- * It returns the number of serviced RFDs.
- */
-u32
-e100_rx_srv(struct e100_private *bdp)
-{
- rfd_t *rfd; /* new rfd, received rfd */
- int i;
- u16 rfd_status;
- struct sk_buff *skb;
- struct net_device *dev;
- unsigned int data_sz;
- struct rx_list_elem *rx_struct;
- u32 rfd_cnt = 0;
-
- dev = bdp->device;
-
- /* The current design of rx is as follows:
- * 1. a socket buffer (skb) is used to pass the network packet to the upper layer
- * 2. all HW host memory structures (like RFDs, RBDs and data buffers)
- * are placed in the skb's data room
- * 3. when rx processing is complete, we adjust the skb's internal pointers to
- * exclude from the data area everything unrelated (RFD, RBD), leaving
- * just the received packet itself
- * 4. for each skb passed to the upper layer, a new one is allocated instead.
- * 5. if no skbs are left, another attempt to allocate skbs is made in 2 sec
- * (the watchdog triggers an SWI intr and the isr allocates new skbs)
- */
- for (i = 0; i < bdp->params.RxDescriptors; i++) {
- if (list_empty(&(bdp->active_rx_list))) {
- break;
- }
-
- rx_struct = list_entry(bdp->active_rx_list.next,
- struct rx_list_elem, list_elem);
- skb = rx_struct->skb;
-
- rfd = RFD_POINTER(skb, bdp); /* locate RFD within skb */
-
- // sync only the RFD header
- pci_dma_sync_single(bdp->pdev, rx_struct->dma_addr,
- bdp->rfd_size, PCI_DMA_FROMDEVICE);
- rfd_status = le16_to_cpu(rfd->rfd_header.cb_status); /* get RFD's status */
- if (!(rfd_status & RFD_STATUS_COMPLETE)) /* does not contain data yet - exit */
- break;
-
- /* to allow manipulation of the current skb we need to unlink it */
- list_del(&(rx_struct->list_elem));
-
- /* do not free & unmap badly received packet.
- * move it to the end of skb list for reuse */
- if (!(rfd_status & RFD_STATUS_OK)) {
- e100_add_skb_to_end(bdp, rx_struct);
- continue;
- }
-
- data_sz = min_t(u16, (le16_to_cpu(rfd->rfd_act_cnt) & 0x3fff),
- (sizeof (rfd_t) - bdp->rfd_size));
-
- /* now sync all the data */
- pci_dma_sync_single(bdp->pdev, rx_struct->dma_addr,
- (data_sz + bdp->rfd_size),
- PCI_DMA_FROMDEVICE);
-
- pci_unmap_single(bdp->pdev, rx_struct->dma_addr,
- sizeof (rfd_t), PCI_DMA_FROMDEVICE);
-
- list_add(&(rx_struct->list_elem), &(bdp->rx_struct_pool));
-
- /* end of dma access to rfd */
- bdp->skb_req++; /* incr number of requested skbs */
- e100_alloc_skbs(bdp); /* and get them */
-
- /* set packet size, excluding the checksum (last 2 bytes) if it is present */
- if ((bdp->flags & DF_CSUM_OFFLOAD)
- && (bdp->rev_id < D102_REV_ID))
- skb_put(skb, (int) data_sz - 2);
- else
- skb_put(skb, (int) data_sz);
-
- /* set the protocol */
- skb->protocol = eth_type_trans(skb, dev);
-
- /* set the checksum info */
- if (bdp->flags & DF_CSUM_OFFLOAD) {
- if (bdp->rev_id >= D102_REV_ID) {
- skb->ip_summed = e100_D102_check_checksum(rfd);
- } else {
- skb->ip_summed = e100_D101M_checksum(bdp, skb);
- }
- } else {
- skb->ip_summed = CHECKSUM_NONE;
- }
-
- bdp->drv_stats.net_stats.rx_bytes += skb->len;
-
- if(bdp->vlgrp && (rfd_status & CB_STATUS_VLAN)) {
- vlan_hwaccel_rx(skb, bdp->vlgrp, be16_to_cpu(rfd->vlanid));
- } else {
- netif_rx(skb);
- }
- dev->last_rx = jiffies;
-
- rfd_cnt++;
- } /* end of rfd loop */
-
- /* restart the RU if it has stopped */
- if ((readw(&bdp->scb->scb_status) & SCB_RUS_MASK) != SCB_RUS_READY) {
- e100_start_ru(bdp);
- }
-
- return rfd_cnt;
-}
-
-void
-e100_refresh_txthld(struct e100_private *bdp)
-{
- basic_cntr_t *pstat = &(bdp->stats_counters->basic_stats);
-
- /* As long as tx_per_underrun is not 0, we can go about dynamically
- * adjusting the xmit threshold. We stop doing that & resort to defaults
- * once the adjustments become meaningless. The value is adjusted by
- * dumping the error counters & checking the # of xmit underrun errors
- * we've had. */
- if (bdp->tx_per_underrun) {
- /* We are going to use the last values dumped from the dump statistics
- * command */
- if (le32_to_cpu(pstat->xmt_gd_frames)) {
- if (le32_to_cpu(pstat->xmt_uruns)) {
- /*
- * if we have had more than one underrun per "DEFAULT #
- * OF XMITS ALLOWED PER UNDERRUN" good xmits, raise the
- * THRESHOLD.
- */
- if ((le32_to_cpu(pstat->xmt_gd_frames) /
- le32_to_cpu(pstat->xmt_uruns)) <
- bdp->tx_per_underrun) {
- bdp->tx_thld += 3;
- }
- }
-
- /*
- * if we've had less than one underrun per the DEFAULT number
- * of good xmits allowed, lower the THOLD but not less than 0
- */
- if (le32_to_cpu(pstat->xmt_gd_frames) >
- bdp->tx_per_underrun) {
- bdp->tx_thld--;
-
- if (bdp->tx_thld < 6)
- bdp->tx_thld = 6;
-
- }
- }
-
- /* end good xmits */
- /*
- * if our adjustments are becoming unreasonable, stop adjusting &
- * resort to defaults & pray. A THOLD value > 190 means that the
- * adapter will wait for 190*8=1520 bytes in the TX FIFO before it
- * starts xmit. Since MTU is 1514, it doesn't make any sense to
- * increase it further. */
- if (bdp->tx_thld >= 190) {
- bdp->tx_per_underrun = 0;
- bdp->tx_thld = 189;
- }
- } /* end underrun check */
-}
-
-/**
- * e100_prepare_xmit_buff - prepare a buffer for transmission
- * @bdp: adapter's private data struct
- * @skb: skb to send
- *
- * This routine prepares a buffer for transmission. It checks
- * the message length for the appropriate size. It picks up a
- * free tcb from the TCB pool and sets up the corresponding
- * TBD's. If the number of fragments is more than the number
- * of TBD/TCB it copies all the fragments into a coalesce buffer.
- * It returns a pointer to the prepared TCB.
- */
-static inline tcb_t *
-e100_prepare_xmit_buff(struct e100_private *bdp, struct sk_buff *skb)
-{
- tcb_t *tcb, *prev_tcb;
-
- tcb = bdp->tcb_pool.data;
- tcb += TCB_TO_USE(bdp->tcb_pool);
-
- if (bdp->flags & USE_IPCB) {
- tcb->tcbu.ipcb.ip_activation_high = IPCB_IP_ACTIVATION_DEFAULT;
- tcb->tcbu.ipcb.ip_schedule &= ~IPCB_TCP_PACKET;
- tcb->tcbu.ipcb.ip_schedule &= ~IPCB_TCPUDP_CHECKSUM_ENABLE;
- }
-
- if(bdp->vlgrp && vlan_tx_tag_present(skb)) {
- (tcb->tcbu).ipcb.ip_activation_high |= IPCB_INSERTVLAN_ENABLE;
- (tcb->tcbu).ipcb.vlan = cpu_to_be16(vlan_tx_tag_get(skb));
- }
-
- tcb->tcb_hdr.cb_status = 0;
- tcb->tcb_thrshld = bdp->tx_thld;
- tcb->tcb_hdr.cb_cmd |= __constant_cpu_to_le16(CB_S_BIT);
-
- /* Set I (Interrupt) bit on every (TX_FRAME_CNT)th packet */
- if (!(++bdp->tx_count % TX_FRAME_CNT))
- tcb->tcb_hdr.cb_cmd |= __constant_cpu_to_le16(CB_I_BIT);
- else
- /* Clear I bit on other packets */
- tcb->tcb_hdr.cb_cmd &= ~__constant_cpu_to_le16(CB_I_BIT);
-
- tcb->tcb_skb = skb;
-
- if (skb->ip_summed == CHECKSUM_HW) {
- const struct iphdr *ip = skb->nh.iph;
-
- if ((ip->protocol == IPPROTO_TCP) ||
- (ip->protocol == IPPROTO_UDP)) {
-
- tcb->tcbu.ipcb.ip_activation_high |=
- IPCB_HARDWAREPARSING_ENABLE;
- tcb->tcbu.ipcb.ip_schedule |=
- IPCB_TCPUDP_CHECKSUM_ENABLE;
-
- if (ip->protocol == IPPROTO_TCP)
- tcb->tcbu.ipcb.ip_schedule |= IPCB_TCP_PACKET;
- }
- }
-
- if (!skb_shinfo(skb)->nr_frags) {
- (tcb->tbd_ptr)->tbd_buf_addr =
- cpu_to_le32(pci_map_single(bdp->pdev, skb->data,
- skb->len, PCI_DMA_TODEVICE));
- (tcb->tbd_ptr)->tbd_buf_cnt = cpu_to_le16(skb->len);
- tcb->tcb_tbd_num = 1;
- tcb->tcb_tbd_ptr = tcb->tcb_tbd_dflt_ptr;
- } else {
- int i;
- void *addr;
- tbd_t *tbd_arr_ptr = &(tcb->tbd_ptr[1]);
- skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
-
- (tcb->tbd_ptr)->tbd_buf_addr =
- cpu_to_le32(pci_map_single(bdp->pdev, skb->data,
- skb_headlen(skb),
- PCI_DMA_TODEVICE));
- (tcb->tbd_ptr)->tbd_buf_cnt =
- cpu_to_le16(skb_headlen(skb));
-
- for (i = 0; i < skb_shinfo(skb)->nr_frags;
- i++, tbd_arr_ptr++, frag++) {
-
- addr = ((void *) page_address(frag->page) +
- frag->page_offset);
-
- tbd_arr_ptr->tbd_buf_addr =
- cpu_to_le32(pci_map_single(bdp->pdev,
- addr, frag->size,
- PCI_DMA_TODEVICE));
- tbd_arr_ptr->tbd_buf_cnt = cpu_to_le16(frag->size);
- }
- tcb->tcb_tbd_num = skb_shinfo(skb)->nr_frags + 1;
- tcb->tcb_tbd_ptr = tcb->tcb_tbd_expand_ptr;
- }
-
- /* clear the S-BIT on the previous tcb */
- prev_tcb = bdp->tcb_pool.data;
- prev_tcb += PREV_TCB_USED(bdp->tcb_pool);
- prev_tcb->tcb_hdr.cb_cmd &= __constant_cpu_to_le16((u16) ~CB_S_BIT);
-
- bdp->tcb_pool.tail = NEXT_TCB_TOUSE(bdp->tcb_pool.tail);
-
- mb();
-
- e100_start_cu(bdp, tcb);
-
- return tcb;
-}
-
-/* Changed for 82558 enhancement */
-/**
- * e100_start_cu - start the adapter's CU
- * @bdp: adapter's private data struct
- * @tcb: TCB to be transmitted
- *
- * This routine issues a CU Start or CU Resume command to the 82558/9.
- * This routine was added because the prepare_ext_xmit_buff takes advantage
- * of the 82558/9's Dynamic TBD chaining feature and has to start the CU as
- * soon as the first TBD is ready.
- *
- * e100_start_cu must be called while holding the tx_lock !
- */
-u8
-e100_start_cu(struct e100_private *bdp, tcb_t *tcb)
-{
- unsigned long lock_flag;
- u8 ret = true;
-
- spin_lock_irqsave(&(bdp->bd_lock), lock_flag);
- switch (bdp->next_cu_cmd) {
- case RESUME_NO_WAIT:
- /* The last CU command was a CU_RESUME. If this is a 558 or newer,
- * we don't need to wait for the command word to clear; we reach
- * here only if we are bachelor
- */
- e100_exec_cmd(bdp, SCB_CUC_RESUME);
- break;
-
- case RESUME_WAIT:
- if ((bdp->flags & IS_ICH) &&
- (bdp->cur_line_speed == 10) &&
- (bdp->cur_dplx_mode == HALF_DUPLEX)) {
- e100_wait_exec_simple(bdp, SCB_CUC_NOOP);
- udelay(1);
- }
- if ((e100_wait_exec_simple(bdp, SCB_CUC_RESUME)) &&
- (bdp->flags & IS_BACHELOR) && (!(bdp->flags & IS_ICH))) {
- bdp->next_cu_cmd = RESUME_NO_WAIT;
- }
- break;
-
- case START_WAIT:
- // The last command was a non_tx CU command
- if (!e100_wait_cus_idle(bdp))
- printk(KERN_DEBUG
- "e100: %s: cu_start: timeout waiting for cu\n",
- bdp->device->name);
- if (!e100_wait_exec_cmplx(bdp, (u32) (tcb->tcb_phys),
- SCB_CUC_START, CB_TRANSMIT)) {
- printk(KERN_DEBUG
- "e100: %s: cu_start: timeout waiting for scb\n",
- bdp->device->name);
- e100_exec_cmplx(bdp, (u32) (tcb->tcb_phys),
- SCB_CUC_START);
- ret = false;
- }
-
- bdp->next_cu_cmd = RESUME_WAIT;
-
- break;
- }
-
- /* save the last tcb */
- bdp->last_tcb = tcb;
-
- spin_unlock_irqrestore(&(bdp->bd_lock), lock_flag);
- return ret;
-}
-
-/* ====================================================================== */
-/* hw */
-/* ====================================================================== */
-
-/**
- * e100_selftest - perform H/W self test
- * @bdp: adapter's private data struct
- * @st_timeout: address to return timeout value, if fails
- * @st_result: address to return selftest result, if fails
- *
- * This routine will issue PORT Self-test command to test the e100.
- * The self-test will fail if the adapter's master-enable bit is not
- * set in the PCI Command Register, or if the adapter is not seated
- * in a PCI master-enabled slot. We also disable interrupts when the
- * command is completed.
- *
- * Returns:
- * true: if adapter passes self_test
- * false: otherwise
- */
-unsigned char
-e100_selftest(struct e100_private *bdp, u32 *st_timeout, u32 *st_result)
-{
- u32 selftest_cmd;
-
- /* initialize the nic state before running test */
- e100_sw_reset(bdp, PORT_SOFTWARE_RESET);
- /* Setup the address of the self_test area */
- selftest_cmd = bdp->selftest_phys;
-
- /* Setup SELF TEST Command Code in D3 - D0 */
- selftest_cmd |= PORT_SELFTEST;
-
- /* Initialize the self-test signature and results DWORDS */
- bdp->selftest->st_sign = 0;
- bdp->selftest->st_result = 0xffffffff;
-
- /* Do the port command */
- writel(selftest_cmd, &bdp->scb->scb_port);
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
-
- /* Wait at least 10 milliseconds for the self-test to complete */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 100 + 1);
-
- /* disable interrupts since they are enabled */
- /* after device reset during selftest */
- e100_disable_clear_intr(bdp);
-
- /* If the first self-test DWORD is still zero, we've timed out. If the
- * second DWORD is not zero then we have an error. */
- if ((bdp->selftest->st_sign == 0) || (bdp->selftest->st_result != 0)) {
-
- if (st_timeout)
- *st_timeout = !(le32_to_cpu(bdp->selftest->st_sign));
-
- if (st_result)
- *st_result = le32_to_cpu(bdp->selftest->st_result);
-
- return false;
- }
-
- return true;
-}
-
-/**
- * e100_setup_iaaddr - issue IA setup command
- * @bdp: adapter's private data struct
- * @eaddr: new ethernet address
- *
- * This routine will issue the IA setup command. This command
- * will notify the 82557 (e100) of what its individual (node)
- * address is. This command will be executed in polled mode.
- *
- * Returns:
- * true: if the IA setup command was successfully issued and completed
- * false: otherwise
- */
-unsigned char
-e100_setup_iaaddr(struct e100_private *bdp, u8 *eaddr)
-{
- unsigned int i;
- cb_header_t *ntcb_hdr;
- unsigned char res;
- nxmit_cb_entry_t *cmd;
-
- if ((cmd = e100_alloc_non_tx_cmd(bdp)) == NULL) {
- res = false;
- goto exit;
- }
-
- ntcb_hdr = (cb_header_t *) cmd->non_tx_cmd;
- ntcb_hdr->cb_cmd = __constant_cpu_to_le16(CB_IA_ADDRESS);
-
- for (i = 0; i < ETH_ALEN; i++) {
- (cmd->non_tx_cmd)->ntcb.setup.ia_addr[i] = eaddr[i];
- }
-
- res = e100_exec_non_cu_cmd(bdp, cmd);
- if (!res)
- printk(KERN_WARNING "e100: %s: IA setup failed\n",
- bdp->device->name);
-
-exit:
- return res;
-}
-
-/**
- * e100_start_ru - start the RU if needed
- * @bdp: adapter's private data struct
- *
- * This routine checks the status of the 82557's receive unit (RU),
- * and starts the RU if it was not already active. However,
- * before restarting the RU, the driver gives the RU the buffers
- * it freed up during the servicing of the ISR. If there are
- * no free buffers to give to the RU, (i.e. we have reached a
- * no resource condition) the RU will not be started till the
- * next ISR.
- */
-void
-e100_start_ru(struct e100_private *bdp)
-{
- struct rx_list_elem *rx_struct = NULL;
- int buffer_found = 0;
- struct list_head *entry_ptr;
-
- list_for_each(entry_ptr, &(bdp->active_rx_list)) {
- rx_struct =
- list_entry(entry_ptr, struct rx_list_elem, list_elem);
- pci_dma_sync_single(bdp->pdev, rx_struct->dma_addr,
- bdp->rfd_size, PCI_DMA_FROMDEVICE);
- if (!((SKB_RFD_STATUS(rx_struct->skb, bdp) &
- __constant_cpu_to_le16(RFD_STATUS_COMPLETE)))) {
- buffer_found = 1;
- break;
- }
- }
-
- /* No available buffers */
- if (!buffer_found) {
- return;
- }
-
- spin_lock(&bdp->bd_lock);
-
- if (!e100_wait_exec_cmplx(bdp, rx_struct->dma_addr, SCB_RUC_START, 0)) {
- printk(KERN_DEBUG
- "e100: %s: start_ru: wait_scb failed\n",
- bdp->device->name);
- e100_exec_cmplx(bdp, rx_struct->dma_addr, SCB_RUC_START);
- }
- if (bdp->next_cu_cmd == RESUME_NO_WAIT) {
- bdp->next_cu_cmd = RESUME_WAIT;
- }
- spin_unlock(&bdp->bd_lock);
-}
-
-/**
- * e100_cmd_complete_location
- * @bdp: adapter's private data struct
- *
- * This routine returns a pointer to the location of the command-complete
- * DWord in the dump statistical counters area, according to the statistical
- * counters mode (557 - basic, 558 - extended, or 559 - TCO mode).
- * See e100_config_init() for the setting of the statistical counters mode.
- */
-static u32 *
-e100_cmd_complete_location(struct e100_private *bdp)
-{
- u32 *cmd_complete;
- max_counters_t *stats = bdp->stats_counters;
-
- switch (bdp->stat_mode) {
- case E100_EXTENDED_STATS:
- cmd_complete =
- (u32 *) &(((err_cntr_558_t *) (stats))->cmd_complete);
- break;
-
- case E100_TCO_STATS:
- cmd_complete =
- (u32 *) &(((err_cntr_559_t *) (stats))->cmd_complete);
- break;
-
- case E100_BASIC_STATS:
- default:
- cmd_complete =
- (u32 *) &(((err_cntr_557_t *) (stats))->cmd_complete);
- break;
- }
-
- return cmd_complete;
-}
-
-/**
- * e100_clr_cntrs - clear statistics counters
- * @bdp: adapter's private data struct
- *
- * This routine will clear the adapter error statistic counters.
- *
- * Returns:
- * true: if successfully cleared stat counters
- * false: otherwise
- */
-static unsigned char
-e100_clr_cntrs(struct e100_private *bdp)
-{
- volatile u32 *pcmd_complete;
-
- /* clear the dump counter complete word */
- pcmd_complete = e100_cmd_complete_location(bdp);
- *pcmd_complete = 0;
- mb();
-
- if (!e100_wait_exec_cmplx(bdp, bdp->stat_cnt_phys, SCB_CUC_DUMP_ADDR, 0))
- return false;
-
- /* wait 10 microseconds for the command to complete */
- udelay(10);
-
- if (!e100_wait_exec_simple(bdp, SCB_CUC_DUMP_RST_STAT))
- return false;
-
- if (bdp->next_cu_cmd == RESUME_NO_WAIT) {
- bdp->next_cu_cmd = RESUME_WAIT;
- }
-
- return true;
-}
-
-static unsigned char
-e100_update_stats(struct e100_private *bdp)
-{
- u32 *pcmd_complete;
- basic_cntr_t *pstat = &(bdp->stats_counters->basic_stats);
-
- // check if last dump command completed
- pcmd_complete = e100_cmd_complete_location(bdp);
- if (*pcmd_complete != le32_to_cpu(DUMP_RST_STAT_COMPLETED) &&
- *pcmd_complete != le32_to_cpu(DUMP_STAT_COMPLETED)) {
- *pcmd_complete = 0;
- return false;
- }
-
- /* increment the statistics */
- bdp->drv_stats.net_stats.rx_packets +=
- le32_to_cpu(pstat->rcv_gd_frames);
- bdp->drv_stats.net_stats.tx_packets +=
- le32_to_cpu(pstat->xmt_gd_frames);
- bdp->drv_stats.net_stats.rx_dropped += le32_to_cpu(pstat->rcv_rsrc_err);
- bdp->drv_stats.net_stats.collisions += le32_to_cpu(pstat->xmt_ttl_coll);
- bdp->drv_stats.net_stats.rx_length_errors +=
- le32_to_cpu(pstat->rcv_shrt_frames);
- bdp->drv_stats.net_stats.rx_over_errors +=
- le32_to_cpu(pstat->rcv_rsrc_err);
- bdp->drv_stats.net_stats.rx_crc_errors +=
- le32_to_cpu(pstat->rcv_crc_errs);
- bdp->drv_stats.net_stats.rx_frame_errors +=
- le32_to_cpu(pstat->rcv_algn_errs);
- bdp->drv_stats.net_stats.rx_fifo_errors +=
- le32_to_cpu(pstat->rcv_oruns);
- bdp->drv_stats.net_stats.tx_aborted_errors +=
- le32_to_cpu(pstat->xmt_max_coll);
- bdp->drv_stats.net_stats.tx_carrier_errors +=
- le32_to_cpu(pstat->xmt_lost_crs);
- bdp->drv_stats.net_stats.tx_fifo_errors +=
- le32_to_cpu(pstat->xmt_uruns);
-
- bdp->drv_stats.tx_late_col += le32_to_cpu(pstat->xmt_late_coll);
- bdp->drv_stats.tx_ok_defrd += le32_to_cpu(pstat->xmt_deferred);
- bdp->drv_stats.tx_one_retry += le32_to_cpu(pstat->xmt_sngl_coll);
- bdp->drv_stats.tx_mt_one_retry += le32_to_cpu(pstat->xmt_mlt_coll);
- bdp->drv_stats.rcv_cdt_frames += le32_to_cpu(pstat->rcv_err_coll);
-
- if (bdp->stat_mode != E100_BASIC_STATS) {
- ext_cntr_t *pex_stat = &bdp->stats_counters->extended_stats;
-
- bdp->drv_stats.xmt_fc_pkts +=
- le32_to_cpu(pex_stat->xmt_fc_frames);
- bdp->drv_stats.rcv_fc_pkts +=
- le32_to_cpu(pex_stat->rcv_fc_frames);
- bdp->drv_stats.rcv_fc_unsupported +=
- le32_to_cpu(pex_stat->rcv_fc_unsupported);
- }
-
- if (bdp->stat_mode == E100_TCO_STATS) {
- tco_cntr_t *ptco_stat = &bdp->stats_counters->tco_stats;
-
- bdp->drv_stats.xmt_tco_pkts +=
- le16_to_cpu(ptco_stat->xmt_tco_frames);
- bdp->drv_stats.rcv_tco_pkts +=
- le16_to_cpu(ptco_stat->rcv_tco_frames);
- }
-
- *pcmd_complete = 0;
- return true;
-}
-
-/**
- * e100_dump_stat_cntrs
- * @bdp: adapter's private data struct
- *
- * This routine will dump the board statistical counters without waiting
- * for stat_dump to complete. Any access to these stats should verify
- * that the command has completed
- */
-void
-e100_dump_stats_cntrs(struct e100_private *bdp)
-{
- unsigned long lock_flag_bd;
-
- spin_lock_irqsave(&(bdp->bd_lock), lock_flag_bd);
-
- /* dump h/w stats counters */
- if (e100_wait_exec_simple(bdp, SCB_CUC_DUMP_RST_STAT)) {
- if (bdp->next_cu_cmd == RESUME_NO_WAIT) {
- bdp->next_cu_cmd = RESUME_WAIT;
- }
- }
-
- spin_unlock_irqrestore(&(bdp->bd_lock), lock_flag_bd);
-}
-
-/**
- * e100_exec_non_cu_cmd
- * @bdp: adapter's private data struct
- * @command: the non-cu command to execute
- *
- * This routine will submit a command block to be executed.
- */
-unsigned char
-e100_exec_non_cu_cmd(struct e100_private *bdp, nxmit_cb_entry_t *command)
-{
- cb_header_t *ntcb_hdr;
- unsigned long lock_flag;
- unsigned long expiration_time;
- unsigned char rc = true;
- u8 sub_cmd;
-
- ntcb_hdr = (cb_header_t *) command->non_tx_cmd; /* get hdr of non tcb cmd */
- sub_cmd = le16_to_cpu(ntcb_hdr->cb_cmd);
-
- /* Set the Command Block to be the last command block */
- ntcb_hdr->cb_cmd |= __constant_cpu_to_le16(CB_EL_BIT);
- ntcb_hdr->cb_status = 0;
- ntcb_hdr->cb_lnk_ptr = 0;
-
- mb();
- if (in_interrupt())
- return e100_delayed_exec_non_cu_cmd(bdp, command);
-
- if (netif_running(bdp->device) && netif_carrier_ok(bdp->device))
- return e100_delayed_exec_non_cu_cmd(bdp, command);
-
- spin_lock_bh(&(bdp->bd_non_tx_lock));
-
- if (bdp->non_tx_command_state != E100_NON_TX_IDLE) {
- goto delayed_exec;
- }
-
- if (bdp->last_tcb) {
- rmb();
- if ((bdp->last_tcb->tcb_hdr.cb_status &
- __constant_cpu_to_le16(CB_STATUS_COMPLETE)) == 0)
- goto delayed_exec;
- }
-
- if ((readw(&bdp->scb->scb_status) & SCB_CUS_MASK) == SCB_CUS_ACTIVE) {
- goto delayed_exec;
- }
-
- spin_lock_irqsave(&bdp->bd_lock, lock_flag);
-
- if (!e100_wait_exec_cmplx(bdp, command->dma_addr, SCB_CUC_START, sub_cmd)) {
- spin_unlock_irqrestore(&(bdp->bd_lock), lock_flag);
- rc = false;
- goto exit;
- }
-
- bdp->next_cu_cmd = START_WAIT;
- spin_unlock_irqrestore(&(bdp->bd_lock), lock_flag);
-
- /* now wait for completion of non-cu CB up to 20 msec */
- expiration_time = jiffies + HZ / 50 + 1;
- rmb();
- while (!(ntcb_hdr->cb_status &
- __constant_cpu_to_le16(CB_STATUS_COMPLETE))) {
-
- if (time_before(jiffies, expiration_time)) {
- spin_unlock_bh(&(bdp->bd_non_tx_lock));
- yield();
- spin_lock_bh(&(bdp->bd_non_tx_lock));
- } else {
-#ifdef E100_CU_DEBUG
- printk(KERN_ERR "e100: %s: non-TX command (%x) "
- "timeout\n", bdp->device->name, sub_cmd);
-#endif
- rc = false;
- goto exit;
- }
- rmb();
- }
-
-exit:
- e100_free_non_tx_cmd(bdp, command);
-
- if (netif_running(bdp->device))
- netif_wake_queue(bdp->device);
-
- spin_unlock_bh(&(bdp->bd_non_tx_lock));
- return rc;
-
-delayed_exec:
- spin_unlock_bh(&(bdp->bd_non_tx_lock));
- return e100_delayed_exec_non_cu_cmd(bdp, command);
-}
-
-/**
- * e100_sw_reset
- * @bdp: adapter's private data struct
- * @reset_cmd: s/w reset or selective reset
- *
- * This routine will issue a software reset to the adapter. It
- * will also disable interrupts, as they are enabled after reset.
- */
-void
-e100_sw_reset(struct e100_private *bdp, u32 reset_cmd)
-{
- /* Do a selective reset first to avoid a potential PCI hang */
- writel(PORT_SELECTIVE_RESET, &bdp->scb->scb_port);
- readw(&(bdp->scb->scb_status)); /* flushes last write, read-safe */
-
- /* wait for the reset to take effect */
- udelay(20);
- if (reset_cmd == PORT_SOFTWARE_RESET) {
- writel(PORT_SOFTWARE_RESET, &bdp->scb->scb_port);
-
- /* wait 20 microseconds for the reset to take effect */
- udelay(20);
- }
-
- /* Mask off our interrupt line -- it is unmasked after reset */
- e100_disable_clear_intr(bdp);
-#ifdef E100_CU_DEBUG
- bdp->last_cmd = 0;
- bdp->last_sub_cmd = 0;
-#endif
-}
-
-/**
- * e100_load_microcode - download microcode to the controller
- * @bdp: adapter's private data struct
- *
- * This routine downloads microcode on to the controller. This
- * microcode is available for the 82558/9, 82550. Currently the
- * microcode handles interrupt bundling and TCO workaround.
- *
- * Returns:
- * true: if successful
- * false: otherwise
- */
-static unsigned char
-e100_load_microcode(struct e100_private *bdp)
-{
- static struct {
- u8 rev_id;
- u32 ucode[UCODE_MAX_DWORDS + 1];
- int timer_dword;
- int bundle_dword;
- int min_size_dword;
- } ucode_opts[] = {
- { D101A4_REV_ID,
- D101_A_RCVBUNDLE_UCODE,
- D101_CPUSAVER_TIMER_DWORD,
- D101_CPUSAVER_BUNDLE_DWORD,
- D101_CPUSAVER_MIN_SIZE_DWORD },
- { D101B0_REV_ID,
- D101_B0_RCVBUNDLE_UCODE,
- D101_CPUSAVER_TIMER_DWORD,
- D101_CPUSAVER_BUNDLE_DWORD,
- D101_CPUSAVER_MIN_SIZE_DWORD },
- { D101MA_REV_ID,
- D101M_B_RCVBUNDLE_UCODE,
- D101M_CPUSAVER_TIMER_DWORD,
- D101M_CPUSAVER_BUNDLE_DWORD,
- D101M_CPUSAVER_MIN_SIZE_DWORD },
- { D101S_REV_ID,
- D101S_RCVBUNDLE_UCODE,
- D101S_CPUSAVER_TIMER_DWORD,
- D101S_CPUSAVER_BUNDLE_DWORD,
- D101S_CPUSAVER_MIN_SIZE_DWORD },
- { D102_REV_ID,
- D102_B_RCVBUNDLE_UCODE,
- D102_B_CPUSAVER_TIMER_DWORD,
- D102_B_CPUSAVER_BUNDLE_DWORD,
- D102_B_CPUSAVER_MIN_SIZE_DWORD },
- { D102C_REV_ID,
- D102_C_RCVBUNDLE_UCODE,
- D102_C_CPUSAVER_TIMER_DWORD,
- D102_C_CPUSAVER_BUNDLE_DWORD,
- D102_C_CPUSAVER_MIN_SIZE_DWORD },
- { D102E_REV_ID,
- D102_E_RCVBUNDLE_UCODE,
- D102_E_CPUSAVER_TIMER_DWORD,
- D102_E_CPUSAVER_BUNDLE_DWORD,
- D102_E_CPUSAVER_MIN_SIZE_DWORD },
- { 0, {0}, 0, 0, 0}
- }, *opts;
-
- opts = ucode_opts;
-
- /* User turned ucode loading off */
- if (!(bdp->params.b_params & PRM_UCODE))
- return false;
-
- /* These controllers do not need ucode */
- if (bdp->flags & IS_ICH)
- return false;
-
- /* Search for ucode match against h/w rev_id */
- while (opts->rev_id) {
- if (bdp->rev_id == opts->rev_id) {
- int i;
- u32 *ucode_dword;
- load_ucode_cb_t *ucode_cmd_ptr;
- nxmit_cb_entry_t *cmd = e100_alloc_non_tx_cmd(bdp);
-
- if (cmd != NULL) {
- ucode_cmd_ptr =
- (load_ucode_cb_t *) cmd->non_tx_cmd;
- ucode_dword = ucode_cmd_ptr->ucode_dword;
- } else {
- return false;
- }
-
- memcpy(ucode_dword, opts->ucode, sizeof (opts->ucode));
-
- /* Insert user-tunable settings */
- ucode_dword[opts->timer_dword] &= 0xFFFF0000;
- ucode_dword[opts->timer_dword] |=
- (u16) bdp->params.IntDelay;
- ucode_dword[opts->bundle_dword] &= 0xFFFF0000;
- ucode_dword[opts->bundle_dword] |=
- (u16) bdp->params.BundleMax;
- ucode_dword[opts->min_size_dword] &= 0xFFFF0000;
- ucode_dword[opts->min_size_dword] |=
- (bdp->params.b_params & PRM_BUNDLE_SMALL) ?
- 0xFFFF : 0xFF80;
-
- for (i = 0; i < UCODE_MAX_DWORDS; i++)
- cpu_to_le32s(&(ucode_dword[i]));
-
- ucode_cmd_ptr->load_ucode_cbhdr.cb_cmd =
- __constant_cpu_to_le16(CB_LOAD_MICROCODE);
-
- return e100_exec_non_cu_cmd(bdp, cmd);
- }
- opts++;
- }
-
- return false;
-}
-
-/***************************************************************************/
-/***************************************************************************/
-/* EEPROM Functions */
-/***************************************************************************/
-
-/* Read PWA (printed wiring assembly) number */
-void
-e100_rd_pwa_no(struct e100_private *bdp)
-{
- bdp->pwa_no = e100_eeprom_read(bdp, EEPROM_PWA_NO);
- bdp->pwa_no <<= 16;
- bdp->pwa_no |= e100_eeprom_read(bdp, EEPROM_PWA_NO + 1);
-}
-
-/* Read the permanent ethernet address from the EEPROM. */
-void
-e100_rd_eaddr(struct e100_private *bdp)
-{
- int i;
- u16 eeprom_word;
-
- for (i = 0; i < 6; i += 2) {
- eeprom_word =
- e100_eeprom_read(bdp,
- EEPROM_NODE_ADDRESS_BYTE_0 + (i / 2));
-
- bdp->device->dev_addr[i] =
- bdp->perm_node_address[i] = (u8) eeprom_word;
- bdp->device->dev_addr[i + 1] =
- bdp->perm_node_address[i + 1] = (u8) (eeprom_word >> 8);
- }
-}
-
-/* Check the D102 RFD flags to see if the checksum passed */
-static unsigned char
-e100_D102_check_checksum(rfd_t *rfd)
-{
- if (((le16_to_cpu(rfd->rfd_header.cb_status)) & RFD_PARSE_BIT)
- && (((rfd->rcvparserstatus & CHECKSUM_PROTOCOL_MASK) ==
- RFD_TCP_PACKET)
- || ((rfd->rcvparserstatus & CHECKSUM_PROTOCOL_MASK) ==
- RFD_UDP_PACKET))
- && (rfd->checksumstatus & TCPUDP_CHECKSUM_BIT_VALID)
- && (rfd->checksumstatus & TCPUDP_CHECKSUM_VALID)) {
- return CHECKSUM_UNNECESSARY;
- }
- return CHECKSUM_NONE;
-}
-
-/**
- * e100_D101M_checksum
- * @bdp: adapter's private data struct
- * @skb: skb received
- *
- * Sets the skb->csum value from the D101 csum found at the end of the Rx
- * frame. The D101M sums all words in the frame excluding the ethernet II
- * header (14 bytes), so if the packet is ethernet II and the protocol is
- * IP, all that is needed is to assign this value to skb->csum.
- */
-static unsigned char
-e100_D101M_checksum(struct e100_private *bdp, struct sk_buff *skb)
-{
- unsigned short proto = (skb->protocol);
-
- if (proto == __constant_htons(ETH_P_IP)) {
-
- skb->csum = get_unaligned((u16 *) (skb->tail));
- return CHECKSUM_HW;
- }
- return CHECKSUM_NONE;
-}
-
-/***************************************************************************/
-/***************************************************************************/
-/***************************************************************************/
-/***************************************************************************/
-/* Auxiliary Functions */
-/***************************************************************************/
-
-/* Print the board's configuration */
-void
-e100_print_brd_conf(struct e100_private *bdp)
-{
- /* Print the string if checksum Offloading was enabled */
- if (bdp->flags & DF_CSUM_OFFLOAD)
- printk(KERN_NOTICE " Hardware receive checksums enabled\n");
- else {
- if (bdp->rev_id >= D101MA_REV_ID)
- printk(KERN_NOTICE " Hardware receive checksums disabled\n");
- }
-
- if ((bdp->flags & DF_UCODE_LOADED))
- printk(KERN_NOTICE " cpu cycle saver enabled\n");
-}
-
-/**
- * e100_pci_setup - setup the adapter's PCI information
- * @pcid: adapter's pci_dev struct
- * @bdp: adapter's private data struct
- *
- * This routine sets up all PCI information for the adapter. It enables the bus
- * master bit (some BIOSes don't do this), requests memory and I/O regions, and
- * calls ioremap() on the adapter's memory region.
- *
- * Returns:
- * true: if successful
- * false: otherwise
- */
-static unsigned char
-e100_pci_setup(struct pci_dev *pcid, struct e100_private *bdp)
-{
- struct net_device *dev = bdp->device;
- int rc = 0;
-
- if ((rc = pci_enable_device(pcid)) != 0) {
- goto err;
- }
-
- /* dev and ven ID have already been checked so it is our device */
- pci_read_config_byte(pcid, PCI_REVISION_ID, (u8 *) &(bdp->rev_id));
-
- /* address #0 is a memory region */
- dev->mem_start = pci_resource_start(pcid, 0);
- dev->mem_end = dev->mem_start + sizeof (scb_t);
-
- /* address #1 is an I/O region */
- dev->base_addr = pci_resource_start(pcid, 1);
-
- if ((rc = pci_request_regions(pcid, e100_short_driver_name)) != 0) {
- goto err_disable;
- }
-
- pci_enable_wake(pcid, 0, 0);
-
- /* if Bus Mastering is off, turn it on! */
- pci_set_master(pcid);
-
- /* address #0 is a memory mapping */
- bdp->scb = (scb_t *) ioremap_nocache(dev->mem_start, sizeof (scb_t));
-
- if (!bdp->scb) {
- printk(KERN_ERR "e100: %s: Failed to map PCI address 0x%lX\n",
- dev->name, pci_resource_start(pcid, 0));
- rc = -ENOMEM;
- goto err_region;
- }
-
- return 0;
-
-err_region:
- pci_release_regions(pcid);
-err_disable:
- pci_disable_device(pcid);
-err:
- return rc;
-}
-
-void
-e100_isolate_driver(struct e100_private *bdp)
-{
-
- /* Check if interface is up */
- /* NOTE: Can't use netif_running(bdp->device) because */
- /* dev_close clears __LINK_STATE_START before calling */
- /* e100_close (aka dev->stop) */
- if (bdp->device->flags & IFF_UP) {
- e100_disable_clear_intr(bdp);
- del_timer_sync(&bdp->watchdog_timer);
- netif_carrier_off(bdp->device);
- netif_stop_queue(bdp->device);
- bdp->last_tcb = NULL;
- }
- e100_sw_reset(bdp, PORT_SELECTIVE_RESET);
-}
-
-static void
-e100_tcb_add_C_bit(struct e100_private *bdp)
-{
- tcb_t *tcb = (tcb_t *) bdp->tcb_pool.data;
- int i;
-
- for (i = 0; i < bdp->params.TxDescriptors; i++, tcb++) {
- tcb->tcb_hdr.cb_status |= cpu_to_le16(CB_STATUS_COMPLETE);
- }
-}
-
-/*
- * Procedure: e100_configure_device
- *
- * Description: This routine will configure the device
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- *
- * Returns:
- * true upon success
- * false upon failure
- */
-unsigned char
-e100_configure_device(struct e100_private *bdp)
-{
- /*load CU & RU base */
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_CUC_LOAD_BASE, 0))
- return false;
-
- if (e100_load_microcode(bdp))
- bdp->flags |= DF_UCODE_LOADED;
-
- if (!e100_wait_exec_cmplx(bdp, 0, SCB_RUC_LOAD_BASE, 0))
- return false;
-
- /* Issue the load dump counters address command */
- if (!e100_wait_exec_cmplx(bdp, bdp->stat_cnt_phys, SCB_CUC_DUMP_ADDR, 0))
- return false;
-
- if (!e100_setup_iaaddr(bdp, bdp->device->dev_addr)) {
- printk(KERN_ERR "e100: e100_configure_device: "
- "setup iaaddr failed\n");
- return false;
- }
-
- e100_set_multi_exec(bdp->device);
-
- /* Change for 82558 enhancement */
- /* If 82558/9 and if the user has enabled flow control, set up */
- /* flow Control Reg. in the CSR */
- if ((bdp->flags & IS_BACHELOR)
- && (bdp->params.b_params & PRM_FC)) {
- writeb(DFLT_FC_THLD,
- &bdp->scb->scb_ext.d101_scb.scb_fc_thld);
- writeb(DFLT_FC_CMD,
- &bdp->scb->scb_ext.d101_scb.scb_fc_xon_xoff);
- }
-
- e100_force_config(bdp);
-
- return true;
-}
-
-void
-e100_deisolate_driver(struct e100_private *bdp, u8 full_reset)
-{
- u32 cmd = full_reset ? PORT_SOFTWARE_RESET : PORT_SELECTIVE_RESET;
- e100_sw_reset(bdp, cmd);
- if (cmd == PORT_SOFTWARE_RESET) {
- if (!e100_configure_device(bdp))
- printk(KERN_ERR "e100: e100_deisolate_driver:"
- " device configuration failed\n");
- }
-
- if (netif_running(bdp->device)) {
-
- bdp->next_cu_cmd = START_WAIT;
- bdp->last_tcb = NULL;
-
- e100_start_ru(bdp);
-
- /* relaunch watchdog timer in 2 sec */
- mod_timer(&(bdp->watchdog_timer), jiffies + (2 * HZ));
-
- // we must clear tcbs since we may have lost a Tx interrupt
- // or have unsent frames on the tcb chain
- e100_tcb_add_C_bit(bdp);
- e100_tx_srv(bdp);
- netif_wake_queue(bdp->device);
- e100_set_intr_mask(bdp);
- }
-}
-
-static int
-e100_do_ethtool_ioctl(struct net_device *dev, struct ifreq *ifr)
-{
- struct ethtool_cmd ecmd;
- int rc = -EOPNOTSUPP;
-
- if (copy_from_user(&ecmd, ifr->ifr_data, sizeof (ecmd.cmd)))
- return -EFAULT;
-
- switch (ecmd.cmd) {
- case ETHTOOL_GSET:
- rc = e100_ethtool_get_settings(dev, ifr);
- break;
- case ETHTOOL_SSET:
- rc = e100_ethtool_set_settings(dev, ifr);
- break;
- case ETHTOOL_GDRVINFO:
- rc = e100_ethtool_get_drvinfo(dev, ifr);
- break;
- case ETHTOOL_GREGS:
- rc = e100_ethtool_gregs(dev, ifr);
- break;
- case ETHTOOL_NWAY_RST:
- rc = e100_ethtool_nway_rst(dev, ifr);
- break;
- case ETHTOOL_GLINK:
- rc = e100_ethtool_glink(dev, ifr);
- break;
- case ETHTOOL_GEEPROM:
- case ETHTOOL_SEEPROM:
- rc = e100_ethtool_eeprom(dev, ifr);
- break;
- case ETHTOOL_GSTATS: {
- struct {
- struct ethtool_stats cmd;
- uint64_t data[E100_STATS_LEN];
- } stats = { {ETHTOOL_GSTATS, E100_STATS_LEN} };
- struct e100_private *bdp = dev->priv;
- void *addr = ifr->ifr_data;
- int i;
-
- for(i = 0; i < E100_STATS_LEN; i++)
- stats.data[i] =
- ((unsigned long *)&bdp->drv_stats.net_stats)[i];
- if(copy_to_user(addr, &stats, sizeof(stats)))
- return -EFAULT;
- return 0;
- }
- case ETHTOOL_GWOL:
- case ETHTOOL_SWOL:
- rc = e100_ethtool_wol(dev, ifr);
- break;
- case ETHTOOL_TEST:
- rc = e100_ethtool_test(dev, ifr);
- break;
- case ETHTOOL_GSTRINGS:
- rc = e100_ethtool_gstrings(dev,ifr);
- break;
- case ETHTOOL_PHYS_ID:
- rc = e100_ethtool_led_blink(dev,ifr);
- break;
-#ifdef ETHTOOL_GRINGPARAM
- case ETHTOOL_GRINGPARAM: {
- struct ethtool_ringparam ering;
- struct e100_private *bdp = dev->priv;
- memset((void *) &ering, 0, sizeof(ering));
- ering.rx_max_pending = E100_MAX_RFD;
- ering.tx_max_pending = E100_MAX_TCB;
- ering.rx_pending = bdp->params.RxDescriptors;
- ering.tx_pending = bdp->params.TxDescriptors;
- rc = copy_to_user(ifr->ifr_data, &ering, sizeof(ering))
- ? -EFAULT : 0;
- return rc;
- }
-#endif
-#ifdef ETHTOOL_SRINGPARAM
- case ETHTOOL_SRINGPARAM: {
- struct ethtool_ringparam ering;
- struct e100_private *bdp = dev->priv;
- if (copy_from_user(&ering, ifr->ifr_data, sizeof(ering)))
- return -EFAULT;
- if (ering.rx_pending > E100_MAX_RFD
- || ering.rx_pending < E100_MIN_RFD)
- return -EINVAL;
- if (ering.tx_pending > E100_MAX_TCB
- || ering.tx_pending < E100_MIN_TCB)
- return -EINVAL;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- /* Use new values to open interface */
- bdp->params.RxDescriptors = ering.rx_pending;
- bdp->params.TxDescriptors = ering.tx_pending;
- e100_hw_init(bdp);
- e100_open(dev);
- }
- else {
- bdp->params.RxDescriptors = ering.rx_pending;
- bdp->params.TxDescriptors = ering.tx_pending;
- }
- return 0;
- }
-#endif
-#ifdef ETHTOOL_GPAUSEPARAM
- case ETHTOOL_GPAUSEPARAM: {
- struct ethtool_pauseparam epause;
- struct e100_private *bdp = dev->priv;
- memset((void *) &epause, 0, sizeof(epause));
- if ((bdp->flags & IS_BACHELOR)
- && (bdp->params.b_params & PRM_FC)) {
- epause.autoneg = 1;
- if (bdp->flags & DF_LINK_FC_CAP) {
- epause.rx_pause = 1;
- epause.tx_pause = 1;
- }
- if (bdp->flags & DF_LINK_FC_TX_ONLY)
- epause.tx_pause = 1;
- }
- rc = copy_to_user(ifr->ifr_data, &epause, sizeof(epause))
- ? -EFAULT : 0;
- return rc;
- }
-#endif
-#ifdef ETHTOOL_SPAUSEPARAM
- case ETHTOOL_SPAUSEPARAM: {
- struct ethtool_pauseparam epause;
- struct e100_private *bdp = dev->priv;
- if (!(bdp->flags & IS_BACHELOR))
- return -EINVAL;
- if (copy_from_user(&epause, ifr->ifr_data, sizeof(epause)))
- return -EFAULT;
- if (epause.autoneg == 1)
- bdp->params.b_params |= PRM_FC;
- else
- bdp->params.b_params &= ~PRM_FC;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- return 0;
- }
-#endif
-#ifdef ETHTOOL_GRXCSUM
- case ETHTOOL_GRXCSUM:
- case ETHTOOL_GTXCSUM:
- case ETHTOOL_GSG:
- { struct ethtool_value eval;
- struct e100_private *bdp = dev->priv;
- memset((void *) &eval, 0, sizeof(eval));
- if ((ecmd.cmd == ETHTOOL_GRXCSUM)
- && (bdp->params.b_params & PRM_XSUMRX))
- eval.data = 1;
- else
- eval.data = 0;
- rc = copy_to_user(ifr->ifr_data, &eval, sizeof(eval))
- ? -EFAULT : 0;
- return rc;
- }
-#endif
-#ifdef ETHTOOL_SRXCSUM
- case ETHTOOL_SRXCSUM:
- case ETHTOOL_STXCSUM:
- case ETHTOOL_SSG:
- { struct ethtool_value eval;
- struct e100_private *bdp = dev->priv;
- if (copy_from_user(&eval, ifr->ifr_data, sizeof(eval)))
- return -EFAULT;
- if (ecmd.cmd == ETHTOOL_SRXCSUM) {
- if (eval.data == 1) {
- if (bdp->rev_id >= D101MA_REV_ID)
- bdp->params.b_params |= PRM_XSUMRX;
- else
- return -EINVAL;
- } else {
- if (bdp->rev_id >= D101MA_REV_ID)
- bdp->params.b_params &= ~PRM_XSUMRX;
- else
- return 0;
- }
- } else {
- if (eval.data == 1)
- return -EINVAL;
- else
- return 0;
- }
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- return 0;
- }
-#endif
- default:
- break;
- } //switch
- return rc;
-}
-
-static int
-e100_ethtool_get_settings(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_cmd ecmd;
- u16 advert = 0;
-
- memset((void *) &ecmd, 0, sizeof (ecmd));
-
- bdp = dev->priv;
-
- ecmd.supported = bdp->speed_duplex_caps;
-
- ecmd.port =
- (bdp->speed_duplex_caps & SUPPORTED_TP) ? PORT_TP : PORT_FIBRE;
- ecmd.transceiver = XCVR_INTERNAL;
- ecmd.phy_address = bdp->phy_addr;
-
- if (netif_carrier_ok(bdp->device)) {
- ecmd.speed = bdp->cur_line_speed;
- ecmd.duplex =
- (bdp->cur_dplx_mode == HALF_DUPLEX) ? DUPLEX_HALF : DUPLEX_FULL;
- }
- else {
- ecmd.speed = -1;
- ecmd.duplex = -1;
- }
-
- ecmd.advertising = ADVERTISED_TP;
-
- if (bdp->params.e100_speed_duplex == E100_AUTONEG) {
- ecmd.autoneg = AUTONEG_ENABLE;
- ecmd.advertising |= ADVERTISED_Autoneg;
- } else {
- ecmd.autoneg = AUTONEG_DISABLE;
- }
-
- if (bdp->speed_duplex_caps & SUPPORTED_MII) {
- e100_mdi_read(bdp, MII_ADVERTISE, bdp->phy_addr, &advert);
-
- if (advert & ADVERTISE_10HALF)
- ecmd.advertising |= ADVERTISED_10baseT_Half;
- if (advert & ADVERTISE_10FULL)
- ecmd.advertising |= ADVERTISED_10baseT_Full;
- if (advert & ADVERTISE_100HALF)
- ecmd.advertising |= ADVERTISED_100baseT_Half;
- if (advert & ADVERTISE_100FULL)
- ecmd.advertising |= ADVERTISED_100baseT_Full;
- } else {
- ecmd.autoneg = AUTONEG_DISABLE;
- ecmd.advertising &= ~ADVERTISED_Autoneg;
- }
-
- if (copy_to_user(ifr->ifr_data, &ecmd, sizeof (ecmd)))
- return -EFAULT;
-
- return 0;
-}
-
-static int
-e100_ethtool_set_settings(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- int e100_new_speed_duplex;
- int ethtool_new_speed_duplex;
- struct ethtool_cmd ecmd;
-
- bdp = dev->priv;
- if (copy_from_user(&ecmd, ifr->ifr_data, sizeof (ecmd))) {
- return -EFAULT;
- }
-
- if ((ecmd.autoneg == AUTONEG_ENABLE)
- && (bdp->speed_duplex_caps & SUPPORTED_Autoneg)) {
- bdp->params.e100_speed_duplex = E100_AUTONEG;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- } else {
- if (ecmd.speed == SPEED_10) {
- if (ecmd.duplex == DUPLEX_HALF) {
- e100_new_speed_duplex =
- E100_SPEED_10_HALF;
- ethtool_new_speed_duplex =
- SUPPORTED_10baseT_Half;
- } else {
- e100_new_speed_duplex =
- E100_SPEED_10_FULL;
- ethtool_new_speed_duplex =
- SUPPORTED_10baseT_Full;
- }
- } else {
- if (ecmd.duplex == DUPLEX_HALF) {
- e100_new_speed_duplex =
- E100_SPEED_100_HALF;
- ethtool_new_speed_duplex =
- SUPPORTED_100baseT_Half;
- } else {
- e100_new_speed_duplex =
- E100_SPEED_100_FULL;
- ethtool_new_speed_duplex =
- SUPPORTED_100baseT_Full;
- }
- }
-
- if (bdp->speed_duplex_caps & ethtool_new_speed_duplex) {
- bdp->params.e100_speed_duplex =
- e100_new_speed_duplex;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- } else {
- return -EOPNOTSUPP;
- }
- }
-
- return 0;
-}
-
-static int
-e100_ethtool_glink(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_value info;
-
- memset((void *) &info, 0, sizeof (info));
-
- bdp = dev->priv;
- info.cmd = ETHTOOL_GLINK;
-
- /* Consider both PHY link and netif_running */
- info.data = e100_update_link_state(bdp);
-
- if (copy_to_user(ifr->ifr_data, &info, sizeof (info)))
- return -EFAULT;
-
- return 0;
-}
-
-static int
-e100_ethtool_test(struct net_device *dev, struct ifreq *ifr)
-{
- struct ethtool_test *info;
- int rc = -EFAULT;
-
- info = kmalloc(sizeof(*info) + max_test_res * sizeof(u64),
- GFP_ATOMIC);
-
- if (!info)
- return -ENOMEM;
-
- memset((void *) info, 0, sizeof(*info) +
- max_test_res * sizeof(u64));
-
- if (copy_from_user(info, ifr->ifr_data, sizeof(*info)))
- goto exit;
-
- info->flags = e100_run_diag(dev, info->data, info->flags);
-
- if (!copy_to_user(ifr->ifr_data, info,
- sizeof(*info) + max_test_res * sizeof(u64)))
- rc = 0;
-exit:
- kfree(info);
- return rc;
-}
-
-static int
-e100_ethtool_gregs(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- u32 regs_buff[E100_REGS_LEN];
- struct ethtool_regs regs = {ETHTOOL_GREGS};
- void *addr = ifr->ifr_data;
- u16 mdi_reg;
-
- bdp = dev->priv;
-
- if(copy_from_user(&regs, addr, sizeof(regs)))
- return -EFAULT;
-
- regs.version = (1 << 24) | bdp->rev_id;
- regs_buff[0] = readb(&(bdp->scb->scb_cmd_hi)) << 24 |
- readb(&(bdp->scb->scb_cmd_low)) << 16 |
- readw(&(bdp->scb->scb_status));
- e100_mdi_read(bdp, MII_NCONFIG, bdp->phy_addr, &mdi_reg);
- regs_buff[1] = mdi_reg;
-
- if(copy_to_user(addr, &regs, sizeof(regs)))
- return -EFAULT;
-
- addr += offsetof(struct ethtool_regs, data);
- if(copy_to_user(addr, regs_buff, regs.len))
- return -EFAULT;
-
- return 0;
-}
-
-static int
-e100_ethtool_nway_rst(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
-
- bdp = dev->priv;
-
- if ((bdp->speed_duplex_caps & SUPPORTED_Autoneg) &&
- (bdp->params.e100_speed_duplex == E100_AUTONEG)) {
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- } else {
- return -EFAULT;
- }
- return 0;
-}
-
-static int
-e100_ethtool_get_drvinfo(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_drvinfo info;
-
- memset((void *) &info, 0, sizeof (info));
-
- bdp = dev->priv;
-
- strncpy(info.driver, e100_short_driver_name, sizeof (info.driver) - 1);
- strncpy(info.version, e100_driver_version, sizeof (info.version) - 1);
- strncpy(info.fw_version, "N/A",
- sizeof (info.fw_version) - 1);
- strncpy(info.bus_info, pci_name(bdp->pdev),
- sizeof (info.bus_info) - 1);
- info.n_stats = E100_STATS_LEN;
- info.regdump_len = E100_REGS_LEN * sizeof(u32);
- info.eedump_len = (bdp->eeprom_size << 1);
- info.testinfo_len = max_test_res;
- if (copy_to_user(ifr->ifr_data, &info, sizeof (info)))
- return -EFAULT;
-
- return 0;
-}
-
-static int
-e100_ethtool_eeprom(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_eeprom ecmd;
- u16 eeprom_data[256];
- u16 *usr_eeprom_ptr;
- u16 first_word, last_word;
- int i, max_len;
- void *ptr;
- u8 *eeprom_data_bytes = (u8 *)eeprom_data;
-
- bdp = dev->priv;
-
- if (copy_from_user(&ecmd, ifr->ifr_data, sizeof (ecmd)))
- return -EFAULT;
-
- usr_eeprom_ptr =
- (u16 *) (ifr->ifr_data + offsetof(struct ethtool_eeprom, data));
-
- max_len = bdp->eeprom_size * 2;
-
- if (ecmd.offset > ecmd.offset + ecmd.len)
- return -EINVAL;
-
- if ((ecmd.offset + ecmd.len) > max_len)
- ecmd.len = (max_len - ecmd.offset);
-
- first_word = ecmd.offset >> 1;
- last_word = (ecmd.offset + ecmd.len - 1) >> 1;
-
- if (first_word >= bdp->eeprom_size)
- return -EFAULT;
-
- if (ecmd.cmd == ETHTOOL_GEEPROM) {
- for(i = 0; i <= (last_word - first_word); i++)
- eeprom_data[i] = e100_eeprom_read(bdp, first_word + i);
-
- ecmd.magic = E100_EEPROM_MAGIC;
-
- if (copy_to_user(ifr->ifr_data, &ecmd, sizeof (ecmd)))
- return -EFAULT;
-
- if(ecmd.offset & 1)
- eeprom_data_bytes++;
- if (copy_to_user(usr_eeprom_ptr, eeprom_data_bytes, ecmd.len))
- return -EFAULT;
- } else {
- if (ecmd.magic != E100_EEPROM_MAGIC)
- return -EFAULT;
-
- ptr = (void *)eeprom_data;
- if(ecmd.offset & 1) {
- /* need modification of first changed EEPROM word */
- /* only the second byte of the word is being modified */
- eeprom_data[0] = e100_eeprom_read(bdp, first_word);
- ptr++;
- }
- if((ecmd.offset + ecmd.len) & 1) {
- /* need modification of last changed EEPROM word */
- /* only the first byte of the word is being modified */
- eeprom_data[last_word - first_word] =
- e100_eeprom_read(bdp, last_word);
- }
- if(copy_from_user(ptr, usr_eeprom_ptr, ecmd.len))
- return -EFAULT;
-
- e100_eeprom_write_block(bdp, first_word, eeprom_data,
- last_word - first_word + 1);
-
- if (copy_to_user(ifr->ifr_data, &ecmd, sizeof (ecmd)))
- return -EFAULT;
- }
- return 0;
-}
-
-#define E100_BLINK_INTERVAL (HZ/4)
-/**
- * e100_led_control
- * @bdp: adapter's private data struct
- * @led_mdi_op: led operation
- *
- * Software control over adapter's led. The possible operations are:
- * TURN LED OFF, TURN LED ON and RETURN LED CONTROL TO HARDWARE.
- */
-static void
-e100_led_control(struct e100_private *bdp, u16 led_mdi_op)
-{
- e100_mdi_write(bdp, PHY_82555_LED_SWITCH_CONTROL,
- bdp->phy_addr, led_mdi_op);
-
-}
-/**
- * e100_led_blink_callback
- * @data: pointer to adapter's private data struct
- *
- * Blink timer callback function. Toggles ON/OFF led status bit and calls
- * led hardware access function.
- */
-static void
-e100_led_blink_callback(unsigned long data)
-{
- struct e100_private *bdp = (struct e100_private *) data;
-
- if(bdp->flags & LED_IS_ON) {
- bdp->flags &= ~LED_IS_ON;
- e100_led_control(bdp, PHY_82555_LED_OFF);
- } else {
- bdp->flags |= LED_IS_ON;
- if (bdp->rev_id >= D101MA_REV_ID)
- e100_led_control(bdp, PHY_82555_LED_ON_559);
- else
- e100_led_control(bdp, PHY_82555_LED_ON_PRE_559);
- }
-
- mod_timer(&bdp->blink_timer, jiffies + E100_BLINK_INTERVAL);
-}
-/**
- * e100_ethtool_led_blink
- * @dev: pointer to adapter's net_device struct
- * @ifr: pointer to ioctl request structure
- *
- * Blink led ioctl handler. Initiates the blink timer and sleeps until the
- * blink period expires. Then it kills the timer and returns. The led
- * control is returned to hardware when the blink timer is killed.
- */
-static int
-e100_ethtool_led_blink(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_value ecmd;
-
- bdp = dev->priv;
-
- if (copy_from_user(&ecmd, ifr->ifr_data, sizeof (ecmd)))
- return -EFAULT;
-
- if(!bdp->blink_timer.function) {
- init_timer(&bdp->blink_timer);
- bdp->blink_timer.function = e100_led_blink_callback;
- bdp->blink_timer.data = (unsigned long) bdp;
- }
-
- mod_timer(&bdp->blink_timer, jiffies);
-
- set_current_state(TASK_INTERRUPTIBLE);
-
- if ((!ecmd.data) || (ecmd.data > (u32)(MAX_SCHEDULE_TIMEOUT / HZ)))
- ecmd.data = (u32)(MAX_SCHEDULE_TIMEOUT / HZ);
-
- schedule_timeout(ecmd.data * HZ);
-
- del_timer_sync(&bdp->blink_timer);
-
- e100_led_control(bdp, PHY_82555_LED_NORMAL_CONTROL);
-
- return 0;
-}
-
-static inline int
-e100_10BaseT_adapter(struct e100_private *bdp)
-{
- return ((bdp->pdev->device == 0x1229) &&
- (bdp->pdev->subsystem_vendor == 0x8086) &&
- (bdp->pdev->subsystem_device == 0x0003));
-}
-
-static void
-e100_get_speed_duplex_caps(struct e100_private *bdp)
-{
- u16 status;
-
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &status);
-
- bdp->speed_duplex_caps = 0;
-
- bdp->speed_duplex_caps |=
- (status & BMSR_ANEGCAPABLE) ? SUPPORTED_Autoneg : 0;
-
- bdp->speed_duplex_caps |=
- (status & BMSR_10HALF) ? SUPPORTED_10baseT_Half : 0;
-
- bdp->speed_duplex_caps |=
- (status & BMSR_10FULL) ? SUPPORTED_10baseT_Full : 0;
-
- bdp->speed_duplex_caps |=
- (status & BMSR_100HALF) ? SUPPORTED_100baseT_Half : 0;
-
- bdp->speed_duplex_caps |=
- (status & BMSR_100FULL) ? SUPPORTED_100baseT_Full : 0;
-
- if (IS_NC3133(bdp))
- bdp->speed_duplex_caps =
- (SUPPORTED_FIBRE | SUPPORTED_100baseT_Full);
- else
- bdp->speed_duplex_caps |= SUPPORTED_TP;
-
- if ((status == 0xFFFF) && e100_10BaseT_adapter(bdp)) {
- bdp->speed_duplex_caps =
- (SUPPORTED_10baseT_Half | SUPPORTED_TP);
- } else {
- bdp->speed_duplex_caps |= SUPPORTED_MII;
- }
-
-}
-
-#ifdef CONFIG_PM
-static unsigned char
-e100_setup_filter(struct e100_private *bdp)
-{
- cb_header_t *ntcb_hdr;
- unsigned char res = false;
- nxmit_cb_entry_t *cmd;
-
- if ((cmd = e100_alloc_non_tx_cmd(bdp)) == NULL) {
- goto exit;
- }
-
- ntcb_hdr = (cb_header_t *) cmd->non_tx_cmd;
- ntcb_hdr->cb_cmd = __constant_cpu_to_le16(CB_LOAD_FILTER);
-
- /* Set EL and FIX bit */
- (cmd->non_tx_cmd)->ntcb.filter.filter_data[0] =
- __constant_cpu_to_le32(CB_FILTER_EL | CB_FILTER_FIX);
-
- if (bdp->wolopts & WAKE_UCAST) {
- (cmd->non_tx_cmd)->ntcb.filter.filter_data[0] |=
- __constant_cpu_to_le32(CB_FILTER_IA_MATCH);
- }
-
- if (bdp->wolopts & WAKE_ARP) {
- /* Setup ARP bit and lower IP parts */
- /* bdp->ip_lbytes contains 2 lower bytes of IP address in network byte order */
- (cmd->non_tx_cmd)->ntcb.filter.filter_data[0] |=
- cpu_to_le32(CB_FILTER_ARP | bdp->ip_lbytes);
- }
-
- res = e100_exec_non_cu_cmd(bdp, cmd);
- if (!res)
- printk(KERN_WARNING "e100: %s: Filter setup failed\n",
- bdp->device->name);
-
-exit:
- return res;
-
-}
-
-static void
-e100_do_wol(struct pci_dev *pcid, struct e100_private *bdp)
-{
- e100_config_wol(bdp);
-
- if (e100_config(bdp)) {
- if (bdp->wolopts & (WAKE_UCAST | WAKE_ARP))
- if (!e100_setup_filter(bdp))
- printk(KERN_ERR
- "e100: WOL options failed\n");
- } else {
- printk(KERN_ERR "e100: config WOL failed\n");
- }
-}
-#endif
-
-static u16
-e100_get_ip_lbytes(struct net_device *dev)
-{
- struct in_ifaddr *ifa;
- struct in_device *in_dev;
- u32 res = 0;
-
- in_dev = (struct in_device *) dev->ip_ptr;
- /* Check if any in_device bound to interface */
- if (in_dev) {
- /* Check if any IP address is bound to interface */
- if ((ifa = in_dev->ifa_list) != NULL) {
- res = __constant_ntohl(ifa->ifa_address);
- res = __constant_htons(res & 0x0000ffff);
- }
- }
- return res;
-}
-
-static int
-e100_ethtool_wol(struct net_device *dev, struct ifreq *ifr)
-{
- struct e100_private *bdp;
- struct ethtool_wolinfo wolinfo;
- int res = 0;
-
- bdp = dev->priv;
-
- if (copy_from_user(&wolinfo, ifr->ifr_data, sizeof (wolinfo))) {
- return -EFAULT;
- }
-
- switch (wolinfo.cmd) {
- case ETHTOOL_GWOL:
- wolinfo.supported = bdp->wolsupported;
- wolinfo.wolopts = bdp->wolopts;
- if (copy_to_user(ifr->ifr_data, &wolinfo, sizeof (wolinfo)))
- res = -EFAULT;
- break;
- case ETHTOOL_SWOL:
- /* If ALL requests are supported or request is DISABLE wol */
- if (((wolinfo.wolopts & bdp->wolsupported) == wolinfo.wolopts)
- || (wolinfo.wolopts == 0)) {
- bdp->wolopts = wolinfo.wolopts;
- } else {
- res = -EOPNOTSUPP;
- }
- if (wolinfo.wolopts & WAKE_ARP)
- bdp->ip_lbytes = e100_get_ip_lbytes(dev);
- break;
- default:
- break;
- }
- return res;
-}
-
-static int e100_ethtool_gstrings(struct net_device *dev, struct ifreq *ifr)
-{
- struct ethtool_gstrings info;
- char *strings = NULL;
- char *usr_strings;
- int i;
-
- memset((void *) &info, 0, sizeof(info));
-
- usr_strings = (u8 *) (ifr->ifr_data +
- offsetof(struct ethtool_gstrings, data));
-
- if (copy_from_user(&info, ifr->ifr_data, sizeof (info)))
- return -EFAULT;
-
- switch (info.string_set) {
- case ETH_SS_TEST: {
- int ret = 0;
- if (info.len > max_test_res)
- info.len = max_test_res;
- strings = kmalloc(info.len * ETH_GSTRING_LEN, GFP_ATOMIC);
- if (!strings)
- return -ENOMEM;
- memset(strings, 0, info.len * ETH_GSTRING_LEN);
-
- for (i = 0; i < info.len; i++) {
- sprintf(strings + i * ETH_GSTRING_LEN, "%s",
- test_strings[i]);
- }
- if (copy_to_user(ifr->ifr_data, &info, sizeof (info)))
- ret = -EFAULT;
- if (copy_to_user(usr_strings, strings, info.len * ETH_GSTRING_LEN))
- ret = -EFAULT;
- kfree(strings);
- return ret;
- }
- case ETH_SS_STATS: {
- char *strings = NULL;
- void *addr = ifr->ifr_data;
- info.len = E100_STATS_LEN;
- strings = *e100_gstrings_stats;
- if(copy_to_user(ifr->ifr_data, &info, sizeof(info)))
- return -EFAULT;
- addr += offsetof(struct ethtool_gstrings, data);
- if(copy_to_user(addr, strings,
- info.len * ETH_GSTRING_LEN))
- return -EFAULT;
- return 0;
- }
- default:
- return -EOPNOTSUPP;
- }
-}
-
-static int
-e100_mii_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- struct e100_private *bdp;
- struct mii_ioctl_data *data_ptr =
- (struct mii_ioctl_data *) &(ifr->ifr_data);
-
- bdp = dev->priv;
-
- switch (cmd) {
- case SIOCGMIIPHY:
- data_ptr->phy_id = bdp->phy_addr & 0x1f;
- break;
-
- case SIOCGMIIREG:
- if (!capable(CAP_NET_ADMIN))
- return -EPERM;
- e100_mdi_read(bdp, data_ptr->reg_num & 0x1f, bdp->phy_addr,
- &(data_ptr->val_out));
- break;
-
- case SIOCSMIIREG:
- if (!capable(CAP_NET_ADMIN))
- return -EPERM;
- /* If reg = 0 && change speed/duplex */
- if (data_ptr->reg_num == 0 &&
- (data_ptr->val_in == (BMCR_ANENABLE | BMCR_ANRESTART) /* restart cmd */
- || data_ptr->val_in == (BMCR_RESET) /* reset cmd */
- || data_ptr->val_in & (BMCR_SPEED100 | BMCR_FULLDPLX)
- || data_ptr->val_in == 0)) {
- if (data_ptr->val_in == (BMCR_ANENABLE | BMCR_ANRESTART)
- || data_ptr->val_in == (BMCR_RESET))
- bdp->params.e100_speed_duplex = E100_AUTONEG;
- else if (data_ptr->val_in == (BMCR_SPEED100 | BMCR_FULLDPLX))
- bdp->params.e100_speed_duplex = E100_SPEED_100_FULL;
- else if (data_ptr->val_in == (BMCR_SPEED100))
- bdp->params.e100_speed_duplex = E100_SPEED_100_HALF;
- else if (data_ptr->val_in == (BMCR_FULLDPLX))
- bdp->params.e100_speed_duplex = E100_SPEED_10_FULL;
- else
- bdp->params.e100_speed_duplex = E100_SPEED_10_HALF;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- e100_hw_init(bdp);
- e100_open(dev);
- }
- }
- else
- /* Only allows changing speed/duplex */
- return -EINVAL;
-
- break;
-
- default:
- return -EOPNOTSUPP;
- }
- return 0;
-}
-
-nxmit_cb_entry_t *
-e100_alloc_non_tx_cmd(struct e100_private *bdp)
-{
- nxmit_cb_entry_t *non_tx_cmd_elem;
-
- if (!(non_tx_cmd_elem = (nxmit_cb_entry_t *)
- kmalloc(sizeof (nxmit_cb_entry_t), GFP_ATOMIC))) {
- return NULL;
- }
- non_tx_cmd_elem->non_tx_cmd =
- pci_alloc_consistent(bdp->pdev, sizeof (nxmit_cb_t),
- &(non_tx_cmd_elem->dma_addr));
- if (non_tx_cmd_elem->non_tx_cmd == NULL) {
- kfree(non_tx_cmd_elem);
- return NULL;
- }
- return non_tx_cmd_elem;
-}
-
-void
-e100_free_non_tx_cmd(struct e100_private *bdp,
- nxmit_cb_entry_t *non_tx_cmd_elem)
-{
- pci_free_consistent(bdp->pdev, sizeof (nxmit_cb_t),
- non_tx_cmd_elem->non_tx_cmd,
- non_tx_cmd_elem->dma_addr);
- kfree(non_tx_cmd_elem);
-}
-
-static void
-e100_free_nontx_list(struct e100_private *bdp)
-{
- nxmit_cb_entry_t *command;
- int i;
-
- while (!list_empty(&bdp->non_tx_cmd_list)) {
- command = list_entry(bdp->non_tx_cmd_list.next,
- nxmit_cb_entry_t, list_elem);
- list_del(&(command->list_elem));
- e100_free_non_tx_cmd(bdp, command);
- }
-
- for (i = 0; i < CB_MAX_NONTX_CMD; i++) {
- bdp->same_cmd_entry[i] = NULL;
- }
-}
-
-static unsigned char
-e100_delayed_exec_non_cu_cmd(struct e100_private *bdp,
- nxmit_cb_entry_t *command)
-{
- nxmit_cb_entry_t *same_command;
- cb_header_t *ntcb_hdr;
- u16 cmd;
-
- ntcb_hdr = (cb_header_t *) command->non_tx_cmd;
-
- cmd = CB_CMD_MASK & le16_to_cpu(ntcb_hdr->cb_cmd);
-
- spin_lock_bh(&(bdp->bd_non_tx_lock));
-
- same_command = bdp->same_cmd_entry[cmd];
-
- if (same_command != NULL) {
- memcpy((void *) (same_command->non_tx_cmd),
- (void *) (command->non_tx_cmd), sizeof (nxmit_cb_t));
- e100_free_non_tx_cmd(bdp, command);
- } else {
- list_add_tail(&(command->list_elem), &(bdp->non_tx_cmd_list));
- bdp->same_cmd_entry[cmd] = command;
- }
-
- if (bdp->non_tx_command_state == E100_NON_TX_IDLE) {
- bdp->non_tx_command_state = E100_WAIT_TX_FINISH;
- mod_timer(&(bdp->nontx_timer_id), jiffies + 1);
- }
-
- spin_unlock_bh(&(bdp->bd_non_tx_lock));
- return true;
-}
-
-static void
-e100_non_tx_background(unsigned long ptr)
-{
- struct e100_private *bdp = (struct e100_private *) ptr;
- nxmit_cb_entry_t *active_command;
- int restart = true;
- cb_header_t *non_tx_cmd;
- u8 sub_cmd;
-
- spin_lock_bh(&(bdp->bd_non_tx_lock));
-
- switch (bdp->non_tx_command_state) {
- case E100_WAIT_TX_FINISH:
- if (bdp->last_tcb != NULL) {
- rmb();
- if ((bdp->last_tcb->tcb_hdr.cb_status &
- __constant_cpu_to_le16(CB_STATUS_COMPLETE)) == 0)
- goto exit;
- }
- if ((readw(&bdp->scb->scb_status) & SCB_CUS_MASK) ==
- SCB_CUS_ACTIVE) {
- goto exit;
- }
- break;
-
- case E100_WAIT_NON_TX_FINISH:
- active_command = list_entry(bdp->non_tx_cmd_list.next,
- nxmit_cb_entry_t, list_elem);
- rmb();
-
- if (((((cb_header_t *) (active_command->non_tx_cmd))->cb_status
- & __constant_cpu_to_le16(CB_STATUS_COMPLETE)) == 0)
- && time_before(jiffies, active_command->expiration_time)) {
- goto exit;
- } else {
- non_tx_cmd = (cb_header_t *) active_command->non_tx_cmd;
- sub_cmd = CB_CMD_MASK & le16_to_cpu(non_tx_cmd->cb_cmd);
-#ifdef E100_CU_DEBUG
- if (!(non_tx_cmd->cb_status
- & __constant_cpu_to_le16(CB_STATUS_COMPLETE)))
- printk(KERN_ERR "e100: %s: Queued "
- "command (%x) timeout\n",
- bdp->device->name, sub_cmd);
-#endif
- list_del(&(active_command->list_elem));
- e100_free_non_tx_cmd(bdp, active_command);
- }
- break;
-
- default:
- break;
-	} /* switch */
-
- if (list_empty(&bdp->non_tx_cmd_list)) {
- bdp->non_tx_command_state = E100_NON_TX_IDLE;
- spin_lock_irq(&(bdp->bd_lock));
- bdp->next_cu_cmd = START_WAIT;
- spin_unlock_irq(&(bdp->bd_lock));
- restart = false;
- goto exit;
- } else {
- u16 cmd_type;
-
- bdp->non_tx_command_state = E100_WAIT_NON_TX_FINISH;
- active_command = list_entry(bdp->non_tx_cmd_list.next,
- nxmit_cb_entry_t, list_elem);
- sub_cmd = ((cb_header_t *) active_command->non_tx_cmd)->cb_cmd;
- spin_lock_irq(&(bdp->bd_lock));
- e100_wait_exec_cmplx(bdp, active_command->dma_addr,
- SCB_CUC_START, sub_cmd);
- spin_unlock_irq(&(bdp->bd_lock));
- active_command->expiration_time = jiffies + HZ;
- cmd_type = CB_CMD_MASK &
- le16_to_cpu(((cb_header_t *)
- (active_command->non_tx_cmd))->cb_cmd);
- bdp->same_cmd_entry[cmd_type] = NULL;
- }
-
-exit:
- if (restart) {
- mod_timer(&(bdp->nontx_timer_id), jiffies + 1);
- } else {
- if (netif_running(bdp->device))
- netif_wake_queue(bdp->device);
- }
- spin_unlock_bh(&(bdp->bd_non_tx_lock));
-}
-
-static void
-e100_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp)
-{
- struct e100_private *bdp = netdev->priv;
-
- e100_disable_clear_intr(bdp);
- bdp->vlgrp = grp;
-
- if(grp) {
- /* enable VLAN tag insert/strip */
- e100_config_vlan_drop(bdp, true);
-
- } else {
- /* disable VLAN tag insert/strip */
- e100_config_vlan_drop(bdp, false);
- }
-
- e100_config(bdp);
- e100_set_intr_mask(bdp);
-}
-
-static void
-e100_vlan_rx_add_vid(struct net_device *netdev, u16 vid)
-{
- /* We don't do Vlan filtering */
- return;
-}
-
-static void
-e100_vlan_rx_kill_vid(struct net_device *netdev, u16 vid)
-{
- struct e100_private *bdp = netdev->priv;
-
- if(bdp->vlgrp)
- bdp->vlgrp->vlan_devices[vid] = NULL;
- /* We don't do Vlan filtering */
- return;
-}
-
-#ifdef CONFIG_PM
-static int
-e100_notify_reboot(struct notifier_block *nb, unsigned long event, void *p)
-{
- struct pci_dev *pdev = NULL;
-
- switch(event) {
- case SYS_DOWN:
- case SYS_HALT:
- case SYS_POWER_OFF:
- while ((pdev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, pdev)) != NULL) {
- if(pci_dev_driver(pdev) == &e100_driver) {
-				/* Proceed only if a net_device struct has been allocated */
- if (pci_get_drvdata(pdev))
- e100_suspend(pdev, 3);
-
- }
- }
- }
- return NOTIFY_DONE;
-}
-
-static int
-e100_suspend(struct pci_dev *pcid, u32 state)
-{
- struct net_device *netdev = pci_get_drvdata(pcid);
- struct e100_private *bdp = netdev->priv;
-
- e100_isolate_driver(bdp);
- pci_save_state(pcid, bdp->pci_state);
-
- /* Enable or disable WoL */
- e100_do_wol(pcid, bdp);
-
- /* If wol is enabled */
- if (bdp->wolopts || e100_asf_enabled(bdp)) {
- pci_enable_wake(pcid, 3, 1); /* Enable PME for power state D3 */
- pci_set_power_state(pcid, 3); /* Set power state to D3. */
- } else {
- /* Disable bus mastering */
- pci_disable_device(pcid);
- pci_set_power_state(pcid, state);
- }
- return 0;
-}
-
-static int
-e100_resume(struct pci_dev *pcid)
-{
- struct net_device *netdev = pci_get_drvdata(pcid);
- struct e100_private *bdp = netdev->priv;
-
- pci_set_power_state(pcid, 0);
- pci_enable_wake(pcid, 0, 0); /* Clear PME status and disable PME */
- pci_restore_state(pcid, bdp->pci_state);
-
- /* Also do device full reset because device was in D3 state */
- e100_deisolate_driver(bdp, true);
-
- return 0;
-}
-
-/**
- * e100_asf_enabled - checks if ASF is configured on the current adapter
- * by reading registers 0xD and 0x90 in the EEPROM
- * @bdp: adapter's private data struct
- *
- * Returns: true if ASF is enabled
- */
-static unsigned char
-e100_asf_enabled(struct e100_private *bdp)
-{
- u16 asf_reg;
- u16 smbus_addr_reg;
- if ((bdp->pdev->device >= 0x1050) && (bdp->pdev->device <= 0x1055)) {
- asf_reg = e100_eeprom_read(bdp, EEPROM_CONFIG_ASF);
- if ((asf_reg & EEPROM_FLAG_ASF)
- && !(asf_reg & EEPROM_FLAG_GCL)) {
- smbus_addr_reg =
- e100_eeprom_read(bdp, EEPROM_SMBUS_ADDR);
- if ((smbus_addr_reg & 0xFF) != 0xFE)
- return true;
- }
- }
- return false;
-}
-#endif /* CONFIG_PM */
-
-#ifdef E100_CU_DEBUG
-unsigned char
-e100_cu_unknown_state(struct e100_private *bdp)
-{
- u8 scb_cmd_low;
- u16 scb_status;
- scb_cmd_low = bdp->scb->scb_cmd_low;
- scb_status = le16_to_cpu(bdp->scb->scb_status);
- /* If CU is active and executing unknown cmd */
- if (scb_status & SCB_CUS_ACTIVE && scb_cmd_low & SCB_CUC_UNKNOWN)
- return true;
- else
- return false;
-}
-#endif
-
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#include "e100_phy.h"
-
-void e100_handle_zlock(struct e100_private *bdp);
-
-/*
- * Procedure: e100_mdi_write
- *
- * Description: This routine will write a value to the specified MII register
- * of an external MDI compliant device (e.g. PHY 100). The
- * command will execute in polled mode.
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- * reg_addr - The MII register that we are writing to
- * phy_addr - The MDI address of the Phy component.
- * data - The value that we are writing to the MII register.
- *
- * Returns:
- * NOTHING
- */
-int
-e100_mdi_write(struct e100_private *bdp, u32 reg_addr, u32 phy_addr, u16 data)
-{
- int e100_retry;
- u32 temp_val;
- unsigned int mdi_cntrl;
-
- spin_lock_bh(&bdp->mdi_access_lock);
- temp_val = (((u32) data) | (reg_addr << 16) |
- (phy_addr << 21) | (MDI_WRITE << 26));
- writel(temp_val, &bdp->scb->scb_mdi_cntrl);
- readw(&bdp->scb->scb_status);
-
- /* wait 20usec before checking status */
- udelay(20);
-
- /* poll for the mdi write to complete */
- e100_retry = E100_CMD_WAIT;
- while ((!((mdi_cntrl = readl(&bdp->scb->scb_mdi_cntrl)) & MDI_PHY_READY)) && (e100_retry)) {
-
- udelay(20);
- e100_retry--;
- }
- spin_unlock_bh(&bdp->mdi_access_lock);
- if (mdi_cntrl & MDI_PHY_READY)
- return 0;
- else {
- printk(KERN_ERR "e100: MDI write timeout\n");
- return 1;
- }
-}
-
-/*
- * Procedure: e100_mdi_read
- *
- * Description: This routine will read a value from the specified MII register
- * of an external MDI compliant device (e.g. PHY 100), and return
- * it to the calling routine. The command will execute in polled
- * mode.
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- * reg_addr - The MII register that we are reading from
- * phy_addr - The MDI address of the Phy component.
- *
- * Results:
- * data - The value that we read from the MII register.
- *
- * Returns:
- * NOTHING
- */
-int
-e100_mdi_read(struct e100_private *bdp, u32 reg_addr, u32 phy_addr, u16 *data)
-{
- int e100_retry;
- u32 temp_val;
- unsigned int mdi_cntrl;
-
- spin_lock_bh(&bdp->mdi_access_lock);
- /* Issue the read command to the MDI control register. */
- temp_val = ((reg_addr << 16) | (phy_addr << 21) | (MDI_READ << 26));
- writel(temp_val, &bdp->scb->scb_mdi_cntrl);
- readw(&bdp->scb->scb_status);
-
- /* wait 20usec before checking status */
- udelay(20);
-
- /* poll for the mdi read to complete */
- e100_retry = E100_CMD_WAIT;
- while ((!((mdi_cntrl = readl(&bdp->scb->scb_mdi_cntrl)) & MDI_PHY_READY)) && (e100_retry)) {
-
- udelay(20);
- e100_retry--;
- }
-
- spin_unlock_bh(&bdp->mdi_access_lock);
- if (mdi_cntrl & MDI_PHY_READY) {
- /* return the lower word */
- *data = (u16) mdi_cntrl;
- return 0;
- }
- else {
- printk(KERN_ERR "e100: MDI read timeout\n");
- return 1;
- }
-}
-
-static unsigned char
-e100_phy_valid(struct e100_private *bdp, unsigned int phy_address)
-{
- u16 ctrl_reg, stat_reg;
-
- /* Read the MDI control register */
- e100_mdi_read(bdp, MII_BMCR, phy_address, &ctrl_reg);
-
-	/* Read the status register twice, because of sticky bits */
- e100_mdi_read(bdp, MII_BMSR, phy_address, &stat_reg);
- e100_mdi_read(bdp, MII_BMSR, phy_address, &stat_reg);
-
- if ((ctrl_reg == 0xffff) || ((stat_reg == 0) && (ctrl_reg == 0)))
- return false;
-
- return true;
-}
-
-static void
-e100_phy_address_detect(struct e100_private *bdp)
-{
- unsigned int addr;
- unsigned char valid_phy_found = false;
-
- if (IS_NC3133(bdp)) {
- bdp->phy_addr = 0;
- return;
- }
-
- if (e100_phy_valid(bdp, PHY_DEFAULT_ADDRESS)) {
- bdp->phy_addr = PHY_DEFAULT_ADDRESS;
- valid_phy_found = true;
-
- } else {
- for (addr = MIN_PHY_ADDR; addr <= MAX_PHY_ADDR; addr++) {
- if (e100_phy_valid(bdp, addr)) {
- bdp->phy_addr = addr;
- valid_phy_found = true;
- break;
- }
- }
- }
-
- if (!valid_phy_found) {
- bdp->phy_addr = PHY_ADDRESS_503;
- }
-}
-
-static void
-e100_phy_id_detect(struct e100_private *bdp)
-{
- u16 low_id_reg, high_id_reg;
-
- if (bdp->phy_addr == PHY_ADDRESS_503) {
- bdp->PhyId = PHY_503;
- return;
- }
- if (!(bdp->flags & IS_ICH)) {
- if (bdp->rev_id >= D102_REV_ID) {
- bdp->PhyId = PHY_82562ET;
- return;
- }
- }
-
- /* Read phy id from the MII register */
- e100_mdi_read(bdp, MII_PHYSID1, bdp->phy_addr, &low_id_reg);
- e100_mdi_read(bdp, MII_PHYSID2, bdp->phy_addr, &high_id_reg);
-
- bdp->PhyId = ((unsigned int) low_id_reg |
- ((unsigned int) high_id_reg << 16));
-}
-
-static void
-e100_phy_isolate(struct e100_private *bdp)
-{
- unsigned int phy_address;
- u16 ctrl_reg;
-
- /* Go over all phy addresses. Deisolate the selected one, and isolate
- * all the rest */
- for (phy_address = 0; phy_address <= MAX_PHY_ADDR; phy_address++) {
- if (phy_address != bdp->phy_addr) {
- e100_mdi_write(bdp, MII_BMCR, phy_address,
- BMCR_ISOLATE);
-
- } else {
- e100_mdi_read(bdp, MII_BMCR, bdp->phy_addr, &ctrl_reg);
- ctrl_reg &= ~BMCR_ISOLATE;
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, ctrl_reg);
- }
-
- udelay(100);
- }
-}
-
-static unsigned char
-e100_phy_specific_setup(struct e100_private *bdp)
-{
- u16 misc_reg;
-
- if (bdp->phy_addr == PHY_ADDRESS_503) {
- switch (bdp->params.e100_speed_duplex) {
- case E100_AUTONEG:
-			/* The adapter can't autonegotiate, so force 10/HALF */
- printk(KERN_INFO
- "e100: 503 serial component detected which "
- "cannot autonegotiate\n");
- printk(KERN_INFO
- "e100: speed/duplex forced to "
- "10Mbps / Half duplex\n");
- bdp->params.e100_speed_duplex = E100_SPEED_10_HALF;
- break;
-
- case E100_SPEED_100_HALF:
- case E100_SPEED_100_FULL:
- printk(KERN_ERR
- "e100: 503 serial component detected "
- "which does not support 100Mbps\n");
- printk(KERN_ERR
- "e100: Change the forced speed/duplex "
- "to a supported setting\n");
- return false;
- }
-
- return true;
- }
-
- if (IS_NC3133(bdp)) {
- u16 int_reg;
-
- /* enable 100BASE fiber interface */
- e100_mdi_write(bdp, MDI_NC3133_CONFIG_REG, bdp->phy_addr,
- MDI_NC3133_100FX_ENABLE);
-
- if ((bdp->params.e100_speed_duplex != E100_AUTONEG) &&
- (bdp->params.e100_speed_duplex != E100_SPEED_100_FULL)) {
- /* just inform user about 100 full */
- printk(KERN_ERR "e100: NC3133 NIC can only run "
- "at 100Mbps full duplex\n");
- }
-
- bdp->params.e100_speed_duplex = E100_SPEED_100_FULL;
-
- /* enable interrupts */
- e100_mdi_read(bdp, MDI_NC3133_INT_ENABLE_REG,
- bdp->phy_addr, &int_reg);
- int_reg |= MDI_NC3133_INT_ENABLE;
- e100_mdi_write(bdp, MDI_NC3133_INT_ENABLE_REG,
- bdp->phy_addr, int_reg);
- }
-
- /* Handle the National TX */
- if ((bdp->PhyId & PHY_MODEL_REV_ID_MASK) == PHY_NSC_TX) {
- e100_mdi_read(bdp, NSC_CONG_CONTROL_REG,
- bdp->phy_addr, &misc_reg);
-
- misc_reg |= NSC_TX_CONG_TXREADY;
-
- /* disable the congestion control bit in the National Phy */
- misc_reg &= ~NSC_TX_CONG_ENABLE;
-
- e100_mdi_write(bdp, NSC_CONG_CONTROL_REG,
- bdp->phy_addr, misc_reg);
- }
-
- return true;
-}
-
-/*
- * Procedure: e100_phy_fix_squelch
- *
- * Description:
- * Help find link on certain rare scenarios.
- * NOTE: This routine must be called once per watchdog,
- * and *after* setting the current link state.
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- *
- * Returns:
- * NOTHING
- */
-static void
-e100_phy_fix_squelch(struct e100_private *bdp)
-{
- if ((bdp->PhyId != PHY_82555_TX) || (bdp->flags & DF_SPEED_FORCED))
- return;
-
- if (netif_carrier_ok(bdp->device)) {
- switch (bdp->PhyState) {
- case 0:
- break;
- case 1:
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, 0x0000);
- break;
- case 2:
- e100_mdi_write(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, 0x3000);
- break;
- }
- bdp->PhyState = 0;
- bdp->PhyDelay = 0;
-
- } else if (!bdp->PhyDelay--) {
- switch (bdp->PhyState) {
- case 0:
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, EXTENDED_SQUELCH_BIT);
- bdp->PhyState = 1;
- break;
- case 1:
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, 0x0000);
- e100_mdi_write(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, 0x2010);
- bdp->PhyState = 2;
- break;
- case 2:
- e100_mdi_write(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, 0x3000);
- bdp->PhyState = 0;
- break;
- }
-
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr,
- BMCR_ANENABLE | BMCR_ANRESTART);
- bdp->PhyDelay = 3;
- }
-}
-
-/*
- * Procedure: e100_fix_polarity
- *
- * Description:
- * Fix for 82555 auto-polarity toggle problem. With a short cable
- * connecting an 82555 with an 840A link partner, if the medium is noisy,
- * the 82555 sometime thinks that the polarity might be wrong and so
- * toggles polarity. This happens repeatedly and results in a high bit
- * error rate.
- * NOTE: This happens only at 10 Mbps
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- *
- * Returns:
- * NOTHING
- */
-static void
-e100_fix_polarity(struct e100_private *bdp)
-{
- u16 status;
- u16 errors;
- u16 misc_reg;
- int speed;
-
- if ((bdp->PhyId != PHY_82555_TX) && (bdp->PhyId != PHY_82562ET) &&
- (bdp->PhyId != PHY_82562EM))
- return;
-
-	/* If the user wants auto-polarity disabled, do only that and
-	 * nothing else.
-	 * e100_autopolarity == 0 means disable --- we do just the disabling
-	 * e100_autopolarity == 1 means enable --- we do nothing at all
-	 * e100_autopolarity >= 2 means we do the workaround code. */
- /* Change for 82558 enhancement */
- switch (E100_AUTOPOLARITY) {
- case 0:
- e100_mdi_read(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, &misc_reg);
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL, bdp->phy_addr,
- (u16) (misc_reg | DISABLE_AUTO_POLARITY));
- break;
-
- case 1:
- e100_mdi_read(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, &misc_reg);
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL, bdp->phy_addr,
- (u16) (misc_reg & ~DISABLE_AUTO_POLARITY));
- break;
-
- case 2:
- /* we do this only if link is up */
- if (!netif_carrier_ok(bdp->device)) {
- break;
- }
-
- e100_mdi_read(bdp, PHY_82555_CSR, bdp->phy_addr, &status);
- speed = (status & PHY_82555_SPEED_BIT) ? 100 : 10;
-
- /* we need to do this only if speed is 10 */
- if (speed != 10) {
- break;
- }
-
- /* see if we have any end of frame errors */
- e100_mdi_read(bdp, PHY_82555_EOF_COUNTER,
- bdp->phy_addr, &errors);
-
- /* if non-zero, wait for 100 ms before reading again */
- if (errors) {
- udelay(200);
- e100_mdi_read(bdp, PHY_82555_EOF_COUNTER,
- bdp->phy_addr, &errors);
-
- /* if non-zero again, we disable polarity */
- if (errors) {
- e100_mdi_read(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, &misc_reg);
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr,
- (u16) (misc_reg |
- DISABLE_AUTO_POLARITY));
- }
- }
-
- if (!errors) {
- /* it is safe to read the polarity now */
- e100_mdi_read(bdp, PHY_82555_CSR,
- bdp->phy_addr, &status);
-
- /* if polarity is normal, disable polarity */
- if (!(status & PHY_82555_POLARITY_BIT)) {
- e100_mdi_read(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr, &misc_reg);
- e100_mdi_write(bdp, PHY_82555_SPECIAL_CONTROL,
- bdp->phy_addr,
- (u16) (misc_reg |
- DISABLE_AUTO_POLARITY));
- }
- }
- break;
-
- default:
- break;
- }
-}
-
-/*
- * Procedure: e100_find_speed_duplex
- *
- * Description: This routine will figure out what line speed and duplex mode
- * the PHY is currently using.
- *
- * Arguments:
- * bdp - Ptr to this card's e100_bdconfig structure
- *
- * Returns:
- * NOTHING
- */
-static void
-e100_find_speed_duplex(struct e100_private *bdp)
-{
- unsigned int PhyId;
- u16 stat_reg, misc_reg;
- u16 ad_reg, lp_ad_reg;
-
- PhyId = bdp->PhyId & PHY_MODEL_REV_ID_MASK;
-
- /* First we should check to see if we have link */
- /* If we don't have a link no reason to print a speed and duplex */
- if (!e100_update_link_state(bdp)) {
- bdp->cur_line_speed = 0;
- bdp->cur_dplx_mode = 0;
- return;
- }
-
-	/* On the 82559 and later controllers, speed/duplex is part of the
-	 * SCB, so we save an mdi_read and get these from the SCB. */
- if (bdp->rev_id >= D101MA_REV_ID) {
- /* Read speed */
- if (readb(&bdp->scb->scb_ext.d101m_scb.scb_gen_stat) & BIT_1)
- bdp->cur_line_speed = 100;
- else
- bdp->cur_line_speed = 10;
-
- /* Read duplex */
- if (readb(&bdp->scb->scb_ext.d101m_scb.scb_gen_stat) & BIT_2)
- bdp->cur_dplx_mode = FULL_DUPLEX;
- else
- bdp->cur_dplx_mode = HALF_DUPLEX;
-
- return;
- }
-
- /* If this is a Phy 100, then read bits 1 and 0 of extended register 0,
- * to get the current speed and duplex settings. */
- if ((PhyId == PHY_100_A) || (PhyId == PHY_100_C) ||
- (PhyId == PHY_82555_TX)) {
-
- /* Read Phy 100 extended register 0 */
- e100_mdi_read(bdp, EXTENDED_REG_0, bdp->phy_addr, &misc_reg);
-
- /* Get current speed setting */
- if (misc_reg & PHY_100_ER0_SPEED_INDIC)
- bdp->cur_line_speed = 100;
- else
- bdp->cur_line_speed = 10;
-
- /* Get current duplex setting -- FDX enabled if bit is set */
- if (misc_reg & PHY_100_ER0_FDX_INDIC)
- bdp->cur_dplx_mode = FULL_DUPLEX;
- else
- bdp->cur_dplx_mode = HALF_DUPLEX;
-
- return;
- }
-
- /* See if link partner is capable of Auto-Negotiation (bit 0, reg 6) */
- e100_mdi_read(bdp, MII_EXPANSION, bdp->phy_addr, &misc_reg);
-
- /* See if Auto-Negotiation was complete (bit 5, reg 1) */
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &stat_reg);
-
-	/* If a True NWAY connection was made, then we can detect speed/dplx
-	 * by ANDing our adapter's advertised abilities with our link partner's
-	 * advertised abilities, and then assuming that the highest common
-	 * denominator was chosen by NWAY. */
- if ((misc_reg & EXPANSION_NWAY) && (stat_reg & BMSR_ANEGCOMPLETE)) {
-
- /* Read our advertisement register */
- e100_mdi_read(bdp, MII_ADVERTISE, bdp->phy_addr, &ad_reg);
-
- /* Read our link partner's advertisement register */
- e100_mdi_read(bdp, MII_LPA, bdp->phy_addr, &lp_ad_reg);
-
- /* AND the two advertisement registers together, and get rid
- * of any extraneous bits. */
- ad_reg &= (lp_ad_reg & NWAY_LP_ABILITY);
-
- /* Get speed setting */
- if (ad_reg &
- (ADVERTISE_100HALF | ADVERTISE_100FULL |
- ADVERTISE_100BASE4))
-
- bdp->cur_line_speed = 100;
- else
- bdp->cur_line_speed = 10;
-
- /* Get duplex setting -- use priority resolution algorithm */
- if (ad_reg & ADVERTISE_100BASE4) {
- bdp->cur_dplx_mode = HALF_DUPLEX;
- } else if (ad_reg & ADVERTISE_100FULL) {
- bdp->cur_dplx_mode = FULL_DUPLEX;
- } else if (ad_reg & ADVERTISE_100HALF) {
- bdp->cur_dplx_mode = HALF_DUPLEX;
- } else if (ad_reg & ADVERTISE_10FULL) {
- bdp->cur_dplx_mode = FULL_DUPLEX;
- } else {
- bdp->cur_dplx_mode = HALF_DUPLEX;
- }
-
- return;
- }
-
- /* If we are connected to a dumb (non-NWAY) repeater or hub, and the
- * line speed was determined automatically by parallel detection, then
- * we have no way of knowing exactly what speed the PHY is set to
-	 * unless that PHY has a proprietary register which indicates speed in
- * this situation. The NSC TX PHY does have such a register. Also,
- * since NWAY didn't establish the connection, the duplex setting
-	 * should be HALF duplex. */
- bdp->cur_dplx_mode = HALF_DUPLEX;
-
- if (PhyId == PHY_NSC_TX) {
- /* Read register 25 to get the SPEED_10 bit */
- e100_mdi_read(bdp, NSC_SPEED_IND_REG, bdp->phy_addr, &misc_reg);
-
- /* If bit 6 was set then we're at 10Mbps */
- if (misc_reg & NSC_TX_SPD_INDC_SPEED)
- bdp->cur_line_speed = 10;
- else
- bdp->cur_line_speed = 100;
-
- } else {
- /* If we don't know the line speed, default to 10Mbps */
- bdp->cur_line_speed = 10;
- }
-}
-
-/*
- * Procedure: e100_force_speed_duplex
- *
- * Description: This routine forces line speed and duplex mode of the
- * adapter based on the values the user has set in e100.c.
- *
- * Arguments: bdp - Pointer to the e100_private structure for the board
- *
- * Returns: void
- *
- */
-void
-e100_force_speed_duplex(struct e100_private *bdp)
-{
- u16 control;
- unsigned long expires;
-
- bdp->flags |= DF_SPEED_FORCED;
-
- e100_mdi_read(bdp, MII_BMCR, bdp->phy_addr, &control);
- control &= ~BMCR_ANENABLE;
- control &= ~BMCR_LOOPBACK;
-
- switch (bdp->params.e100_speed_duplex) {
- case E100_SPEED_10_HALF:
- control &= ~BMCR_SPEED100;
- control &= ~BMCR_FULLDPLX;
- bdp->cur_line_speed = 10;
- bdp->cur_dplx_mode = HALF_DUPLEX;
- break;
-
- case E100_SPEED_10_FULL:
- control &= ~BMCR_SPEED100;
- control |= BMCR_FULLDPLX;
- bdp->cur_line_speed = 10;
- bdp->cur_dplx_mode = FULL_DUPLEX;
- break;
-
- case E100_SPEED_100_HALF:
- control |= BMCR_SPEED100;
- control &= ~BMCR_FULLDPLX;
- bdp->cur_line_speed = 100;
- bdp->cur_dplx_mode = HALF_DUPLEX;
- break;
-
- case E100_SPEED_100_FULL:
- control |= BMCR_SPEED100;
- control |= BMCR_FULLDPLX;
- bdp->cur_line_speed = 100;
- bdp->cur_dplx_mode = FULL_DUPLEX;
- break;
- }
-
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, control);
-
- /* loop must run at least once */
- expires = jiffies + 2 * HZ;
- do {
- if (e100_update_link_state(bdp) ||
- time_after(jiffies, expires)) {
- break;
- } else {
- yield();
- }
-
- } while (true);
-}
-
-void
-e100_force_speed_duplex_to_phy(struct e100_private *bdp)
-{
- u16 control;
-
- e100_mdi_read(bdp, MII_BMCR, bdp->phy_addr, &control);
- control &= ~BMCR_ANENABLE;
- control &= ~BMCR_LOOPBACK;
-
- switch (bdp->params.e100_speed_duplex) {
- case E100_SPEED_10_HALF:
- control &= ~BMCR_SPEED100;
- control &= ~BMCR_FULLDPLX;
- break;
-
- case E100_SPEED_10_FULL:
- control &= ~BMCR_SPEED100;
- control |= BMCR_FULLDPLX;
- break;
-
- case E100_SPEED_100_HALF:
- control |= BMCR_SPEED100;
- control &= ~BMCR_FULLDPLX;
- break;
-
- case E100_SPEED_100_FULL:
- control |= BMCR_SPEED100;
- control |= BMCR_FULLDPLX;
- break;
- }
-
- /* Send speed/duplex command to PHY layer. */
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, control);
-}
-
-/*
- * Procedure: e100_set_fc
- *
- * Description: Checks the link's capability for flow control.
- *
- * Arguments: bdp - Pointer to the e100_private structure for the board
- *
- * Returns: void
- *
- */
-static void
-e100_set_fc(struct e100_private *bdp)
-{
- u16 ad_reg;
- u16 lp_ad_reg;
- u16 exp_reg;
-
- /* no flow control for 82557, forced links or half duplex */
- if (!netif_carrier_ok(bdp->device) || (bdp->flags & DF_SPEED_FORCED) ||
- (bdp->cur_dplx_mode == HALF_DUPLEX) ||
- !(bdp->flags & IS_BACHELOR)) {
-
- bdp->flags &= ~DF_LINK_FC_CAP;
- return;
- }
-
- /* See if link partner is capable of Auto-Negotiation (bit 0, reg 6) */
- e100_mdi_read(bdp, MII_EXPANSION, bdp->phy_addr, &exp_reg);
-
- if (exp_reg & EXPANSION_NWAY) {
- /* Read our advertisement register */
- e100_mdi_read(bdp, MII_ADVERTISE, bdp->phy_addr, &ad_reg);
-
- /* Read our link partner's advertisement register */
- e100_mdi_read(bdp, MII_LPA, bdp->phy_addr, &lp_ad_reg);
-
- ad_reg &= lp_ad_reg; /* AND the 2 ad registers */
-
- if (ad_reg & NWAY_AD_FC_SUPPORTED)
- bdp->flags |= DF_LINK_FC_CAP;
- else
- /* If link partner is capable of autoneg, but */
- /* not capable of flow control, Received PAUSE */
- /* frames are still honored, i.e., */
- /* transmitted frames would be paused */
- /* by incoming PAUSE frames */
- bdp->flags |= DF_LINK_FC_TX_ONLY;
-
- } else {
- bdp->flags &= ~DF_LINK_FC_CAP;
- }
-}
-
-/*
- * Procedure: e100_phy_check
- *
- * Arguments: bdp - Pointer to the e100_private structure for the board
- *
- * Returns: true if link state was changed
- * false otherwise
- *
- */
-unsigned char
-e100_phy_check(struct e100_private *bdp)
-{
- unsigned char old_link;
- unsigned char changed = false;
-
- old_link = netif_carrier_ok(bdp->device) ? 1 : 0;
- e100_find_speed_duplex(bdp);
-
- if (!old_link && netif_carrier_ok(bdp->device)) {
- e100_set_fc(bdp);
- changed = true;
- }
-
- if (old_link && !netif_carrier_ok(bdp->device)) {
- /* reset the zero lock state */
- bdp->zlock_state = ZLOCK_INITIAL;
-
-		/* set auto lock for phy auto-negotiation on link up */
- if ((bdp->PhyId & PHY_MODEL_REV_ID_MASK) == PHY_82555_TX)
- e100_mdi_write(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, 0);
- changed = true;
- }
-
- e100_phy_fix_squelch(bdp);
- e100_handle_zlock(bdp);
-
- return changed;
-}
-
-/*
- * Procedure: e100_auto_neg
- *
- * Description: This routine will start autonegotiation and wait
- * for it to complete
- *
- * Arguments:
- * bdp - pointer to this card's e100_bdconfig structure
- * force_restart - defines if autoneg should be restarted even if it
- * has been completed before
- * Returns:
- * NOTHING
- */
-static void
-e100_auto_neg(struct e100_private *bdp, unsigned char force_restart)
-{
- u16 stat_reg;
- unsigned long expires;
-
- bdp->flags &= ~DF_SPEED_FORCED;
-
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &stat_reg);
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &stat_reg);
-
- /* if we are capable of performing autoneg then we restart if needed */
- if ((stat_reg != 0xFFFF) && (stat_reg & BMSR_ANEGCAPABLE)) {
-
- if ((!force_restart) &&
- (stat_reg & BMSR_ANEGCOMPLETE)) {
- goto exit;
- }
-
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr,
- BMCR_ANENABLE | BMCR_ANRESTART);
-
- /* wait for autoneg to complete (up to 3 seconds) */
- expires = jiffies + HZ * 3;
- do {
- /* now re-read the value. Sticky so read twice */
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &stat_reg);
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &stat_reg);
-
- if ((stat_reg & BMSR_ANEGCOMPLETE) ||
- time_after(jiffies, expires) ) {
- goto exit;
- } else {
- yield();
- }
- } while (true);
- }
-
-exit:
- e100_find_speed_duplex(bdp);
-}
-
-void
-e100_phy_set_speed_duplex(struct e100_private *bdp, unsigned char force_restart)
-{
- if (bdp->params.e100_speed_duplex == E100_AUTONEG) {
- if (bdp->rev_id >= D102_REV_ID)
- /* Enable MDI/MDI-X auto switching */
- e100_mdi_write(bdp, MII_NCONFIG, bdp->phy_addr,
- MDI_MDIX_AUTO_SWITCH_ENABLE);
- e100_auto_neg(bdp, force_restart);
-
- } else {
- if (bdp->rev_id >= D102_REV_ID)
- /* Disable MDI/MDI-X auto switching */
- e100_mdi_write(bdp, MII_NCONFIG, bdp->phy_addr,
- MDI_MDIX_RESET_ALL_MASK);
- e100_force_speed_duplex(bdp);
- }
-
- e100_set_fc(bdp);
-}
-
-void
-e100_phy_autoneg(struct e100_private *bdp)
-{
- u16 ctrl_reg;
-
- ctrl_reg = BMCR_ANENABLE | BMCR_ANRESTART | BMCR_RESET;
-
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, ctrl_reg);
-
- udelay(100);
-}
-
-void
-e100_phy_set_loopback(struct e100_private *bdp)
-{
- u16 ctrl_reg;
- ctrl_reg = BMCR_LOOPBACK;
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, ctrl_reg);
- udelay(100);
-}
-
-void
-e100_phy_reset(struct e100_private *bdp)
-{
- u16 ctrl_reg;
- ctrl_reg = BMCR_RESET;
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr, ctrl_reg);
-	/* IEEE 802.3: the reset process shall be completed */
-	/* within 0.5 seconds from the setting of the PHY reset bit. */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 2);
-}
-
-unsigned char
-e100_phy_init(struct e100_private *bdp)
-{
- e100_phy_reset(bdp);
- e100_phy_address_detect(bdp);
- e100_phy_isolate(bdp);
- e100_phy_id_detect(bdp);
-
- if (!e100_phy_specific_setup(bdp))
- return false;
-
- bdp->PhyState = 0;
- bdp->PhyDelay = 0;
- bdp->zlock_state = ZLOCK_INITIAL;
-
- e100_phy_set_speed_duplex(bdp, false);
- e100_fix_polarity(bdp);
-
- return true;
-}
-
-/*
- * Procedure: e100_get_link_state
- *
- * Description: This routine checks the link status of the adapter
- *
- * Arguments: bdp - Pointer to the e100_private structure for the board
- *
- *
- * Returns: true - If a link is found
- * false - If there is no link
- *
- */
-unsigned char
-e100_get_link_state(struct e100_private *bdp)
-{
- unsigned char link = false;
- u16 status;
-
- /* Check link status */
- /* If the controller is a 82559 or later one, link status is available
- * from the CSR. This avoids the mdi_read. */
- if (bdp->rev_id >= D101MA_REV_ID) {
- if (readb(&bdp->scb->scb_ext.d101m_scb.scb_gen_stat) & BIT_0) {
- link = true;
- } else {
- link = false;
- }
-
- } else {
- /* Read the status register twice because of sticky bits */
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &status);
- e100_mdi_read(bdp, MII_BMSR, bdp->phy_addr, &status);
-
- if (status & BMSR_LSTATUS) {
- link = true;
- } else {
- link = false;
- }
- }
-
- return link;
-}
-
-/*
- * Procedure: e100_update_link_state
- *
- * Description: This routine updates the link status of the adapter,
- * also considering netif_running
- *
- * Arguments: bdp - Pointer to the e100_private structure for the board
- *
- *
- * Returns: true - If a link is found
- * false - If there is no link
- *
- */
-unsigned char
-e100_update_link_state(struct e100_private *bdp)
-{
- unsigned char link;
-
- /* Logical AND PHY link & netif_running */
- link = e100_get_link_state(bdp) && netif_running(bdp->device);
-
- if (link) {
- if (!netif_carrier_ok(bdp->device))
- netif_carrier_on(bdp->device);
- } else {
- if (netif_carrier_ok(bdp->device))
- netif_carrier_off(bdp->device);
- }
-
- return link;
-}
-
-/**************************************************************************\
- **
- ** PROC NAME: e100_handle_zlock
- ** This function manages a state machine that controls
- ** the driver's zero locking algorithm.
- ** This function is called by e100_watchdog() every ~2 seconds.
- ** States:
- ** The current link handling state is stored in
- ** bdp->zlock_state, and is one of:
- ** ZLOCK_INITIAL, ZLOCK_READING, ZLOCK_SLEEPING
- ** Detailed description of the states and the transitions
- ** between states is found below.
- ** Note that any time the link is down or there is a reset,
- ** the state will be changed outside this function to ZLOCK_INITIAL
- ** Algorithm:
- ** 1. If link is up & 100 Mbps continue else stay in #1:
- ** 2. Set 'auto lock'
- ** 3. Read & Store 100 times 'Zero' locked in 1 sec interval
- ** 4. If max zero read >= 0xB continue else goto 1
- ** 5. Set most popular 'Zero' read in #3
- ** 6. Sleep 5 minutes
- ** 7. Read number of errors, if it is > 300 goto 2 else goto 6
- ** Data Structures (in DRIVER_DATA):
- ** zlock_state - current state of the algorithm
- ** zlock_read_cnt - counts number of reads (up to 100)
- ** zlock_read_data[i] - counts number of times 'Zero' read was i, 0 <= i <= 15
- ** zlock_sleep_cnt - keeps track of "sleep" time (up to 300 secs = 5 minutes)
- **
- ** Parameters: DRIVER_DATA *bdp
- **
- ** bdp - Pointer to HSM's adapter data space
- **
- ** Return Value: NONE
- **
- ** See Also: e100_watchdog()
- **
- \**************************************************************************/
-void
-e100_handle_zlock(struct e100_private *bdp)
-{
- u16 pos;
- u16 eq_reg;
- u16 err_cnt;
- u8 mpz; /* Most Popular Zero */
-
- switch (bdp->zlock_state) {
- case ZLOCK_INITIAL:
-
- if (((u8) bdp->rev_id <= D102_REV_ID) ||
- !(bdp->cur_line_speed == 100) ||
- !netif_carrier_ok(bdp->device)) {
- break;
- }
-
- /* initialize hw and sw and start reading */
- e100_mdi_write(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, 0);
- /* reset read counters: */
- bdp->zlock_read_cnt = 0;
- for (pos = 0; pos < 16; pos++)
- bdp->zlock_read_data[pos] = 0;
- /* start reading in the next call back: */
- bdp->zlock_state = ZLOCK_READING;
-
- /* FALL THROUGH !! */
-
- case ZLOCK_READING:
- /* state: reading (100 times) zero locked in 1 sec interval
- * prev states: ZLOCK_INITIAL
- * next states: ZLOCK_INITIAL, ZLOCK_SLEEPING */
-
- e100_mdi_read(bdp, PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, &eq_reg);
- pos = (eq_reg & ZLOCK_ZERO_MASK) >> 4;
- bdp->zlock_read_data[pos]++;
- bdp->zlock_read_cnt++;
-
- if (bdp->zlock_read_cnt == ZLOCK_MAX_READS) {
- /* check if we read a 'Zero' value of 0xB or greater */
- if ((bdp->zlock_read_data[0xB]) ||
- (bdp->zlock_read_data[0xC]) ||
- (bdp->zlock_read_data[0xD]) ||
- (bdp->zlock_read_data[0xE]) ||
- (bdp->zlock_read_data[0xF])) {
-
- /* we've read 'Zero' value of 0xB or greater,
- * find most popular 'Zero' value and lock it */
- mpz = 0;
- /* this loop finds the most popular 'Zero': */
- for (pos = 1; pos < 16; pos++) {
- if (bdp->zlock_read_data[pos] >
- bdp->zlock_read_data[mpz])
-
- mpz = pos;
- }
- /* now lock the most popular 'Zero': */
- eq_reg = (ZLOCK_SET_ZERO | mpz);
- e100_mdi_write(bdp,
- PHY_82555_MDI_EQUALIZER_CSR,
- bdp->phy_addr, eq_reg);
-
- /* sleep for 5 minutes: */
- bdp->zlock_sleep_cnt = jiffies;
- bdp->zlock_state = ZLOCK_SLEEPING;
- /* we will be reading the # of errors after 5
- * minutes, so we need to reset the error
- * counters - these registers are self clearing
- * on read, so read them */
- e100_mdi_read(bdp, PHY_82555_SYMBOL_ERR,
- bdp->phy_addr, &err_cnt);
-
- } else {
- /* we did not read a 'Zero' value of 0xB or
- * above. go back to the start */
- bdp->zlock_state = ZLOCK_INITIAL;
- }
-
- }
- break;
-
- case ZLOCK_SLEEPING:
- /* state: sleeping for 5 minutes
- * prev states: ZLOCK_READING
- * next states: ZLOCK_READING, ZLOCK_SLEEPING */
-
- /* if 5 minutes have passed: */
- if ((jiffies - bdp->zlock_sleep_cnt) >= ZLOCK_MAX_SLEEP) {
- /* read and sum up the number of errors: */
- e100_mdi_read(bdp, PHY_82555_SYMBOL_ERR,
- bdp->phy_addr, &err_cnt);
- /* if we've more than 300 errors (this number was
- * calculated according to the spec max allowed errors
- * (80 errors per 1 million frames) for 5 minutes in
- * 100 Mbps (or the user specified max BER number) */
- if (err_cnt > bdp->params.ber) {
- /* start again in the next callback: */
- bdp->zlock_state = ZLOCK_INITIAL;
- } else {
- /* we don't have more errors than allowed,
- * sleep for 5 minutes */
- bdp->zlock_sleep_cnt = jiffies;
- }
- }
- break;
-
- default:
- break;
- }
-}
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#ifndef _E100_PHY_INC_
-#define _E100_PHY_INC_
-
-#include "e100.h"
-
-/*
- * Auto-polarity enable/disable
- * e100_autopolarity = 0 => disable auto-polarity
- * e100_autopolarity = 1 => enable auto-polarity
- * e100_autopolarity = 2 => let software determine
- */
-#define E100_AUTOPOLARITY 2
-
-#define IS_NC3133(bdp) (((bdp)->pdev->subsystem_vendor == 0x0E11) && \
- ((bdp)->pdev->subsystem_device == 0xB0E1))
-
-#define PHY_503 0
-#define PHY_100_A 0x000003E0
-#define PHY_100_C 0x035002A8
-#define PHY_NSC_TX 0x5c002000
-#define PHY_82562ET 0x033002A8
-#define PHY_82562EM 0x032002A8
-#define PHY_82562EH 0x017002A8
-#define PHY_82555_TX 0x015002a8 /* added this for 82555 */
-#define PHY_OTHER 0xFFFF
-#define MAX_PHY_ADDR 31
-#define MIN_PHY_ADDR 0
-
-#define PHY_MODEL_REV_ID_MASK 0xFFF0FFFF
-
-#define PHY_DEFAULT_ADDRESS 1
-#define PHY_ADDRESS_503 32
-
-/* MDI Control register bit definitions */
-#define MDI_PHY_READY BIT_28 /* PHY is ready for next MDI cycle */
-
-#define MDI_NC3133_CONFIG_REG 0x19
-#define MDI_NC3133_100FX_ENABLE BIT_2
-#define MDI_NC3133_INT_ENABLE_REG 0x17
-#define MDI_NC3133_INT_ENABLE BIT_1
-
-/* MDI Control register opcode definitions */
-#define MDI_WRITE 1 /* Phy Write */
-#define MDI_READ 2 /* Phy read */
-
-/* MDI register set*/
-#define AUTO_NEG_NEXT_PAGE_REG 0x07 /* Auto-negotiation next page xmit */
-#define EXTENDED_REG_0 0x10 /* Extended reg 0 (Phy 100 modes) */
-#define EXTENDED_REG_1 0x14 /* Extended reg 1 (Phy 100 error indications) */
-#define NSC_CONG_CONTROL_REG 0x17 /* National (TX) congestion control */
-#define NSC_SPEED_IND_REG 0x19 /* National (TX) speed indication */
-
-#define HWI_CONTROL_REG 0x1D /* HWI Control register */
-/* MDI/MDI-X Control Register bit definitions */
-#define MDI_MDIX_RES_TIMER BIT_0_3 /* minimum slot time for resolution timer */
-#define MDI_MDIX_CONFIG_IS_OK BIT_4 /* 1 = resolution algorithm completes OK */
-#define MDI_MDIX_STATUS BIT_5 /* 1 = MDIX (cross over), 0 = MDI (straight through) */
-#define MDI_MDIX_SWITCH BIT_6 /* 1 = Forces to MDIX, 0 = Forces to MDI */
-#define MDI_MDIX_AUTO_SWITCH_ENABLE BIT_7 /* 1 = MDI/MDI-X feature enabled */
-#define MDI_MDIX_CONCT_CONFIG BIT_8 /* Sets the MDI/MDI-X connectivity configuration (test purpose only) */
-#define MDI_MDIX_CONCT_TEST_ENABLE BIT_9 /* 1 = Enables connectivity testing */
-#define MDI_MDIX_RESET_ALL_MASK 0x0000
-
-/* HWI Control Register bit definitions */
-#define HWI_TEST_DISTANCE BIT_0_8 /* distance to cable problem */
-#define HWI_TEST_HIGHZ_PROBLEM BIT_9 /* 1 = Open Circuit */
-#define HWI_TEST_LOWZ_PROBLEM BIT_10 /* 1 = Short Circuit */
-#define HWI_TEST_RESERVED (BIT_11 | BIT_12) /* reserved */
-#define HWI_TEST_EXECUTE BIT_13 /* 1 = Execute the HWI test on the PHY */
-#define HWI_TEST_ABILITY BIT_14 /* 1 = test passed */
-#define HWI_TEST_ENABLE BIT_15 /* 1 = Enables the HWI feature */
-#define HWI_RESET_ALL_MASK 0x0000
-
-/* ############Start of 82555 specific defines################## */
-
-/* Intel 82555 specific registers */
-#define PHY_82555_CSR 0x10 /* 82555 CSR */
-#define PHY_82555_SPECIAL_CONTROL 0x11 /* 82555 special control register */
-
-#define PHY_82555_RCV_ERR 0x15 /* 82555 100BaseTx Receive Error
- * Frame Counter */
-#define PHY_82555_SYMBOL_ERR 0x16 /* 82555 RCV Symbol Error Counter */
-#define PHY_82555_PREM_EOF_ERR 0x17 /* 82555 100BaseTx RCV Premature End
- * of Frame Error Counter */
-#define PHY_82555_EOF_COUNTER 0x18 /* 82555 end of frame error counter */
-#define PHY_82555_MDI_EQUALIZER_CSR 0x1a /* 82555 specific equalizer reg. */
-
-/* 82555 CSR bits */
-#define PHY_82555_SPEED_BIT BIT_1
-#define PHY_82555_POLARITY_BIT BIT_8
-
-/* 82555 equalizer reg. opcodes */
-#define ENABLE_ZERO_FORCING 0x2010 /* write to ASD conf. reg. 0 */
-#define DISABLE_ZERO_FORCING 0x2000 /* write to ASD conf. reg. 0 */
-
-/* 82555 special control reg. opcodes */
-#define DISABLE_AUTO_POLARITY 0x0010
-#define EXTENDED_SQUELCH_BIT BIT_2
-
-/* ############End of 82555 specific defines##################### */
-
-/* Auto-Negotiation advertisement register bit definitions*/
-#define NWAY_AD_FC_SUPPORTED 0x0400 /* Flow Control supported */
-
-/* Auto-Negotiation link partner ability register bit definitions*/
-#define NWAY_LP_ABILITY 0x07e0 /* technologies supported */
-
-/* PHY 100 Extended Register 0 bit definitions*/
-#define PHY_100_ER0_FDX_INDIC BIT_0 /* 1 = FDX, 0 = half duplex */
-#define PHY_100_ER0_SPEED_INDIC BIT_1 /* 1 = 100Mbps, 0= 10Mbps */
-
-/* National Semiconductor TX phy congestion control register bit definitions*/
-#define NSC_TX_CONG_TXREADY BIT_10 /* Makes TxReady an input */
-#define NSC_TX_CONG_ENABLE BIT_8 /* Enables congestion control */
-
-/* National Semiconductor TX phy speed indication register bit definitions*/
-#define NSC_TX_SPD_INDC_SPEED BIT_6 /* 0 = 100Mbps, 1=10Mbps */
-
-/************* function prototypes ************/
-extern unsigned char e100_phy_init(struct e100_private *bdp);
-extern unsigned char e100_update_link_state(struct e100_private *bdp);
-extern unsigned char e100_phy_check(struct e100_private *bdp);
-extern void e100_phy_set_speed_duplex(struct e100_private *bdp,
- unsigned char force_restart);
-extern void e100_phy_autoneg(struct e100_private *bdp);
-extern void e100_phy_reset(struct e100_private *bdp);
-extern void e100_phy_set_loopback(struct e100_private *bdp);
-extern int e100_mdi_write(struct e100_private *, u32, u32, u16);
-extern int e100_mdi_read(struct e100_private *, u32, u32, u16 *);
-
-#endif
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#include "e100_phy.h"
-#include "e100_config.h"
-
-extern u16 e100_eeprom_read(struct e100_private *, u16);
-extern int e100_wait_exec_cmplx(struct e100_private *, u32,u8, u8);
-extern void e100_phy_reset(struct e100_private *bdp);
-extern void e100_phy_autoneg(struct e100_private *bdp);
-extern void e100_phy_set_loopback(struct e100_private *bdp);
-extern void e100_force_speed_duplex(struct e100_private *bdp);
-
-static u8 e100_diag_selftest(struct net_device *);
-static u8 e100_diag_eeprom(struct net_device *);
-static u8 e100_diag_loopback(struct net_device *);
-
-static u8 e100_diag_one_loopback (struct net_device *, u8);
-static u8 e100_diag_rcv_loopback_pkt(struct e100_private *);
-static void e100_diag_config_loopback(struct e100_private *, u8, u8, u8 *,u8 *);
-static u8 e100_diag_loopback_alloc(struct e100_private *);
-static void e100_diag_loopback_cu_ru_exec(struct e100_private *);
-static u8 e100_diag_check_pkt(u8 *);
-static void e100_diag_loopback_free(struct e100_private *);
-static int e100_cable_diag(struct e100_private *bdp);
-
-#define LB_PACKET_SIZE 1500
-
-/**
- * e100_run_diag - main test execution handler - checks mask of requests and calls the diag routines
- * @dev: adapter's net device data struct
- * @test_info: array with test request mask also used to store test results
- *
- * RETURNS: updated flags field of struct ethtool_test
- */
-u32
-e100_run_diag(struct net_device *dev, u64 *test_info, u32 flags)
-{
- struct e100_private* bdp = dev->priv;
- u8 test_result = 0;
-
- if (!e100_get_link_state(bdp)) {
- test_result = ETH_TEST_FL_FAILED;
- test_info[test_link] = true;
- }
- if (!e100_diag_eeprom(dev)) {
- test_result = ETH_TEST_FL_FAILED;
- test_info[test_eeprom] = true;
- }
- if (flags & ETH_TEST_FL_OFFLINE) {
- u8 fail_mask;
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- e100_close(dev);
- spin_unlock_bh(&dev->xmit_lock);
- }
- if (e100_diag_selftest(dev)) {
- test_result = ETH_TEST_FL_FAILED;
- test_info[test_self_test] = true;
- }
-
- fail_mask = e100_diag_loopback(dev);
- if (fail_mask) {
- test_result = ETH_TEST_FL_FAILED;
- if (fail_mask & PHY_LOOPBACK)
- test_info[test_loopback_phy] = true;
- if (fail_mask & MAC_LOOPBACK)
- test_info[test_loopback_mac] = true;
- }
-
- test_info[cable_diag] = e100_cable_diag(bdp);
- /* Need hw init regardless of netif_running */
- e100_hw_init(bdp);
- if (netif_running(dev)) {
- e100_open(dev);
- }
- }
- else {
- test_info[test_self_test] = false;
- test_info[test_loopback_phy] = false;
- test_info[test_loopback_mac] = false;
- test_info[cable_diag] = false;
- }
-
- return flags | test_result;
-}
-
-/**
- * e100_diag_selftest - run hardware selftest
- * @dev: adapter's net device data struct
- */
-static u8
-e100_diag_selftest(struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
- u32 st_timeout, st_result;
- u8 retval = 0;
-
- if (!e100_selftest(bdp, &st_timeout, &st_result)) {
- if (!st_timeout) {
- if (st_result & CB_SELFTEST_REGISTER_BIT)
- retval |= REGISTER_TEST_FAIL;
- if (st_result & CB_SELFTEST_DIAG_BIT)
- retval |= SELF_TEST_FAIL;
- if (st_result & CB_SELFTEST_ROM_BIT)
- retval |= ROM_TEST_FAIL;
- } else {
- retval = TEST_TIMEOUT;
- }
- }
-
- return retval;
-}
-
-/**
- * e100_diag_eeprom - validate eeprom checksum correctness
- * @dev: adapter's net device data struct
- *
- */
-static u8
-e100_diag_eeprom (struct net_device *dev)
-{
- struct e100_private *bdp = dev->priv;
- u16 i, eeprom_sum, eeprom_actual_csm;
-
- for (i = 0, eeprom_sum = 0; i < (bdp->eeprom_size - 1); i++) {
- eeprom_sum += e100_eeprom_read(bdp, i);
- }
-
- eeprom_actual_csm = e100_eeprom_read(bdp, bdp->eeprom_size - 1);
-
- if (eeprom_actual_csm == (u16)(EEPROM_SUM - eeprom_sum)) {
- return true;
- }
-
- return false;
-}
-
-/**
- * e100_diag_loopback - performs loopback test
- * @dev: adapter's net device data struct
- */
-static u8
-e100_diag_loopback (struct net_device *dev)
-{
- u8 rc = 0;
-
- printk(KERN_DEBUG "%s: PHY loopback test starts\n", dev->name);
- e100_hw_init(dev->priv);
- if (!e100_diag_one_loopback(dev, PHY_LOOPBACK)) {
- rc |= PHY_LOOPBACK;
- }
- printk(KERN_DEBUG "%s: PHY loopback test ends\n", dev->name);
-
- printk(KERN_DEBUG "%s: MAC loopback test starts\n", dev->name);
- e100_hw_init(dev->priv);
- if (!e100_diag_one_loopback(dev, MAC_LOOPBACK)) {
- rc |= MAC_LOOPBACK;
- }
- printk(KERN_DEBUG "%s: MAC loopback test ends\n", dev->name);
-
- return rc;
-}
-
-/**
- * e100_diag_one_loopback - performs a single loopback test
- * @dev: adapter's net device data struct
- * @mode: loopback test type
- */
-static u8
-e100_diag_one_loopback (struct net_device *dev, u8 mode)
-{
- struct e100_private *bdp = dev->priv;
- u8 res = false;
- u8 saved_dynamic_tbd = false;
- u8 saved_extended_tcb = false;
-
- if (!e100_diag_loopback_alloc(bdp))
- return false;
-
- /* change the config block to standard tcb and the correct loopback */
- e100_diag_config_loopback(bdp, true, mode,
- &saved_extended_tcb, &saved_dynamic_tbd);
-
- e100_diag_loopback_cu_ru_exec(bdp);
-
- if (e100_diag_rcv_loopback_pkt(bdp)) {
- res = true;
- }
-
- e100_diag_loopback_free(bdp);
-
- /* change the config block to previous tcb mode and the no loopback */
- e100_diag_config_loopback(bdp, false, mode,
- &saved_extended_tcb, &saved_dynamic_tbd);
- return res;
-}
-
-/**
- * e100_diag_config_loopback - setup/clear loopback before/after lpbk test
- * @bdp: adapter's private data struct
- * @set_loopback: true if the function is called to set lb
- * @loopback_mode: the loopback mode(MAC or PHY)
- * @tcb_extended: true if need to set extended tcb mode after clean loopback
- * @dynamic_tbd: true if needed to set dynamic tbd mode after clean loopback
- *
- */
-void
-e100_diag_config_loopback(struct e100_private* bdp,
- u8 set_loopback,
- u8 loopback_mode,
- u8* tcb_extended,
- u8* dynamic_tbd)
-{
- /* if set_loopback == true - we want to clear tcb_extended/dynamic_tbd.
- * the previous values are saved in the params tcb_extended/dynamic_tbd
- * if set_loopback == false - we want to restore previous value.
- */
- if (set_loopback || (*tcb_extended))
- *tcb_extended = e100_config_tcb_ext_enable(bdp,*tcb_extended);
-
- if (set_loopback || (*dynamic_tbd))
- *dynamic_tbd = e100_config_dynamic_tbd(bdp,*dynamic_tbd);
-
- if (set_loopback) {
- /* ICH PHY loopback is broken */
- if (bdp->flags & IS_ICH && loopback_mode == PHY_LOOPBACK)
- loopback_mode = MAC_LOOPBACK;
- /* Configure loopback on MAC */
- e100_config_loopback_mode(bdp,loopback_mode);
- } else {
- e100_config_loopback_mode(bdp,NO_LOOPBACK);
- }
-
- e100_config(bdp);
-
- if (loopback_mode == PHY_LOOPBACK) {
- if (set_loopback)
- /* Set PHY loopback mode */
- e100_phy_set_loopback(bdp);
- else
- /* Reset PHY loopback mode */
- e100_phy_reset(bdp);
- /* Wait for PHY state change */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ);
- } else { /* For MAC loopback wait 500 msec to take effect */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ / 2);
- }
-}
-
-/**
- * e100_diag_loopback_alloc - alloc & initiate tcb and rfd for the loopback
- * @bdp: adapter's private data struct
- *
- */
-static u8
-e100_diag_loopback_alloc(struct e100_private *bdp)
-{
- dma_addr_t dma_handle;
- tcb_t *tcb;
- rfd_t *rfd;
- tbd_t *tbd;
-
- /* tcb, tbd and transmit buffer are allocated */
- tcb = pci_alloc_consistent(bdp->pdev,
- (sizeof (tcb_t) + sizeof (tbd_t) +
- LB_PACKET_SIZE),
- &dma_handle);
- if (tcb == NULL)
- return false;
-
- memset(tcb, 0x00, sizeof (tcb_t) + sizeof (tbd_t) + LB_PACKET_SIZE);
- tcb->tcb_phys = dma_handle;
- tcb->tcb_hdr.cb_status = 0;
- tcb->tcb_hdr.cb_cmd =
- cpu_to_le16(CB_EL_BIT | CB_TRANSMIT | CB_TX_SF_BIT);
- /* Next command is null */
- tcb->tcb_hdr.cb_lnk_ptr = cpu_to_le32(0xffffffff);
- tcb->tcb_cnt = 0;
- tcb->tcb_thrshld = bdp->tx_thld;
- tcb->tcb_tbd_num = 1;
- /* Set up tcb tbd pointer */
- tcb->tcb_tbd_ptr = cpu_to_le32(tcb->tcb_phys + sizeof (tcb_t));
- tbd = (tbd_t *) ((u8 *) tcb + sizeof (tcb_t));
- /* Set up tbd transmit buffer */
- tbd->tbd_buf_addr =
- cpu_to_le32(le32_to_cpu(tcb->tcb_tbd_ptr) + sizeof (tbd_t));
- tbd->tbd_buf_cnt = __constant_cpu_to_le16(1024);
- /* The value of first 512 bytes is FF */
- memset((void *) ((u8 *) tbd + sizeof (tbd_t)), 0xFF, 512);
- /* The value of second 512 bytes is BA */
- memset((void *) ((u8 *) tbd + sizeof (tbd_t) + 512), 0xBA, 512);
- mb();
- rfd = pci_alloc_consistent(bdp->pdev, sizeof (rfd_t), &dma_handle);
-
- if (rfd == NULL) {
- pci_free_consistent(bdp->pdev,
- sizeof (tcb_t) + sizeof (tbd_t) +
- LB_PACKET_SIZE, tcb, tcb->tcb_phys);
- return false;
- }
-
- memset(rfd, 0x00, sizeof (rfd_t));
-
- /* init all fields in rfd */
- rfd->rfd_header.cb_cmd = cpu_to_le16(RFD_EL_BIT);
- rfd->rfd_sz = cpu_to_le16(ETH_FRAME_LEN + CHKSUM_SIZE);
- /* dma_handle is physical address of rfd */
- bdp->loopback.dma_handle = dma_handle;
- bdp->loopback.tcb = tcb;
- bdp->loopback.rfd = rfd;
- mb();
- return true;
-}
-
-/**
- * e100_diag_loopback_cu_ru_exec - activates cu and ru to send & receive the pkt
- * @bdp: adapter's private data struct
- *
- */
-static void
-e100_diag_loopback_cu_ru_exec(struct e100_private *bdp)
-{
- /*load CU & RU base */
- if(!e100_wait_exec_cmplx(bdp, bdp->loopback.dma_handle, SCB_RUC_START, 0))
- printk(KERN_ERR "e100: SCB_RUC_START failed!\n");
-
- bdp->next_cu_cmd = START_WAIT;
- e100_start_cu(bdp, bdp->loopback.tcb);
- bdp->last_tcb = NULL;
- rmb();
-}
-/**
- * e100_diag_check_pkt - checks if a given packet is a loopback packet
- * @bdp: adapter's private data struct
- *
- * Returns true if OK, false otherwise.
- */
-static u8
-e100_diag_check_pkt(u8 *datap)
-{
- int i;
- for (i = 0; i < 512; i++) {
- if (!((*(datap + i) == 0xFF) && (*(datap + 512 + i) == 0xBA))) {
- printk (KERN_ERR "e100: check loopback packet failed at: %x\n", i);
- return false;
- }
- }
- printk (KERN_DEBUG "e100: Check received loopback packet OK\n");
- return true;
-}
-
-/**
- * e100_diag_rcv_loopback_pkt - waits for receive and checks lpbk packet
- * @bdp: adapter's private data struct
- *
- * Returns true if OK, false otherwise.
- */
-static u8
-e100_diag_rcv_loopback_pkt(struct e100_private* bdp)
-{
- rfd_t *rfdp;
- u16 rfd_status;
- unsigned long expires = jiffies + HZ * 2;
-
- rfdp =bdp->loopback.rfd;
-
- rfd_status = le16_to_cpu(rfdp->rfd_header.cb_status);
-
- while (!(rfd_status & RFD_STATUS_COMPLETE)) {
- if (time_before(jiffies, expires)) {
- yield();
- rmb();
- rfd_status = le16_to_cpu(rfdp->rfd_header.cb_status);
- } else {
- break;
- }
- }
-
- if (rfd_status & RFD_STATUS_COMPLETE) {
- printk(KERN_DEBUG "e100: Loopback packet received\n");
- return e100_diag_check_pkt(((u8 *)rfdp+bdp->rfd_size));
- }
- else {
- printk(KERN_ERR "e100: Loopback packet not received\n");
- return false;
- }
-}
-
-/**
- * e100_diag_loopback_free - free data allocated for loopback pkt send/receive
- * @bdp: adapter's private data struct
- *
- */
-static void
-e100_diag_loopback_free (struct e100_private *bdp)
-{
- pci_free_consistent(bdp->pdev,
- sizeof(tcb_t) + sizeof(tbd_t) + LB_PACKET_SIZE,
- bdp->loopback.tcb, bdp->loopback.tcb->tcb_phys);
-
- pci_free_consistent(bdp->pdev, sizeof(rfd_t), bdp->loopback.rfd,
- bdp->loopback.dma_handle);
-}
-
-static int
-e100_cable_diag(struct e100_private *bdp)
-{
- int saved_open_circut = 0xffff;
- int saved_short_circut = 0xffff;
- int saved_distance = 0xffff;
- int saved_same = 0;
- int cable_status = E100_CABLE_UNKNOWN;
- int i;
-
- /* If we have link, */
- if (e100_get_link_state(bdp))
- return E100_CABLE_OK;
-
- if (bdp->rev_id < D102_REV_ID)
- return E100_CABLE_UNKNOWN;
-
- /* Disable MDI/MDI-X auto switching */
- e100_mdi_write(bdp, MII_NCONFIG, bdp->phy_addr,
- MDI_MDIX_RESET_ALL_MASK);
- /* Set to 100 Full as required by cable test */
- e100_mdi_write(bdp, MII_BMCR, bdp->phy_addr,
- BMCR_SPEED100 | BMCR_FULLDPLX);
-
- /* Test up to 100 times */
- for (i = 0; i < 100; i++) {
- u16 ctrl_reg;
- int distance, open_circut, short_circut, near_end;
-
- /* Enable and execute cable test */
- e100_mdi_write(bdp, HWI_CONTROL_REG, bdp->phy_addr,
- (HWI_TEST_ENABLE | HWI_TEST_EXECUTE));
- /* Wait for cable test finished */
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ/100 + 1);
- /* Read results */
- e100_mdi_read(bdp, HWI_CONTROL_REG, bdp->phy_addr, &ctrl_reg);
- distance = ctrl_reg & HWI_TEST_DISTANCE;
- open_circut = ctrl_reg & HWI_TEST_HIGHZ_PROBLEM;
- short_circut = ctrl_reg & HWI_TEST_LOWZ_PROBLEM;
-
- if ((distance == saved_distance) &&
- (open_circut == saved_open_circut) &&
- (short_circut == saved_short_circut))
- saved_same++;
- else {
- saved_same = 0;
- saved_distance = distance;
- saved_open_circut = open_circut;
- saved_short_circut = short_circut;
- }
- /* If results are the same 3 times */
- if (saved_same == 3) {
- near_end = ((distance * HWI_REGISTER_GRANULARITY) <
- HWI_NEAR_END_BOUNDARY);
- if (open_circut)
- cable_status = (near_end) ?
- E100_CABLE_OPEN_NEAR : E100_CABLE_OPEN_FAR;
- if (short_circut)
- cable_status = (near_end) ?
- E100_CABLE_SHORT_NEAR : E100_CABLE_SHORT_FAR;
- break;
- }
- }
- /* Reset cable test */
- e100_mdi_write(bdp, HWI_CONTROL_REG, bdp->phy_addr, HWI_RESET_ALL_MASK);
- return cable_status;
-}
-
+++ /dev/null
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2003 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-*******************************************************************************/
-
-#ifndef _E100_UCODE_H_
-#define _E100_UCODE_H_
-
-/*
-e100_ucode.h
-
-This file contains the loadable micro code arrays to implement receive
-bundling on the D101 A-step, D101 B-step, D101M (B-step only), D101S,
-D102 B-step, D102 B-step with TCO work around and D102 C-step.
-
-Each controller has its own specific micro code array. The array for one
-controller is totally incompatible with any other controller, and if used
-will most likely cause the controller to lock up and stop responding to
-the driver. Each micro code array has its own parameter offsets (described
-below), and they each have their own version number.
-*/
-
-/*************************************************************************
-* CPUSaver parameters
-*
-* All CPUSaver parameters are 16-bit literals that are part of a
-* "move immediate value" instruction. By changing the value of
-* the literal in the instruction before the code is loaded, the
-* driver can change the algorithm.
-*
-* CPUSAVER_DWORD - This is the location of the instruction that loads
-* the dead-man timer with its initial value. By writing a 16-bit
-* value to the low word of this instruction, the driver can change
-* the timer value. The current default is either x600 or x800;
-* experiments show that the value probably should stay within the
-* range of x200 - x1000.
-*
-* CPUSAVER_BUNDLE_MAX_DWORD - This is the location of the instruction
-* that sets the maximum number of frames that will be bundled. In
-* some situations, such as the TCP windowing algorithm, it may be
-* better to limit the growth of the bundle size than let it go as
-* high as it can, because that could cause too much added latency.
-* The default is six, because this is the number of packets in the
-* default TCP window size. A value of 1 would make CPUSaver indicate
-* an interrupt for every frame received. If you do not want to put
-* a limit on the bundle size, set this value to xFFFF.
-*
-* CPUSAVER_MIN_SIZE_DWORD - This is the location of the instruction
-* that contains a bit-mask describing the minimum size frame that
-* will be bundled. The default masks the lower 7 bits, which means
-* that any frame less than 128 bytes in length will not be bundled,
-* but will instead immediately generate an interrupt. This does
-* not affect the current bundle in any way. Any frame that is 128
-* bytes or larger will be bundled normally. This feature is meant
-* to provide immediate indication of ACK frames in a TCP environment.
-* Customers were seeing poor performance when a machine with CPUSaver
-* enabled was sending but not receiving. The delay introduced when
-* the ACKs were received was enough to reduce total throughput, because
-* the sender would sit idle until the ACK was finally seen.
-*
-* The current default is 0xFF80, which masks out the lower 7 bits.
-* This means that any frame which is x7F (127) bytes or smaller
-* will cause an immediate interrupt. Because this value must be a
-* bit mask, there are only a few valid values that can be used. To
-* turn this feature off, the driver can write the value xFFFF to the
-* lower word of this instruction (in the same way that the other
-* parameters are used). Likewise, a value of 0xF800 (2047) would
-* cause an interrupt to be generated for every frame, because all
-* standard Ethernet frames are <= 2047 bytes in length.
-*************************************************************************/
-
-#ifndef UCODE_MAX_DWORDS
-#define UCODE_MAX_DWORDS 134
-#endif
-
-/********************************************************/
-/* CPUSaver micro code for the D101A */
-/********************************************************/
-
-/* Version 2.0 */
-
-/* This value is the same for both A and B step of 558. */
-
-#define D101_CPUSAVER_TIMER_DWORD 72
-#define D101_CPUSAVER_BUNDLE_DWORD UCODE_MAX_DWORDS
-#define D101_CPUSAVER_MIN_SIZE_DWORD UCODE_MAX_DWORDS
-
-#define D101_A_RCVBUNDLE_UCODE \
-{\
-0x03B301BB, 0x0046FFFF, 0xFFFFFFFF, 0x051DFFFF, 0xFFFFFFFF, 0xFFFFFFFF, \
-0x000C0001, 0x00101212, 0x000C0008, 0x003801BC, \
-0x00000000, 0x00124818, 0x000C1000, 0x00220809, \
-0x00010200, 0x00124818, 0x000CFFFC, 0x003803B5, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x0024B81D, 0x00130836, 0x000C0001, \
-0x0026081C, 0x0020C81B, 0x00130824, 0x00222819, \
-0x00101213, 0x00041000, 0x003A03B3, 0x00010200, \
-0x00101B13, 0x00238081, 0x00213049, 0x0038003B, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x0024B83E, 0x00130826, 0x000C0001, \
-0x0026083B, 0x00010200, 0x00134824, 0x000C0001, \
-0x00101213, 0x00041000, 0x0038051E, 0x00101313, \
-0x00010400, 0x00380521, 0x00050600, 0x00100824, \
-0x00101310, 0x00041000, 0x00080600, 0x00101B10, \
-0x0038051E, 0x00000000, 0x00000000, 0x00000000 \
-}
-
-/********************************************************/
-/* CPUSaver micro code for the D101B */
-/********************************************************/
-
-/* Version 2.0 */
-
-#define D101_B0_RCVBUNDLE_UCODE \
-{\
-0x03B401BC, 0x0047FFFF, 0xFFFFFFFF, 0x051EFFFF, 0xFFFFFFFF, 0xFFFFFFFF, \
-0x000C0001, 0x00101B92, 0x000C0008, 0x003801BD, \
-0x00000000, 0x00124818, 0x000C1000, 0x00220809, \
-0x00010200, 0x00124818, 0x000CFFFC, 0x003803B6, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x0024B81D, 0x0013082F, 0x000C0001, \
-0x0026081C, 0x0020C81B, 0x00130837, 0x00222819, \
-0x00101B93, 0x00041000, 0x003A03B4, 0x00010200, \
-0x00101793, 0x00238082, 0x0021304A, 0x0038003C, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x0024B83E, 0x00130826, 0x000C0001, \
-0x0026083B, 0x00010200, 0x00134837, 0x000C0001, \
-0x00101B93, 0x00041000, 0x0038051F, 0x00101313, \
-0x00010400, 0x00380522, 0x00050600, 0x00100837, \
-0x00101310, 0x00041000, 0x00080600, 0x00101790, \
-0x0038051F, 0x00000000, 0x00000000, 0x00000000 \
-}
-
-/********************************************************/
-/* CPUSaver micro code for the D101M (B-step only) */
-/********************************************************/
-
-/* Version 2.10.1 */
-
-/* Parameter values for the D101M B-step */
-#define D101M_CPUSAVER_TIMER_DWORD 78
-#define D101M_CPUSAVER_BUNDLE_DWORD 65
-#define D101M_CPUSAVER_MIN_SIZE_DWORD 126
-
-#define D101M_B_RCVBUNDLE_UCODE \
-{\
-0x00550215, 0xFFFF0437, 0xFFFFFFFF, 0x06A70789, 0xFFFFFFFF, 0x0558FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380216, \
-0x0010009C, 0x00204056, 0x002380CC, 0x00380056, \
-0x0010009C, 0x00244C0B, 0x00000800, 0x00124818, \
-0x00380438, 0x00000000, 0x00140000, 0x00380555, \
-0x00308000, 0x00100662, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222C21, 0x000C0002, 0x00103093, \
-0x00380C7A, 0x00080000, 0x00103090, 0x00380C7A, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244C2D, 0x00010004, 0x00041000, \
-0x003A0437, 0x00044010, 0x0038078A, 0x00000000, \
-0x00100099, 0x00206C7A, 0x0010009C, 0x00244C48, \
-0x00130824, 0x000C0001, 0x00101213, 0x00260C75, \
-0x00041000, 0x00010004, 0x00130826, 0x000C0006, \
-0x002206A8, 0x0013C926, 0x00101313, 0x003806A8, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380C34, 0x00000000, 0x00000000, \
-0x0021155B, 0x00100099, 0x00206559, 0x0010009C, \
-0x00244559, 0x00130836, 0x000C0000, 0x00220C62, \
-0x000C0001, 0x00101B13, 0x00229C0E, 0x00210C0E, \
-0x00226C0E, 0x00216C0E, 0x0022FC0E, 0x00215C0E, \
-0x00214C0E, 0x00380555, 0x00010004, 0x00041000, \
-0x00278C67, 0x00040800, 0x00018100, 0x003A0437, \
-0x00130826, 0x000C0001, 0x00220559, 0x00101313, \
-0x00380559, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00130831, 0x0010090B, 0x00124813, \
-0x000CFF80, 0x002606AB, 0x00041000, 0x00010004, \
-0x003806A8, 0x00000000, 0x00000000, 0x00000000, \
-}
-
-/********************************************************/
-/* CPUSaver micro code for the D101S */
-/********************************************************/
-
-/* Version 1.20.1 */
-
-/* Parameter values for the D101S */
-#define D101S_CPUSAVER_TIMER_DWORD 78
-#define D101S_CPUSAVER_BUNDLE_DWORD 67
-#define D101S_CPUSAVER_MIN_SIZE_DWORD 128
-
-#define D101S_RCVBUNDLE_UCODE \
-{\
-0x00550242, 0xFFFF047E, 0xFFFFFFFF, 0x06FF0818, 0xFFFFFFFF, 0x05A6FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380243, \
-0x0010009C, 0x00204056, 0x002380D0, 0x00380056, \
-0x0010009C, 0x00244F8B, 0x00000800, 0x00124818, \
-0x0038047F, 0x00000000, 0x00140000, 0x003805A3, \
-0x00308000, 0x00100610, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222FA1, 0x000C0002, 0x00103093, \
-0x00380F90, 0x00080000, 0x00103090, 0x00380F90, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244FAD, 0x00010004, 0x00041000, \
-0x003A047E, 0x00044010, 0x00380819, 0x00000000, \
-0x00100099, 0x00206FFD, 0x0010009A, 0x0020AFFD, \
-0x0010009C, 0x00244FC8, 0x00130824, 0x000C0001, \
-0x00101213, 0x00260FF7, 0x00041000, 0x00010004, \
-0x00130826, 0x000C0006, 0x00220700, 0x0013C926, \
-0x00101313, 0x00380700, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380FB6, 0x00000000, 0x00000000, \
-0x002115A9, 0x00100099, 0x002065A7, 0x0010009A, \
-0x0020A5A7, 0x0010009C, 0x002445A7, 0x00130836, \
-0x000C0000, 0x00220FE4, 0x000C0001, 0x00101B13, \
-0x00229F8E, 0x00210F8E, 0x00226F8E, 0x00216F8E, \
-0x0022FF8E, 0x00215F8E, 0x00214F8E, 0x003805A3, \
-0x00010004, 0x00041000, 0x00278FE9, 0x00040800, \
-0x00018100, 0x003A047E, 0x00130826, 0x000C0001, \
-0x002205A7, 0x00101313, 0x003805A7, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00130831, \
-0x0010090B, 0x00124813, 0x000CFF80, 0x00260703, \
-0x00041000, 0x00010004, 0x00380700 \
-}
-
-/********************************************************/
-/* CPUSaver micro code for the D102 B-step */
-/********************************************************/
-
-/* Version 2.0 */
-/* Parameter values for the D102 B-step */
-#define D102_B_CPUSAVER_TIMER_DWORD 82
-#define D102_B_CPUSAVER_BUNDLE_DWORD 106
-#define D102_B_CPUSAVER_MIN_SIZE_DWORD 70
-
-#define D102_B_RCVBUNDLE_UCODE \
-{\
-0x006F0276, 0x0EF71FFF, 0x0ED30F86, 0x0D250ED9, 0x1FFF1FFF, 0x1FFF04D2, \
-0x00300001, 0x0140D871, 0x00300008, 0x00E00277, \
-0x01406C57, 0x00816073, 0x008700FA, 0x00E00070, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x01406CBA, 0x00807F9A, 0x00901F9A, 0x0024FFFF, \
-0x014B6F6F, 0x0030FFFE, 0x01407172, 0x01496FBA, \
-0x014B6F72, 0x00308000, 0x01406C52, 0x00912EFC, \
-0x00E00EF8, 0x00000000, 0x00000000, 0x00000000, \
-0x00906F8C, 0x00900F8C, 0x00E00F87, 0x00000000, \
-0x00906ED8, 0x01406C55, 0x00E00ED4, 0x00000000, \
-0x01406C51, 0x0080DFC2, 0x01406C52, 0x00815FC2, \
-0x01406C57, 0x00917FCC, 0x00E01FDD, 0x00000000, \
-0x00822D30, 0x01406C51, 0x0080CD26, 0x01406C52, \
-0x00814D26, 0x01406C57, 0x00916D26, 0x014C6FD7, \
-0x00300000, 0x00841FD2, 0x00300001, 0x0140D772, \
-0x00E012B3, 0x014C6F91, 0x0150710B, 0x01496F72, \
-0x0030FF80, 0x00940EDD, 0x00102000, 0x00038400, \
-0x00E00EDA, 0x00000000, 0x00000000, 0x00000000, \
-0x01406C57, 0x00917FE9, 0x00001000, 0x00E01FE9, \
-0x00200600, 0x0140D76F, 0x00138400, 0x01406FD8, \
-0x0140D96F, 0x00E01FDD, 0x00038400, 0x00102000, \
-0x00971FD7, 0x00101000, 0x00050200, 0x00E804D2, \
-0x014C6FD8, 0x00300001, 0x00840D26, 0x0140D872, \
-0x00E00D26, 0x014C6FD9, 0x00300001, 0x0140D972, \
-0x00941FBD, 0x00102000, 0x00038400, 0x014C6FD8, \
-0x00300006, 0x00840EDA, 0x014F71D8, 0x0140D872, \
-0x00E00EDA, 0x01496F50, 0x00E004D3, 0x00000000, \
-}
-
-/********************************************************/
-/* Micro code for the D102 C-step */
-/********************************************************/
-
-/* Parameter values for the D102 C-step */
-#define D102_C_CPUSAVER_TIMER_DWORD 46
-#define D102_C_CPUSAVER_BUNDLE_DWORD 74
-#define D102_C_CPUSAVER_MIN_SIZE_DWORD 54
-
-#define D102_C_RCVBUNDLE_UCODE \
-{ \
-0x00700279, 0x0E6604E2, 0x02BF0CAE, 0x1508150C, 0x15190E5B, 0x0E840F13, \
-0x00E014D8, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014DC, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014F4, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014E0, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014E7, 0x00000000, 0x00000000, 0x00000000, \
-0x00141000, 0x015D6F0D, 0x00E002C0, 0x00000000, \
-0x00200600, 0x00E0150D, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0030FF80, 0x00940E6A, 0x00038200, 0x00102000, \
-0x00E00E67, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00906E65, 0x00800E60, 0x00E00E5D, 0x00000000, \
-0x00300006, 0x00E0151A, 0x00000000, 0x00000000, \
-0x00906F19, 0x00900F19, 0x00E00F14, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x01406CBA, 0x00807FDA, 0x00901FDA, 0x0024FFFF, \
-0x014B6F6F, 0x0030FFFE, 0x01407172, 0x01496FBA, \
-0x014B6F72, 0x00308000, 0x01406C52, 0x00912E89, \
-0x00E00E85, 0x00000000, 0x00000000, 0x00000000 \
-}
-
-/********************************************************/
-/* Micro code for the D102 E-step */
-/********************************************************/
-
-/* Parameter values for the D102 E-step */
-#define D102_E_CPUSAVER_TIMER_DWORD 42
-#define D102_E_CPUSAVER_BUNDLE_DWORD 54
-#define D102_E_CPUSAVER_MIN_SIZE_DWORD 46
-
-#define D102_E_RCVBUNDLE_UCODE \
-{\
-0x007D028F, 0x0E4204F9, 0x14ED0C85, 0x14FA14E9, 0x1FFF1FFF, 0x1FFF1FFF, \
-0x00E014B9, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014BD, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014D5, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C1, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C8, 0x00000000, 0x00000000, 0x00000000, \
-0x00200600, 0x00E014EE, 0x00000000, 0x00000000, \
-0x0030FF80, 0x00940E46, 0x00038200, 0x00102000, \
-0x00E00E43, 0x00000000, 0x00000000, 0x00000000, \
-0x00300006, 0x00E014FB, 0x00000000, 0x00000000 \
-}
-
-#endif /* _E100_UCODE_H_ */
iph5526_probe_pci(dev);
err = register_netdev(dev);
if (err < 0) {
- kfree(dev);
+ free_netdev(dev);
printk("iph5526.c: init_fcdev failed for card #%d\n", i+1);
break;
}
/* Initialize the FEC Ethernet on 860T (or ColdFire 5272).
*/
+ /*
+ * XXX: We need to clean up on failure exits here.
+ */
int __init fec_enet_init(struct net_device *dev)
{
- struct fec_enet_private *fep;
+ struct fec_enet_private *fep = dev->priv;
unsigned long mem_addr;
volatile cbd_t *bdp;
cbd_t *cbd_base;
if (found)
return(-ENXIO);
- /* Allocate some private information.
- */
- fep = (struct fec_enet_private *)kmalloc(sizeof(*fep), GFP_KERNEL);
- if (!fep)
- return -ENOMEM;
- memset(fep, 0, sizeof(*fep));
-
/* Create an Ethernet device instance.
*/
fecp = fec_hwp;
}
mem_addr = __get_free_page(GFP_KERNEL);
cbd_base = (cbd_t *)mem_addr;
+ /* XXX: missing check for allocation failure */
fec_uncache(mem_addr);
/* Allocate a page.
*/
mem_addr = __get_free_page(GFP_KERNEL);
+ /* XXX: missing check for allocation failure */
fec_uncache(mem_addr);
fec_request_intrs(dev, fecp);
dev->base_addr = (unsigned long)fecp;
- dev->priv = fep;
-
- ether_setup(dev);
/* The FEC Ethernet specific entries in the device structure. */
dev->open = fec_enet_open;
fecp->fec_mii_speed = fep->phy_speed;
}
-static struct net_device fec_dev = {
- .init = fec_enet_init,
-};
+static struct net_device *fec_dev;
static int __init fec_enet_module_init(void)
{
- if (register_netdev(&fec_dev) != 0)
+ struct net_device *dev;
+ int err;
+
+ dev = alloc_etherdev(sizeof(struct fec_enet_private));
+ if (!dev)
+ return -ENOMEM;
+ err = fec_enet_init(dev);
+ if (err) {
+ free_netdev(dev);
+ return err;
+ }
+
+ if (register_netdev(dev) != 0) {
+ /* XXX: missing cleanup here */
+ free_netdev(dev);
return -EIO;
+ }
+ fec_dev = dev;
return(0);
}
(sixpack_ctrls[i] = (sixpack_ctrl_t *)kmalloc(sizeof(sixpack_ctrl_t),
GFP_KERNEL)) != NULL) {
spp = sixpack_ctrls[i];
- memset(spp, 0, sizeof(sixpack_ctrl_t));
-
- /* Initialize channel control data */
- set_bit(SIXPF_INUSE, &spp->ctrl.flags);
- spp->ctrl.tty = NULL;
- sprintf(spp->dev.name, "sp%d", i);
- spp->dev.base_addr = i;
- spp->dev.priv = (void *) &spp->ctrl;
- spp->dev.next = NULL;
- spp->dev.init = sixpack_init;
}
+ memset(spp, 0, sizeof(sixpack_ctrl_t));
+
+ /* Initialize channel control data */
+ set_bit(SIXPF_INUSE, &spp->ctrl.flags);
+ spp->ctrl.tty = NULL;
+ sprintf(spp->dev.name, "sp%d", i);
+ spp->dev.base_addr = i;
+ spp->dev.priv = (void *) &spp->ctrl;
+ spp->dev.next = NULL;
+ spp->dev.init = sixpack_init;
if (spp != NULL) {
/* register device so that it can be ifconfig'ed */
certain parameters, such as channel access timing, clock mode, and
DMA channel. This is accomplished with a small utility program,
dmascc_cfg, available at
- <http://www.nt.tuwien.ac.at/~kkudielk/Linux/>. Please be sure to get
- at least version 1.27 of dmascc_cfg, as older versions will not
+ <http://cacofonix.nt.tuwien.ac.at/~oe1kib/Linux/>. Please be sure to
+ get at least version 1.27 of dmascc_cfg, as older versions will not
work with the current driver.
config SCC
help
Say Y here if you experience problems with the SCC driver not
working properly; please read
- <file:Documentation/networking/z8530drv.txt> for details. If unsure,
- say N.
+ <file:Documentation/networking/z8530drv.txt> for details.
+
+ If unsure, say N.
config SCC_TRXECHO
bool "support for TRX that feedback the tx signal to rx"
help
Some transmitters feed the transmitted signal back to the receive
line. Say Y here to foil this by explicitly disabling the receiver
- during data transmission. If in doubt, say Y.
+ during data transmission.
+
+ If in doubt, say Y.
config BAYCOM_SER_FDX
tristate "BAYCOM ser12 fullduplex driver for AX.25"
int err;
err = hp100_isa_init();
-
+ if (err && err != -ENODEV)
+ goto out;
#ifdef CONFIG_EISA
- err |= eisa_driver_register(&hp100_eisa_driver);
+ err = eisa_driver_register(&hp100_eisa_driver);
+ if (err && err != -ENODEV)
+ goto out2;
#endif
#ifdef CONFIG_PCI
- err |= pci_module_init(&hp100_pci_driver);
+ err = pci_module_init(&hp100_pci_driver);
+ if (err && err != -ENODEV)
+ goto out3;
#endif
+ out:
return err;
+ out3:
+#ifdef CONFIG_EISA
+ eisa_driver_unregister(&hp100_eisa_driver);
+ out2:
+#endif
+ hp100_isa_cleanup();
+ goto out;
}
information, download the following tar gzip file.
There is a pre-compiled module on
- <http://engsvr.ust.hk/~eetwl95/download/ma600-2.4.x.tar.gz>
+ <http://engsvr.ust.hk/~eetwl95/ma600.html>
config EP7211_IR
tristate "EP7211 I/R support"
Please note that the driver is still experimental. And of course,
you will need both USB and IrDA support in your kernel...
+config SIGMATEL_FIR
+ tristate "SigmaTel STIr4200 bridge (EXPERIMENTAL)"
+ depends on IRDA && USB && EXPERIMENTAL
+ select CRC32
+ ---help---
+ Say Y here if you want to build support for the SigmaTel STIr4200
+ USB IrDA FIR bridge device driver.
+
+ USB bridges based on the SigmaTel STIr4200 don't conform to the
+ IrDA-USB device class specification, and therefore need their
+ own specific driver. These dongles support SIR and FIR (4 Mbps)
+ speeds.
+
+ To compile it as a module, choose M here: the module will be called
+ stir4200.
+
config NSC_FIR
tristate "NSC PC87108/PC87338"
depends on IRDA && ISA
obj-$(CONFIG_IRPORT_SIR) += irport.o
# FIR drivers
obj-$(CONFIG_USB_IRDA) += irda-usb.o
+obj-$(CONFIG_SIGMATEL_FIR) += stir4200.o
obj-$(CONFIG_NSC_FIR) += nsc-ircc.o
obj-$(CONFIG_WINBOND_FIR) += w83977af_ir.o
obj-$(CONFIG_SA1100_FIR) += sa1100_ir.o
--- /dev/null
+/*****************************************************************************
+*
+* Filename: stir4200.c
+* Version: 0.4
+* Description: Irda SigmaTel USB Dongle
+* Status: Experimental
+* Author: Stephen Hemminger <shemminger@osdl.org>
+*
+* Based on earlier driver by Paul Stewart <stewart@parc.com>
+*
+* Copyright (C) 2000, Roman Weissgaerber <weissg@vienna.at>
+* Copyright (C) 2001, Dag Brattli <dag@brattli.net>
+* Copyright (C) 2001, Jean Tourrilhes <jt@hpl.hp.com>
+* Copyright (C) 2004, Stephen Hemminger <shemminger@osdl.org>
+*
+* This program is free software; you can redistribute it and/or modify
+* it under the terms of the GNU General Public License as published by
+* the Free Software Foundation; either version 2 of the License, or
+* (at your option) any later version.
+*
+* This program is distributed in the hope that it will be useful,
+* but WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+* GNU General Public License for more details.
+*
+* You should have received a copy of the GNU General Public License
+* along with this program; if not, write to the Free Software
+* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*
+*****************************************************************************/
+
+/*
+ * This dongle does no framing, and requires polling to receive the
+ * data. The STIr4200 has bulk in and out endpoints just like
+ * usb-irda devices, but the data it sends and receives is raw; like
+ * irtty, it needs to call the wrap and unwrap functions to add and
+ * remove SOF/BOF and escape characters to/from the frame.
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/time.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/suspend.h>
+#include <linux/slab.h>
+#include <linux/usb.h>
+#include <net/irda/irda.h>
+#include <net/irda/irlap.h>
+#include <net/irda/irda_device.h>
+#include <net/irda/wrapper.h>
+#include <net/irda/crc.h>
+#include <linux/crc32.h>
+
+MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>");
+MODULE_DESCRIPTION("IrDA-USB Dongle Driver for SigmaTel STIr4200");
+MODULE_LICENSE("GPL");
+
+static int qos_mtt_bits = 0x07; /* 1 ms or more */
+module_param(qos_mtt_bits, int, 0);
+MODULE_PARM_DESC(qos_mtt_bits, "Minimum Turn Time");
+
+static int rx_sensitivity = 1; /* FIR 0..4, SIR 0..6 */
+module_param(rx_sensitivity, int, 0);
+MODULE_PARM_DESC(rx_sensitivity, "Set Receiver sensitivity (0-6, 0 is most sensitive)");
+
+static int tx_power = 0; /* 0 = highest ... 3 = lowest */
+module_param(tx_power, int, 0);
+MODULE_PARM_DESC(tx_power, "Set Transmitter power (0-3, 0 is highest power)");
+
+static int rx_interval = 5; /* milliseconds */
+module_param(rx_interval, int, 0);
+MODULE_PARM_DESC(rx_interval, "Receive polling interval (ms)");
+
+#define STIR_IRDA_HEADER 4
+#define CTRL_TIMEOUT 100 /* milliseconds */
+#define TRANSMIT_TIMEOUT 200 /* milliseconds */
+#define STIR_FIFO_SIZE 4096
+#define NUM_RX_URBS 2
+
+enum FirChars {
+ FIR_CE = 0x7d,
+ FIR_XBOF = 0x7f,
+ FIR_EOF = 0x7e,
+};
+
+enum StirRequests {
+ REQ_WRITE_REG = 0x00,
+ REQ_READ_REG = 0x01,
+ REQ_READ_ROM = 0x02,
+ REQ_WRITE_SINGLE = 0x03,
+};
+
+/* Register offsets */
+enum StirRegs {
+ REG_RSVD=0,
+ REG_MODE,
+ REG_PDCLK,
+ REG_CTRL1,
+ REG_CTRL2,
+ REG_FIFOCTL,
+ REG_FIFOLSB,
+ REG_FIFOMSB,
+ REG_DPLL,
+ REG_IRDIG,
+ REG_TEST=15,
+};
+
+enum StirModeMask {
+ MODE_FIR = 0x80,
+ MODE_SIR = 0x20,
+ MODE_ASK = 0x10,
+ MODE_FASTRX = 0x08,
+ MODE_FFRSTEN = 0x04,
+ MODE_NRESET = 0x02,
+ MODE_2400 = 0x01,
+};
+
+enum StirPdclkMask {
+ PDCLK_4000000 = 0x02,
+ PDCLK_115200 = 0x09,
+ PDCLK_57600 = 0x13,
+ PDCLK_38400 = 0x1D,
+ PDCLK_19200 = 0x3B,
+ PDCLK_9600 = 0x77,
+ PDCLK_2400 = 0xDF,
+};
+
+enum StirCtrl1Mask {
+ CTRL1_SDMODE = 0x80,
+ CTRL1_RXSLOW = 0x40,
+ CTRL1_TXPWD = 0x10,
+ CTRL1_RXPWD = 0x08,
+ CTRL1_SRESET = 0x01,
+};
+
+enum StirCtrl2Mask {
+ CTRL2_SPWIDTH = 0x08,
+ CTRL2_REVID = 0x03,
+};
+
+enum StirFifoCtlMask {
+ FIFOCTL_EOF = 0x80,
+ FIFOCTL_UNDER = 0x40,
+ FIFOCTL_OVER = 0x20,
+ FIFOCTL_DIR = 0x10,
+ FIFOCTL_CLR = 0x08,
+ FIFOCTL_EMPTY = 0x04,
+ FIFOCTL_RXERR = 0x02,
+ FIFOCTL_TXERR = 0x01,
+};
+
+enum StirDiagMask {
+ IRDIG_RXHIGH = 0x80,
+ IRDIG_RXLOW = 0x40,
+};
+
+enum StirTestMask {
+ TEST_PLLDOWN = 0x80,
+ TEST_LOOPIR = 0x40,
+ TEST_LOOPUSB = 0x20,
+ TEST_TSTENA = 0x10,
+ TEST_TSTOSC = 0x0F,
+};
+
+enum StirState {
+ STIR_STATE_RECEIVING=0,
+ STIR_STATE_TXREADY,
+};
+
+struct stir_cb {
+ struct usb_device *usbdev; /* init: probe_irda */
+ struct net_device *netdev; /* network layer */
+ struct irlap_cb *irlap; /* The link layer we are bound to */
+ struct net_device_stats stats; /* network statistics */
+ struct qos_info qos;
+ unsigned long state;
+ unsigned speed; /* Current speed */
+
+ wait_queue_head_t thr_wait; /* transmit thread wakeup */
+ struct completion thr_exited;
+ pid_t thr_pid;
+
+ unsigned int tx_bulkpipe;
+ void *tx_data; /* wrapped data out */
+ unsigned tx_len;
+ unsigned tx_newspeed;
+ unsigned tx_mtt;
+
+ unsigned int rx_intpipe;
+ iobuff_t rx_buff; /* receive unwrap state machine */
+ struct timespec rx_time;
+
+ struct urb *rx_urbs[NUM_RX_URBS];
+ void *rx_data[NUM_RX_URBS];
+};
+
+
+/* These are the currently known USB ids */
+static struct usb_device_id dongles[] = {
+ /* SigmaTel, Inc, STIr4200 IrDA/USB Bridge */
+ { USB_DEVICE(0x066f, 0x4200) },
+ { }
+};
+
+MODULE_DEVICE_TABLE(usb, dongles);
+
+static int fifo_txwait(struct stir_cb *stir, unsigned space);
+static void stir_usb_receive(struct urb *urb, struct pt_regs *regs);
+
+/* Send control message to set dongle register */
+static int write_reg(struct stir_cb *stir, __u16 reg, __u8 value)
+{
+ struct usb_device *dev = stir->usbdev;
+
+ pr_debug("%s: write reg %d = 0x%x\n",
+ stir->netdev->name, reg, value);
+ return usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ REQ_WRITE_SINGLE,
+ USB_DIR_OUT|USB_TYPE_VENDOR|USB_RECIP_DEVICE,
+ value, reg, NULL, 0,
+ MSECS_TO_JIFFIES(CTRL_TIMEOUT));
+}
+
+/* Send control message to read multiple registers */
+static inline int read_reg(struct stir_cb *stir, __u16 reg,
+ __u8 *data, __u16 count)
+{
+ struct usb_device *dev = stir->usbdev;
+
+ return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ REQ_READ_REG,
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0, reg, data, count,
+ MSECS_TO_JIFFIES(CTRL_TIMEOUT));
+}
+
+/*
+ * Prepare a FIR IrDA frame for transmission to the USB dongle. The
+ * FIR transmit frame is documented in the datasheet. It consists of
+ * a two byte 0x55 0xAA sequence, two little-endian length bytes, a
+ * sequence of exactly 16 XBOF bytes of 0x7E, two BOF bytes of 0x7E,
+ * then the data escaped as follows:
+ *
+ * 0x7D -> 0x7D 0x5D
+ * 0x7E -> 0x7D 0x5E
+ * 0x7F -> 0x7D 0x5F
+ *
+ * Then, 4 bytes of little endian (stuffed) FCS follow, then two
+ * trailing EOF bytes of 0x7E.
+ */
+static inline __u8 *stuff_fir(__u8 *p, __u8 c)
+{
+ switch(c) {
+ case 0x7d:
+ case 0x7e:
+ case 0x7f:
+ *p++ = 0x7d;
+ c ^= IRDA_TRANS;
+ /* fall through */
+ default:
+ *p++ = c;
+ }
+ return p;
+}
+
+/* Take raw data in skb and put it wrapped into buf */
+static unsigned wrap_fir_skb(const struct sk_buff *skb, __u8 *buf)
+{
+ __u8 *ptr = buf;
+ __u32 fcs = ~(crc32_le(~0, skb->data, skb->len));
+ __u16 wraplen;
+ int i;
+
+ /* Header */
+ buf[0] = 0x55;
+ buf[1] = 0xAA;
+
+ ptr = buf + STIR_IRDA_HEADER;
+ memset(ptr, 0x7f, 16);
+ ptr += 16;
+
+ /* BOF */
+ *ptr++ = 0x7e;
+ *ptr++ = 0x7e;
+
+ /* Address / Control / Information */
+ for (i = 0; i < skb->len; i++)
+ ptr = stuff_fir(ptr, skb->data[i]);
+
+ /* FCS */
+ ptr = stuff_fir(ptr, fcs & 0xff);
+ ptr = stuff_fir(ptr, (fcs >> 8) & 0xff);
+ ptr = stuff_fir(ptr, (fcs >> 16) & 0xff);
+ ptr = stuff_fir(ptr, (fcs >> 24) & 0xff);
+
+ /* EOFs */
+ *ptr++ = 0x7e;
+ *ptr++ = 0x7e;
+
+ /* Total length, minus the header */
+ wraplen = (ptr - buf) - STIR_IRDA_HEADER;
+ buf[2] = wraplen & 0xff;
+ buf[3] = (wraplen >> 8) & 0xff;
+
+ return wraplen + STIR_IRDA_HEADER;
+}
+
+static unsigned wrap_sir_skb(struct sk_buff *skb, __u8 *buf)
+{
+ __u16 wraplen;
+
+ wraplen = async_wrap_skb(skb, buf + STIR_IRDA_HEADER,
+ STIR_FIFO_SIZE - STIR_IRDA_HEADER);
+ buf[0] = 0x55;
+ buf[1] = 0xAA;
+ buf[2] = wraplen & 0xff;
+ buf[3] = (wraplen >> 8) & 0xff;
+
+ return wraplen + STIR_IRDA_HEADER;
+}
+
+/*
+ * Frame is fully formed in the rx_buff so check crc
+ * and pass up to irlap
+ * setup for next receive
+ */
+static void fir_eof(struct stir_cb *stir)
+{
+ iobuff_t *rx_buff = &stir->rx_buff;
+ int len = rx_buff->len - 4;
+ __u32 fcs;
+ struct sk_buff *nskb;
+
+ if (unlikely(len <= 0)) {
+ pr_debug("%s: short frame len %d\n",
+ stir->netdev->name, len);
+
+ ++stir->stats.rx_errors;
+ ++stir->stats.rx_length_errors;
+ return;
+ }
+
+ fcs = rx_buff->data[len] |
+ rx_buff->data[len+1] << 8 |
+ rx_buff->data[len+2] << 16 |
+ rx_buff->data[len+3] << 24;
+
+ if (unlikely(fcs != ~(crc32_le(~0, rx_buff->data, len)))) {
+ pr_debug("%s: crc error\n", stir->netdev->name);
+ irda_device_set_media_busy(stir->netdev, TRUE);
+ stir->stats.rx_errors++;
+ stir->stats.rx_crc_errors++;
+ return;
+ }
+
+ /* If can't get new buffer, just drop and reuse */
+ nskb = dev_alloc_skb(IRDA_SKB_MAX_MTU);
+ if (unlikely(!nskb))
+ ++stir->stats.rx_dropped;
+ else {
+ struct sk_buff *oskb = rx_buff->skb;
+ skb_reserve(nskb, 1);
+
+ /* Set correct length in socket buffer */
+ skb_put(oskb, len);
+
+ oskb->mac.raw = oskb->data;
+ oskb->protocol = htons(ETH_P_IRDA);
+ oskb->dev = stir->netdev;
+
+ netif_rx(oskb);
+
+ stir->stats.rx_packets++;
+ stir->stats.rx_bytes += len;
+ rx_buff->skb = nskb;
+ rx_buff->head = nskb->data;
+ }
+
+ rx_buff->data = rx_buff->head;
+ rx_buff->len = 0;
+}
+
+/* Unwrap FIR stuffed data and bump it to IrLAP */
+static void stir_fir_chars(struct stir_cb *stir,
+ const __u8 *bytes, int len)
+{
+ iobuff_t *rx_buff = &stir->rx_buff;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ __u8 byte = bytes[i];
+
+ switch(rx_buff->state) {
+ case OUTSIDE_FRAME:
+ /* ignore garbage till start of frame */
+ if (unlikely(byte != FIR_EOF))
+ continue;
+ /* Now receiving frame */
+ rx_buff->state = BEGIN_FRAME;
+ rx_buff->in_frame = TRUE;
+
+ /* Time to initialize receive buffer */
+ rx_buff->data = rx_buff->head;
+ rx_buff->len = 0;
+ continue;
+
+ case LINK_ESCAPE:
+ if (byte == FIR_EOF) {
+ pr_debug("%s: got EOF after escape\n",
+ stir->netdev->name);
+ goto frame_error;
+ }
+ rx_buff->state = INSIDE_FRAME;
+ byte ^= IRDA_TRANS;
+ break;
+
+ case BEGIN_FRAME:
+ /* ignore multiple BOF/EOF */
+ if (byte == FIR_EOF)
+ continue;
+ rx_buff->state = INSIDE_FRAME;
+
+ /* fall through */
+ case INSIDE_FRAME:
+ switch(byte) {
+ case FIR_CE:
+ rx_buff->state = LINK_ESCAPE;
+ continue;
+ case FIR_XBOF:
+ /* 0x7f is not used in this framing */
+ pr_debug("%s: got XBOF without escape\n",
+ stir->netdev->name);
+ goto frame_error;
+ case FIR_EOF:
+ rx_buff->state = OUTSIDE_FRAME;
+ rx_buff->in_frame = FALSE;
+ fir_eof(stir);
+ continue;
+ }
+ break;
+ }
+
+ /* add byte to rx buffer */
+ if (unlikely(rx_buff->len >= rx_buff->truesize)) {
+ pr_debug("%s: fir frame exceeds %d\n",
+ stir->netdev->name, rx_buff->truesize);
+ ++stir->stats.rx_over_errors;
+ goto error_recovery;
+ }
+
+ rx_buff->data[rx_buff->len++] = byte;
+ continue;
+
+ frame_error:
+ ++stir->stats.rx_frame_errors;
+
+ error_recovery:
+ ++stir->stats.rx_errors;
+ irda_device_set_media_busy(stir->netdev, TRUE);
+ rx_buff->state = OUTSIDE_FRAME;
+ rx_buff->in_frame = FALSE;
+ }
+}
+
+/* Unwrap SIR stuffed data and bump it up to IrLAP */
+static void stir_sir_chars(struct stir_cb *stir,
+ const __u8 *bytes, int len)
+{
+ int i;
+
+ for (i = 0; i < len; i++)
+ async_unwrap_char(stir->netdev, &stir->stats,
+ &stir->rx_buff, bytes[i]);
+}
+
+static inline int isfir(u32 speed)
+{
+ return (speed == 4000000);
+}
+
+static inline void unwrap_chars(struct stir_cb *stir,
+ const __u8 *bytes, int length)
+{
+ if (isfir(stir->speed))
+ stir_fir_chars(stir, bytes, length);
+ else
+ stir_sir_chars(stir, bytes, length);
+}
+
+/* Mode parameters for each speed */
+static const struct {
+ unsigned speed;
+ __u8 pdclk;
+} stir_modes[] = {
+ { 2400, PDCLK_2400 },
+ { 9600, PDCLK_9600 },
+ { 19200, PDCLK_19200 },
+ { 38400, PDCLK_38400 },
+ { 57600, PDCLK_57600 },
+ { 115200, PDCLK_115200 },
+ { 4000000, PDCLK_4000000 },
+};
+
+
+/*
+ * Setup chip for speed.
+ * Called at startup to initialize the chip
+ * and on speed changes.
+ *
+ * Note: Write multiple registers doesn't appear to work
+ */
+static int change_speed(struct stir_cb *stir, unsigned speed)
+{
+ int i, err;
+ __u8 mode;
+
+ pr_debug("%s: change speed %d\n", stir->netdev->name, speed);
+ for (i = 0; i < ARRAY_SIZE(stir_modes); ++i) {
+ if (speed == stir_modes[i].speed)
+ goto found;
+ }
+
+ ERROR("%s: invalid speed %d\n", stir->netdev->name, speed);
+ return -EINVAL;
+
+ found:
+ pr_debug("%s: speed change from %d to %d\n",
+ stir->netdev->name, stir->speed, speed);
+
+ /* Make sure any previous Tx is really finished. This happens
+ * when we answer an incoming request; the ua:rsp and the
+ * speed change are bundled together, so we need to wait until
+ * the packet we just submitted has been sent. Jean II */
+ if (fifo_txwait(stir, 0))
+ return -EIO;
+
+ /* Set clock */
+ err = write_reg(stir, REG_PDCLK, stir_modes[i].pdclk);
+ if (err)
+ goto out;
+
+ mode = MODE_NRESET | MODE_FASTRX;
+ if (isfir(speed))
+ mode |= MODE_FIR | MODE_FFRSTEN;
+ else
+ mode |= MODE_SIR;
+
+ if (speed == 2400)
+ mode |= MODE_2400;
+
+ err = write_reg(stir, REG_MODE, mode);
+ if (err)
+ goto out;
+
+ /* This resets TEMIC style transceiver if any. */
+ err = write_reg(stir, REG_CTRL1,
+ CTRL1_SDMODE | (tx_power & 3) << 1);
+ if (err)
+ goto out;
+
+ err = write_reg(stir, REG_CTRL1, (tx_power & 3) << 1);
+
+ out:
+ stir->speed = speed;
+ return err;
+}
+
+static int stir_reset(struct stir_cb *stir)
+{
+ int err;
+
+ /* reset state */
+ stir->rx_buff.in_frame = FALSE;
+ stir->rx_buff.state = OUTSIDE_FRAME;
+ stir->speed = -1;
+
+ /* Undocumented magic to tweak the DPLL */
+ err = write_reg(stir, REG_DPLL, 0x15);
+ if (err)
+ goto out;
+
+ /* Reset sensitivity */
+ err = write_reg(stir, REG_CTRL2, (rx_sensitivity & 7) << 5);
+ if (err)
+ goto out;
+
+ err = change_speed(stir, 9600);
+ out:
+ return err;
+}
+
+/*
+ * Called from net/core when new frame is available.
+ */
+static int stir_hard_xmit(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct stir_cb *stir = netdev->priv;
+
+ netif_stop_queue(netdev);
+
+ /* the IRDA wrapping routines don't deal with non linear skb */
+ SKB_LINEAR_ASSERT(skb);
+
+ if (unlikely(skb->len == 0)) /* speed change only */
+ stir->tx_len = 0;
+ else if (isfir(stir->speed))
+ stir->tx_len = wrap_fir_skb(skb, stir->tx_data);
+ else
+ stir->tx_len = wrap_sir_skb(skb, stir->tx_data);
+
+ stir->stats.tx_packets++;
+ stir->stats.tx_bytes += skb->len;
+
+ stir->tx_mtt = irda_get_mtt(skb);
+ stir->tx_newspeed = irda_get_next_speed(skb);
+
+ if (!test_and_set_bit(STIR_STATE_TXREADY, &stir->state))
+ wake_up(&stir->thr_wait);
+
+ dev_kfree_skb(skb);
+ return 0;
+}
+
+/*
+ * Wait for the transmit FIFO to have space for next data
+ */
+static int fifo_txwait(struct stir_cb *stir, unsigned space)
+{
+ int err;
+ unsigned count;
+ __u8 regs[3];
+ unsigned long timeout = jiffies + HZ/10;
+
+ for(;;) {
+ /* Read FIFO status and count */
+ err = read_reg(stir, REG_FIFOCTL, regs, 3);
+ if (unlikely(err != 3)) {
+ WARNING("%s: FIFO register read error: %d\n",
+ stir->netdev->name, err);
+ return err;
+ }
+
+ /* is fifo receiving already, or empty */
+ if (!(regs[0] & FIFOCTL_DIR)
+ || (regs[0] & FIFOCTL_EMPTY))
+ return 0;
+
+ if (signal_pending(current))
+ return -EINTR;
+
+ /* shutting down? */
+ if (!netif_running(stir->netdev)
+ || !netif_device_present(stir->netdev))
+ return -ESHUTDOWN;
+
+ count = (unsigned)(regs[2] & 0x1f) << 8 | regs[1];
+
+ pr_debug("%s: fifo status 0x%x count %u\n",
+ stir->netdev->name, regs[0], count);
+
+ /* only waiting for some space */
+ if (space && STIR_FIFO_SIZE - 4 > space + count)
+ return 0;
+
+ if (time_after(jiffies, timeout)) {
+ WARNING("%s: transmit fifo timeout status=0x%x count=%d\n",
+ stir->netdev->name, regs[0], count);
+ ++stir->stats.tx_errors;
+ irda_device_set_media_busy(stir->netdev, TRUE);
+ return -ETIMEDOUT;
+ }
+
+ /* estimate transfer time for remaining chars */
+ wait_ms((count * 8000) / stir->speed);
+ }
+}
+
+
+/* Wait for turnaround delay before starting transmit. */
+static void turnaround_delay(long us, const struct timespec *last)
+{
+ long ticks;
+ struct timespec now = CURRENT_TIME;
+
+ if (us <= 0)
+ return;
+
+ us -= (now.tv_sec - last->tv_sec) * USEC_PER_SEC;
+ us -= (now.tv_nsec - last->tv_nsec) / NSEC_PER_USEC;
+ if (us < 10)
+ return;
+
+ ticks = us / (1000000 / HZ);
+ if (ticks > 0) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1 + ticks);
+ } else
+ udelay(us);
+}
+
+/*
+ * Start receiver by submitting a request to the receive pipe.
+ * If nothing is available it will return after rx_interval.
+ */
+static void receive_start(struct stir_cb *stir)
+{
+ int i, submitted = 0;
+
+ if (test_and_set_bit(STIR_STATE_RECEIVING, &stir->state))
+ return;
+
+ if (fifo_txwait(stir, 0))
+ return;
+
+ for (i = 0; i < NUM_RX_URBS; i++) {
+ struct urb *urb = stir->rx_urbs[i];
+
+ usb_fill_int_urb(urb, stir->usbdev, stir->rx_intpipe,
+ stir->rx_data[i], STIR_FIFO_SIZE,
+ stir_usb_receive, stir, rx_interval);
+
+ if (usb_submit_urb(urb, GFP_KERNEL))
+ urb->status = -EINVAL;
+ else
+ submitted++;
+ }
+
+ if (submitted == 0) {
+ /* if nothing got queued, then just retry next time */
+ if (net_ratelimit())
+ WARNING("%s: no receive buffers available\n",
+ stir->netdev->name);
+
+ clear_bit(STIR_STATE_RECEIVING, &stir->state);
+ }
+}
+
+/* Stop all pending receive Urb's */
+static void receive_stop(struct stir_cb *stir)
+{
+ int i;
+
+ for (i = 0; i < NUM_RX_URBS; i++) {
+ struct urb *urb = stir->rx_urbs[i];
+ usb_unlink_urb(urb);
+ }
+}
+
+/* Send wrapped data (in tx_data) to device */
+static void stir_send(struct stir_cb *stir)
+{
+ int rc;
+
+ if (test_and_clear_bit(STIR_STATE_RECEIVING, &stir->state)) {
+ receive_stop(stir);
+
+ turnaround_delay(stir->tx_mtt, &stir->rx_time);
+
+ if (stir->rx_buff.in_frame)
+ ++stir->stats.collisions;
+ } else if (fifo_txwait(stir, stir->tx_len))
+ return; /* shutdown or major errors */
+
+ stir->netdev->trans_start = jiffies;
+
+ pr_debug("%s: send %d\n", stir->netdev->name, stir->tx_len);
+ rc = usb_bulk_msg(stir->usbdev,
+ stir->tx_bulkpipe,
+ stir->tx_data, stir->tx_len,
+ NULL, MSECS_TO_JIFFIES(TRANSMIT_TIMEOUT));
+
+ if (unlikely(rc)) {
+ WARNING("%s: usb bulk message failed %d\n",
+ stir->netdev->name, rc);
+ stir->stats.tx_errors++;
+ }
+}
+
+/*
+ * Transmit state machine thread
+ */
+static int stir_transmit_thread(void *arg)
+{
+ struct stir_cb *stir = arg;
+ struct net_device *dev = stir->netdev;
+ DECLARE_WAITQUEUE(wait, current);
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (netif_running(dev)
+ && netif_device_present(dev)
+ && !signal_pending(current))
+ {
+ /* make swsusp happy with our thread */
+ if (current->flags & PF_FREEZE) {
+ receive_stop(stir);
+
+ write_reg(stir, REG_CTRL1, CTRL1_TXPWD|CTRL1_RXPWD);
+
+ refrigerator(PF_IOTHREAD);
+
+ stir_reset(stir);
+ }
+
+ /* anything to send? */
+ if (test_and_clear_bit(STIR_STATE_TXREADY, &stir->state)) {
+ unsigned new_speed = stir->tx_newspeed;
+
+ /* Note that we may both send a packet and
+ * change speed in some cases. Jean II */
+
+ if (stir->tx_len != 0)
+ stir_send(stir);
+
+ if (stir->speed != new_speed)
+ change_speed(stir, new_speed);
+
+ netif_wake_queue(stir->netdev);
+ continue;
+ }
+
+ if (irda_device_txqueue_empty(dev))
+ receive_start(stir);
+
+ set_task_state(current, TASK_INTERRUPTIBLE);
+ add_wait_queue(&stir->thr_wait, &wait);
+ if (test_bit(STIR_STATE_TXREADY, &stir->state))
+ __set_task_state(current, TASK_RUNNING);
+ else
+ schedule_timeout(HZ/10);
+ remove_wait_queue(&stir->thr_wait, &wait);
+ }
+
+ complete_and_exit (&stir->thr_exited, 0);
+}
+
+
+/*
+ * Receive wrapped data into rx_data buffer.
+ * This chip doesn't block until data is available; we just have
+ * to read the FIFO periodically (ugh).
+ */
+static void stir_usb_receive(struct urb *urb, struct pt_regs *regs)
+{
+ struct stir_cb *stir = urb->context;
+ int err;
+
+ if (!netif_running(stir->netdev))
+ return;
+
+ switch (urb->status) {
+ case 0:
+ if(urb->actual_length > 0) {
+ pr_debug("%s: receive %d\n",
+ stir->netdev->name, urb->actual_length);
+ unwrap_chars(stir, urb->transfer_buffer,
+ urb->actual_length);
+
+ stir->netdev->last_rx = jiffies;
+ stir->rx_time = CURRENT_TIME;
+ }
+ break;
+
+ case -ECONNRESET: /* killed but pending */
+ case -ENOENT: /* killed but not in use */
+ case -ESHUTDOWN:
+ /* These are normal errors when URB is cancelled */
+ stir->rx_buff.in_frame = FALSE;
+ stir->rx_buff.state = OUTSIDE_FRAME;
+ return;
+
+ default:
+ WARNING("%s: received status %d\n", stir->netdev->name,
+ urb->status);
+ stir->stats.rx_errors++;
+ urb->status = 0;
+ }
+
+ /* kernel thread is stopping the receiver; don't resubmit */
+ if (!test_bit(STIR_STATE_RECEIVING, &stir->state))
+ return;
+
+ /* resubmit existing urb */
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+
+ /* in case of error, the kernel thread will restart us */
+ if (err) {
+ WARNING("%s: usb receive submit error: %d\n",
+ stir->netdev->name, err);
+ urb->status = -ENOENT;
+ wake_up(&stir->thr_wait);
+ }
+}
+
+
+/*
+ * Function stir_net_open (dev)
+ *
+ * Network device is taken up. Usually this is done by "ifconfig irda0 up"
+ */
+static int stir_net_open(struct net_device *netdev)
+{
+ struct stir_cb *stir = netdev->priv;
+ int i, err;
+ char hwname[16];
+
+ err = stir_reset(stir);
+ if (err)
+ goto err_out1;
+
+ err = -ENOMEM;
+
+ /* Note: Max SIR frame possible is 4273 */
+ stir->tx_data = kmalloc(STIR_FIFO_SIZE, GFP_KERNEL);
+ if (!stir->tx_data) {
+ ERROR("%s(), alloc failed for txbuf!\n", __FUNCTION__);
+ goto err_out1;
+ }
+
+ /* Initialize for SIR/FIR to copy data directly into skb. */
+ stir->rx_buff.truesize = IRDA_SKB_MAX_MTU;
+ stir->rx_buff.skb = dev_alloc_skb(IRDA_SKB_MAX_MTU);
+ if (!stir->rx_buff.skb) {
+ ERROR("%s(), dev_alloc_skb() failed for rxbuf!\n",
+ __FUNCTION__);
+ goto err_out2;
+ }
+ skb_reserve(stir->rx_buff.skb, 1);
+ stir->rx_buff.head = stir->rx_buff.skb->data;
+ stir->rx_time = CURRENT_TIME;
+
+ /* Allocate N receive buffers and URBs */
+ for (i = 0; i < NUM_RX_URBS; i++) {
+ stir->rx_urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
+ if (!stir->rx_urbs[i]){
+ ERROR("%s(), usb_alloc_urb failed\n", __FUNCTION__);
+ goto err_out3;
+ }
+
+ stir->rx_data[i] = kmalloc(STIR_FIFO_SIZE, GFP_KERNEL);
+ if (!stir->rx_data[i]) {
+ usb_free_urb(stir->rx_urbs[i]);
+ ERROR("%s(), alloc failed for rxbuf!\n", __FUNCTION__);
+ goto err_out3;
+ }
+ }
+
+ /*
+ * Now that everything should be initialized properly,
+ * open a new IrLAP layer instance to take care of us...
+ * Note: it will immediately send a speed change...
+ */
+ sprintf(hwname, "usb#%d", stir->usbdev->devnum);
+ stir->irlap = irlap_open(netdev, &stir->qos, hwname);
+ if (!stir->irlap) {
+ ERROR("%s(): irlap_open failed\n", __FUNCTION__);
+ goto err_out3;
+ }
+
+ /* Start kernel thread for transmit. */
+ stir->thr_pid = kernel_thread(stir_transmit_thread, stir,
+ CLONE_FS|CLONE_FILES);
+ if (stir->thr_pid < 0) {
+ err = stir->thr_pid;
+ WARNING("%s: unable to start kernel thread\n",
+ stir->netdev->name);
+ goto err_out4;
+ }
+
+ netif_start_queue(netdev);
+
+ return 0;
+
+ err_out4:
+ irlap_close(stir->irlap);
+ err_out3:
+ while(--i >= 0) {
+ usb_free_urb(stir->rx_urbs[i]);
+ kfree(stir->rx_data[i]);
+ }
+ kfree_skb(stir->rx_buff.skb);
+ err_out2:
+ kfree(stir->tx_data);
+ err_out1:
+ return err;
+}
+
+/*
+ * Function stir_net_close (stir)
+ *
+ * Network device is taken down. Usually this is done by
+ * "ifconfig irda0 down"
+ */
+static int stir_net_close(struct net_device *netdev)
+{
+ struct stir_cb *stir = netdev->priv;
+ int i;
+
+ /* Stop transmit processing */
+ netif_stop_queue(netdev);
+
+ /* Kill transmit thread */
+ kill_proc(stir->thr_pid, SIGTERM, 1);
+ wait_for_completion(&stir->thr_exited);
+ kfree(stir->tx_data);
+
+ clear_bit(STIR_STATE_RECEIVING, &stir->state);
+ receive_stop(stir);
+
+ for (i = 0; i < NUM_RX_URBS; i++) {
+ usb_free_urb(stir->rx_urbs[i]);
+ kfree(stir->rx_data[i]);
+ }
+ kfree_skb(stir->rx_buff.skb);
+
+ /* Stop and remove instance of IrLAP */
+ if (stir->irlap)
+ irlap_close(stir->irlap);
+
+ stir->irlap = NULL;
+
+ return 0;
+}
+
+/*
+ * IOCTLs : Extra out-of-band network commands...
+ */
+static int stir_net_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct if_irda_req *irq = (struct if_irda_req *) rq;
+ struct stir_cb *stir = dev->priv;
+ int ret = 0;
+
+ switch (cmd) {
+ case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ /* Check if the device is still there */
+ if (netif_device_present(stir->netdev))
+ ret = change_speed(stir, irq->ifr_baudrate);
+ break;
+
+ case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ /* Check if the IrDA stack is still there */
+ if (netif_running(stir->netdev))
+ irda_device_set_media_busy(stir->netdev, TRUE);
+ break;
+
+ case SIOCGRECEIVING:
+ /* Only approximately true */
+ irq->ifr_receiving = test_bit(STIR_STATE_RECEIVING, &stir->state);
+ break;
+
+ default:
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+/*
+ * Get device stats (for /proc/net/dev and ifconfig)
+ */
+static struct net_device_stats *stir_net_get_stats(struct net_device *dev)
+{
+ struct stir_cb *stir = dev->priv;
+ return &stir->stats;
+}
+
+/*
+ * Parse the various endpoints and find the one we need.
+ *
+ * The endpoints are the pipes used to communicate with the USB device.
+ * The spec defines 2 endpoints of type bulk transfer, one in, and one out.
+ * These are used to pass frames back and forth with the dongle.
+ */
+static int stir_setup_usb(struct stir_cb *stir, struct usb_interface *intf)
+{
+ struct usb_device *usbdev = interface_to_usbdev(intf);
+ const struct usb_host_interface *interface
+ = &intf->altsetting[intf->act_altsetting];
+ const struct usb_endpoint_descriptor *ep_in = NULL;
+ const struct usb_endpoint_descriptor *ep_out = NULL;
+ int i;
+
+ if (interface->desc.bNumEndpoints != 2) {
+ WARNING("%s: expected two endpoints\n", __FUNCTION__);
+ return -ENODEV;
+ }
+
+ for(i = 0; i < interface->desc.bNumEndpoints; i++) {
+ const struct usb_endpoint_descriptor *ep
+ = &interface->endpoint[i].desc;
+
+ if ((ep->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK)
+ == USB_ENDPOINT_XFER_BULK) {
+ /* We need to find an IN and an OUT */
+ if ((ep->bEndpointAddress & USB_ENDPOINT_DIR_MASK) == USB_DIR_IN)
+ ep_in = ep;
+ else
+ ep_out = ep;
+ } else
+ WARNING("%s: unknown endpoint type 0x%x\n",
+ __FUNCTION__, ep->bmAttributes);
+ }
+
+ if (!ep_in || !ep_out)
+ return -EIO;
+
+ stir->tx_bulkpipe = usb_sndbulkpipe(usbdev,
+ ep_out->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ stir->rx_intpipe = usb_rcvintpipe(usbdev,
+ ep_in->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ return 0;
+}
+
+/*
+ * This routine is called by the USB subsystem for each new device
+ * in the system. We need to check if the device is ours, and in
+ * this case start handling it.
+ * Note : it might be worth protecting this function by a global
+ * spinlock... or not, since the USB core may already handle that...
+ */
+static int stir_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+{
+ struct usb_device *dev = interface_to_usbdev(intf);
+ struct stir_cb *stir = NULL;
+ struct net_device *net;
+ int ret = -ENOMEM;
+
+ /* Allocate network device container. */
+ net = alloc_irdadev(sizeof(*stir));
+ if(!net)
+ goto err_out1;
+
+ SET_MODULE_OWNER(net);
+ SET_NETDEV_DEV(net, &intf->dev);
+ stir = net->priv;
+ stir->netdev = net;
+ stir->usbdev = dev;
+
+ ret = stir_setup_usb(stir, intf);
+ if (ret != 0) {
+ ERROR("%s(), Bogus endpoints...\n", __FUNCTION__);
+ goto err_out2;
+ }
+
+ printk(KERN_INFO "SigmaTel STIr4200 IRDA/USB found at address %d, "
+ "Vendor: %x, Product: %x\n",
+ dev->devnum, dev->descriptor.idVendor,
+ dev->descriptor.idProduct);
+
+ /* Initialize QoS for this device */
+ irda_init_max_qos_capabilies(&stir->qos);
+
+ /* That's the Rx capability. */
+ stir->qos.baud_rate.bits &= IR_2400 | IR_9600 | IR_19200 |
+ IR_38400 | IR_57600 | IR_115200 |
+ (IR_4000000 << 8);
+ stir->qos.min_turn_time.bits &= qos_mtt_bits;
+ irda_qos_bits_to_value(&stir->qos);
+
+ init_completion (&stir->thr_exited);
+ init_waitqueue_head (&stir->thr_wait);
+
+ /* Override the network functions we need to use */
+ net->hard_start_xmit = stir_hard_xmit;
+ net->open = stir_net_open;
+ net->stop = stir_net_close;
+ net->get_stats = stir_net_get_stats;
+ net->do_ioctl = stir_net_ioctl;
+
+ ret = stir_reset(stir);
+ if (ret)
+ goto err_out2;
+
+ ret = register_netdev(net);
+ if (ret != 0)
+ goto err_out2;
+
+ MESSAGE("IrDA: Registered SigmaTel device %s\n", net->name);
+
+ usb_set_intfdata(intf, stir);
+
+ return 0;
+
+err_out2:
+ free_netdev(net);
+err_out1:
+ return ret;
+}
+
+/*
+ * The current device was removed; the USB layer tells us to shut it down...
+ */
+static void stir_disconnect(struct usb_interface *intf)
+{
+ struct stir_cb *stir = usb_get_intfdata(intf);
+ struct net_device *net;
+
+ usb_set_intfdata(intf, NULL);
+ if (!stir)
+ return;
+
+ /* Stop transmitter */
+ net = stir->netdev;
+ netif_device_detach(net);
+
+ /* Remove netdevice */
+ unregister_netdev(net);
+
+ /* No longer attached to USB bus */
+ stir->usbdev = NULL;
+
+ free_netdev(net);
+}
+
+
+/* Power management suspend, so power off the transmitter/receiver */
+static int stir_suspend(struct usb_interface *intf, u32 state)
+{
+ struct stir_cb *stir = usb_get_intfdata(intf);
+
+ netif_device_detach(stir->netdev);
+ return 0;
+}
+
+/* Coming out of suspend, so reset hardware */
+static int stir_resume(struct usb_interface *intf)
+{
+ struct stir_cb *stir = usb_get_intfdata(intf);
+
+ netif_device_attach(stir->netdev);
+
+ /* receiver restarted when send thread wakes up */
+ return 0;
+}
+
+/*
+ * USB device callbacks
+ */
+static struct usb_driver irda_driver = {
+ .owner = THIS_MODULE,
+ .name = "stir4200",
+ .probe = stir_probe,
+ .disconnect = stir_disconnect,
+ .id_table = dongles,
+ .suspend = stir_suspend,
+ .resume = stir_resume,
+};
+
+/*
+ * Module insertion
+ */
+static int __init stir_init(void)
+{
+ if (usb_register(&irda_driver) < 0)
+ return -1;
+
+ MESSAGE("SigmaTel support registered\n");
+ return 0;
+}
+module_init(stir_init);
+
+/*
+ * Module removal
+ */
+static void __exit stir_cleanup(void)
+{
+ /* Deregister the driver and remove all pending instances */
+ usb_deregister(&irda_driver);
+}
+module_exit(stir_cleanup);
/* Index to functions, as function prototypes. */
-extern int netcard_probe(struct net_device *dev);
-
static int netcard_probe1(struct net_device *dev, int ioaddr);
static int net_open(struct net_device *dev);
static int net_send_packet(struct sk_buff *skb, struct net_device *dev);
* If dev->base_addr == 2, allocate space for the device and return success
* (detachable devices only).
*/
-int __init
-netcard_probe(struct net_device *dev)
+static int __init do_netcard_probe(struct net_device *dev)
{
int i;
int base_addr = dev->base_addr;
+ int irq = dev->irq;
SET_MODULE_OWNER(dev);
for (i = 0; netcard_portlist[i]; i++) {
int ioaddr = netcard_portlist[i];
- if (check_region(ioaddr, NETCARD_IO_EXTENT))
- continue;
if (netcard_probe1(dev, ioaddr) == 0)
return 0;
+ dev->irq = irq;
}
return -ENODEV;
}
+
+static void cleanup_card(struct net_device *dev)
+{
+#ifdef jumpered_dma
+ free_dma(dev->dma);
+#endif
+#ifdef jumpered_interrupts
+ free_irq(dev->irq, dev);
+#endif
+ release_region(dev->base_addr, NETCARD_IO_EXTENT);
+}
+
+struct net_device * __init netcard_probe(int unit)
+{
+ struct net_device *dev = alloc_etherdev(sizeof(struct net_local));
+ int err;
+
+ if (!dev)
+ return ERR_PTR(-ENOMEM);
+
+ sprintf(dev->name, "eth%d", unit);
+ netdev_boot_setup_check(dev);
+
+ err = do_netcard_probe(dev);
+ if (err)
+ goto out;
+ err = register_netdev(dev);
+ if (err)
+ goto out1;
+ return dev;
+out1:
+ cleanup_card(dev);
+out:
+ free_netdev(dev);
+ return ERR_PTR(err);
+}
/*
* This is the real probe routine. Linux has a history of friendly device
struct net_local *np;
static unsigned version_printed;
int i;
+ int err = -ENODEV;
+
+ /* Grab the region so that no one else tries to probe our ioports. */
+ if (!request_region(ioaddr, NETCARD_IO_EXTENT, cardname))
+ return -EBUSY;
/*
* For ethernet adaptors the first three octets of the station address
*/
if (inb(ioaddr + 0) != SA_ADDR0
|| inb(ioaddr + 1) != SA_ADDR1
- || inb(ioaddr + 2) != SA_ADDR2) {
- return -ENODEV;
- }
+ || inb(ioaddr + 2) != SA_ADDR2)
+ goto out;
if (net_debug && version_printed++ == 0)
printk(KERN_DEBUG "%s", version);
for (i = 0; i < 6; i++)
printk(" %2.2x", dev->dev_addr[i] = inb(ioaddr + i));
+ err = -EAGAIN;
#ifdef jumpered_interrupts
/*
* If this board has jumpered interrupts, allocate the interrupt
if (irqval) {
printk("%s: unable to get IRQ %d (irqval=%d).\n",
dev->name, dev->irq, irqval);
- return -EAGAIN;
+ goto out;
}
}
#endif /* jumpered interrupt */
if (dev->dma == 0) {
if (request_dma(dev->dma, cardname)) {
printk("DMA %d allocation failed.\n", dev->dma);
- return -EAGAIN;
+ goto out1;
} else
printk(", assigned DMA %d.\n", dev->dma);
} else {
}
if (i <= 0) {
printk("DMA probe failed.\n");
- return -EAGAIN;
+ goto out1;
}
if (request_dma(dev->dma, cardname)) {
printk("probed DMA %d allocation failed.\n", dev->dma);
- return -EAGAIN;
+ goto out1;
}
}
#endif /* jumpered DMA */
- /* Initialize the device structure. */
- if (dev->priv == NULL) {
- dev->priv = kmalloc(sizeof(struct net_local), GFP_KERNEL);
- if (dev->priv == NULL)
- return -ENOMEM;
- }
-
- memset(dev->priv, 0, sizeof(struct net_local));
-
np = (struct net_local *)dev->priv;
spin_lock_init(&np->lock);
- /* Grab the region so that no one else tries to probe our ioports. */
- request_region(ioaddr, NETCARD_IO_EXTENT, cardname);
-
dev->open = net_open;
dev->stop = net_close;
dev->hard_start_xmit = net_send_packet;
dev->tx_timeout = &net_tx_timeout;
dev->watchdog_timeo = MY_TX_TIMEOUT;
-
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(dev);
-
return 0;
+out1:
+#ifdef jumpered_interrupts
+ free_irq(dev->irq, dev);
+#endif
+out:
+ release_region(ioaddr, NETCARD_IO_EXTENT);
+ return err;
}
static void net_tx_timeout(struct net_device *dev)
#ifdef MODULE
-static struct net_device this_device;
+static struct net_device *this_device;
static int io = 0x300;
static int irq;
static int dma;
int init_module(void)
{
+ struct net_device *dev;
int result;
if (io == 0)
printk(KERN_WARNING "%s: You shouldn't use auto-probing with insmod!\n",
cardname);
+ dev = alloc_etherdev(sizeof(struct net_local));
+ if (!dev)
+ return -ENOMEM;
/* Copy the parameters from insmod into the device structure. */
- this_device.base_addr = io;
- this_device.irq = irq;
- this_device.dma = dma;
- this_device.mem_start = mem;
- this_device.init = netcard_probe;
-
- if ((result = register_netdev(&this_device)) != 0)
- return result;
-
- return 0;
+ dev->base_addr = io;
+ dev->irq = irq;
+ dev->dma = dma;
+ dev->mem_start = mem;
+ if (do_netcard_probe(dev) == 0) {
+ if (register_netdev(dev) == 0) {
+ this_device = dev;
+ return 0;
+ }
+ cleanup_card(dev);
+ }
+ free_netdev(dev);
+ return -ENXIO;
}
void
cleanup_module(void)
{
- unregister_netdev(&this_device);
- /*
- * If we don't do this, we can't re-insmod it later.
- * Release irq/dma here, when you have jumpered versions and
- * allocate them in net_probe1().
- */
- /*
- free_irq(this_device.irq, dev);
- free_dma(this_device.dma);
- */
- release_region(this_device.base_addr, NETCARD_IO_EXTENT);
-
- if (this_device.priv)
- kfree(this_device.priv);
+ unregister_netdev(this_device);
+ cleanup_card(this_device);
+ free_netdev(this_device);
}
#endif /* MODULE */
.rebuild_header = eth_rebuild_header,
.flags = IFF_LOOPBACK,
.features = NETIF_F_SG|NETIF_F_FRAGLIST
- |NETIF_F_NO_CSUM|NETIF_F_HIGHDMA|NETIF_F_TSO,
+ |NETIF_F_NO_CSUM|NETIF_F_HIGHDMA,
};
/* Setup and register the of the LOOPBACK device. */
*************************************************************************/
#define DRV_NAME "pcnet32"
-#define DRV_VERSION "1.28"
-#define DRV_RELDATE "02.11.2004"
+#define DRV_VERSION "1.27b"
+#define DRV_RELDATE "01.10.2002"
#define PFX DRV_NAME ": "
static const char *version =
static struct pci_device_id pcnet32_pci_tbl[] = {
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LANCE_HOME, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LANCE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- /*
- * Adapters that were sold with IBM's RS/6000 or pSeries hardware have
- * the incorrect vendor id.
- */
- { PCI_VENDOR_ID_TRIDENT, PCI_DEVICE_ID_AMD_LANCE, PCI_ANY_ID, PCI_ANY_ID,
- PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, 0 },
{ 0, }
};
-static int pcnet32_debug = 0;
+static int pcnet32_debug = 1;
static int tx_start = 1; /* Mapping -- 0:20, 1:64, 2:128, 3:~220 (depends on chip vers) */
static int pcnet32vlb; /* check for VLB cards ? */
static int options[MAX_UNITS];
static int full_duplex[MAX_UNITS];
-static const char pcnet32_gstrings_test[][ETH_GSTRING_LEN] = {
- "Loopback test (offline)"
-};
-#define PCNET32_TEST_LEN (sizeof(pcnet32_gstrings_test) / ETH_GSTRING_LEN)
/*
* Theory of Operation
*
* clean up and using new mii module
* v1.27b Sep 30 2002 Kent Yoder <yoder1@us.ibm.com>
* Added timer for cable connection state changes.
- * v1.28 11 Feb 2004 Don Fry <brazilnut@us.ibm.com>,
- * Jon Mason <jonmason@us.ibm.com>, Chinmay Albal <albal@in.ibm.com>,
- * Jim Lewis <jklewis@us.ibm.com>.
- * Now uses ethtool_ops, netif_msg_*, and generic_mii_ioctl.
- * Loopback test added. Supports PCI (not PCMCIA) hot remove.
- * Fixes "Bus master arbitration failure" and pci_[un]map_single
- * length errors.
*/
dxsuflo:1, /* disable transmit stop on uflo */
mii:1; /* mii port available */
struct net_device *next;
- struct mii_if_info mii_if;
+ struct mii_if_info mii_if;
struct timer_list watchdog_timer;
- u32 msg_enable; /* debug message level */
};
static void pcnet32_probe_vlbus(void);
static void pcnet32_watchdog(struct net_device *);
static int mdio_read(struct net_device *dev, int phy_id, int reg_num);
static void mdio_write(struct net_device *dev, int phy_id, int reg_num, int val);
-static void pcnet32_restart(struct net_device *dev, unsigned int csr0_bits);
-
-static void pcnet32_ethtool_test(struct net_device *dev,
- struct ethtool_test *eth_test, u64 *data);
-static int pcnet32_loopback_test(struct net_device *dev, uint64_t *data1);
enum pci_flags_bit {
PCI_USES_IO=1, PCI_USES_MEM=2, PCI_USES_MASTER=4,
};
-static int pcnet32_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct pcnet32_private *lp = dev->priv;
- unsigned long flags;
-
- spin_lock_irqsave(&lp->lock, flags);
- mii_ethtool_gset(&lp->mii_if, cmd);
- spin_unlock_irqrestore(&lp->lock, flags);
- return 0;
-}
-
-static int pcnet32_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct pcnet32_private *lp = dev->priv;
- unsigned long flags;
- int r;
-
- spin_lock_irqsave(&lp->lock, flags);
- r = mii_ethtool_sset(&lp->mii_if, cmd);
- spin_unlock_irqrestore(&lp->lock, flags);
- return r;
-}
-
-static void pcnet32_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct pcnet32_private *lp = dev->priv;
-
- strcpy (info->driver, DRV_NAME);
- strcpy (info->version, DRV_VERSION);
- if (lp->pci_dev)
- strcpy (info->bus_info, pci_name(lp->pci_dev));
- else
- sprintf(info->bus_info, "VLB 0x%lx", dev->base_addr);
- info->testinfo_len = PCNET32_TEST_LEN;
-}
-
-static u32 pcnet32_get_link(struct net_device *dev)
-{
- struct pcnet32_private *lp = dev->priv;
- unsigned long flags;
- int r;
-
- spin_lock_irqsave(&lp->lock, flags);
- r = mii_link_ok(&lp->mii_if);
- spin_unlock_irqrestore(&lp->lock, flags);
- return r;
-}
-
-static u32 pcnet32_get_msglevel(struct net_device *dev)
-{
- struct pcnet32_private *lp = dev->priv;
- return lp->msg_enable;
-}
-
-static void pcnet32_set_msglevel(struct net_device *dev, u32 value)
-{
- struct pcnet32_private *lp = dev->priv;
- lp->msg_enable = value;
-}
-
-static int pcnet32_nway_reset(struct net_device *dev)
-{
- struct pcnet32_private *lp = dev->priv;
- unsigned long flags;
- int r;
-
- spin_lock_irqsave(&lp->lock, flags);
- r = mii_nway_restart(&lp->mii_if);
- spin_unlock_irqrestore(&lp->lock, flags);
- return r;
-}
-
-static void pcnet32_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ering)
-{
- struct pcnet32_private *lp = dev->priv;
-
- ering->tx_max_pending = TX_RING_SIZE - 1;
- ering->tx_pending = lp->cur_tx - lp->dirty_tx;
- ering->rx_max_pending = RX_RING_SIZE - 1;
- ering->rx_pending = lp->cur_rx & RX_RING_MOD_MASK;
-}
-
-static void pcnet32_get_strings(struct net_device *dev, u32 stringset, u8 *data)
-{
- memcpy(data, pcnet32_gstrings_test, sizeof(pcnet32_gstrings_test));
-}
-
-static int pcnet32_self_test_count(struct net_device *dev)
-{
- return PCNET32_TEST_LEN;
-}
-
-static void pcnet32_ethtool_test(struct net_device *dev, struct ethtool_test *test, u64 *data)
-{
- struct pcnet32_private *lp = dev->priv;
- int rc;
-
- if (test->flags == ETH_TEST_FL_OFFLINE) {
- rc = pcnet32_loopback_test(dev, data);
- if (rc) {
- if (netif_msg_hw(lp))
- printk(KERN_DEBUG "%s: Loopback test failed.\n", dev->name);
- test->flags |= ETH_TEST_FL_FAILED;
- } else if (netif_msg_hw(lp))
- printk(KERN_DEBUG "%s: Loopback test passed.\n", dev->name);
- } else
- printk(KERN_DEBUG "%s: No tests to run (specify 'Offline' on ethtool).", dev->name);
-} /* end pcnet32_ethtool_test */
-
-static int pcnet32_loopback_test(struct net_device *dev, uint64_t *data1)
-{
- struct pcnet32_private *lp = dev->priv;
- struct pcnet32_access *a = &lp->a; // access to registers
- ulong ioaddr = dev->base_addr; // card base I/O address
- struct sk_buff *skb; // sk buff
- int x, y, i; // counters
- int numbuffs = 4; // number of TX/RX buffers and descs
- u16 status = 0x8300; // TX ring status
- int rc; // return code
- int size; // size of packets
- unsigned char *packet; // source packet data
- static int data_len = 60; // length of source packets data field
- unsigned long flags;
-
- *data1 = 1; // status of test, set to fail in case we abort
- rc = 1; // default to fail
-
- spin_lock_irqsave(&lp->lock, flags);
- lp->a.write_csr(ioaddr, 0, 0x7904);
-
- del_timer_sync(&lp->watchdog_timer);
-
- netif_stop_queue(dev);
-
- pcnet32_restart(dev, 0x0000); // purge & init rings but don't actually restart
-
- lp->a.write_csr(ioaddr, 0, 0x0004); // Set STOP bit
-
- x = a->read_bcr(ioaddr, 32); // set internal loopback in BSR32
- x = x | 0x00000002;
- a->write_bcr(ioaddr, 32, x);
-
- /* Initialize Transmit buffers. */
- size = data_len + 15;
- for (x=0; x<numbuffs; x++) {
- if (!(skb = dev_alloc_skb(size))) { // freed later in pcnet32_purge_tx_ring
- printk(KERN_DEBUG "%s: Cannot allocate skb at line: %d!\n",
- dev->name, __LINE__);
- goto clean_up;
- } else {
- packet = skb->data;
- skb_put(skb, size); // create space for data
- lp->tx_skbuff[x] = skb;
- lp->tx_ring[x].length = le16_to_cpu(-skb->len);
- lp->tx_ring[x].misc = 0x00000000;
-
- // put DA and SA into the skb
- for (i=0; i<12; i++)
- *packet++ = 0xff;
- // type
- *packet++ = 0x08;
- *packet++ = 0x06;
- // packet number
- *packet++ = x;
- // fill packet with data
- for (y=0; y<data_len; y++)
- *packet++ = y;
-
- lp->tx_dma_addr[x] = pci_map_single(lp->pci_dev, skb->data, skb->len, PCI_DMA_TODEVICE);
- lp->tx_ring[x].base = (u32)le32_to_cpu(lp->tx_dma_addr[x]);
- wmb(); /* Make sure owner changes after all others are visible */
- lp->tx_ring[x].status = le16_to_cpu(status);
- }
- }
-
- lp->a.write_csr(ioaddr, 0, 0x0002); // Set STRT bit
- spin_unlock_irqrestore(&lp->lock, flags);
-
- mdelay(50); // wait a bit
-
- spin_lock_irqsave(&lp->lock, flags);
- lp->a.write_csr(ioaddr, 0, 0x0004); // Set STOP bit
-
- if (netif_msg_hw(lp) && netif_msg_pktdata(lp)) {
- printk(KERN_DEBUG "%s: RX loopback packets:\n", dev->name);
-
- for (x=0; x<numbuffs; x++) {
- printk(KERN_DEBUG "%s: Packet %d:\n", dev->name, x);
- skb=lp->rx_skbuff[x];
- for (i=0; i<size; i++) {
- printk("%02x ",*(skb->data+i));
- }
- printk("\n");
- }
- }
-
- x = 0;
- rc = 0;
- while (x<numbuffs && !rc) {
- skb = lp->rx_skbuff[x];
- packet = lp->tx_skbuff[x]->data;
- for (i=0; i<size; i++) {
- if (*(skb->data+i) != packet[i]) {
- if (netif_msg_hw(lp))
- printk(KERN_DEBUG "%s: Error in compare! %2x - %02x %02x\n",
- dev->name, i, *(skb->data+i), packet[i]);
- rc = 1;
- break;
- }
- }
- x++;
- }
- if (!rc) {
- *data1 = 0;
- }
-
-clean_up:
- x = a->read_csr(ioaddr,15) & 0xFFFF;
- a->write_csr(ioaddr, 15, (x & ~0x0044)); // reset bits 6 and 2
-
- x = a->read_bcr(ioaddr, 32); // BCR32
- x = x & ~0x00000002;
- a->write_bcr(ioaddr, 32, x);
-
- pcnet32_restart(dev, 0x0042); // resume normal operation
-
- netif_wake_queue(dev);
-
- mod_timer(&(lp->watchdog_timer), PCNET32_WATCHDOG_TIMEOUT);
-
- /* Clear interrupts, and set interrupt enable. */
- lp->a.write_csr (ioaddr, 0, 0x7940);
- spin_unlock_irqrestore(&lp->lock, flags);
-
- return(rc);
-} /* end pcnet32_loopback_test */
-
-static struct ethtool_ops pcnet32_ethtool_ops = {
- .get_settings = pcnet32_get_settings,
- .set_settings = pcnet32_set_settings,
- .get_drvinfo = pcnet32_get_drvinfo,
- .get_msglevel = pcnet32_get_msglevel,
- .set_msglevel = pcnet32_set_msglevel,
- .nway_reset = pcnet32_nway_reset,
- .get_link = pcnet32_get_link,
- .get_ringparam = pcnet32_get_ringparam,
- .get_tx_csum = ethtool_op_get_tx_csum,
- .get_sg = ethtool_op_get_sg,
- .get_tso = ethtool_op_get_tso,
- .get_strings = pcnet32_get_strings,
- .self_test_count = pcnet32_self_test_count,
- .self_test = pcnet32_ethtool_test,
-};
/* only probes for non-PCI devices, the rest are handled by
* pci_register_driver via pcnet32_probe_pci */
}
chip_version = a->read_csr(ioaddr, 88) | (a->read_csr(ioaddr,89) << 16);
- if (pcnet32_debug & NETIF_MSG_PROBE)
+ if (pcnet32_debug > 2)
printk(KERN_INFO " PCnet chip version is %#x.\n", chip_version);
- if ((chip_version & 0xfff) != 0x003) {
- printk(KERN_INFO PFX "Unsupported chip version.\n");
+ if ((chip_version & 0xfff) != 0x003)
goto err_release_region;
- }
/* initialize variables */
fdx = mii = fset = dxsuflo = ltint = 0;
media &= ~3;
media |= 1;
#endif
- if (pcnet32_debug & NETIF_MSG_PROBE)
+ if (pcnet32_debug > 2)
printk(KERN_DEBUG PFX "media reset to %#x.\n", media);
a->write_bcr(ioaddr, 49, media);
break;
dev = alloc_etherdev(0);
if(!dev) {
- printk(KERN_ERR PFX "Memory allocation failed.\n");
ret = -ENOMEM;
goto err_release_region;
}
dev->base_addr = ioaddr;
/* pci_alloc_consistent returns page-aligned memory, so we do not have to check the alignment */
if ((lp = pci_alloc_consistent(pdev, sizeof(*lp), &lp_dma_addr)) == NULL) {
- printk("\n" KERN_ERR PFX "Consistent memory allocation failed.\n");
ret = -ENOMEM;
goto err_free_netdev;
}
lp->dxsuflo = dxsuflo;
lp->ltint = ltint;
lp->mii = mii;
- lp->msg_enable = pcnet32_debug;
if ((cards_found >= MAX_UNITS) || (options[cards_found] > sizeof(options_mapping)))
lp->options = PCNET32_PORT_ASEL;
else
dev->get_stats = &pcnet32_get_stats;
dev->set_multicast_list = &pcnet32_set_multicast_list;
dev->do_ioctl = &pcnet32_ioctl;
- dev->ethtool_ops = &pcnet32_ethtool_ops;
dev->tx_timeout = pcnet32_tx_timeout;
dev->watchdog_timeo = (5*HZ);
if (register_netdev(dev))
goto err_free_consistent;
- if (pdev)
- pci_set_drvdata(pdev, dev);
- else {
- lp->next = pcnet32_dev;
- pcnet32_dev = dev;
- }
-
+ lp->next = pcnet32_dev;
+ pcnet32_dev = dev;
printk(KERN_INFO "%s: registered as %s\n",dev->name, lp->name);
cards_found++;
return 0;
unsigned long ioaddr = dev->base_addr;
u16 val;
int i;
- int ret;
if (dev->irq == 0 ||
request_irq(dev->irq, &pcnet32_interrupt,
}
/* Check for a valid station address */
- if( !is_valid_ether_addr(dev->dev_addr) ) {
- ret = -EINVAL;
- goto err_free_irq;
- }
+ if( !is_valid_ether_addr(dev->dev_addr) )
+ return -EINVAL;
/* Reset the PCNET32 */
lp->a.reset (ioaddr);
/* switch pcnet32 to 32bit mode */
lp->a.write_bcr (ioaddr, 20, 2);
- if (netif_msg_ifup(lp))
+ if (pcnet32_debug > 1)
printk(KERN_DEBUG "%s: pcnet32_open() irq %d tx/rx rings %#x/%#x init %#x.\n",
dev->name, dev->irq,
(u32) (lp->dma_addr + offsetof(struct pcnet32_private, tx_ring)),
lp->init_block.mode = le16_to_cpu((lp->options & PCNET32_PORT_PORTSEL) << 7);
lp->init_block.filter[0] = 0x00000000;
lp->init_block.filter[1] = 0x00000000;
- if (pcnet32_init_ring(dev)) {
- ret = -ENOMEM;
- goto err_free_ring;
- }
+ if (pcnet32_init_ring(dev))
+ return -ENOMEM;
/* Re-initialize the PCNET32, and start it when done. */
lp->a.write_csr (ioaddr, 1, (lp->dma_addr + offsetof(struct pcnet32_private, init_block)) &0xffff);
*/
lp->a.write_csr (ioaddr, 0, 0x0042);
- if (netif_msg_ifup(lp))
+ if (pcnet32_debug > 2)
printk(KERN_DEBUG "%s: pcnet32 open after %d ticks, init block %#x csr0 %4.4x.\n",
dev->name, i, (u32) (lp->dma_addr + offsetof(struct pcnet32_private, init_block)),
lp->a.read_csr(ioaddr, 0));
return 0; /* Always succeed */
-
-err_free_ring:
- /* free any allocated skbuffs */
- for (i = 0; i < RX_RING_SIZE; i++) {
- lp->rx_ring[i].status = 0;
- if (lp->rx_skbuff[i]) {
- pci_unmap_single(lp->pci_dev, lp->rx_dma_addr[i], PKT_BUF_SZ-2,
- PCI_DMA_FROMDEVICE);
- dev_kfree_skb(lp->rx_skbuff[i]);
- }
- lp->rx_skbuff[i] = NULL;
- lp->rx_dma_addr[i] = 0;
- }
- /*
- * Switch back to 16bit mode to avoid problems with dumb
- * DOS packet driver after a warm reboot
- */
- lp->a.write_bcr (ioaddr, 20, 4);
-
-err_free_irq:
- free_irq(dev->irq, dev);
- return ret;
}
/*
}
if (lp->rx_dma_addr[i] == 0)
- lp->rx_dma_addr[i] = pci_map_single(lp->pci_dev, rx_skbuff->tail, PKT_BUF_SZ-2, PCI_DMA_FROMDEVICE);
+ lp->rx_dma_addr[i] = pci_map_single(lp->pci_dev,
+ rx_skbuff->tail, PKT_BUF_SZ-2, PCI_DMA_FROMDEVICE);
lp->rx_ring[i].base = (u32)le32_to_cpu(lp->rx_dma_addr[i]);
lp->rx_ring[i].buf_length = le16_to_cpu(2-PKT_BUF_SZ);
- wmb(); /* Make sure owner changes after all others are visible */
lp->rx_ring[i].status = le16_to_cpu(0x8000);
}
/* The Tx buffer address is filled in as needed, but we do need to clear
the upper ownership bit. */
for (i = 0; i < TX_RING_SIZE; i++) {
lp->tx_ring[i].base = 0;
- wmb(); /* Make sure owner changes after all others are visible */
lp->tx_ring[i].status = 0;
lp->tx_dma_addr[i] = 0;
}
dev->name, lp->a.read_csr(ioaddr, 0));
lp->a.write_csr (ioaddr, 0, 0x0004);
lp->stats.tx_errors++;
- if (netif_msg_tx_err(lp)) {
+ if (pcnet32_debug > 2) {
int i;
printk(KERN_DEBUG " Ring data dump: dirty_tx %d cur_tx %d%s cur_rx %d.",
lp->dirty_tx, lp->cur_tx, lp->tx_full ? " (full)" : "",
int entry;
unsigned long flags;
- if (netif_msg_tx_queued(lp)) {
+ if (pcnet32_debug > 3) {
printk(KERN_DEBUG "%s: pcnet32_start_xmit() called, csr0 %4.4x.\n",
dev->name, lp->a.read_csr(ioaddr, 0));
}
* interrupt when that option is available to us.
*/
status = 0x8300;
- entry = (lp->cur_tx - lp->dirty_tx) & TX_RING_MOD_MASK;
+ entry = (lp->cur_tx - lp->dirty_tx) & TX_RING_MOD_MASK;
if ((lp->ltint) &&
((entry == TX_RING_SIZE/2) ||
(entry >= TX_RING_SIZE-2)))
dev->trans_start = jiffies;
- if (lp->tx_ring[(entry+1) & TX_RING_MOD_MASK].base == 0) {
+ if (lp->tx_ring[(entry+1) & TX_RING_MOD_MASK].base == 0)
netif_wake_queue(dev);
- } else {
+ else {
lp->tx_full = 1;
netif_stop_queue(dev);
}
must_restart = 0;
- if (netif_msg_intr(lp))
+ if (pcnet32_debug > 5)
printk(KERN_DEBUG "%s: interrupt csr0=%#2.2x new csr=%#2.2x.\n",
dev->name, csr0, lp->a.read_csr (ioaddr, 0));
lp->a.write_csr (ioaddr, 0, 0x7940);
lp->a.write_rap (ioaddr,rap);
- if (netif_msg_intr(lp))
+ if (pcnet32_debug > 4)
printk(KERN_DEBUG "%s: exiting interrupt, csr0=%#4.4x.\n",
dev->name, lp->a.read_csr (ioaddr, 0));
if ((newskb = dev_alloc_skb (PKT_BUF_SZ))) {
skb_reserve (newskb, 2);
skb = lp->rx_skbuff[entry];
- pci_unmap_single(lp->pci_dev, lp->rx_dma_addr[entry], PKT_BUF_SZ-2, PCI_DMA_FROMDEVICE);
+ pci_unmap_single(lp->pci_dev, lp->rx_dma_addr[entry],
+ PKT_BUF_SZ-2, PCI_DMA_FROMDEVICE);
skb_put (skb, pkt_len);
lp->rx_skbuff[entry] = newskb;
newskb->dev = dev;
* of QNX reports that some revs of the 79C965 clear it.
*/
lp->rx_ring[entry].buf_length = le16_to_cpu(2-PKT_BUF_SZ);
- wmb(); /* Make sure owner changes after all others are visible */
lp->rx_ring[entry].status |= le16_to_cpu(0x8000);
entry = (++lp->cur_rx) & RX_RING_MOD_MASK;
}
lp->stats.rx_missed_errors = lp->a.read_csr (ioaddr, 112);
- if (netif_msg_ifdown(lp))
+ if (pcnet32_debug > 1)
printk(KERN_DEBUG "%s: Shutting down ethercard, status was %2.2x.\n",
dev->name, lp->a.read_csr (ioaddr, 0));
for (i = 0; i < RX_RING_SIZE; i++) {
lp->rx_ring[i].status = 0;
if (lp->rx_skbuff[i]) {
- pci_unmap_single(lp->pci_dev, lp->rx_dma_addr[i], PKT_BUF_SZ-2, PCI_DMA_FROMDEVICE);
+ pci_unmap_single(lp->pci_dev, lp->rx_dma_addr[i], PKT_BUF_SZ-2,
+ PCI_DMA_FROMDEVICE);
dev_kfree_skb(lp->rx_skbuff[i]);
}
lp->rx_skbuff[i] = NULL;
lp->a.write_bcr(ioaddr, 33, phyaddr);
}
+static int pcnet32_ethtool_ioctl (struct net_device *dev, void *useraddr)
+{
+ struct pcnet32_private *lp = dev->priv;
+ u32 ethcmd;
+ int phyaddr = 0;
+ int phy_id = 0;
+ unsigned long ioaddr = dev->base_addr;
+
+ if (lp->mii) {
+ phyaddr = lp->a.read_bcr (ioaddr, 33);
+ phy_id = (phyaddr >> 5) & 0x1f;
+ lp->mii_if.phy_id = phy_id;
+ }
+
+ if (copy_from_user (&ethcmd, useraddr, sizeof (ethcmd)))
+ return -EFAULT;
+
+ switch (ethcmd) {
+ case ETHTOOL_GDRVINFO: {
+ struct ethtool_drvinfo info = { ETHTOOL_GDRVINFO };
+ strcpy (info.driver, DRV_NAME);
+ strcpy (info.version, DRV_VERSION);
+ if (lp->pci_dev)
+ strcpy (info.bus_info, pci_name(lp->pci_dev));
+ else
+ sprintf(info.bus_info, "VLB 0x%lx", dev->base_addr);
+ if (copy_to_user (useraddr, &info, sizeof (info)))
+ return -EFAULT;
+ return 0;
+ }
+
+ /* get settings */
+ case ETHTOOL_GSET: {
+ struct ethtool_cmd ecmd = { ETHTOOL_GSET };
+ spin_lock_irq(&lp->lock);
+ mii_ethtool_gset(&lp->mii_if, &ecmd);
+ spin_unlock_irq(&lp->lock);
+ if (copy_to_user(useraddr, &ecmd, sizeof(ecmd)))
+ return -EFAULT;
+ return 0;
+ }
+ /* set settings */
+ case ETHTOOL_SSET: {
+ int r;
+ struct ethtool_cmd ecmd;
+ if (copy_from_user(&ecmd, useraddr, sizeof(ecmd)))
+ return -EFAULT;
+ spin_lock_irq(&lp->lock);
+ r = mii_ethtool_sset(&lp->mii_if, &ecmd);
+ spin_unlock_irq(&lp->lock);
+ return r;
+ }
+ /* restart autonegotiation */
+ case ETHTOOL_NWAY_RST: {
+ int r;
+ spin_lock_irq(&lp->lock);
+ r = mii_nway_restart(&lp->mii_if);
+ spin_unlock_irq(&lp->lock);
+ return r;
+ }
+ /* get link status */
+ case ETHTOOL_GLINK: {
+ struct ethtool_value edata = {ETHTOOL_GLINK};
+ spin_lock_irq(&lp->lock);
+ edata.data = mii_link_ok(&lp->mii_if);
+ spin_unlock_irq(&lp->lock);
+ if (copy_to_user(useraddr, &edata, sizeof(edata)))
+ return -EFAULT;
+ return 0;
+ }
+
+ /* get message-level */
+ case ETHTOOL_GMSGLVL: {
+ struct ethtool_value edata = {ETHTOOL_GMSGLVL};
+ edata.data = pcnet32_debug;
+ if (copy_to_user(useraddr, &edata, sizeof(edata)))
+ return -EFAULT;
+ return 0;
+ }
+ /* set message-level */
+ case ETHTOOL_SMSGLVL: {
+ struct ethtool_value edata;
+ if (copy_from_user(&edata, useraddr, sizeof(edata)))
+ return -EFAULT;
+ pcnet32_debug = edata.data;
+ return 0;
+ }
+ default:
+ break;
+ }
+
+ return -EOPNOTSUPP;
+}
+
static int pcnet32_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct pcnet32_private *lp = dev->priv;
int rc;
unsigned long flags;
- /* all non-ethtool ioctls (the SIOC[GS]MIIxxx ioctls) */
- spin_lock_irqsave(&lp->lock, flags);
- rc = generic_mii_ioctl(&lp->mii_if, data, cmd, NULL);
- spin_unlock_irqrestore(&lp->lock, flags);
+ if (cmd == SIOCETHTOOL)
+ return pcnet32_ethtool_ioctl(dev, (void *) rq->ifr_data);
+
+ /* SIOC[GS]MIIxxx ioctls */
+ if (lp->mii) {
+ spin_lock_irqsave(&lp->lock, flags);
+ rc = generic_mii_ioctl(&lp->mii_if, data, cmd, NULL);
+ spin_unlock_irqrestore(&lp->lock, flags);
+ } else {
+ rc = -EOPNOTSUPP;
+ }
return rc;
}
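The hunk above routes SIOCETHTOOL to a dedicated handler that reads a u32 sub-command from the user buffer and dispatches on it. A minimal userspace sketch of that dispatch pattern, using the GMSGLVL/SMSGLVL pair: `memcpy` stands in for `copy_from_user`/`copy_to_user`, and `debug_level` is an assumed stand-in for `pcnet32_debug`.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* real values from <linux/ethtool.h> */
#define ETHTOOL_GMSGLVL 0x00000007
#define ETHTOOL_SMSGLVL 0x00000008

struct ethtool_value { uint32_t cmd; uint32_t data; };

static uint32_t debug_level = 1;	/* stand-in for pcnet32_debug */

/* sketch of the SIOCETHTOOL dispatch: the first u32 of the user
 * buffer selects the operation, unknown commands get -EOPNOTSUPP */
static int ethtool_ioctl_sim(void *useraddr)
{
	uint32_t ethcmd;

	memcpy(&ethcmd, useraddr, sizeof(ethcmd)); /* copy_from_user */
	switch (ethcmd) {
	case ETHTOOL_GMSGLVL: {
		struct ethtool_value ev = { ETHTOOL_GMSGLVL, debug_level };
		memcpy(useraddr, &ev, sizeof(ev)); /* copy_to_user */
		return 0;
	}
	case ETHTOOL_SMSGLVL: {
		struct ethtool_value ev;
		memcpy(&ev, useraddr, sizeof(ev));
		debug_level = ev.data;
		return 0;
	}
	}
	return -EOPNOTSUPP;
}
```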
unsigned long flags;
/* Print the link status if it has changed */
- if (lp->mii) {
+ if (lp->mii) {
spin_lock_irqsave(&lp->lock, flags);
mii_check_media (&lp->mii_if, 1, 0);
spin_unlock_irqrestore(&lp->lock, flags);
mod_timer (&(lp->watchdog_timer), PCNET32_WATCHDOG_TIMEOUT);
}
-static void __devexit pcnet32_remove_one(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
-
- if (dev) {
- struct pcnet32_private *lp = dev->priv;
-
- unregister_netdev(dev);
- release_region(dev->base_addr, PCNET32_TOTAL_SIZE);
- pci_free_consistent(lp->pci_dev, sizeof(*lp), lp, lp->dma_addr);
- free_netdev(dev);
- pci_set_drvdata(pdev, NULL);
- }
-}
-
static struct pci_driver pcnet32_driver = {
.name = DRV_NAME,
.probe = pcnet32_probe_pci,
- .remove = __devexit_p(pcnet32_remove_one),
.id_table = pcnet32_pci_tbl,
};
MODULE_PARM(debug, "i");
-MODULE_PARM_DESC(debug, DRV_NAME " debug level");
+MODULE_PARM_DESC(debug, DRV_NAME " debug level (0-6)");
MODULE_PARM(max_interrupt_work, "i");
MODULE_PARM_DESC(max_interrupt_work, DRV_NAME " maximum events handled per interrupt");
MODULE_PARM(rx_copybreak, "i");
module_init(deflate_init);
module_exit(deflate_cleanup);
MODULE_LICENSE("Dual BSD/GPL");
+MODULE_ALIAS("ppp-compress-" __stringify(CI_DEFLATE));
+MODULE_ALIAS("ppp-compress-" __stringify(CI_DEFLATE_DRAFT));
if (copy_from_user(&uprog, (void __user *) arg, sizeof(uprog)))
break;
- err = -ENOMEM;
- len = uprog.len * sizeof(struct sock_filter);
- code = kmalloc(len, GFP_KERNEL);
- if (code == 0)
- break;
- err = -EFAULT;
- if (copy_from_user(code, (void __user *) uprog.filter, len)) {
- kfree(code);
- break;
- }
- err = sk_chk_filter(code, uprog.len);
- if (err) {
- kfree(code);
+ err = -EINVAL;
+ if (uprog.len > BPF_MAXINSNS)
break;
+ err = -ENOMEM;
+ if (uprog.len > 0) {
+ len = uprog.len * sizeof(struct sock_filter);
+ code = kmalloc(len, GFP_KERNEL);
+ if (code == NULL)
+ break;
+ err = -EFAULT;
+ if (copy_from_user(code, (void __user *) uprog.filter, len)) {
+ kfree(code);
+ break;
+ }
+ err = sk_chk_filter(code, uprog.len);
+ if (err) {
+ kfree(code);
+ break;
+ }
}
filtp = (cmd == PPPIOCSPASS)? &ppp->pass_filter: &ppp->active_filter;
ppp_lock(ppp);
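The filter hunk above adds two things: a `BPF_MAXINSNS` bound check before the allocation, and handling for a zero-length program so `kmalloc` is never asked for zero bytes. A userspace sketch of that validate-then-copy pattern, with `MAXINSNS` and `struct filter_insn` as illustrative stand-ins for the kernel's `BPF_MAXINSNS` and `struct sock_filter`:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAXINSNS 4096	/* stand-in for BPF_MAXINSNS */

struct filter_insn { uint16_t code; uint8_t jt, jf; uint32_t k; };

/* returns 0 on success, a negative errno-style code otherwise;
 * on success *out is a heap copy of the program (NULL if empty) */
static int copy_filter(const struct filter_insn *user, size_t len,
		       struct filter_insn **out)
{
	struct filter_insn *code;

	if (len > MAXINSNS)
		return -EINVAL;		/* reject before allocating */
	if (len == 0) {
		*out = NULL;		/* empty program clears the filter */
		return 0;
	}
	code = malloc(len * sizeof(*code));
	if (code == NULL)
		return -ENOMEM;
	memcpy(code, user, len * sizeof(*code)); /* copy_from_user stand-in */
	*out = code;
	return 0;
}
```

The kernel version additionally runs `sk_chk_filter()` on the copy before installing it; the bound check matters because the length is attacker-controlled and multiplied into an allocation size.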
EXPORT_SYMBOL(all_channels); /* for debugging */
MODULE_LICENSE("GPL");
MODULE_ALIAS_CHARDEV_MAJOR(PPP_MAJOR);
+MODULE_ALIAS("/dev/ppp");
#include <linux/ppp_channel.h>
#include <linux/ppp_defs.h>
#include <linux/if_ppp.h>
-#include <linux/if_pppvar.h>
#include <linux/notifier.h>
#include <linux/file.h>
#include <linux/proc_fs.h>
* Make certain the data structures used by the controller are aligned
* and DMAble.
*/
+ /*
+ * XXX: that is obviously broken - kfree() won't be happy with us.
+ */
lp = (struct lan_saa9730_private *) (((unsigned long)
kmalloc(sizeof(*lp) + 7,
GFP_DMA | GFP_KERNEL)
out:
if (dev->priv)
kfree(dev->priv);
- free_netdev(dev);
return ret;
}
dev->open = shaper_open;
dev->stop = shaper_close;
- dev->destructor = free_netdev;
dev->hard_start_xmit = shaper_start_xmit;
dev->get_stats = shaper_get_stats;
dev->set_multicast_list = NULL;
SiS190_tx_interrupt(struct net_device *dev, struct sis190_private *tp,
void *ioaddr)
{
- unsigned long dirty_tx, tx_left = 0;
- int entry = tp->cur_tx % NUM_TX_DESC;
+ unsigned long dirty_tx, tx_left;
assert(dev != NULL);
assert(tp != NULL);
tx_left = tp->cur_tx - dirty_tx;
while (tx_left > 0) {
+ int entry = dirty_tx % NUM_TX_DESC;
+
if ((le32_to_cpu(tp->TxDescArray[entry].status) & OWNbit) == 0) {
struct sk_buff *skb;
tp->stats.tx_packets++;
dirty_tx++;
tx_left--;
- entry++;
}
}
i++, mclist = mclist->next) {
unsigned int bit_nr =
sis900_mcast_bitnr(mclist->dmi_addr, revision);
- mc_filter[bit_nr >> 4] |= (1 << bit_nr);
+ mc_filter[bit_nr >> 4] |= (1 << (bit_nr & 0xf));
}
}
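The sis900 hunk above fixes the hash-filter write: `bit_nr >> 4` selects which 16-bit word of the filter table to touch, so the shift must use only the low four bits (`bit_nr & 0xf`); shifting by the full `bit_nr` is undefined once it exceeds the word width. A small sketch, assuming an eight-word (128-bit) table for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Set one bit of a multicast hash implemented as an array of u16 words.
 * The high bits of bit_nr pick the word, the low four bits pick the
 * bit inside it -- the masking is the fix applied in the hunk above. */
static void set_mcast_bit(uint16_t mc_filter[8], unsigned int bit_nr)
{
	mc_filter[bit_nr >> 4] |= (uint16_t)(1u << (bit_nr & 0xf));
}
```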
*
* Name: lm80.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.6 $
+ * Date: $Date: 2003/05/13 17:26:52 $
* Purpose: Contains all defines for the LM80 Chip
* (National Semiconductor).
*
*
* Name: skaddr.h
* Project: Gigabit Ethernet Adapters, ADDR-Modul
+ * Version: $Revision: 1.29 $
+ * Date: $Date: 2003/05/13 16:57:24 $
* Purpose: Header file for Address Management (MC, UC, Prom).
*
******************************************************************************/
*
* Name: skcsum.h
* Project: GEnesis - SysKonnect SK-NET Gigabit Ethernet (SK-98xx)
+ * Version: $Revision: 1.10 $
+ * Date: $Date: 2003/08/20 13:59:57 $
* Purpose: Store/verify Internet checksum in send/receive packets.
*
******************************************************************************/
*
* Name: skdebug.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.14 $
+ * Date: $Date: 2003/05/13 17:26:00 $
* Purpose: SK specific DEBUG support
*
******************************************************************************/
*
* Name: skdrv1st.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.4 $
+ * Date: $Date: 2003/11/12 14:28:14 $
* Purpose: First header file for driver and all other modules
*
******************************************************************************/
*
* Name: skdrv2nd.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.10 $
+ * Date: $Date: 2003/12/11 16:04:45 $
* Purpose: Second header file for driver and all other modules
*
******************************************************************************/
/* Marvell (0x11ab) */ \
} else if (pdev->vendor == 0x11ab) { \
/* Gigabit Ethernet Adapter (0x4320) */ \
- if ((pdev->device == 0x4320)) { \
+ /* Gigabit Ethernet Adapter (0x4360) */ \
+ /* Gigabit Ethernet Adapter (0x4361) */ \
+ /* Belkin (0x5005) */ \
+ if ((pdev->device == 0x4320) || \
+ (pdev->device == 0x4360) || \
+ (pdev->device == 0x4361) || \
+ (pdev->device == 0x5005)) { \
result = SK_TRUE; \
} \
/* CNet (0x1371) */ \
*
* Name: skerror.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.7 $
+ * Date: $Date: 2003/05/13 17:25:13 $
* Purpose: SK specific Error log support
*
******************************************************************************/
*
* Name: skgedrv.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.10 $
+ * Date: $Date: 2003/07/04 12:25:01 $
* Purpose: Interface with the driver
*
******************************************************************************/
*
* Name: skgehw.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.56 $
+ * Date: $Date: 2003/09/23 09:01:00 $
* Purpose: Defines and Macros for the Gigabit Ethernet Adapter Product Family
*
******************************************************************************/
*
* Name: skhwt.h
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.7 $
+ * Date: $Date: 2003/09/16 12:55:08 $
* Purpose: Defines for the hardware timer functions
*
******************************************************************************/
*
* Name: skgei2c.h
* Project: Gigabit Ethernet Adapters, TWSI-Module
+ * Version: $Revision: 1.25 $
+ * Date: $Date: 2003/10/20 09:06:05 $
* Purpose: Special defines for TWSI
*
******************************************************************************/
*
* Name: skgeinit.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.83 $
+ * Date: $Date: 2003/09/16 14:07:37 $
* Purpose: Structures and prototypes for the GE Init Module
*
******************************************************************************/
*
* Name: skgepnm2.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.36 $
+ * Date: $Date: 2003/05/23 12:45:13 $
* Purpose: Defines for Private Network Management Interface
*
****************************************************************************/
*
* Name: skgepnmi.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.62 $
+ * Date: $Date: 2003/08/15 12:31:52 $
* Purpose: Defines for Private Network Management Interface
*
****************************************************************************/
*
* Name: skgesirq.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.30 $
+ * Date: $Date: 2003/07/04 12:34:13 $
* Purpose: SK specific Gigabit Ethernet special IRQ functions
*
******************************************************************************/
*
* Name: ski2c.h
* Project: Gigabit Ethernet Adapters, TWSI-Module
+ * Version: $Revision: 1.35 $
+ * Date: $Date: 2003/10/20 09:06:30 $
* Purpose: Defines to access Voltage and Temperature Sensor
*
******************************************************************************/
*
* Name: skqueue.h
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.16 $
+ * Date: $Date: 2003/09/16 12:50:32 $
* Purpose: Defines for the Event queue
*
******************************************************************************/
*
******************************************************************************/
+/*
+ * SKQUEUE.H contains all defines and types for the event queue
+ */
+
#ifndef _SKQUEUE_H_
#define _SKQUEUE_H_
*
* Name: skrlmt.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.37 $
+ * Date: $Date: 2003/04/15 09:43:43 $
* Purpose: Header file for Redundant Link ManagemenT.
*
******************************************************************************/
*
* Name: sktimer.h
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.11 $
+ * Date: $Date: 2003/09/16 12:58:18 $
* Purpose: Defines for the timer functions
*
******************************************************************************/
*
* Name: sktypes.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.2 $
+ * Date: $Date: 2003/10/07 08:16:51 $
* Purpose: Define data types for Linux
*
******************************************************************************/
*
* Name: version.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.5 $
+ * Date: $Date: 2003/10/07 08:16:51 $
* Purpose: SK specific Error log support
*
******************************************************************************/
#ifdef lint
static const char SysKonnectFileId[] = "@(#) (C) SysKonnect GmbH.";
static const char SysKonnectBuildNumber[] =
- "@(#)SK-BUILD: 6.22 PL: 01";
+ "@(#)SK-BUILD: 6.23 PL: 01";
#endif /* !defined(lint) */
-#define BOOT_STRING "sk98lin: Network Device Driver v6.22\n" \
+#define BOOT_STRING "sk98lin: Network Device Driver v6.23\n" \
"(C)Copyright 1999-2004 Marvell(R)."
-#define VER_STRING "6.22"
+#define VER_STRING "6.23"
#define DRIVER_FILE_NAME "sk98lin"
-#define DRIVER_REL_DATE "Jan-30-2004"
+#define DRIVER_REL_DATE "Feb-13-2004"
*
* Name: skvpd.h
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.15 $
+ * Date: $Date: 2003/01/13 10:39:38 $
* Purpose: Defines and Macros for VPD handling
*
******************************************************************************/
*
* Name: xmac_ii.h
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.52 $
+ * Date: $Date: 2003/10/02 16:35:50 $
* Purpose: Defines and Macros for Gigabit Ethernet Controller
*
******************************************************************************/
*
* Name: skaddr.c
* Project: Gigabit Ethernet Adapters, ADDR-Module
+ * Version: $Revision: 1.52 $
+ * Date: $Date: 2003/06/02 13:46:15 $
* Purpose: Manage Addresses (Multicast and Unicast) and Promiscuous Mode.
*
******************************************************************************/
*
* Name: skcsum.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.12 $
+ * Date: $Date: 2003/08/20 13:55:53 $
* Purpose: Store/verify Internet checksum in send/receive packets.
*
******************************************************************************/
*
* Name: skdim.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.5 $
+ * Date: $Date: 2003/11/28 12:55:40 $
* Purpose: All functions to maintain interrupt moderation
*
******************************************************************************/
*
* Name: skge.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.45 $
+ * Date: $Date: 2004/02/12 14:41:02 $
* Purpose: The main driver source module
*
******************************************************************************/
SK_BOOL BootStringCount = SK_FALSE;
int retval;
#ifdef CONFIG_PROC_FS
- int proc_root_initialized = 0;
struct proc_dir_entry *pProcFile;
#endif
dev = NULL;
pNet = NULL;
+ /* Don't handle Yukon2 cards at the moment */
+ /* 12-feb-2004 ---- mlindner@syskonnect.de */
+ if (pdev->vendor == 0x11ab) {
+ if ( (pdev->device == 0x4360) || (pdev->device == 0x4361) )
+ continue;
+ }
SK_PCI_ISCOMPLIANT(vendor_flag, pdev);
if (!vendor_flag)
*
* Name: skgehwt.c
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.15 $
+ * Date: $Date: 2003/09/16 13:41:23 $
* Purpose: Hardware Timer
*
******************************************************************************/
*
* Name: skgeinit.c
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.97 $
+ * Date: $Date: 2003/10/02 16:45:31 $
* Purpose: Contains functions to initialize the adapter
*
******************************************************************************/
*
******************************************************************************/
-
#include "h/skdrv1st.h"
#include "h/skdrv2nd.h"
*
* Name: skgemib.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.11 $
+ * Date: $Date: 2003/09/15 13:38:12 $
* Purpose: Private Network Management Interface Management Database
*
****************************************************************************/
*
* Name: skgepnmi.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.111 $
+ * Date: $Date: 2003/09/15 13:35:35 $
* Purpose: Private Network Management Interface
*
****************************************************************************/
*
******************************************************************************/
+
#ifndef _lint
static const char SysKonnectFileId[] =
"@(#) $Id: skgepnmi.c,v 1.111 2003/09/15 13:35:35 tschilli Exp $ (C) Marvell.";
*
* Name: skgesirq.c
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.92 $
+ * Date: $Date: 2003/09/16 14:37:07 $
* Purpose: Special IRQ module
*
******************************************************************************/
*
* Name: ski2c.c
* Project: Gigabit Ethernet Adapters, TWSI-Module
+ * Version: $Revision: 1.59 $
+ * Date: $Date: 2003/10/20 09:07:25 $
* Purpose: Functions to access Voltage and Temperature Sensor
*
******************************************************************************/
*
* Name: sklm80.c
* Project: Gigabit Ethernet Adapters, TWSI-Module
+ * Version: $Revision: 1.22 $
+ * Date: $Date: 2003/10/20 09:08:21 $
* Purpose: Functions to access Voltage and Temperature Sensor (LM80)
*
******************************************************************************/
*
* Name: skproc.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.11 $
+ * Date: $Date: 2003/12/11 16:03:57 $
 * Purpose: Functions to display statistic data
*
******************************************************************************/
* The information in this file is provided "AS IS" without warranty.
*
******************************************************************************/
-
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
*
* Name: skqueue.c
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.20 $
+ * Date: $Date: 2003/09/16 13:44:00 $
* Purpose: Management of an event queue.
*
******************************************************************************/
*
******************************************************************************/
+
/*
* Event queue and dispatcher
*/
*
* Name: skrlmt.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.69 $
+ * Date: $Date: 2003/04/15 09:39:22 $
* Purpose: Manage links on SK-NET Adapters, esp. redundant ones.
*
******************************************************************************/
*
* Name: sktimer.c
* Project: Gigabit Ethernet Adapters, Event Scheduler Module
+ * Version: $Revision: 1.14 $
+ * Date: $Date: 2003/09/16 13:46:51 $
* Purpose: High level timer functions.
*
******************************************************************************/
*
******************************************************************************/
+
/*
* Event queue and dispatcher
*/
*
* Name: skvpd.c
* Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.37 $
+ * Date: $Date: 2003/01/13 10:42:45 $
* Purpose: Shared software to read and write VPD data
*
******************************************************************************/
*
* Name: skxmac2.c
* Project: Gigabit Ethernet Adapters, Common Modules
+ * Version: $Revision: 1.102 $
+ * Date: $Date: 2003/10/02 16:53:58 $
* Purpose: Contains functions to initialize the MACs and PHYs
*
******************************************************************************/
cluster_start = curr = (gp->rx_new & ~(4 - 1));
count = 0;
kick = -1;
+ wmb();
while (curr != limit) {
curr = NEXT_RX(curr);
if (++count == 4) {
count = 0;
}
}
- if (kick >= 0)
+ if (kick >= 0) {
+ mb();
writel(kick, gp->regs + RXDMA_KICK);
+ }
}
static void gem_rx(struct gem *gp)
if (gem_intme(entry))
ctrl |= TXDCTRL_INTME;
txd->buffer = cpu_to_le64(mapping);
+ wmb();
txd->control_word = cpu_to_le64(ctrl);
entry = NEXT_TX(entry);
} else {
txd = &gp->init_block->txd[entry];
txd->buffer = cpu_to_le64(mapping);
+ wmb();
txd->control_word = cpu_to_le64(this_ctrl | len);
if (gem_intme(entry))
}
txd = &gp->init_block->txd[first_entry];
txd->buffer = cpu_to_le64(first_mapping);
+ wmb();
txd->control_word =
cpu_to_le64(ctrl | TXDCTRL_SOF | intme | first_len);
}
if (netif_msg_tx_queued(gp))
printk(KERN_DEBUG "%s: tx queued, slot %d, skblen %d\n",
dev->name, entry, skb->len);
+ mb();
writel(gp->tx_new, gp->regs + TXDMA_KICK);
spin_unlock_irq(&gp->lock);
gp->rx_skbs[i] = NULL;
}
rxd->status_word = 0;
+ wmb();
rxd->buffer = 0;
}
RX_BUF_ALLOC_SIZE(gp),
PCI_DMA_FROMDEVICE);
rxd->buffer = cpu_to_le64(dma_addr);
+ wmb();
rxd->status_word = cpu_to_le64(RXDCTRL_FRESH(gp));
skb_reserve(skb, RX_OFFSET);
}
struct gem_txd *txd = &gb->txd[i];
txd->control_word = 0;
+ wmb();
txd->buffer = 0;
}
+ wmb();
}
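The sungem hunks above insert `wmb()` between filling a DMA descriptor and flipping its ownership bit: the buffer address and control fields must be globally visible before the device is allowed to consume the descriptor. A plain-C illustration of that ordering, where `__sync_synchronize()` stands in for the kernel's `wmb()` and `RXD_OWN` is an assumed bit layout, not the gem chip's real one:

```c
#include <assert.h>
#include <stdint.h>

struct rx_desc {
	volatile uint64_t buffer;	/* DMA address of the receive buffer */
	volatile uint64_t status_word;	/* ownership + completion status */
};

#define RXD_OWN (1ULL << 63)		/* assumed ownership bit */

/* Hand a descriptor to the device: fill it, barrier, then flip
 * ownership last so the hardware never sees a half-written entry. */
static void give_desc_to_hw(struct rx_desc *d, uint64_t dma_addr)
{
	d->buffer = dma_addr;
	__sync_synchronize();	/* wmb(): buffer write ordered before owner */
	d->status_word = RXD_OWN;
}
```

The same reasoning explains the `mb()` before the `RXDMA_KICK`/`TXDMA_KICK` register writes in the hunks above: all descriptor stores must complete before the doorbell tells the chip to go look at them.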
/* Must be invoked under gp->lock. */
*/
static void gem_apple_powerup(struct gem *gp)
{
- u16 cmd;
u32 mif_cfg;
mb();
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "2.6"
-#define DRV_MODULE_RELDATE "February 3, 2004"
+#define DRV_MODULE_VERSION "2.7"
+#define DRV_MODULE_RELDATE "February 17, 2004"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
}
}
+
+static inline void _tw32_rx_mbox(struct tg3 *tp, u32 off, u32 val)
+{
+ unsigned long mbox = tp->regs + off;
+ writel(val, mbox);
+ if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
+ readl(mbox);
+}
+
+static inline void _tw32_tx_mbox(struct tg3 *tp, u32 off, u32 val)
+{
+ unsigned long mbox = tp->regs + off;
+ writel(val, mbox);
+ if (tp->tg3_flags & TG3_FLAG_TXD_MBOX_HWBUG)
+ writel(val, mbox);
+ if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
+ readl(mbox);
+}
+
+#define tw32_mailbox(reg, val) writel(((val) & 0xffffffff), tp->regs + (reg))
+#define tw32_rx_mbox(reg, val) _tw32_rx_mbox(tp, reg, val)
+#define tw32_tx_mbox(reg, val) _tw32_tx_mbox(tp, reg, val)
+
#define tw32(reg,val) tg3_write_indirect_reg32(tp,(reg),(val))
-#define tw32_mailbox(reg, val) writel(((val) & 0xffffffff), tp->regs + (reg))
#define tw16(reg,val) writew(((val) & 0xffff), tp->regs + (reg))
#define tw8(reg,val) writeb(((val) & 0xff), tp->regs + (reg))
#define tr32(reg) readl(tp->regs + (reg))
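The new `_tw32_rx_mbox()`/`_tw32_tx_mbox()` helpers above centralize two chip workarounds that were previously open-coded at every call site: a second `writel` for the TXD double-write bug, and a `readl` of the same mailbox to flush posted PCI writes on chipsets that reorder them. A testable userspace sketch of the flush pattern, with a counter standing in for the MMIO read side effect:

```c
#include <assert.h>
#include <stdint.h>

#define FLAG_MBOX_WRITE_REORDER 0x1	/* stand-in for the tg3 flag */

static uint32_t regs[16];
static int read_count;

static void writel_sim(uint32_t val, uint32_t *addr) { *addr = val; }
static uint32_t readl_sim(uint32_t *addr) { read_count++; return *addr; }

/* mirrors _tw32_rx_mbox(): write the mailbox, then read it back so
 * the posted write is forced out before the caller proceeds */
static void tw32_rx_mbox_sim(uint32_t flags, unsigned int off, uint32_t val)
{
	writel_sim(val, &regs[off]);
	if (flags & FLAG_MBOX_WRITE_REORDER)
		readl_sim(&regs[off]);
}
```

Folding the flag tests into the helpers is why the later hunks can delete so many `if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER) tr32(...)` lines: each `tw32_mailbox` call site shrinks to a single `tw32_rx_mbox`/`tw32_tx_mbox` line.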
return err;
out:
+ if (tp->tg3_flags2 & TG3_FLG2_PHY_ADC_BUG) {
+ tg3_writephy(tp, MII_TG3_AUX_CTRL, 0x0c00);
+ tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x201f);
+ tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x2aaa);
+ tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x000a);
+ tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x0323);
+ tg3_writephy(tp, MII_TG3_AUX_CTRL, 0x0400);
+ }
+ if (tp->tg3_flags2 & TG3_FLG2_PHY_5704_A0_BUG) {
+ tg3_writephy(tp, 0x1c, 0x8d68);
+ tg3_writephy(tp, 0x1c, 0x8d68);
+ }
tg3_phy_set_wirespeed(tp);
return 0;
}
u8 current_duplex;
int i, err;
+ tw32(MAC_EVENT, 0);
+
tw32(MAC_STATUS,
(MAC_STATUS_SYNC_CHANGED |
- MAC_STATUS_CFG_CHANGED));
+ MAC_STATUS_CFG_CHANGED |
+ MAC_STATUS_MI_COMPLETION |
+ MAC_STATUS_LNKSTATE_CHANGED));
tr32(MAC_STATUS);
udelay(40);
/* ACK the status ring. */
tp->rx_rcb_ptr = rx_rcb_ptr;
- tw32_mailbox(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW,
+ tw32_rx_mbox(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW,
(rx_rcb_ptr % TG3_RX_RCB_RING_SIZE(tp)));
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW);
/* Refill RX ring(s). */
if (work_mask & RXD_OPAQUE_RING_STD) {
sw_idx = tp->rx_std_ptr % TG3_RX_RING_SIZE;
- tw32_mailbox(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW,
+ tw32_rx_mbox(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW,
sw_idx);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW);
}
if (work_mask & RXD_OPAQUE_RING_JUMBO) {
sw_idx = tp->rx_jumbo_ptr % TG3_RX_JUMBO_RING_SIZE;
- tw32_mailbox(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW,
+ tw32_rx_mbox(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW,
sw_idx);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW);
}
return received;
/* Packets are ready, update Tx producer idx local and on card. */
if (tp->tg3_flags & TG3_FLAG_HOST_TXDS) {
- tw32_mailbox((MAILBOX_SNDHOST_PROD_IDX_0 +
+ tw32_tx_mbox((MAILBOX_SNDHOST_PROD_IDX_0 +
TG3_64BIT_REG_LOW), entry);
- if (tp->tg3_flags & TG3_FLAG_TXD_MBOX_HWBUG)
- tw32_mailbox((MAILBOX_SNDHOST_PROD_IDX_0 +
- TG3_64BIT_REG_LOW), entry);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDHOST_PROD_IDX_0 +
- TG3_64BIT_REG_LOW);
} else {
/* First, make sure tg3 sees last descriptor fully
* in SRAM.
*/
if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDNIC_PROD_IDX_0 +
- TG3_64BIT_REG_LOW);
+ tr32(MAILBOX_SNDNIC_PROD_IDX_0 + TG3_64BIT_REG_LOW);
- tw32_mailbox((MAILBOX_SNDNIC_PROD_IDX_0 +
+ tw32_tx_mbox((MAILBOX_SNDNIC_PROD_IDX_0 +
TG3_64BIT_REG_LOW), entry);
- if (tp->tg3_flags & TG3_FLAG_TXD_MBOX_HWBUG)
- tw32_mailbox((MAILBOX_SNDNIC_PROD_IDX_0 +
- TG3_64BIT_REG_LOW), entry);
-
- /* Now post the mailbox write itself. */
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDNIC_PROD_IDX_0 +
- TG3_64BIT_REG_LOW);
}
tp->tx_prod = entry;
* the double-write bug tests.
*/
if (tp->tg3_flags & TG3_FLAG_HOST_TXDS) {
- tw32_mailbox((MAILBOX_SNDHOST_PROD_IDX_0 +
+ tw32_tx_mbox((MAILBOX_SNDHOST_PROD_IDX_0 +
TG3_64BIT_REG_LOW), entry);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDHOST_PROD_IDX_0 +
- TG3_64BIT_REG_LOW);
} else {
/* First, make sure tg3 sees last descriptor fully
* in SRAM.
tr32(MAILBOX_SNDNIC_PROD_IDX_0 +
TG3_64BIT_REG_LOW);
- tw32_mailbox((MAILBOX_SNDNIC_PROD_IDX_0 +
+ tw32_tx_mbox((MAILBOX_SNDNIC_PROD_IDX_0 +
TG3_64BIT_REG_LOW), entry);
-
- /* Now post the mailbox write itself. */
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDNIC_PROD_IDX_0 +
- TG3_64BIT_REG_LOW);
}
tp->tx_prod = entry;
if (err)
goto out;
- memset(tp->hw_status, 0, TG3_HW_STATUS_SIZE);
+ if (tp->hw_status)
+ memset(tp->hw_status, 0, TG3_HW_STATUS_SIZE);
+ if (tp->hw_stats)
+ memset(tp->hw_stats, 0, sizeof(struct tg3_hw_stats));
out:
return err;
tp->tx_prod = 0;
tp->tx_cons = 0;
tw32_mailbox(MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW, 0);
- tw32_mailbox(MAILBOX_SNDNIC_PROD_IDX_0 + TG3_64BIT_REG_LOW, 0);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_SNDNIC_PROD_IDX_0 + TG3_64BIT_REG_LOW);
+ tw32_tx_mbox(MAILBOX_SNDNIC_PROD_IDX_0 + TG3_64BIT_REG_LOW, 0);
if (tp->tg3_flags & TG3_FLAG_HOST_TXDS) {
tg3_set_bdinfo(tp, NIC_SRAM_SEND_RCB,
}
tp->rx_rcb_ptr = 0;
- tw32_mailbox(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW, 0);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW);
+ tw32_rx_mbox(MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW, 0);
tg3_set_bdinfo(tp, NIC_SRAM_RCV_RET_RCB,
tp->rx_rcb_mapping,
0);
tp->rx_std_ptr = tp->rx_pending;
- tw32_mailbox(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW,
+ tw32_rx_mbox(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW,
tp->rx_std_ptr);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCV_STD_PROD_IDX + TG3_64BIT_REG_LOW);
- if (tp->tg3_flags & TG3_FLAG_JUMBO_ENABLE)
- tp->rx_jumbo_ptr = tp->rx_jumbo_pending;
- else
- tp->rx_jumbo_ptr = 0;
- tw32_mailbox(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW,
+ tp->rx_jumbo_ptr = (tp->tg3_flags & TG3_FLAG_JUMBO_ENABLE) ?
+ tp->rx_jumbo_pending : 0;
+ tw32_rx_mbox(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW,
tp->rx_jumbo_ptr);
- if (tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER)
- tr32(MAILBOX_RCV_JUMBO_PROD_IDX + TG3_64BIT_REG_LOW);
/* Initialize MAC address and backoff seed. */
__tg3_set_mac_addr(tp);
(tp->pci_chip_rev_id != CHIPREV_ID_5705_A1)))
tp->tg3_flags2 |= TG3_FLG2_NO_ETH_WIRE_SPEED;
+ if (GET_CHIP_REV(tp->pci_chip_rev_id) == CHIPREV_5703_AX ||
+ GET_CHIP_REV(tp->pci_chip_rev_id) == CHIPREV_5704_AX)
+ tp->tg3_flags2 |= TG3_FLG2_PHY_ADC_BUG;
+ if (tp->pci_chip_rev_id == CHIPREV_ID_5704_A0)
+ tp->tg3_flags2 |= TG3_FLG2_PHY_5704_A0_BUG;
+
/* Only 5701 and later support tagged irq status mode.
* Also, 5788 chips cannot use tagged irq status.
*
for (i = 0; i < TEST_BUFFER_SIZE / sizeof(u32); i++) {
u32 val;
tg3_read_mem(tp, 0x2100 + (i*4), &val);
- if (val != p[i]) {
- printk( KERN_ERR " tg3_test_dma() Card buffer currupted on write! (%d != %d)\n", val, i);
+ if (le32_to_cpu(val) != p[i]) {
+ printk(KERN_ERR " tg3_test_dma() Card buffer corrupted on write! (%d != %d)\n", val, i);
/* ret = -ENODEV here? */
}
p[i] = 0;
static struct pci_dev * __devinit tg3_find_5704_peer(struct tg3 *tp)
{
- struct pci_dev *peer = NULL;
- unsigned int func;
-
- for (func = 0; func < 7; func++) {
- unsigned int devfn = tp->pdev->devfn;
+ struct pci_dev *peer;
+ unsigned int func, devnr = tp->pdev->devfn & ~7;
- devfn &= ~7;
- devfn |= func;
-
- if (devfn == tp->pdev->devfn)
- continue;
- peer = pci_find_slot(tp->pdev->bus->number, devfn);
- if (peer)
+ for (func = 0; func < 8; func++) {
+ peer = pci_get_slot(tp->pdev->bus, devnr | func);
+ if (peer && peer != tp->pdev)
break;
+ pci_dev_put(peer);
}
if (!peer || peer == tp->pdev)
BUG();
+
+ /*
+ * We don't need to keep the refcount elevated; there's no way
+ * to remove one half of this device without removing the other.
+ */
+ pci_dev_put(peer);
+
return peer;
}
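The peer lookup above works by holding the slot bits of `devfn` fixed and scanning the function bits 0..7. An illustrative sketch of that encoding (the `DEVFN_*` macros here are hypothetical stand-ins for the kernel's `PCI_SLOT`/`PCI_FUNC`):

```c
/* devfn packs a device's slot number into bits 7..3 and its function
 * number into bits 2..0, so masking with ~7 keeps the slot while
 * OR-ing in a function index selects a sibling function on the same
 * physical device -- exactly how the 5704 peer scan iterates.
 */
#define DEVFN_SLOT(d)	(((d) >> 3) & 0x1f)
#define DEVFN_FUNC(d)	((d) & 0x07)

static unsigned int sibling_devfn(unsigned int devfn, unsigned int func)
{
	return (devfn & ~7u) | func;
}
```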
goto err_out_iounmap;
}
+ /*
+ * Reset chip in case UNDI or EFI driver did not shut it down.
+ * DMA self test will enable WDMAC and we'll see (spurious)
+ * pending DMA on the PCI bus at that point.
+ */
+ if ((tr32(HOSTCC_MODE) & HOSTCC_MODE_ENABLE) ||
+ (tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) {
+ pci_save_state(tp->pdev, tp->pci_cfg_state);
+ tw32(MEMARB_MODE, MEMARB_MODE_ENABLE);
+ tg3_halt(tp);
+ }
+
err = tg3_test_dma(tp);
if (err) {
printk(KERN_ERR PFX "DMA engine test failed, aborting.\n");
#define CHIPREV_5700_BX 0x71
#define CHIPREV_5700_CX 0x72
#define CHIPREV_5701_AX 0x00
+#define CHIPREV_5703_AX 0x10
+#define CHIPREV_5704_AX 0x20
+#define CHIPREV_5704_BX 0x21
#define GET_METAL_REV(CHIP_REV_ID) ((CHIP_REV_ID) & 0xff)
#define METAL_REV_A0 0x00
#define METAL_REV_A1 0x01
#define TG3_FLAG_ENABLE_ASF 0x00000020
#define TG3_FLAG_5701_REG_WRITE_BUG 0x00000040
#define TG3_FLAG_POLL_SERDES 0x00000080
+#if defined(CONFIG_X86)
#define TG3_FLAG_MBOX_WRITE_REORDER 0x00000100
+#else
+#define TG3_FLAG_MBOX_WRITE_REORDER 0 /* disables code too */
+#endif
#define TG3_FLAG_PCIX_TARGET_HWBUG 0x00000200
#define TG3_FLAG_WOL_SPEED_100MB 0x00000400
#define TG3_FLAG_WOL_ENABLE 0x00000800
#define TG3_FLG2_IS_5788 0x00000008
#define TG3_FLG2_MAX_RXPEND_64 0x00000010
#define TG3_FLG2_TSO_CAPABLE 0x00000020
+#define TG3_FLG2_PHY_ADC_BUG 0x00000040
+#define TG3_FLG2_PHY_5704_A0_BUG 0x00000080
u32 split_mode_max_reqs;
#define SPLIT_MODE_5704_MAX_REQ 3
*/
/* These MUST be on 8 byte boundaries */
xl_priv->xl_tx_ring = kmalloc((sizeof(struct xl_tx_desc) * XL_TX_RING_SIZE) + 7, GFP_DMA | GFP_KERNEL) ;
+ if (xl_priv->xl_tx_ring == NULL) {
+ printk(KERN_WARNING "%s: Not enough memory to allocate tx buffers.\n",
+ dev->name);
+ free_irq(dev->irq,dev);
+ return -ENOMEM;
+ }
xl_priv->xl_rx_ring = kmalloc((sizeof(struct xl_rx_desc) * XL_RX_RING_SIZE) +7, GFP_DMA | GFP_KERNEL) ;
+ if (xl_priv->xl_rx_ring == NULL) {
+ printk(KERN_WARNING "%s: Not enough memory to allocate rx buffers.\n",
+ dev->name);
+ free_irq(dev->irq,dev);
+ kfree(xl_priv->xl_tx_ring);
+ return -ENOMEM;
+ }
memset(xl_priv->xl_tx_ring,0,sizeof(struct xl_tx_desc) * XL_TX_RING_SIZE) ;
memset(xl_priv->xl_rx_ring,0,sizeof(struct xl_rx_desc) * XL_RX_RING_SIZE) ;
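The `+ 7` in the `kmalloc` sizes above over-allocates so the descriptor rings can be rounded up to the 8-byte boundary the comment demands. A user-space sketch of that rounding step (`align8` is an illustrative helper under that assumption, not the driver's code):

```c
#include <stdint.h>

/* Round a pointer up to the next 8-byte boundary.  Allocating
 * size + 7 bytes guarantees the aligned region still fits within
 * the buffer returned by the allocator.
 */
static void *align8(void *p)
{
	return (void *)(((uintptr_t)p + 7) & ~(uintptr_t)7);
}
```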
xl_freemem(dev) ;
free_irq(dev->irq,dev);
unregister_netdev(dev) ;
- kfree(dev) ;
+ free_netdev(dev) ;
xl_reset(dev) ;
writel(ACK_INTERRUPT | LATCH_ACK, xl_mmio + MMIO_COMMAND) ;
spin_unlock(&xl_priv->xl_lock) ;
{
int err = 0;
-#if CONFIG_PCI
+#ifdef CONFIG_PCI
err = pci_module_init (&de4x5_pci_driver);
#endif
#ifdef CONFIG_EISA
static void __exit de4x5_module_exit (void)
{
-#if CONFIG_PCI
+#ifdef CONFIG_PCI
pci_unregister_driver (&de4x5_pci_driver);
#endif
#ifdef CONFIG_EISA
if (tp->rx_buffers[entry].mapping !=
le32_to_cpu(tp->rx_ring[entry].buffer1)) {
printk(KERN_ERR "%s: Internal fault: The skbuff addresses "
- "do not match in tulip_rx: %08x vs. %08x %p / %p.\n",
+ "do not match in tulip_rx: %08x vs. %llx %p / %p.\n",
dev->name,
le32_to_cpu(tp->rx_ring[entry].buffer1),
- tp->rx_buffers[entry].mapping,
+ (unsigned long long)tp->rx_buffers[entry].mapping,
skb->head, temp);
}
#endif
}
/* Initialize net device. */
-int tun_net_init(struct net_device *dev)
+static void tun_net_init(struct net_device *dev)
{
struct tun_struct *tun = (struct tun_struct *)dev->priv;
- DBG(KERN_INFO "%s: tun_net_init\n", tun->dev->name);
-
switch (tun->flags & TUN_TYPE_MASK) {
case TUN_TUN_DEV:
/* Point-to-Point TUN Device */
ether_setup(dev);
break;
- };
-
- return 0;
+ }
}
/* Character device part */
init_waitqueue_head(&tun->read_wait);
tun->owner = -1;
- dev->init = tun_net_init;
SET_MODULE_OWNER(dev);
dev->open = tun_net_open;
tun->dev = dev;
tun->flags = flags;
+ tun_net_init(dev);
+
if (strchr(dev->name, '%')) {
err = dev_alloc_name(dev, dev->name);
if (err < 0)
config WAN
bool "Wan interfaces support"
---help---
- Wide Area Networks (WANs), such as X.25, frame relay and leased
+ Wide Area Networks (WANs), such as X.25, Frame Relay and leased
lines, are used to interconnect Local Area Networks (LANs) over vast
distances with data transfer rates significantly higher than those
achievable with commonly used asynchronous modem connections.
+
Usually, a quite expensive external device called a `WAN router' is
- needed to connect to a WAN.
+ needed to connect to a WAN. As an alternative, a relatively
+ inexpensive WAN interface card can allow your Linux box to directly
+ connect to a WAN.
- As an alternative, a relatively inexpensive WAN interface card can
- allow your Linux box to directly connect to a WAN. If you have one
- of those cards and wish to use it under Linux, say Y here and also
- to the WAN driver for your card, below.
+ If you have one of those cards and wish to use it under Linux,
+ say Y here and also to the WAN driver for your card.
If unsure, say N.
tristate "Comtrol Hostess SV-11 support"
depends on WAN && ISA && m
help
- This is a network card for low speed synchronous serial links, at
- up to 256Kbps. It supports both PPP and Cisco HDLC.
+ Driver for the Comtrol Hostess SV-11 network card, which
+ operates on low speed synchronous serial links at up to
+ 256Kbps, supporting PPP and Cisco HDLC.
- At this point, the driver can only be compiled as a module.
+ The driver will be compiled as a module: the
+ module will be called hostess_sv11.
# The COSA/SRP driver has not been tested as non-modular yet.
config COSA
tristate "COSA/SRP sync serial boards support"
depends on WAN && ISA && m
---help---
- This is a driver for COSA and SRP synchronous serial boards. These
- boards allow to connect synchronous serial devices (for example
+ Driver for COSA and SRP synchronous serial boards.
+
+ These boards allow you to connect synchronous serial devices (for example
base-band modems, or any other device with the X.21, V.24, V.35 or
V.36 interface) to your Linux box. The cards can work as the
character device, synchronous PPP network device, or the Cisco HDLC
network device.
- To actually use the COSA or SRP board, you will need user-space
- utilities for downloading the firmware to the cards and to set them
- up. Look at the <http://www.fi.muni.cz/~kas/cosa/> for more
- information about the cards (including the pointer to the user-space
- utilities). You can also read the comment at the top of the
- <file:drivers/net/wan/cosa.c> for details about the cards and the driver
- itself.
+ You will need user-space utilities for the COSA or SRP boards to
+ download the firmware to the cards and to set them up. Look at
+ <http://www.fi.muni.cz/~kas/cosa/> for more information. You can also
+ read the comment at the top of <file:drivers/net/wan/cosa.c> for
+ details about the cards and the driver itself.
- The driver will be compiled as a module: the module will be called cosa.
+ The driver will be compiled as a module: the
+ module will be called cosa.
#
# COMX drivers
tristate "MultiGate (COMX) synchronous serial boards support"
depends on WAN && (ISA || PCI) && BROKEN
---help---
- Say Y if you want to use any board from the MultiGate (COMX) family.
- These boards are synchronous serial adapters for the PC,
- manufactured by ITConsult-Pro Co, Hungary.
+ Drivers for the PC synchronous serial adapters by
+ ITConsult-Pro Co, Hungary.
- Read <file:Documentation/networking/comx.txt> for help on
- configuring and using COMX interfaces. Further info on these cards
- can be found at <http://www.itc.hu/> or <info@itc.hu>.
+ Read <file:Documentation/networking/comx.txt> for help on configuring
+ and using COMX interfaces. Further info on these cards can be found
+ at <http://www.itc.hu/> or <info@itc.hu>.
- You must say Y to "/proc file system support" (CONFIG_PROC_FS) to
- use this driver.
+ Say Y if you want to use any board from the MultiGate (COMX)
+ family, you must also say Y to "/proc file system support"
+ (CONFIG_PROC_FS) in order to use these drivers.
To compile this driver as a module, choose M here: the
module will be called comx.
tristate "Support for COMX/CMX/HiCOMX boards"
depends on COMX
help
- Hardware driver for the 'CMX', 'COMX' and 'HiCOMX' boards from the
- MultiGate family. Say Y if you have one of these.
+ Driver for the 'CMX', 'COMX' and 'HiCOMX' boards.
You will need additional firmware to use these cards, which are
downloadable from <ftp://ftp.itc.hu/>.
+ Say Y if you have a board like this.
+
To compile this driver as a module, choose M here: the
module will be called comx-hw-comx.
tristate "Support for LoCOMX board"
depends on COMX
help
- Hardware driver for the 'LoCOMX' board from the MultiGate family.
+ Driver for the 'LoCOMX' board.
+
Say Y if you have a board like this.
To compile this driver as a module, choose M here: the
tristate "Support for MixCOM board"
depends on COMX
---help---
- Hardware driver for the 'MixCOM' board from the MultiGate family.
- Say Y if you have a board like this.
+ Driver for the 'MixCOM' board.
If you want to use the watchdog device on this card, you should
select it in the Watchdog Cards section of the Character Devices
driver for the flash ROM of this card is available separately on
<ftp://ftp.itc.hu/>.
+ Say Y if you have a board like this.
+
To compile this driver as a module, choose M here: the
module will be called comx-hw-mixcom.
tristate "Support for MUNICH based boards: SliceCOM, PCICOM (WelCOM)"
depends on COMX
---help---
- Hardware driver for the 'SliceCOM' (channelized E1) and 'PciCOM'
- boards (X21) from the MultiGate family.
+ Driver for the 'SliceCOM' (channelized E1) and 'PciCOM' (X21) boards.
+
+ Read <file:Documentation/networking/slicecom.txt> for help on
+ configuring and using SliceCOM interfaces. Further info on these
+ cards can be found at <http://www.itc.hu> or <info@itc.hu>.
+
+ Say Y if you have a board like this.
To compile this driver as a module, choose M here: the
module will be called comx-hw-munich.
- Read linux/Documentation/networking/slicecom.txt for help on
- configuring and using SliceCOM interfaces. Further info on these cards
- can be found at http://www.itc.hu or <info@itc.hu>.
-
config COMX_PROTO_PPP
tristate "Support for HDLC and syncPPP protocols on MultiGate boards"
depends on COMX
help
- Cisco-HDLC and synchronous PPP protocol driver for all MultiGate
- boards. Say Y if you want to use either protocol on your MultiGate
- boards.
+ Cisco-HDLC and synchronous PPP protocol driver.
- To compile this as a module, choose M here: the module will be called
- comx-proto-ppp.
+ Say Y if you want to use either protocol.
+
+ To compile this as a module, choose M here: the
+ module will be called comx-proto-ppp.
config COMX_PROTO_LAPB
tristate "Support for LAPB protocol on MultiGate boards"
depends on WAN && (COMX!=n && LAPB=m && LAPB || LAPB=y && COMX)
help
- LAPB protocol driver for all MultiGate boards. Say Y if you
- want to use this protocol on your MultiGate boards.
+ LAPB protocol driver.
+
+ Say Y if you want to use this protocol.
- To compile this as a module, choose M here: the module will be called
- comx-proto-lapb.
+ To compile this as a module, choose M here: the
+ module will be called comx-proto-lapb.
config COMX_PROTO_FR
tristate "Support for Frame Relay on MultiGate boards"
depends on COMX
help
- Frame Relay protocol driver for all MultiGate boards. Say Y if you
- want to use this protocol on your MultiGate boards.
+ Frame Relay protocol driver.
+
+ Say Y if you want to use this protocol.
- To compile this as a module, choose M here: the module will be called
- comx-proto-fr.
+ To compile this as a module, choose M here: the
+ module will be called comx-proto-fr.
config DSCC4
tristate "Etinc PCISYNC serial board support"
depends on WAN && PCI && m
help
- This is a driver for Etinc PCISYNC boards based on the Infineon
- (ex. Siemens) DSCC4 chipset. It is supposed to work with the four
- ports card. Take a look at <http://www.cogenit.fr/dscc4/>
- for further informations about the driver and his configuration.
+ Driver for Etinc PCISYNC boards based on the Infineon (ex. Siemens)
+ DSCC4 chipset.
- To compile this driver as a module, choose M here: the module
- will be called dscc4.
+ This is supposed to work with the four port card. Take a look at
+ <http://www.cogenit.fr/dscc4/> for further information about the
+ driver.
+
+ To compile this driver as a module, choose M here: the
+ module will be called dscc4.
config DSCC4_PCISYNC
bool "Etinc PCISYNC features"
bool "Hard reset support"
depends on DSCC4
help
- Various DSCC4 bugs forbid any reliable software reset of the asic.
+ Various DSCC4 bugs forbid any reliable software reset of the ASIC.
As a replacement, some vendors provide a way to assert the PCI #RST
pin of DSCC4 through the GPIO port of the card. If you choose Y,
the driver will make use of this feature before module removal
- (i.e. rmmod).
- The feature is known to be available on Commtech's cards.
- Contact your manufacturer for details.
+ (i.e. rmmod). The feature is known to be available on Commtech's
+ cards. Contact your manufacturer for details.
Say Y if your card supports this feature.
tristate "LanMedia Corp. SSI/V.35, T1/E1, HSSI, T3 boards"
depends on WAN && PCI
---help---
- This is a driver for the following Lan Media family of serial
- boards.
+ Driver for the following Lan Media family of serial boards:
- LMC 1000 board allows you to connect synchronous serial devices (for
- example base-band modems, or any other device with the X.21, V.24,
- V.35 or V.36 interface) to your Linux box.
+ - LMC 1000 board allows you to connect synchronous serial devices
+ (for example base-band modems, or any other device with the X.21,
+ V.24, V.35 or V.36 interface) to your Linux box.
- LMC 1200 with on board DSU board allows you to connect your Linux
+ - LMC 1200 with on board DSU board allows you to connect your Linux
box directly to a T1 or E1 circuit.
- LMC 5200 board provides a HSSI interface capable of running up to
- 52 mbits per second.
+ - LMC 5200 board provides a HSSI interface capable of running up to
+ 52 Mbits per second.
- LMC 5245 board connects directly to a T3 circuit saving the
+ - LMC 5245 board connects directly to a T3 circuit saving the
additional external hardware.
- To change setting such as syncPPP vs cisco HDLC or clock source you
- will need lmcctl. It is available at <ftp://ftp.lanmedia.com/>.
+ To change setting such as syncPPP vs Cisco HDLC or clock source you
+ will need lmcctl. It is available at <ftp://ftp.lanmedia.com/>
+ (broken link).
- To compile this driver as a module, choose M here: the module
- will be called lmc.
+ To compile this driver as a module, choose M here: the
+ module will be called lmc.
# There is no way to detect a Sealevel board. Force it modular
config SEALEVEL_4021
help
This is a driver for the Sealevel Systems ACB 56 serial I/O adapter.
- This driver can only be compiled as a module ( = code which can be
- inserted in and removed from the running kernel whenever you want).
- If you want to do that, say M here. The module will be called
- sealevel.
+ The driver will be compiled as a module: the
+ module will be called sealevel.
config SYNCLINK_SYNCPPP
tristate "SyncLink HDLC/SYNCPPP support"
depends on WAN
help
Enables HDLC/SYNCPPP support for the SyncLink WAN driver.
- Normally the SyncLink WAN driver works with the main PPP
- driver (ppp.c) and pppd program. HDLC/SYNCPPP support allows use
- of the Cisco HDLC/PPP driver (syncppp.c).
- The SyncLink WAN driver (in character devices) must also be enabled.
+
+ Normally the SyncLink WAN driver works with the main PPP driver
+ <file:drivers/net/ppp_generic.c> and pppd program.
+ HDLC/SYNCPPP support allows use of the Cisco HDLC/PPP driver
+ <file:drivers/net/wan/syncppp.c>. The SyncLink WAN driver (in
+ character devices) must also be enabled.
# Generic HDLC
config HDLC
tristate "Generic HDLC layer"
depends on WAN
help
- Say Y to this option if your Linux box contains a WAN card supported
- by this driver and you are planning to connect the box to a WAN
- ( = Wide Area Network). You will need supporting software from
- <http://hq.pm.waw.pl/hdlc/>.
+ Say Y to this option if your Linux box contains a WAN (Wide Area
+ Network) card supported by this driver and you are planning to
+ connect the box to a WAN.
+
+ You will need supporting software from <http://hq.pm.waw.pl/hdlc/>.
Generic HDLC driver currently supports raw HDLC, Cisco HDLC, Frame
Relay, synchronous Point-to-Point Protocol (PPP) and X.25.
- To compile this driver as a module, choose M here: the module
- will be called hdlc.
+ To compile this driver as a module, choose M here: the
+ module will be called hdlc.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_RAW
bool "Raw HDLC support"
depends on HDLC
help
- Say Y to this option if you want generic HDLC driver to support
- raw HDLC over WAN (Wide Area Network) connections.
+ Generic HDLC driver supporting raw HDLC over WAN connections.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_RAW_ETH
bool "Raw HDLC Ethernet device support"
depends on HDLC
help
- Say Y to this option if you want generic HDLC driver to support
- raw HDLC Ethernet device emulation over WAN (Wide Area Network)
- connections.
+ Generic HDLC driver supporting raw HDLC Ethernet device emulation
+ over WAN connections.
+
You will need it for Ethernet over HDLC bridges.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_CISCO
bool "Cisco HDLC support"
depends on HDLC
help
- Say Y to this option if you want generic HDLC driver to support
- Cisco HDLC over WAN (Wide Area Network) connections.
+ Generic HDLC driver supporting Cisco HDLC over WAN connections.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_FR
bool "Frame Relay support"
depends on HDLC
help
- Say Y to this option if you want generic HDLC driver to support
- Frame-Relay protocol over WAN (Wide Area Network) connections.
+ Generic HDLC driver supporting Frame Relay over WAN connections.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_PPP
bool "Synchronous Point-to-Point Protocol (PPP) support"
depends on HDLC
help
- Say Y to this option if you want generic HDLC driver to support
- PPP over WAN (Wide Area Network) connections.
+ Generic HDLC driver supporting PPP over WAN connections.
- If unsure, say N here.
+ If unsure, say N.
config HDLC_X25
bool "X.25 protocol support"
depends on HDLC && (LAPB=m && HDLC=m || LAPB=y)
help
- Say Y to this option if you want generic HDLC driver to support
- X.25 protocol over WAN (Wide Area Network) connections.
+ Generic HDLC driver supporting X.25 over WAN connections.
- If unsure, say N here.
+ If unsure, say N.
comment "X.25/LAPB support is disabled"
depends on WAN && HDLC && (LAPB!=m || HDLC!=m) && LAPB!=y
tristate "Goramo PCI200SYN support"
depends on HDLC && PCI
help
- This driver is for PCI200SYN cards made by Goramo sp. j.
+ Driver for PCI200SYN cards by Goramo sp. j.
+
If you have such a card, say Y here and see
- <http://hq.pm.waw.pl/pub/hdlc/>
+ <http://hq.pm.waw.pl/hdlc/>.
- If you want to compile the driver as a module ( = code which can be
- inserted in and removed from the running kernel whenever you want),
- say M here and read <file:Documentation/modules.txt>. The module
- will be called pci200syn.
+ To compile this as a module, choose M here: the
+ module will be called pci200syn.
- If unsure, say N here.
+ If unsure, say N.
config WANXL
tristate "SBE Inc. wanXL support"
depends on HDLC && PCI
help
- This driver is for wanXL PCI cards made by SBE Inc. If you have
- such a card, say Y here and see <http://hq.pm.waw.pl/pub/hdlc/>.
+ Driver for wanXL PCI cards by SBE Inc.
- If you want to compile the driver as a module ( = code which can be
- inserted in and removed from the running kernel whenever you want),
- say M here and read <file:Documentation/kbuild/modules.txt>. The module
- will be called wanxl.
+ If you have such a card, say Y here and see
+ <http://hq.pm.waw.pl/hdlc/>.
+
+ To compile this as a module, choose M here: the
+ module will be called wanxl.
- If unsure, say N here.
+ If unsure, say N.
config WANXL_BUILD_FIRMWARE
bool "rebuild wanXL firmware"
depends on WANXL
help
- This option allows you to rebuild firmware run by the QUICC
- processor. It requires as68k, ld68k and hexdump programs.
- You should never need this option.
+ Allows you to rebuild firmware run by the QUICC processor.
+ It requires as68k, ld68k and hexdump programs.
- If unsure, say N here.
+ You should never need this option; say N.
config PC300
tristate "Cyclades-PC300 support (RS-232/V.35, X.21, T1/E1 boards)"
depends on HDLC && PCI
---help---
- This is a driver for the Cyclades-PC300 synchronous communication
- boards. These boards provide synchronous serial interfaces to your
+ Driver for the Cyclades-PC300 synchronous communication boards.
+
+ These boards provide synchronous serial interfaces to your
Linux box (interfaces currently available are RS-232/V.35, X.21 and
T1/E1). If you wish to support Multilink PPP, please select the
- option below this one and read the file README.mlppp provided by PC300
+ option later and read the file README.mlppp provided by PC300
package.
- To compile this as a module, choose M here: the module will be
- called pc300.
+ To compile this as a module, choose M here: the module
+ will be called pc300.
- If you haven't heard about it, it's safe to say N.
+ If unsure, say N.
config PC300_MLPPP
bool "Cyclades-PC300 MLPPP support"
depends on PC300 && PPP_MULTILINK && PPP_SYNC_TTY && HDLC_PPP
help
- Say 'Y' to this option if you are planning to use Multilink PPP over the
- PC300 synchronous communication boards.
+ Multilink PPP over the PC300 synchronous communication boards.
comment "Cyclades-PC300 MLPPP support is disabled."
depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
tristate "SDL RISCom/N2 support"
depends on HDLC && ISA
help
- This driver is for RISCom/N2 single or dual channel ISA cards
- made by SDL Communications Inc. If you have such a card,
- say Y here and see <http://hq.pm.waw.pl/pub/hdlc/>.
+ Driver for RISCom/N2 single or dual channel ISA cards by
+ SDL Communications Inc.
+
+ If you have such a card, say Y here and see
+ <http://hq.pm.waw.pl/hdlc/>.
Note that N2csu and N2dds cards are not supported by this driver.
To compile this driver as a module, choose M here: the module
will be called n2.
- If unsure, say N here.
+ If unsure, say N.
config C101
tristate "Moxa C101 support"
depends on HDLC && ISA
help
- This driver is for C101 SuperSync ISA cards made by Moxa
- Technologies Co., Ltd. If you have such a card,
- say Y here and see <http://hq.pm.waw.pl/pub/hdlc/>
+ Driver for C101 SuperSync ISA cards by Moxa Technologies Co., Ltd.
- To compile this driver as a module, choose M here: the module
- will be called c101.
+ If you have such a card, say Y here and see
+ <http://hq.pm.waw.pl/hdlc/>.
+
+ To compile this driver as a module, choose M here: the
+ module will be called c101.
- If unsure, say N here.
+ If unsure, say N.
config FARSYNC
tristate "FarSync T-Series support"
depends on HDLC && PCI
---help---
- This driver supports the FarSync T-Series X.21 (and V.35/V.24) cards
- from FarSite Communications Ltd.
+ Support for the FarSync T-Series X.21 (and V.35/V.24) cards by
+ FarSite Communications Ltd.
+
Synchronous communication is supported on all ports at speeds up to
8Mb/s (128K on V.24) using synchronous PPP, Cisco HDLC, raw HDLC,
Frame Relay or X.25/LAPB.
- To compile this driver as a module, choose M here: the module will be
- called farsync. If you want the module to be automatically loaded
- when the interface is referenced then you should add
- "alias hdlcX farsync" to /etc/modules.conf for each interface, where
- X is 0, 1, 2, ...
+ If you want the module to be automatically loaded when the interface
+ is referenced then you should add "alias hdlcX farsync" to
+ /etc/modprobe.conf for each interface, where X is 0, 1, 2, ..., or
+ simply use "alias hdlc* farsync" to indicate all of them.
+
+ To compile this driver as a module, choose M here: the
+ module will be called farsync.
config DLCI
- tristate "Frame relay DLCI support"
+ tristate "Frame Relay DLCI support"
depends on WAN
---help---
- This is support for the frame relay protocol; frame relay is a fast
- low-cost way to connect to a remote Internet access provider or to
- form a private wide area network. The one physical line from your
- box to the local "switch" (i.e. the entry point to the frame relay
- network, usually at the phone company) can carry several logical
- point-to-point connections to other computers connected to the frame
- relay network. For a general explanation of the protocol, check out
- <http://www.frforum.com/> on the WWW. To use frame relay, you need
- supporting hardware (called FRAD) and certain programs from the
- net-tools package as explained in
+ Support for the Frame Relay protocol.
+
+ Frame Relay is a fast low-cost way to connect to a remote Internet
+ access provider or to form a private wide area network. The one
+ physical line from your box to the local "switch" (i.e. the entry
+ point to the Frame Relay network, usually at the phone company) can
+ carry several logical point-to-point connections to other computers
+ connected to the Frame Relay network. For a general explanation of
+ the protocol, check out <http://www.mplsforum.org/>.
+
+ To use frame relay, you need supporting hardware (called FRAD) and
+ certain programs from the net-tools package as explained in
<file:Documentation/networking/framerelay.txt>.
- To compile this driver as a module, choose M here: the module will be
- called dlci.
+ To compile this driver as a module, choose M here: the
+ module will be called dlci.
config DLCI_COUNT
int "Max open DLCI"
depends on DLCI
default "24"
help
- This is the maximal number of logical point-to-point frame relay
- connections (the identifiers of which are called DCLIs) that
- the driver can handle. The default is probably fine.
+ Maximal number of logical point-to-point frame relay connections
+ (the identifiers of which are called DLCIs) that the driver can
+ handle.
+
+ The default is probably fine.
config DLCI_MAX
int "Max DLCI per device"
depends on DLCI
default "8"
help
- You can specify here how many logical point-to-point frame relay
- connections (the identifiers of which are called DCLIs) should be
- handled by each of your hardware frame relay access devices. Go with
- the default.
+ How many logical point-to-point frame relay connections (the
+ identifiers of which are called DLCIs) should be handled by each
+ of your hardware frame relay access devices.
+
+ Go with the default.
config SDLA
tristate "SDLA (Sangoma S502/S508) support"
depends on DLCI && ISA
help
- Say Y here if you need a driver for the Sangoma S502A, S502E, and
- S508 Frame Relay Access Devices. These are multi-protocol cards, but
- only frame relay is supported by the driver at this time. Please
- read <file:Documentation/networking/framerelay.txt>.
+ Driver for the Sangoma S502A, S502E, and S508 Frame Relay Access
+ Devices.
- To compile this driver as a module, choose M here: the module will be
- called sdla.
+ These are multi-protocol cards, but only Frame Relay is supported
+ by the driver at this time. Please read
+ <file:Documentation/networking/framerelay.txt>.
+
+ To compile this driver as a module, choose M here: the
+ module will be called sdla.
# Wan router core.
config WAN_ROUTER_DRIVERS
bool "WAN router drivers"
depends on WAN && WAN_ROUTER
---help---
- If you have a WAN interface card and you want your Linux box to act
- as a WAN router, thereby connecting you Local Area Network to the
- outside world over the WAN connection, say Y here and then to the
- driver for your card below. In addition, you need to say Y to "Wan
- Router".
+ Connect your LAN to a WAN via a Linux box.
+ Select the driver for your card and remember to say Y to "Wan Router."
You will need the wan-tools package which is available from
- <ftp://ftp.sangoma.com/>. Read
- <file:Documentation/networking/wan-router.txt> for more information.
+ <ftp://ftp.sangoma.com/>. For more information read:
+ <file:Documentation/networking/wan-router.txt>.
Note that the answer to this question won't directly affect the
kernel: saying N will just cause the configurator to skip all
- the questions about WAN router drivers. If unsure, say N.
+ the questions about WAN router drivers.
+
+ If unsure, say N.
config VENDOR_SANGOMA
tristate "Sangoma WANPIPE(tm) multiprotocol cards"
depends on WAN_ROUTER_DRIVERS && WAN_ROUTER && (PCI || ISA) && BROKEN
---help---
- WANPIPE from Sangoma Technologies Inc. (<http://www.sangoma.com/>)
+ Driver for S514-PCI/ISA Synchronous Data Link Adapters (SDLA).
+
+ WANPIPE from Sangoma Technologies Inc. <http://www.sangoma.com/>
is a family of intelligent multiprotocol WAN adapters with data
- transfer rates up to 4Mbps. They are also known as Synchronous
- Data Link Adapters (SDLA) and are designated as S514-PCI or
- S508-ISA. These cards support
+ transfer rates up to 4Mbps. Cards support:
- X.25, Frame Relay, PPP, Cisco HDLC protocols.
- - API support for protocols like HDLC (LAPB),
- HDLC Streaming, X.25, Frame Relay and BiSync.
+ - API for protocols like HDLC (LAPB), HDLC Streaming, X.25,
+ Frame Relay and BiSync.
- Ethernet Bridging over Frame Relay protocol.
- Async PPP (Modem Dialup)
- If you have one or more of these cards, say M to this option; you
- may then also want to read the file
- <file:Documentation/networking/wanpipe.txt>. The next questions
- will ask you about the protocols you want the driver to support.
+ The next questions will ask you about the protocols you want
+ the driver to support.
- To compile this driver as a module, choose M here: the module will
- be called wanpipe.
+ If you have one or more of these cards, say M to this option;
+ and read <file:Documentation/networking/wanpipe.txt>.
+
+ To compile this driver as a module, choose M here: the
+ module will be called wanpipe.
config WANPIPE_CHDLC
bool "WANPIPE Cisco HDLC support"
depends on VENDOR_SANGOMA
---help---
- Say Y to this option if you are planning to connect a WANPIPE card
- to a leased line using the Cisco HDLC protocol. This now supports
- Dual Port Cisco HDLC on the S514-PCI/S508-ISA cards.
- This support also allows user to build applications using the
- HDLC streaming API.
+ Connect a WANPIPE card to a leased line using the Cisco HDLC protocol.
+
+ - Supports Dual Port Cisco HDLC on the S514-PCI/S508-ISA cards
+ which allows users to build applications using the HDLC streaming API.
- CHDLC Streaming driver also supports MULTILINK PPP
- support that can bind multiple WANPIPE T1 cards into
- a single logical channel.
+ - CHDLC Streaming MULTILINK PPP that can bind multiple WANPIPE T1
+ cards into a single logical channel.
- If you say N, the Cisco HDLC support and
- HDLC streaming API and MULTILINK PPP will not be
- included in the driver.
+ Say Y and the Cisco HDLC support, HDLC streaming API and
+ MULTILINK PPP will be included in the driver.
config WANPIPE_FR
bool "WANPIPE Frame Relay support"
depends on VENDOR_SANGOMA
help
- Say Y to this option if you are planning to connect a WANPIPE card
- to a frame relay network, or use frame relay API to develope
- custom applications over the Frame Relay protocol.
- This feature also contains the Ethernet Bridging over Frame Relay,
- where a WANPIPE frame relay link can be directly connected to the
- Linux kernel bridge. If you say N, the frame relay support will
- not be included in the driver. The Frame Relay option is
- supported on S514-PCI and S508-ISA cards.
+ Connect a WANPIPE card to a Frame Relay network, or use the Frame
+ Relay API to develop custom applications.
+
+ Contains the Ethernet Bridging over Frame Relay feature, where
+ a WANPIPE frame relay link can be directly connected to the Linux
+ kernel bridge. The Frame Relay option is supported on S514-PCI
+ and S508-ISA cards.
+
+ Say Y and the Frame Relay support will be included in the driver.
config WANPIPE_X25
bool "WANPIPE X.25 support"
depends on VENDOR_SANGOMA
help
- Say Y to this option if you are planning to connect a WANPIPE card
- to an X.25 network. Note, this feature also includes the X.25 API
- support used to develope custom applications over the X.25 protocol.
- If you say N, the X.25 support will not be included in the driver.
- The X.25 option is supported on S514-PCI and S508-ISA cards.
+ Connect a WANPIPE card to an X.25 network.
+
+ Includes the X.25 API support for custom applications over the
+ X.25 protocol. The X.25 option is supported on S514-PCI and
+ S508-ISA cards.
+
+ Say Y and the X.25 support will be included in the driver.
config WANPIPE_PPP
bool "WANPIPE PPP support"
depends on VENDOR_SANGOMA
help
- Say Y to this option if you are planning to connect a WANPIPE card
- to a leased line using Point-to-Point protocol (PPP). If you say N,
- the PPP support will not be included in the driver. The PPP option
- is supported on S514-PCI/S508-ISA cards.
+ Connect a WANPIPE card to a leased line using Point-to-Point
+ Protocol (PPP).
+
+ The PPP option is supported on S514-PCI/S508-ISA cards.
+
+ Say Y and the PPP support will be included in the driver.
config WANPIPE_MULTPPP
bool "WANPIPE Multi-Port PPP support"
depends on VENDOR_SANGOMA
help
- Say Y to this option if you are planning to connect a WANPIPE card
- to a leased line using Point-to-Point protocol (PPP). Note, the
- MultiPort PPP uses the Linux Kernel SyncPPP protocol over the
- Sangoma HDLC Streaming adapter. In this case each Sangoma adapter
- port can support an independent PPP connection. For example, a
- single Quad-Port PCI adapter can support up to four independent
- PPP links. If you say N,the PPP support will not be included in the
- driver. The PPP option is supported on S514-PCI/S508-ISA cards.
+ Connect a WANPIPE card to a leased line using Point-to-Point
+ Protocol (PPP).
+
+ Uses in-kernel SyncPPP protocol over the Sangoma HDLC Streaming
+ adapter. In this case each Sangoma adapter port can support an
+ independent PPP connection. For example, a single Quad-Port PCI
+ adapter can support up to four independent PPP links. The PPP
+ option is supported on S514-PCI/S508-ISA cards.
+
+ Say Y and the Multi-Port PPP support will be included in the driver.
config CYCLADES_SYNC
tristate "Cyclom 2X(tm) cards (EXPERIMENTAL)"
depends on WAN_ROUTER_DRIVERS && (PCI || ISA)
---help---
- Cyclom 2X from Cyclades Corporation (<http://www.cyclades.com/> and
- <http://www.cyclades.com.br/>) is an intelligent multiprotocol WAN
- adapter with data transfer rates up to 512 Kbps. These cards support
- the X.25 and SNA related protocols. If you have one or more of these
- cards, say Y to this option. The next questions will ask you about
- the protocols you want the driver to support (for now only X.25 is
- supported).
+ Cyclom 2X from Cyclades Corporation <http://www.cyclades.com/> is an
+ intelligent multiprotocol WAN adapter with data transfer rates up to
+ 512 Kbps. These cards support the X.25 and SNA related protocols.
While no documentation is available at this time please grab the
wanconfig tarball in
<ftp://ftp.sangoma.com/>).
Feel free to contact me or the cycsyn-devel mailing list at
- acme@conectiva.com.br and cycsyn-devel@bazar.conectiva.com.br for
- additional details, I hope to have documentation available as soon
- as possible. (Cyclades Brazil is writing the Documentation).
+ <acme@conectiva.com.br> and <cycsyn-devel@bazar.conectiva.com.br> for
+ additional details; I hope to have documentation available as soon as
+ possible (Cyclades Brazil is writing the documentation).
+
+ The next questions will ask you about the protocols you want the
+ driver to support (for now only X.25 is supported).
- To compile this driver as a module, choose M here: the module will be
- called cyclomx.
+ If you have one or more of these cards, say Y to this option.
+
+ To compile this driver as a module, choose M here: the
+ module will be called cyclomx.
config CYCLOMX_X25
bool "Cyclom 2X X.25 support (EXPERIMENTAL)"
depends on CYCLADES_SYNC
help
- Say Y to this option if you are planning to connect a Cyclom 2X card
- to an X.25 network.
+ Connect a Cyclom 2X card to an X.25 network.
- If you say N, the X.25 support will not be included in the driver
- (saves about 11 KB of kernel memory).
+ Enabling X.25 support will enlarge your kernel by about 11 kB.
# X.25 network drivers
config LAPBETHER
tristate "LAPB over Ethernet driver (EXPERIMENTAL)"
depends on WAN && LAPB && X25
---help---
- This is a driver for a pseudo device (typically called /dev/lapb0)
- which allows you to open an LAPB point-to-point connection to some
- other computer on your Ethernet network. In order to do this, you
- need to say Y or M to the driver for your Ethernet card as well as
- to "LAPB Data Link Driver".
+ Driver for a pseudo device (typically called /dev/lapb0) which allows
+ you to open an LAPB point-to-point connection to some other computer
+ on your Ethernet network.
- To compile this driver as a module, choose M here: the module
- will be called lapbether. If unsure, say N.
+ In order to do this, you need to say Y or M to the driver for your
+ Ethernet card as well as to "LAPB Data Link Driver".
+
+ To compile this driver as a module, choose M here: the
+ module will be called lapbether.
+
+ If unsure, say N.
config X25_ASY
tristate "X.25 async driver (EXPERIMENTAL)"
depends on WAN && LAPB && X25
---help---
- This is a driver for sending and receiving X.25 frames over regular
- asynchronous serial lines such as telephone lines equipped with
- ordinary modems. Experts should note that this driver doesn't
- currently comply with the asynchronous HDLS framing protocols in
- CCITT recommendation X.25.
+ Send and receive X.25 frames over regular asynchronous serial
+ lines such as telephone lines equipped with ordinary modems.
- To compile this driver as a module, choose M here: the module
- will be called x25_asy. If unsure, say N.
+ Experts should note that this driver doesn't currently comply with
+ the asynchronous HDLC framing protocols in CCITT recommendation X.25.
+
+ To compile this driver as a module, choose M here: the
+ module will be called x25_asy.
+
+ If unsure, say N.
config SBNI
tristate "Granch SBNI12 Leased Line adapter support"
depends on WAN && X86
---help---
- This is a driver for ISA SBNI12-xx cards which are low cost
- alternatives to leased line modems. Say Y if you want to insert
- the driver into the kernel or say M to compile it as a module (the
- module will be called sbni).
+ Driver for ISA SBNI12-xx cards which are low cost alternatives to
+ leased line modems.
You can find more information and last versions of drivers and
utilities at <http://www.granch.ru/>. If you have any question you
- can send email to sbni@granch.ru.
+ can send email to <sbni@granch.ru>.
- Say N if unsure.
+ To compile this driver as a module, choose M here: the
+ module will be called sbni.
+
+ If unsure, say N.
config SBNI_MULTILINE
bool "Multiple line feature support"
depends on SBNI
help
Schedule traffic for some parallel lines, via SBNI12 adapters.
+
If you have two computers connected with two parallel lines it's
possible to increase transfer rate nearly twice. You should have
a program named 'sbniconfig' to configure adapters.
- Say N if unsure.
+ If unsure, say N.
endmenu
typedef struct card_s {
- hdlc_device hdlc; /* HDLC device struct - must be first */
+ struct net_device *dev;
spinlock_t lock; /* TX lock */
u8 *win0base; /* ISA window base address */
u32 phy_winbase; /* ISA physical base address */
static void sca_msci_intr(port_t *port)
{
+ struct net_device *dev = port_to_dev(port);
card_t* card = port_to_card(port);
u8 stat = sca_in(MSCI1_OFFSET + ST1, card); /* read MSCI ST1 status */
sca_out(stat & ST1_UDRN, MSCI0_OFFSET + ST1, card);
if (stat & ST1_UDRN) {
- port->hdlc.stats.tx_errors++; /* TX Underrun error detected */
- port->hdlc.stats.tx_fifo_errors++;
+ struct net_device_stats *stats = hdlc_stats(dev);
+ stats->tx_errors++; /* TX Underrun error detected */
+ stats->tx_fifo_errors++;
}
/* Reset MSCI CDCD status bit - uses ch#2 DCD input */
if (stat & ST1_CDCD)
hdlc_set_carrier(!(sca_in(MSCI1_OFFSET + ST3, card) & ST3_DCD),
- &port->hdlc);
+ dev);
}
static int c101_open(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
int result;
- result = hdlc_open(hdlc);
+ result = hdlc_open(dev);
if (result)
return result;
writeb(1, port->win0base + C101_DTR);
sca_out(0, MSCI1_OFFSET + CTL, port); /* RTS uses ch#2 output */
- sca_open(hdlc);
+ sca_open(dev);
/* DCD is connected to port 2 !@#$%^& - disable MSCI0 CDCD interrupt */
sca_out(IE1_UDRN, MSCI0_OFFSET + IE1, port);
sca_out(IE0_TXINT, MSCI0_OFFSET + IE0, port);
- hdlc_set_carrier(!(sca_in(MSCI1_OFFSET + ST3, port) & ST3_DCD), hdlc);
+ hdlc_set_carrier(!(sca_in(MSCI1_OFFSET + ST3, port) & ST3_DCD), dev);
printk(KERN_DEBUG "0x%X\n", sca_in(MSCI1_OFFSET + ST3, port));
/* enable MSCI1 CDCD interrupt */
static int c101_close(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
- sca_close(hdlc);
+ sca_close(dev);
writeb(0, port->win0base + C101_DTR);
sca_out(CTL_NORTS, MSCI1_OFFSET + CTL, port);
- hdlc_close(hdlc);
+ hdlc_close(dev);
return 0;
}
{
const size_t size = sizeof(sync_serial_settings);
sync_serial_settings new_line, *line = ifr->ifr_settings.ifs_ifsu.sync;
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
#ifdef DEBUG_RINGS
if (cmd == SIOCDEVPRIVATE) {
- sca_dump_rings(hdlc);
+ sca_dump_rings(dev);
printk(KERN_DEBUG "MSCI1: ST: %02x %02x %02x %02x\n",
sca_in(MSCI1_OFFSET + ST0, port),
sca_in(MSCI1_OFFSET + ST1, port),
release_mem_region(card->phy_winbase, C101_MAPPED_RAM_SIZE);
}
+ free_netdev(card->dev);
+
kfree(card);
}
static int __init c101_run(unsigned long irq, unsigned long winbase)
{
struct net_device *dev;
+ hdlc_device *hdlc;
card_t *card;
int result;
}
memset(card, 0, sizeof(card_t));
+ card->dev = alloc_hdlcdev(card);
+ if (!card->dev) {
+ printk(KERN_ERR "c101: unable to allocate memory\n");
+ kfree(card);
+ return -ENOBUFS;
+ }
+
if (request_irq(irq, sca_intr, 0, devname, card)) {
printk(KERN_ERR "c101: could not allocate IRQ\n");
c101_destroy_card(card);
sca_init(card, 0);
- dev = hdlc_to_dev(&card->hdlc);
+ dev = port_to_dev(card);
+ hdlc = dev_to_hdlc(dev);
spin_lock_init(&card->lock);
SET_MODULE_OWNER(dev);
dev->do_ioctl = c101_ioctl;
dev->open = c101_open;
dev->stop = c101_close;
- card->hdlc.attach = sca_attach;
- card->hdlc.xmit = sca_xmit;
+ hdlc->attach = sca_attach;
+ hdlc->xmit = sca_xmit;
card->settings.clock_type = CLOCK_EXT;
- result = register_hdlc_device(&card->hdlc);
+ result = register_hdlc_device(dev);
if (result) {
printk(KERN_WARNING "c101: unable to register hdlc device\n");
c101_destroy_card(card);
return result;
}
+ /* XXX: are we OK with having that done when card is already up? */
+
sca_init_sync_port(card); /* Set up C101 memory */
- hdlc_set_carrier(!(sca_in(MSCI1_OFFSET + ST3, card) & ST3_DCD),
- &card->hdlc);
+ hdlc_set_carrier(!(sca_in(MSCI1_OFFSET + ST3, card) & ST3_DCD), dev);
printk(KERN_INFO "%s: Moxa C101 on IRQ%u,"
" using %u TX + %u RX packets rings\n",
- hdlc_to_name(&card->hdlc), card->irq,
+ dev->name, card->irq,
card->tx_ring_buffers, card->rx_ring_buffers);
*new_card = card;
while (card) {
card_t *ptr = card;
card = card->next_card;
- unregister_hdlc_device(&ptr->hdlc);
+ unregister_hdlc_device(port_to_dev(ptr));
c101_destroy_card(ptr);
}
}
{
frs0 = readb(lbi + FRS0);
fmr2 = readb(lbi + FMR2);
- len += snprintf(page + len, PAGE_SIZE - len, "Controller status:\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "Controller status:\n");
if (frs0 == 0)
- len += snprintf(page + len, PAGE_SIZE - len, "\tNo alarms\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "\tNo alarms\n");
else
{
if (frs0 & FRS0_LOS)
- len += snprintf(page + len, PAGE_SIZE - len, "\tLoss Of Signal\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "\tLoss Of Signal\n");
else
{
if (frs0 & FRS0_AIS)
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tAlarm Indication Signal\n");
else
{
if (frs0 & FRS0_AUXP)
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tAuxiliary Pattern Indication\n");
if (frs0 & FRS0_LFA)
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tLoss of Frame Alignment\n");
else
{
if (frs0 & FRS0_RRA)
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tReceive Remote Alarm\n");
/* You can't set this framing with the /proc interface, but it */
if ((board->framing == SLICECOM_FRAMING_CRC4) &&
(frs0 & FRS0_LMFA))
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tLoss of CRC4 Multiframe Alignment\n");
if (((fmr2 & 0xc0) == 0xc0) && (frs0 & FRS0_NMF))
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tNo CRC4 Multiframe alignment Found after 400 msec\n");
}
}
frs1 = readb(lbi + FRS1);
if (FRS1_XLS & frs1)
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tTransmit Line Short\n");
/* debug Rx ring: DEL: - vagy meghagyni, de akkor legyen kicsit altalanosabb */
}
- len += snprintf(page + len, PAGE_SIZE - len, "Rx ring:\n");
- len += snprintf(page + len, PAGE_SIZE - len, "\trafutott: %d\n", hw->rafutott);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len, "Rx ring:\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "\trafutott: %d\n", hw->rafutott);
+ len += scnprintf(page + len, PAGE_SIZE - len,
"\tlastcheck: %ld, jiffies: %ld\n", board->lastcheck, jiffies);
- len += snprintf(page + len, PAGE_SIZE - len, "\tbase: %08x\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "\tbase: %08x\n",
(u32) virt_to_phys(&hw->rx_desc[0]));
- len += snprintf(page + len, PAGE_SIZE - len, "\trx_desc_ptr: %d\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "\trx_desc_ptr: %d\n",
hw->rx_desc_ptr);
- len += snprintf(page + len, PAGE_SIZE - len, "\trx_desc_ptr: %08x\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "\trx_desc_ptr: %08x\n",
(u32) virt_to_phys(&hw->rx_desc[hw->rx_desc_ptr]));
- len += snprintf(page + len, PAGE_SIZE - len, "\thw_curr_ptr: %08x\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "\thw_curr_ptr: %08x\n",
board->ccb->current_rx_desc[hw->channel]);
for (i = 0; i < RX_DESC_MAX; i++)
- len += snprintf(page + len, PAGE_SIZE - len, "\t%08x %08x %08x %08x\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "\t%08x %08x %08x %08x\n",
*((u32 *) & hw->rx_desc[i] + 0),
*((u32 *) & hw->rx_desc[i] + 1),
*((u32 *) & hw->rx_desc[i] + 2),
if (!board->isx21)
{
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"Interfaces using this board: (channel-group, interface, timeslots)\n");
for (i = 0; i < 32; i++)
{
((struct slicecom_privdata *)((struct comx_channel *)devp->
priv)->HW_privdata)->
timeslots;
- len += snprintf(page + len, PAGE_SIZE - len, "\t%2d %s: ", i,
+ len += scnprintf(page + len, PAGE_SIZE - len, "\t%2d %s: ", i,
devp->name);
for (j = 0; j < 32; j++)
if ((1 << j) & timeslots)
- len += snprintf(page + len, PAGE_SIZE - len, "%d ", j);
- len += snprintf(page + len, PAGE_SIZE - len, "\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "%d ", j);
+ len += scnprintf(page + len, PAGE_SIZE - len, "\n");
}
}
}
- len += snprintf(page + len, PAGE_SIZE - len, "Interrupt work histogram:\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "Interrupt work histogram:\n");
for (i = 0; i < MAX_WORK; i++)
- len += snprintf(page + len, PAGE_SIZE - len, "hist[%2d]: %8u%c", i,
+ len += scnprintf(page + len, PAGE_SIZE - len, "hist[%2d]: %8u%c", i,
board->histogram[i], (i &&
((i + 1) % 4 == 0 ||
i == MAX_WORK - 1)) ? '\n' : ' ');
- len += snprintf(page + len, PAGE_SIZE - len, "Tx ring histogram:\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "Tx ring histogram:\n");
for (i = 0; i < TX_DESC_MAX; i++)
- len += snprintf(page + len, PAGE_SIZE - len, "hist[%2d]: %8u%c", i,
+ len += scnprintf(page + len, PAGE_SIZE - len, "hist[%2d]: %8u%c", i,
hw->tx_ring_hist[i], (i &&
((i + 1) % 4 == 0 ||
i ==
sump[j] += p[j];
}
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"Data in current interval (%d seconds elapsed):\n",
board->elapsed_seconds);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Line Code Violations, %d Path Code Violations, %d E-Bit Errors\n",
curr_int->line_code_violations,
curr_int->path_code_violations, curr_int->e_bit_errors);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Slip Secs, %d Fr Loss Secs, %d Line Err Secs, %d Degraded Mins\n",
curr_int->slip_secs, curr_int->fr_loss_secs,
curr_int->line_err_secs, curr_int->degraded_mins);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Errored Secs, %d Bursty Err Secs, %d Severely Err Secs, %d Unavail Secs\n",
curr_int->errored_secs, curr_int->bursty_err_secs,
curr_int->severely_err_secs, curr_int->unavail_secs);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"Data in Interval 1 (15 minutes):\n");
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Line Code Violations, %d Path Code Violations, %d E-Bit Errors\n",
prev_int->line_code_violations,
prev_int->path_code_violations, prev_int->e_bit_errors);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Slip Secs, %d Fr Loss Secs, %d Line Err Secs, %d Degraded Mins\n",
prev_int->slip_secs, prev_int->fr_loss_secs,
prev_int->line_err_secs, prev_int->degraded_mins);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Errored Secs, %d Bursty Err Secs, %d Severely Err Secs, %d Unavail Secs\n",
prev_int->errored_secs, prev_int->bursty_err_secs,
prev_int->severely_err_secs, prev_int->unavail_secs);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"Data in last 4 intervals (1 hour):\n");
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Line Code Violations, %d Path Code Violations, %d E-Bit Errors\n",
last4.line_code_violations, last4.path_code_violations,
last4.e_bit_errors);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Slip Secs, %d Fr Loss Secs, %d Line Err Secs, %d Degraded Mins\n",
last4.slip_secs, last4.fr_loss_secs, last4.line_err_secs,
last4.degraded_mins);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Errored Secs, %d Bursty Err Secs, %d Severely Err Secs, %d Unavail Secs\n",
last4.errored_secs, last4.bursty_err_secs,
last4.severely_err_secs, last4.unavail_secs);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"Data in last 96 intervals (24 hours):\n");
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Line Code Violations, %d Path Code Violations, %d E-Bit Errors\n",
last96.line_code_violations, last96.path_code_violations,
last96.e_bit_errors);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Slip Secs, %d Fr Loss Secs, %d Line Err Secs, %d Degraded Mins\n",
last96.slip_secs, last96.fr_loss_secs,
last96.line_err_secs, last96.degraded_mins);
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
" %d Errored Secs, %d Bursty Err Secs, %d Severely Err Secs, %d Unavail Secs\n",
last96.errored_secs, last96.bursty_err_secs,
last96.severely_err_secs, last96.unavail_secs);
}
-// len +=snprintf( page + len, PAGE_SIZE - len, "Special events:\n" );
-// len +=snprintf( page + len, PAGE_SIZE - len, "\tstat_pri/missed: %u / %u\n", board->stat_pri_races, board->stat_pri_races_missed );
-// len +=snprintf( page + len, PAGE_SIZE - len, "\tstat_pti/missed: %u / %u\n", board->stat_pti_races, board->stat_pti_races_missed );
+// len +=scnprintf( page + len, PAGE_SIZE - len, "Special events:\n" );
+// len +=scnprintf( page + len, PAGE_SIZE - len, "\tstat_pri/missed: %u / %u\n", board->stat_pri_races, board->stat_pri_races_missed );
+// len +=scnprintf( page + len, PAGE_SIZE - len, "\tstat_pti/missed: %u / %u\n", board->stat_pti_races, board->stat_pti_races_missed );
return len;
}
{
for (i = 0; i < 32; i++)
if ((1 << i) & timeslots)
- len += snprintf(page + len, PAGE_SIZE - len, "%d ", i);
- len += snprintf(page + len, PAGE_SIZE - len, "\n");
+ len += scnprintf(page + len, PAGE_SIZE - len, "%d ", i);
+ len += scnprintf(page + len, PAGE_SIZE - len, "\n");
}
else if (!strcmp(file->name, FILENAME_FRAMING))
{
while (slicecom_framings[i].value &&
slicecom_framings[i].value != board->framing)
i++;
- len += snprintf(page + len, PAGE_SIZE - len, "%s\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "%s\n",
slicecom_framings[i].name);
}
else if (!strcmp(file->name, FILENAME_LINECODE))
while (slicecom_linecodes[i].value &&
slicecom_linecodes[i].value != board->linecode)
i++;
- len += snprintf(page + len, PAGE_SIZE - len, "%s\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "%s\n",
slicecom_linecodes[i].name);
}
else if (!strcmp(file->name, FILENAME_CLOCK_SOURCE))
slicecom_clock_sources[i].value != board->clock_source)
i++;
len +=
- snprintf(page + len, PAGE_SIZE - len, "%s\n",
+ scnprintf(page + len, PAGE_SIZE - len, "%s\n",
slicecom_clock_sources[i].name);
}
else if (!strcmp(file->name, FILENAME_LOOPBACK))
while (slicecom_loopbacks[i].value &&
slicecom_loopbacks[i].value != board->loopback)
i++;
- len += snprintf(page + len, PAGE_SIZE - len, "%s\n",
+ len += scnprintf(page + len, PAGE_SIZE - len, "%s\n",
slicecom_loopbacks[i].name);
}
/* We set permissions to write-only for REG and LBIREG, but root can read them anyway: */
else if (!strcmp(file->name, FILENAME_REG))
{
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"%s: " FILENAME_REG ": write-only file\n", dev->name);
}
else if (!strcmp(file->name, FILENAME_LBIREG))
{
- len += snprintf(page + len, PAGE_SIZE - len,
+ len += scnprintf(page + len, PAGE_SIZE - len,
"%s: " FILENAME_LBIREG ": write-only file\n", dev->name);
}
else
if (!dev || !dev->priv) {
dev_kfree_skb(skb);
} else {
- lapb_data_received(dev->priv, skb);
+ lapb_data_received(dev, skb);
}
}
return -ENODEV;
}
- err = lapb_connect_request(ch);
+ err = lapb_connect_request(dev);
if (ch->debug_flags & DEBUG_COMX_LAPB) {
comx_debug(dev, "%s: lapb opened, error code: %d\n",
comx_debug(dev, "%s: lapb closed\n", dev->name);
}
- lapb_disconnect_request(ch);
+ lapb_disconnect_request(dev);
ch->init_status &= ~LINE_OPEN;
ch->line_status &= ~PROTO_UP;
case 0x00:
break; // transmit
case 0x01:
- lapb_connect_request(ch);
+ lapb_connect_request(dev);
kfree_skb(skb);
return 0;
case 0x02:
- lapb_disconnect_request(ch);
+ lapb_disconnect_request(dev);
default:
kfree_skb(skb);
return 0;
netif_stop_queue(dev);
if ((skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) {
- lapb_data_request(ch, skb2);
+ lapb_data_request(dev, skb2);
}
return FRAME_ACCEPTED;
int len = 0;
len += sprintf(page + len, "Line status: ");
- if (lapb_getparms(dev->priv, &parms) != LAPB_OK) {
+ if (lapb_getparms(dev, &parms) != LAPB_OK) {
len += sprintf(page + len, "not initialized\n");
return len;
}
struct lapb_parms_struct parms;
int len = 0;
- if (lapb_getparms(dev->priv, &parms)) {
+ if (lapb_getparms(dev, &parms)) {
return -ENODEV;
}
unsigned long parm;
char *page;
- if (lapb_getparms(dev->priv, &parms)) {
+ if (lapb_getparms(dev, &parms)) {
return -ENODEV;
}
parm=simple_strtoul(page,NULL,10);
if (parm > 0 && parm < 100) {
parms.t1=parm;
- lapb_setparms(dev->priv, &parms);
+ lapb_setparms(dev, &parms);
}
} else if (strcmp(entry->name, FILENAME_T2) == 0) {
parm=simple_strtoul(page, NULL, 10);
if (parm > 0 && parm < 100) {
parms.t2=parm;
- lapb_setparms(dev->priv, &parms);
+ lapb_setparms(dev, &parms);
}
} else if (strcmp(entry->name, FILENAME_N2) == 0) {
parm=simple_strtoul(page, NULL, 10);
if (parm > 0 && parm < 100) {
parms.n2=parm;
- lapb_setparms(dev->priv, &parms);
+ lapb_setparms(dev, &parms);
}
} else if (strcmp(entry->name, FILENAME_WINDOW) == 0) {
parms.window = simple_strtoul(page, NULL, 10);
- lapb_setparms(dev->priv, &parms);
+ lapb_setparms(dev, &parms);
} else if (strcmp(entry->name, FILENAME_MODE) == 0) {
if (comx_strcasecmp(page, "dte") == 0) {
parms.mode &= ~(LAPB_DCE | LAPB_DTE);
parms.mode &= ~LAPB_STANDARD;
parms.mode |= LAPB_EXTENDED;
}
- lapb_setparms(dev->priv, &parms);
+ lapb_setparms(dev, &parms);
} else {
printk(KERN_ERR "comxlapb_write_proc: internal error, filename %s\n",
entry->name);
return count;
}
-static void comxlapb_connected(void *token, int reason)
+static void comxlapb_connected(struct net_device *dev, int reason)
{
- struct comx_channel *ch = token;
+ struct comx_channel *ch = dev->priv;
struct proc_dir_entry *comxdir = ch->procdir->subdir;
if (ch->debug_flags & DEBUG_COMX_LAPB) {
comx_status(ch->dev, ch->line_status);
}
-static void comxlapb_disconnected(void *token, int reason)
+static void comxlapb_disconnected(struct net_device *dev, int reason)
{
- struct comx_channel *ch = token;
+ struct comx_channel *ch = dev->priv;
struct proc_dir_entry *comxdir = ch->procdir->subdir;
if (ch->debug_flags & DEBUG_COMX_LAPB) {
comx_status(ch->dev, ch->line_status);
}
-static int comxlapb_data_indication(void *token, struct sk_buff *skb)
+static int comxlapb_data_indication(struct net_device *dev, struct sk_buff *skb)
{
- struct comx_channel *ch = token;
+ struct comx_channel *ch = dev->priv;
if (ch->dev->type == ARPHRD_X25) {
skb_push(skb, 1);
return comx_rx(ch->dev, skb);
}
-static void comxlapb_data_transmit(void *token, struct sk_buff *skb)
+static void comxlapb_data_transmit(struct net_device *dev, struct sk_buff *skb)
{
- struct comx_channel *ch = token;
+ struct comx_channel *ch = dev->priv;
if (ch->HW_send_packet) {
ch->HW_send_packet(ch->dev, skb);
if (ch->debug_flags & DEBUG_COMX_LAPB) {
comx_debug(dev, "%s: unregistering lapb\n", dev->name);
}
- lapb_unregister(dev->priv);
+ lapb_unregister(dev);
remove_proc_entry(FILENAME_T1, ch->procdir);
remove_proc_entry(FILENAME_T2, ch->procdir);
lapbreg.disconnect_indication = comxlapb_disconnected;
lapbreg.data_indication = comxlapb_data_indication;
lapbreg.data_transmit = comxlapb_data_transmit;
- if (lapb_register(dev->priv, &lapbreg)) {
+ if (lapb_register(dev, &lapbreg)) {
return -ENOMEM;
}
if (ch->debug_flags & DEBUG_COMX_LAPB) {
err2:
rtnl_unlock();
- kfree(master);
+ free_netdev(master);
err1:
dev_put(slave);
return(err);
unsigned short encoding;
unsigned short parity;
- hdlc_device hdlc;
+ struct net_device *dev;
sync_serial_settings settings;
u32 __pad __attribute__ ((aligned (4)));
};
static void dscc4_timer(unsigned long);
static void dscc4_tx_timeout(struct net_device *);
static irqreturn_t dscc4_irq(int irq, void *dev_id, struct pt_regs *ptregs);
-static int dscc4_hdlc_attach(hdlc_device *, unsigned short, unsigned short);
+static int dscc4_hdlc_attach(struct net_device *, unsigned short, unsigned short);
static int dscc4_set_iface(struct dscc4_dev_priv *, struct net_device *);
static inline int dscc4_set_quartz(struct dscc4_dev_priv *, int);
#ifdef DSCC4_POLLING
static inline struct dscc4_dev_priv *dscc4_priv(struct net_device *dev)
{
- return list_entry(dev, struct dscc4_dev_priv, hdlc.netdev);
+ return dev_to_hdlc(dev)->priv;
+}
+
+static inline struct net_device *dscc4_to_dev(struct dscc4_dev_priv *p)
+{
+ return p->dev;
}
static void scc_patchl(u32 mask, u32 value, struct dscc4_dev_priv *dpriv,
struct net_device *dev)
{
struct RxFD *rx_fd = dpriv->rx_fd + dpriv->rx_current%RX_RING_SIZE;
- struct net_device_stats *stats = &dpriv->hdlc.stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
struct pci_dev *pdev = dpriv->pci_priv->pdev;
struct sk_buff *skb;
int pkt_len;
root = ppriv->root;
for (i = 0; i < dev_per_card; i++)
- unregister_hdlc_device(&root[i].hdlc);
+ unregister_hdlc_device(dscc4_to_dev(&root[i]));
pci_set_drvdata(pdev, NULL);
+ for (i = 0; i < dev_per_card; i++)
+ free_netdev(root[i].dev);
kfree(root);
kfree(ppriv);
}
}
memset(root, 0, dev_per_card*sizeof(*root));
+ for (i = 0; i < dev_per_card; i++) {
+ root[i].dev = alloc_hdlcdev(root + i);
+ if (!root[i].dev) {
+ while (i--)
+ free_netdev(root[i].dev);
+ goto err_free_dev;
+ }
+ }
+
ppriv = (struct dscc4_pci_priv *) kmalloc(sizeof(*ppriv), GFP_KERNEL);
if (!ppriv) {
printk(KERN_ERR "%s: can't allocate private data\n", DRV_NAME);
- goto err_free_dev;
+ goto err_free_dev2;
}
memset(ppriv, 0, sizeof(struct dscc4_pci_priv));
+ ret = dscc4_set_quartz(root, quartz);
+ if (ret < 0)
+ goto err_free_priv;
+ ppriv->root = root;
+ spin_lock_init(&ppriv->lock);
for (i = 0; i < dev_per_card; i++) {
struct dscc4_dev_priv *dpriv = root + i;
- hdlc_device *hdlc = &dpriv->hdlc;
- struct net_device *d = hdlc_to_dev(hdlc);
+ struct net_device *d = dscc4_to_dev(dpriv);
+ hdlc_device *hdlc = dev_to_hdlc(d);
d->base_addr = ioaddr;
d->init = NULL;
hdlc->xmit = dscc4_start_xmit;
hdlc->attach = dscc4_hdlc_attach;
- ret = register_hdlc_device(hdlc);
- if (ret < 0) {
- printk(KERN_ERR "%s: unable to register\n", DRV_NAME);
- goto err_unregister;
- }
-
dscc4_init_registers(dpriv, d);
dpriv->parity = PARITY_CRC16_PR0_CCITT;
dpriv->encoding = ENCODING_NRZ;
-
+
ret = dscc4_init_ring(d);
+ if (ret < 0)
+ goto err_unregister;
+
+ ret = register_hdlc_device(d);
if (ret < 0) {
- unregister_hdlc_device(hdlc);
+ printk(KERN_ERR "%s: unable to register\n", DRV_NAME);
+ dscc4_release_ring(dpriv);
goto err_unregister;
- }
+ }
}
- ret = dscc4_set_quartz(root, quartz);
- if (ret < 0)
- goto err_unregister;
- ppriv->root = root;
- spin_lock_init(&ppriv->lock);
pci_set_drvdata(pdev, ppriv);
return ret;
err_unregister:
while (--i >= 0) {
dscc4_release_ring(root + i);
- unregister_hdlc_device(&root[i].hdlc);
+ unregister_hdlc_device(dscc4_to_dev(&root[i]));
}
+err_free_priv:
kfree(ppriv);
+err_free_dev2:
+ for (i = 0; i < dev_per_card; i++)
+ free_netdev(root[i].dev);
err_free_dev:
kfree(root);
err_out:
sync_serial_settings *settings = &dpriv->settings;
if (settings->loopback && (settings->clock_type != CLOCK_INT)) {
- struct net_device *dev = hdlc_to_dev(&dpriv->hdlc);
+ struct net_device *dev = dscc4_to_dev(dpriv);
printk(KERN_INFO "%s: loopback requires clock\n", dev->name);
return -1;
static int dscc4_open(struct net_device *dev)
{
struct dscc4_dev_priv *dpriv = dscc4_priv(dev);
- hdlc_device *hdlc = &dpriv->hdlc;
struct dscc4_pci_priv *ppriv;
int ret = -EAGAIN;
if ((dscc4_loopback_check(dpriv) < 0) || !dev->hard_start_xmit)
goto err;
- if ((ret = hdlc_open(hdlc)))
+ if ((ret = hdlc_open(dev)))
goto err;
ppriv = dpriv->pci_priv;
scc_writel(0xffffffff, dpriv, dev, IMR);
scc_patchl(PowerUp | Vis, 0, dpriv, dev, CCR0);
err_out:
- hdlc_close(hdlc);
+ hdlc_close(dev);
err:
return ret;
}
static int dscc4_close(struct net_device *dev)
{
struct dscc4_dev_priv *dpriv = dscc4_priv(dev);
- hdlc_device *hdlc = dev_to_hdlc(dev);
del_timer_sync(&dpriv->timer);
netif_stop_queue(dev);
dpriv->flags |= FakeReset;
- hdlc_close(hdlc);
+ hdlc_close(dev);
return 0;
}
int i, handled = 1;
priv = root->pci_priv;
- dev = hdlc_to_dev(&root->hdlc);
+ dev = dscc4_to_dev(root);
spin_lock_irqsave(&priv->lock, flags);
static inline void dscc4_tx_irq(struct dscc4_pci_priv *ppriv,
struct dscc4_dev_priv *dpriv)
{
- struct net_device *dev = hdlc_to_dev(&dpriv->hdlc);
+ struct net_device *dev = dscc4_to_dev(dpriv);
u32 state;
int cur, loop = 0;
if (state & SccEvt) {
if (state & Alls) {
- struct net_device_stats *stats = &dpriv->hdlc.stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
struct sk_buff *skb;
struct TxFD *tx_fd;
}
if (state & Err) {
printk(KERN_INFO "%s: Tx ERR\n", dev->name);
- dev_to_hdlc(dev)->stats.tx_errors++;
+ hdlc_stats(dev)->tx_errors++;
state &= ~Err;
}
}
static inline void dscc4_rx_irq(struct dscc4_pci_priv *priv,
struct dscc4_dev_priv *dpriv)
{
- struct net_device *dev = hdlc_to_dev(&dpriv->hdlc);
+ struct net_device *dev = dscc4_to_dev(dpriv);
u32 state;
int cur;
if (!(rx_fd->state2 & DataComplete))
break;
if (rx_fd->state2 & FrameAborted) {
- dev_to_hdlc(dev)->stats.rx_over_errors++;
+ hdlc_stats(dev)->rx_over_errors++;
rx_fd->state1 |= Hold;
rx_fd->state2 = 0x00000000;
rx_fd->end = 0xbabeface;
ppriv = pci_get_drvdata(pdev);
root = ppriv->root;
- ioaddr = hdlc_to_dev(&root->hdlc)->base_addr;
+ ioaddr = dscc4_to_dev(root)->base_addr;
dscc4_pci_reset(pdev, ioaddr);
pci_resource_len(pdev, 0));
}
-static int dscc4_hdlc_attach(hdlc_device *hdlc, unsigned short encoding,
+static int dscc4_hdlc_attach(struct net_device *dev, unsigned short encoding,
unsigned short parity)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
struct dscc4_dev_priv *dpriv = dscc4_priv(dev);
if (encoding != ENCODING_NRZ &&
/* Per port (line or channel) information
*/
struct fst_port_info {
- hdlc_device hdlc; /* HDLC device struct - must be first */
+ struct net_device *dev;
struct fst_card_info *card; /* Card we're associated with */
int index; /* Port index on the card */
int hwif; /* Line hardware (lineInterface copy) */
};
/* Convert an HDLC device pointer into a port info pointer and similar */
-#define hdlc_to_port(H) ((struct fst_port_info *)(H))
-#define dev_to_port(D) hdlc_to_port(dev_to_hdlc(D))
-#define port_to_dev(P) hdlc_to_dev(&(P)->hdlc)
+#define dev_to_port(D) (dev_to_hdlc(D)->priv)
+#define port_to_dev(P) ((P)->dev)
/*
int rxp;
unsigned short len;
struct sk_buff *skb;
+ struct net_device *dev = port_to_dev(port);
+ struct net_device_stats *stats = hdlc_stats(dev);
int i;
len );
if ( dmabits != ( RX_STP | RX_ENP ) || len > LEN_RX_BUFFER - 2 )
{
- port->hdlc.stats.rx_errors++;
+ stats->rx_errors++;
/* Update error stats and discard buffer */
if ( dmabits & RX_OFLO )
{
- port->hdlc.stats.rx_fifo_errors++;
+ stats->rx_fifo_errors++;
}
if ( dmabits & RX_CRC )
{
- port->hdlc.stats.rx_crc_errors++;
+ stats->rx_crc_errors++;
}
if ( dmabits & RX_FRAM )
{
- port->hdlc.stats.rx_frame_errors++;
+ stats->rx_frame_errors++;
}
if ( dmabits == ( RX_STP | RX_ENP ))
{
- port->hdlc.stats.rx_length_errors++;
+ stats->rx_length_errors++;
}
/* Discard buffer descriptors until we see the end of packet
{
dbg ( DBG_RX,"intr_rx: can't allocate buffer\n");
- port->hdlc.stats.rx_dropped++;
+ stats->rx_dropped++;
/* Return descriptor to card */
FST_WRB ( card, rxDescrRing[pi][rxp].bits, DMA_OWN );
port->rxpos = rxp;
/* Update stats */
- port->hdlc.stats.rx_packets++;
- port->hdlc.stats.rx_bytes += len;
+ stats->rx_packets++;
+ stats->rx_bytes += len;
/* Push upstream */
skb->mac.raw = skb->data;
- skb->dev = hdlc_to_dev ( &port->hdlc );
+ skb->dev = dev;
skb->protocol = hdlc_type_trans(skb, skb->dev);
netif_rx ( skb );
- port_to_dev ( port )->last_rx = jiffies;
+ dev->last_rx = jiffies;
}
* always load up the entire packet for DMA.
*/
dbg ( DBG_TX,"Tx underflow port %d\n", event & 0x03 );
- port->hdlc.stats.tx_errors++;
- port->hdlc.stats.tx_fifo_errors++;
+ hdlc_stats(port_to_dev(port))->tx_errors++;
+ hdlc_stats(port_to_dev(port))->tx_fifo_errors++;
break;
case INIT_CPLT:
{
int err;
- err = hdlc_open ( dev_to_hdlc ( dev ));
+ err = hdlc_open (dev);
if ( err )
return err;
{
netif_stop_queue ( dev );
fst_closeport ( dev_to_port ( dev ));
- hdlc_close ( dev_to_hdlc ( dev ));
+ hdlc_close ( dev );
return 0;
}
static int
-fst_attach ( hdlc_device *hdlc, unsigned short encoding, unsigned short parity )
+fst_attach ( struct net_device *dev, unsigned short encoding, unsigned short parity )
{
/* Setting currently fixed in FarSync card so we check and forget */
if ( encoding != ENCODING_NRZ || parity != PARITY_CRC16_PR1_CCITT )
fst_tx_timeout ( struct net_device *dev )
{
struct fst_port_info *port;
+ struct net_device_stats *stats = hdlc_stats(dev);
dbg ( DBG_INTR | DBG_TX,"tx_timeout\n");
port = dev_to_port ( dev );
- port->hdlc.stats.tx_errors++;
- port->hdlc.stats.tx_aborted_errors++;
+ stats->tx_errors++;
+ stats->tx_aborted_errors++;
if ( port->txcnt > 0 )
fst_issue_cmd ( port, ABORTTX );
static int
fst_start_xmit ( struct sk_buff *skb, struct net_device *dev )
{
+ struct net_device_stats *stats = hdlc_stats(dev);
struct fst_card_info *card;
struct fst_port_info *port;
unsigned char dmabits;
if ( ! netif_carrier_ok ( dev ))
{
dev_kfree_skb ( skb );
- port->hdlc.stats.tx_errors++;
- port->hdlc.stats.tx_carrier_errors++;
+ stats->tx_errors++;
+ stats->tx_carrier_errors++;
return 0;
}
dbg ( DBG_TX,"Packet too large %d vs %d\n", skb->len,
LEN_TX_BUFFER );
dev_kfree_skb ( skb );
- port->hdlc.stats.tx_errors++;
+ stats->tx_errors++;
return 0;
}
spin_unlock_irqrestore ( &card->card_lock, flags );
dbg ( DBG_TX,"Out of Tx buffers\n");
dev_kfree_skb ( skb );
- port->hdlc.stats.tx_errors++;
+ stats->tx_errors++;
return 0;
}
if ( ++port->txpos >= NUM_TX_BUFFER )
FST_WRW ( card, txDescrRing[pi][txp].bcnt, cnv_bcnt ( skb->len ));
FST_WRB ( card, txDescrRing[pi][txp].bits, DMA_OWN | TX_STP | TX_ENP );
- port->hdlc.stats.tx_packets++;
- port->hdlc.stats.tx_bytes += skb->len;
+ stats->tx_packets++;
+ stats->tx_bytes += skb->len;
dev_kfree_skb ( skb );
{
int i;
int err;
- struct net_device *dev;
/* We're working on a number of ports based on the card ID. If the
* firmware detects something different later (should never happen)
* we'll have to revise it in some way then.
*/
- for ( i = 0 ; i < card->nports ; i++ )
- {
- card->ports[i].card = card;
- card->ports[i].index = i;
- card->ports[i].run = 0;
-
- dev = hdlc_to_dev ( &card->ports[i].hdlc );
-
- /* Fill in the net device info */
- /* Since this is a PCI setup this is purely
- * informational. Give them the buffer addresses
- * and basic card I/O.
- */
- dev->mem_start = card->phys_mem
- + BUF_OFFSET ( txBuffer[i][0][0]);
- dev->mem_end = card->phys_mem
- + BUF_OFFSET ( txBuffer[i][NUM_TX_BUFFER][0]);
- dev->base_addr = card->pci_conf;
- dev->irq = card->irq;
-
- dev->tx_queue_len = FST_TX_QUEUE_LEN;
- dev->open = fst_open;
- dev->stop = fst_close;
- dev->do_ioctl = fst_ioctl;
- dev->watchdog_timeo = FST_TX_TIMEOUT;
- dev->tx_timeout = fst_tx_timeout;
- card->ports[i].hdlc.attach = fst_attach;
- card->ports[i].hdlc.xmit = fst_start_xmit;
-
- if (( err = register_hdlc_device ( &card->ports[i].hdlc )) < 0 )
- {
+ for ( i = 0 ; i < card->nports ; i++ ) {
+ err = register_hdlc_device(card->ports[i].dev);
+ if (err < 0) {
+ int j;
printk_err ("Cannot register HDLC device for port %d"
" (errno %d)\n", i, -err );
+ for (j = i; j < card->nports; j++) {
+ free_netdev(card->ports[j].dev);
+ card->ports[j].dev = NULL;
+ }
card->nports = i;
break;
}
}
- spin_lock_init ( &card->card_lock );
-
printk ( KERN_INFO "%s-%s: %s IRQ%d, %d ports\n",
- hdlc_to_dev(&card->ports[0].hdlc)->name,
- hdlc_to_dev(&card->ports[card->nports-1].hdlc)->name,
+ port_to_dev(&card->ports[0])->name,
+ port_to_dev(&card->ports[card->nports-1])->name,
type_strings[card->type], card->irq, card->nports );
}
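The registration loop above relies on every port's `net_device` having been allocated beforehand: when `register_hdlc_device()` fails for port `i`, the ports that already registered are kept, the devices that will never register are freed, and `nports` is truncated to the working count. A compact userspace sketch of that unwinding (mock functions and hypothetical names, not the real kernel API):

```c
#include <stdlib.h>

struct mock_dev { int registered; };

/* Stand-in for register_hdlc_device(); fails at a chosen index. */
static int register_mock(struct mock_dev *d, int fail_at, int i)
{
	if (i == fail_at)
		return -1;
	d->registered = 1;
	return 0;
}

/* Mirrors the farsync loop: on failure at port i, free_netdev() the
 * devices for ports i..nports-1 and truncate nports to i. */
static int bring_up_ports(struct mock_dev **ports, int *nports, int fail_at)
{
	int i;

	for (i = 0; i < *nports; i++) {
		if (register_mock(ports[i], fail_at, i) < 0) {
			int j;

			for (j = i; j < *nports; j++) {
				free(ports[j]);   /* free_netdev() equivalent */
				ports[j] = NULL;
			}
			*nports = i;              /* keep the ports that made it */
			break;
		}
	}
	return *nports;
}
```

The key invariant is that an unregistered device is freed exactly once and a registered device is left for the remove path to unregister and free, so no pointer is freed twice on either path.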
static int firsttime_done = 0;
struct fst_card_info *card;
int err = 0;
+ int i;
if ( ! firsttime_done )
{
card->state = FST_UNINIT;
+ spin_lock_init ( &card->card_lock );
+
+ for ( i = 0 ; i < card->nports ; i++ ) {
+ struct net_device *dev = alloc_hdlcdev(&card->ports[i]);
+ hdlc_device *hdlc;
+ if (!dev) {
+ while (i--)
+ free_netdev(card->ports[i].dev);
+ printk_err ("FarSync: out of memory\n");
+ goto error_free_card;
+ }
+ card->ports[i].dev = dev;
+ card->ports[i].card = card;
+ card->ports[i].index = i;
+ card->ports[i].run = 0;
+
+ hdlc = dev_to_hdlc(dev);
+
+ /* Fill in the net device info */
+ /* Since this is a PCI setup this is purely
+ * informational. Give them the buffer addresses
+ * and basic card I/O.
+ */
+ dev->mem_start = card->phys_mem
+ + BUF_OFFSET ( txBuffer[i][0][0]);
+ dev->mem_end = card->phys_mem
+ + BUF_OFFSET ( txBuffer[i][NUM_TX_BUFFER][0]);
+ dev->base_addr = card->pci_conf;
+ dev->irq = card->irq;
+
+ dev->tx_queue_len = FST_TX_QUEUE_LEN;
+ dev->open = fst_open;
+ dev->stop = fst_close;
+ dev->do_ioctl = fst_ioctl;
+ dev->watchdog_timeo = FST_TX_TIMEOUT;
+ dev->tx_timeout = fst_tx_timeout;
+ hdlc->attach = fst_attach;
+ hdlc->xmit = fst_start_xmit;
+ }
+
dbg ( DBG_PCI,"type %d nports %d irq %d\n", card->type,
card->nports, card->irq );
dbg ( DBG_PCI,"conf %04x mem %08x ctlmem %08x\n",
printk_err ("Unable to get config I/O @ 0x%04X\n",
card->pci_conf );
err = -ENODEV;
- goto error_free_card;
+ goto error_free_ports;
}
if ( ! request_mem_region ( card->phys_mem, FST_MEMSIZE,"Shared RAM"))
{
error_release_io:
release_region ( card->pci_conf, 0x80 );
+error_free_ports:
+ for (i = 0; i < card->nports; i++)
+ free_netdev(card->ports[i].dev);
error_free_card:
kfree ( card );
return err;
for ( i = 0 ; i < card->nports ; i++ )
{
- unregister_hdlc_device ( &card->ports[i].hdlc );
+ struct net_device *dev = port_to_dev(&card->ports[i]);
+ unregister_hdlc_device(dev);
}
fst_disable_intr ( card );
release_mem_region ( card->phys_mem, FST_MEMSIZE );
release_region ( card->pci_conf, 0x80 );
+ for (i = 0; i < card->nports; i++)
+ free_netdev(card->ports[i].dev);
+
kfree ( card );
}
#define writea(value, ptr) writel(value, ptr)
#endif
+static inline struct net_device *port_to_dev(port_t *port)
+{
+ return port->dev;
+}
+
static inline int sca_intr_status(card_t *card)
{
u8 result = 0;
return result;
}
-
-
-static inline port_t* hdlc_to_port(hdlc_device *hdlc)
-{
- return (port_t*)hdlc;
-}
-
-
-
static inline port_t* dev_to_port(struct net_device *dev)
{
- return hdlc_to_port(dev_to_hdlc(dev));
+ return dev_to_hdlc(dev)->priv;
}
-
-
static inline u16 next_desc(port_t *port, u16 desc, int transmit)
{
return (desc + 1) % (transmit ? port_to_card(port)->tx_ring_buffers
}
hdlc_set_carrier(!(sca_in(get_msci(port) + ST3, card) & ST3_DCD),
- &port->hdlc);
+ port_to_dev(port));
}
sca_out(stat & (ST1_UDRN | ST1_CDCD), msci + ST1, card);
if (stat & ST1_UDRN) {
- port->hdlc.stats.tx_errors++; /* TX Underrun error detected */
- port->hdlc.stats.tx_fifo_errors++;
+ struct net_device_stats *stats = hdlc_stats(port_to_dev(port));
+ stats->tx_errors++; /* TX Underrun error detected */
+ stats->tx_fifo_errors++;
}
if (stat & ST1_CDCD)
hdlc_set_carrier(!(sca_in(msci + ST3, card) & ST3_DCD),
- &port->hdlc);
+ port_to_dev(port));
}
#endif
static inline void sca_rx(card_t *card, port_t *port, pkt_desc *desc, u16 rxin)
{
+ struct net_device *dev = port_to_dev(port);
+ struct net_device_stats *stats = hdlc_stats(dev);
struct sk_buff *skb;
u16 len;
u32 buff;
len = readw(&desc->len);
skb = dev_alloc_skb(len);
if (!skb) {
- port->hdlc.stats.rx_dropped++;
+ stats->rx_dropped++;
return;
}
#endif
skb_put(skb, len);
#ifdef DEBUG_PKT
- printk(KERN_DEBUG "%s RX(%i):", hdlc_to_name(&port->hdlc), skb->len);
+ printk(KERN_DEBUG "%s RX(%i):", dev->name, skb->len);
debug_frame(skb);
#endif
- port->hdlc.stats.rx_packets++;
- port->hdlc.stats.rx_bytes += skb->len;
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
skb->mac.raw = skb->data;
- skb->dev = hdlc_to_dev(&port->hdlc);
+ skb->dev = dev;
skb->dev->last_rx = jiffies;
- skb->protocol = hdlc_type_trans(skb, hdlc_to_dev(&port->hdlc));
+ skb->protocol = hdlc_type_trans(skb, dev);
netif_rx(skb);
}
u16 dmac = get_dmac_rx(port);
card_t *card = port_to_card(port);
u8 stat = sca_in(DSR_RX(phy_node(port)), card); /* read DMA Status */
- struct net_device_stats *stats = &port->hdlc.stats;
+ struct net_device_stats *stats = hdlc_stats(port_to_dev(port));
/* Reset DSR status bits */
sca_out((stat & (DSR_EOT | DSR_EOM | DSR_BOF | DSR_COF)) | DSR_DWE,
/* Transmit DMA interrupt service */
static inline void sca_tx_intr(port_t *port)
{
+ struct net_device *dev = port_to_dev(port);
+ struct net_device_stats *stats = hdlc_stats(dev);
u16 dmac = get_dmac_tx(port);
card_t* card = port_to_card(port);
u8 stat;
break; /* Transmitter is/will_be sending this frame */
desc = desc_address(port, port->txlast, 1);
- port->hdlc.stats.tx_packets++;
- port->hdlc.stats.tx_bytes += readw(&desc->len);
+ stats->tx_packets++;
+ stats->tx_bytes += readw(&desc->len);
writeb(0, &desc->stat); /* Free descriptor */
port->txlast = next_desc(port, port->txlast, 1);
}
- netif_wake_queue(hdlc_to_dev(&port->hdlc));
+ netif_wake_queue(dev);
spin_unlock(&port->lock);
}
-static void sca_open(hdlc_device *hdlc)
+static void sca_open(struct net_device *dev)
{
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
card_t* card = port_to_card(port);
u16 msci = get_msci(port);
u8 md0, md2;
- all DMA interrupts
*/
- hdlc_set_carrier(!(sca_in(msci + ST3, card) & ST3_DCD), hdlc);
+ hdlc_set_carrier(!(sca_in(msci + ST3, card) & ST3_DCD), dev);
#ifdef __HD64570_H
/* MSCI TX INT and RX INT A IRQ enable */
sca_out(CMD_TX_ENABLE, msci + CMD, card);
sca_out(CMD_RX_ENABLE, msci + CMD, card);
- netif_start_queue(hdlc_to_dev(hdlc));
+ netif_start_queue(dev);
}
-static void sca_close(hdlc_device *hdlc)
+static void sca_close(struct net_device *dev)
{
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
card_t* card = port_to_card(port);
/* reset channel */
- netif_stop_queue(hdlc_to_dev(hdlc));
+ netif_stop_queue(dev);
sca_out(CMD_RESET, get_msci(port) + CMD, port_to_card(port));
#ifdef __HD64570_H
/* disable MSCI interrupts */
-static int sca_attach(hdlc_device *hdlc, unsigned short encoding,
+static int sca_attach(struct net_device *dev, unsigned short encoding,
unsigned short parity)
{
if (encoding != ENCODING_NRZ &&
parity != PARITY_CRC16_PR1_CCITT)
return -EINVAL;
- hdlc_to_port(hdlc)->encoding = encoding;
- hdlc_to_port(hdlc)->parity = parity;
+ dev_to_port(dev)->encoding = encoding;
+ dev_to_port(dev)->parity = parity;
return 0;
}
#ifdef DEBUG_RINGS
-static void sca_dump_rings(hdlc_device *hdlc)
+static void sca_dump_rings(struct net_device *dev)
{
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
card_t *card = port_to_card(port);
u16 cnt;
#if !defined(PAGE0_ALWAYS_MAPPED) && !defined(ALL_PAGES_ALWAYS_MAPPED)
static int sca_xmit(struct sk_buff *skb, struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
card_t *card = port_to_card(port);
pkt_desc *desc;
u32 buff, len;
}
#ifdef DEBUG_PKT
- printk(KERN_DEBUG "%s TX(%i):", hdlc_to_name(hdlc), skb->len);
+ printk(KERN_DEBUG "%s TX(%i):", dev->name, skb->len);
debug_frame(skb);
#endif
desc = desc_address(port, port->txin + 1, 1);
if (readb(&desc->stat)) /* allow 1 packet gap */
- netif_stop_queue(hdlc_to_dev(&port->hdlc));
+ netif_stop_queue(dev);
spin_unlock_irq(&port->lock);
-static void cisco_keepalive_send(hdlc_device *hdlc, u32 type,
+static void cisco_keepalive_send(struct net_device *dev, u32 type,
u32 par1, u32 par2)
{
struct sk_buff *skb;
if (!skb) {
printk(KERN_WARNING
"%s: Memory squeeze on cisco_keepalive_send()\n",
- hdlc_to_name(hdlc));
+ dev->name);
return;
}
skb_reserve(skb, 4);
- cisco_hard_header(skb, hdlc_to_dev(hdlc), CISCO_KEEPALIVE,
- NULL, NULL, 0);
+ cisco_hard_header(skb, dev, CISCO_KEEPALIVE, NULL, NULL, 0);
data = (cisco_packet*)skb->tail;
data->type = htonl(type);
skb_put(skb, sizeof(cisco_packet));
skb->priority = TC_PRIO_CONTROL;
- skb->dev = hdlc_to_dev(hdlc);
+ skb->dev = dev;
skb->nh.raw = skb->data;
dev_queue_xmit(skb);
static int cisco_rx(struct sk_buff *skb)
{
- hdlc_device *hdlc = dev_to_hdlc(skb->dev);
+ struct net_device *dev = skb->dev;
+ hdlc_device *hdlc = dev_to_hdlc(dev);
hdlc_header *data = (hdlc_header*)skb->data;
cisco_packet *cisco_data;
struct in_device *in_dev;
skb->len != sizeof(hdlc_header) + CISCO_BIG_PACKET_LEN) {
printk(KERN_INFO "%s: Invalid length of Cisco "
"control packet (%d bytes)\n",
- hdlc_to_name(hdlc), skb->len);
+ dev->name, skb->len);
goto rx_error;
}
switch(ntohl (cisco_data->type)) {
case CISCO_ADDR_REQ: /* Stolen from syncppp.c :-) */
- in_dev = hdlc_to_dev(hdlc)->ip_ptr;
+ in_dev = dev->ip_ptr;
addr = 0;
mask = ~0; /* is the mask correct? */
struct in_ifaddr **ifap = &in_dev->ifa_list;
while (*ifap != NULL) {
- if (strcmp(hdlc_to_name(hdlc),
+ if (strcmp(dev->name,
(*ifap)->ifa_label) == 0) {
addr = (*ifap)->ifa_local;
mask = (*ifap)->ifa_mask;
ifap = &(*ifap)->ifa_next;
}
- cisco_keepalive_send(hdlc, CISCO_ADDR_REPLY,
+ cisco_keepalive_send(dev, CISCO_ADDR_REPLY,
addr, mask);
}
dev_kfree_skb_any(skb);
case CISCO_ADDR_REPLY:
printk(KERN_INFO "%s: Unexpected Cisco IP address "
- "reply\n", hdlc_to_name(hdlc));
+ "reply\n", dev->name);
goto rx_error;
case CISCO_KEEPALIVE_REQ:
days = hrs / 24; hrs -= days * 24;
printk(KERN_INFO "%s: Link up (peer "
"uptime %ud%uh%um%us)\n",
- hdlc_to_name(hdlc), days, hrs,
+ dev->name, days, hrs,
min, sec);
}
hdlc->state.cisco.up = 1;
} /* switch(keepalive type) */
} /* switch(protocol) */
- printk(KERN_INFO "%s: Unsupported protocol %x\n", hdlc_to_name(hdlc),
+ printk(KERN_INFO "%s: Unsupported protocol %x\n", dev->name,
data->protocol);
dev_kfree_skb_any(skb);
return NET_RX_DROP;
static void cisco_timer(unsigned long arg)
{
- hdlc_device *hdlc = (hdlc_device*)arg;
+ struct net_device *dev = (struct net_device *)arg;
+ hdlc_device *hdlc = dev_to_hdlc(dev);
if (hdlc->state.cisco.up && jiffies - hdlc->state.cisco.last_poll >=
hdlc->state.cisco.settings.timeout * HZ) {
hdlc->state.cisco.up = 0;
- printk(KERN_INFO "%s: Link down\n", hdlc_to_name(hdlc));
- if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ printk(KERN_INFO "%s: Link down\n", dev->name);
+ if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
}
- cisco_keepalive_send(hdlc, CISCO_KEEPALIVE_REQ,
+ cisco_keepalive_send(dev, CISCO_KEEPALIVE_REQ,
++hdlc->state.cisco.txseq,
hdlc->state.cisco.rxseq);
hdlc->state.cisco.timer.expires = jiffies +
-static void cisco_start(hdlc_device *hdlc)
+static void cisco_start(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
hdlc->state.cisco.last_poll = 0;
hdlc->state.cisco.up = 0;
hdlc->state.cisco.txseq = hdlc->state.cisco.rxseq = 0;
init_timer(&hdlc->state.cisco.timer);
hdlc->state.cisco.timer.expires = jiffies + HZ; /*First poll after 1s*/
hdlc->state.cisco.timer.function = cisco_timer;
- hdlc->state.cisco.timer.data = (unsigned long)hdlc;
+ hdlc->state.cisco.timer.data = (unsigned long)dev;
add_timer(&hdlc->state.cisco.timer);
}
-static void cisco_stop(hdlc_device *hdlc)
+static void cisco_stop(struct net_device *dev)
{
- del_timer_sync(&hdlc->state.cisco.timer);
- if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ del_timer_sync(&dev_to_hdlc(dev)->state.cisco.timer);
+ if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
}
-int hdlc_cisco_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_cisco_ioctl(struct net_device *dev, struct ifreq *ifr)
{
cisco_proto *cisco_s = ifr->ifr_settings.ifs_ifsu.cisco;
const size_t size = sizeof(cisco_proto);
cisco_proto new_settings;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
switch (ifr->ifr_settings.type) {
new_settings.timeout < 2)
return -EINVAL;
- result=hdlc->attach(hdlc, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
+ result=hdlc->attach(dev, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
if (result)
return result;
}
-static inline pvc_device* add_pvc(hdlc_device *hdlc, u16 dlci)
+static inline pvc_device* add_pvc(struct net_device *dev, u16 dlci)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
pvc_device *pvc, **pvc_p = &hdlc->state.fr.first_pvc;
while (*pvc_p) {
memset(pvc, 0, sizeof(pvc_device));
pvc->dlci = dlci;
- pvc->master = hdlc;
+ pvc->master = dev;
pvc->next = *pvc_p; /* Put it in the chain */
*pvc_p = pvc;
return pvc;
{
pvc_device *pvc = dev_to_pvc(dev);
- if ((hdlc_to_dev(pvc->master)->flags & IFF_UP) == 0)
+ if ((pvc->master->flags & IFF_UP) == 0)
return -EIO; /* Master must be UP in order to activate PVC */
if (pvc->open_count++ == 0) {
- if (pvc->master->state.fr.settings.lmi == LMI_NONE)
- pvc->state.active = pvc->master->carrier;
+ hdlc_device *hdlc = dev_to_hdlc(pvc->master);
+ if (hdlc->state.fr.settings.lmi == LMI_NONE)
+ pvc->state.active = hdlc->carrier;
pvc_carrier(pvc->state.active, pvc);
- pvc->master->state.fr.dce_changed = 1;
+ hdlc->state.fr.dce_changed = 1;
}
return 0;
}
pvc_device *pvc = dev_to_pvc(dev);
if (--pvc->open_count == 0) {
- if (pvc->master->state.fr.settings.lmi == LMI_NONE)
+ hdlc_device *hdlc = dev_to_hdlc(pvc->master);
+ if (hdlc->state.fr.settings.lmi == LMI_NONE)
pvc->state.active = 0;
- if (pvc->master->state.fr.settings.dce) {
- pvc->master->state.fr.dce_changed = 1;
+ if (hdlc->state.fr.settings.dce) {
+ hdlc->state.fr.dce_changed = 1;
pvc->state.active = 0;
}
}
}
info.dlci = pvc->dlci;
- memcpy(info.master, hdlc_to_name(pvc->master), IFNAMSIZ);
+ memcpy(info.master, pvc->master->name, IFNAMSIZ);
if (copy_to_user(ifr->ifr_settings.ifs_ifsu.fr_pvc_info,
&info, sizeof(info)))
return -EFAULT;
static inline struct net_device_stats *pvc_get_stats(struct net_device *dev)
{
- return (struct net_device_stats *)
- ((char *)dev + sizeof(struct net_device));
+ return netdev_priv(dev);
}
stats->tx_packets++;
if (pvc->state.fecn) /* TX Congestion counter */
stats->tx_compressed++;
- skb->dev = hdlc_to_dev(pvc->master);
+ skb->dev = pvc->master;
dev_queue_xmit(skb);
return 0;
}
static inline void fr_log_dlci_active(pvc_device *pvc)
{
printk(KERN_INFO "%s: DLCI %d [%s%s%s]%s %s\n",
- hdlc_to_name(pvc->master),
+ pvc->master->name,
pvc->dlci,
pvc->main ? pvc->main->name : "",
pvc->main && pvc->ether ? " " : "",
-static void fr_lmi_send(hdlc_device *hdlc, int fullrep)
+static void fr_lmi_send(struct net_device *dev, int fullrep)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
struct sk_buff *skb;
pvc_device *pvc = hdlc->state.fr.first_pvc;
int len = (hdlc->state.fr.settings.lmi == LMI_ANSI) ? LMI_ANSI_LENGTH
len += hdlc->state.fr.dce_pvc_count * (2 + stat_len);
if (len > HDLC_MAX_MRU) {
printk(KERN_WARNING "%s: Too many PVCs while sending "
- "LMI full report\n", hdlc_to_name(hdlc));
+ "LMI full report\n", dev->name);
return;
}
}
skb = dev_alloc_skb(len);
if (!skb) {
printk(KERN_WARNING "%s: Memory squeeze on fr_lmi_send()\n",
- hdlc_to_name(hdlc));
+ dev->name);
return;
}
memset(skb->data, 0, len);
skb_put(skb, i);
skb->priority = TC_PRIO_CONTROL;
- skb->dev = hdlc_to_dev(hdlc);
+ skb->dev = dev;
skb->nh.raw = skb->data;
dev_queue_xmit(skb);
-static void fr_set_link_state(int reliable, hdlc_device *hdlc)
+static void fr_set_link_state(int reliable, struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
pvc_device *pvc = hdlc->state.fr.first_pvc;
hdlc->state.fr.reliable = reliable;
if (reliable) {
- if (!netif_carrier_ok(&hdlc->netdev))
- netif_carrier_on(&hdlc->netdev);
+ if (!netif_carrier_ok(dev))
+ netif_carrier_on(dev);
hdlc->state.fr.n391cnt = 0; /* Request full status */
hdlc->state.fr.dce_changed = 1;
}
}
} else {
- if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
while (pvc) { /* Deactivate all PVCs */
pvc_carrier(0, pvc);
static void fr_timer(unsigned long arg)
{
- hdlc_device *hdlc = (hdlc_device*)arg;
+ struct net_device *dev = (struct net_device *)arg;
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int i, cnt = 0, reliable;
u32 list;
if (hdlc->state.fr.request) {
if (hdlc->state.fr.reliable)
printk(KERN_INFO "%s: No LMI status reply "
- "received\n", hdlc_to_name(hdlc));
+ "received\n", dev->name);
hdlc->state.fr.last_errors |= 1;
}
}
if (hdlc->state.fr.reliable != reliable) {
- printk(KERN_INFO "%s: Link %sreliable\n", hdlc_to_name(hdlc),
+ printk(KERN_INFO "%s: Link %sreliable\n", dev->name,
reliable ? "" : "un");
- fr_set_link_state(reliable, hdlc);
+ fr_set_link_state(reliable, dev);
}
if (hdlc->state.fr.settings.dce)
if (hdlc->state.fr.n391cnt)
hdlc->state.fr.n391cnt--;
- fr_lmi_send(hdlc, hdlc->state.fr.n391cnt == 0);
+ fr_lmi_send(dev, hdlc->state.fr.n391cnt == 0);
hdlc->state.fr.request = 1;
hdlc->state.fr.timer.expires = jiffies +
-static int fr_lmi_recv(hdlc_device *hdlc, struct sk_buff *skb)
+static int fr_lmi_recv(struct net_device *dev, struct sk_buff *skb)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int stat_len;
pvc_device *pvc;
int reptype = -1, error, no_ram;
if (skb->len < ((hdlc->state.fr.settings.lmi == LMI_ANSI)
? LMI_ANSI_LENGTH : LMI_LENGTH)) {
- printk(KERN_INFO "%s: Short LMI frame\n", hdlc_to_name(hdlc));
+ printk(KERN_INFO "%s: Short LMI frame\n", dev->name);
return 1;
}
if (skb->data[5] != (!hdlc->state.fr.settings.dce ?
LMI_STATUS : LMI_STATUS_ENQUIRY)) {
printk(KERN_INFO "%s: LMI msgtype=%x, Not LMI status %s\n",
- hdlc_to_name(hdlc), skb->data[2],
+ dev->name, skb->data[2],
hdlc->state.fr.settings.dce ? "enquiry" : "reply");
return 1;
}
((hdlc->state.fr.settings.lmi == LMI_CCITT)
? LMI_CCITT_REPTYPE : LMI_REPTYPE)) {
printk(KERN_INFO "%s: Not a report type=%x\n",
- hdlc_to_name(hdlc), skb->data[i]);
+ dev->name, skb->data[i]);
return 1;
}
i++;
((hdlc->state.fr.settings.lmi == LMI_CCITT)
? LMI_CCITT_ALIVE : LMI_ALIVE)) {
printk(KERN_INFO "%s: Unsupported status element=%x\n",
- hdlc_to_name(hdlc), skb->data[i]);
+ dev->name, skb->data[i]);
return 1;
}
i++;
if (hdlc->state.fr.settings.dce) {
if (reptype != LMI_FULLREP && reptype != LMI_INTEGRITY) {
printk(KERN_INFO "%s: Unsupported report type=%x\n",
- hdlc_to_name(hdlc), reptype);
+ dev->name, reptype);
return 1;
}
}
hdlc->state.fr.dce_changed = 0;
}
- fr_lmi_send(hdlc, reptype == LMI_FULLREP ? 1 : 0);
+ fr_lmi_send(dev, reptype == LMI_FULLREP ? 1 : 0);
return 0;
}
if (skb->data[i] != ((hdlc->state.fr.settings.lmi == LMI_CCITT)
? LMI_CCITT_PVCSTAT : LMI_PVCSTAT)) {
printk(KERN_WARNING "%s: Invalid PVCSTAT ID: %x\n",
- hdlc_to_name(hdlc), skb->data[i]);
+ dev->name, skb->data[i]);
return 1;
}
i++;
if (skb->data[i] != stat_len) {
printk(KERN_WARNING "%s: Invalid PVCSTAT length: %x\n",
- hdlc_to_name(hdlc), skb->data[i]);
+ dev->name, skb->data[i]);
return 1;
}
i++;
dlci = status_to_dlci(skb->data + i, &active, &new);
- pvc = add_pvc(hdlc, dlci);
+ pvc = add_pvc(dev, dlci);
if (!pvc && !no_ram) {
printk(KERN_WARNING
"%s: Memory squeeze on fr_lmi_recv()\n",
- hdlc_to_name(hdlc));
+ dev->name);
no_ram = 1;
}
static int fr_rx(struct sk_buff *skb)
{
- hdlc_device *hdlc = dev_to_hdlc(skb->dev);
+ struct net_device *ndev = skb->dev;
+ hdlc_device *hdlc = dev_to_hdlc(ndev);
fr_hdr *fh = (fr_hdr*)skb->data;
u8 *data = skb->data;
u16 dlci;
goto rx_error; /* LMI packet with no LMI? */
if (data[3] == LMI_PROTO) {
- if (fr_lmi_recv(hdlc, skb))
+ if (fr_lmi_recv(ndev, skb))
goto rx_error;
else {
/* No request pending */
}
printk(KERN_INFO "%s: Received non-LMI frame with LMI DLCI\n",
- hdlc_to_name(hdlc));
+ ndev->name);
goto rx_error;
}
if (!pvc) {
#ifdef DEBUG_PKT
printk(KERN_INFO "%s: No PVC for received frame's DLCI %d\n",
- hdlc_to_name(hdlc), dlci);
+ ndev->name, dlci);
#endif
dev_kfree_skb_any(skb);
return NET_RX_DROP;
if (pvc->state.fecn != fh->fecn) {
#ifdef DEBUG_ECN
- printk(KERN_DEBUG "%s: DLCI %d FECN O%s\n", hdlc_to_name(pvc),
+ printk(KERN_DEBUG "%s: DLCI %d FECN O%s\n", ndev->name,
dlci, fh->fecn ? "N" : "FF");
#endif
pvc->state.fecn ^= 1;
if (pvc->state.becn != fh->becn) {
#ifdef DEBUG_ECN
- printk(KERN_DEBUG "%s: DLCI %d BECN O%s\n", hdlc_to_name(pvc),
+ printk(KERN_DEBUG "%s: DLCI %d BECN O%s\n", ndev->name,
dlci, fh->becn ? "N" : "FF");
#endif
pvc->state.becn ^= 1;
default:
printk(KERN_INFO "%s: Unsupported protocol, OUI=%x "
- "PID=%x\n", hdlc_to_name(hdlc), oui, pid);
+ "PID=%x\n", ndev->name, oui, pid);
dev_kfree_skb_any(skb);
return NET_RX_DROP;
}
} else {
printk(KERN_INFO "%s: Unsupported protocol, NLPID=%x "
- "length = %i\n", hdlc_to_name(hdlc), data[3], skb->len);
+ "length = %i\n", ndev->name, data[3], skb->len);
dev_kfree_skb_any(skb);
return NET_RX_DROP;
}
-static void fr_start(hdlc_device *hdlc)
+static void fr_start(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
#ifdef DEBUG_LINK
printk(KERN_DEBUG "fr_start\n");
#endif
if (hdlc->state.fr.settings.lmi != LMI_NONE) {
- if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
hdlc->state.fr.last_poll = 0;
hdlc->state.fr.reliable = 0;
hdlc->state.fr.dce_changed = 1;
/* First poll after 1 s */
hdlc->state.fr.timer.expires = jiffies + HZ;
hdlc->state.fr.timer.function = fr_timer;
- hdlc->state.fr.timer.data = (unsigned long)hdlc;
+ hdlc->state.fr.timer.data = (unsigned long)dev;
add_timer(&hdlc->state.fr.timer);
} else
- fr_set_link_state(1, hdlc);
+ fr_set_link_state(1, dev);
}
-static void fr_stop(hdlc_device *hdlc)
+static void fr_stop(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
#ifdef DEBUG_LINK
printk(KERN_DEBUG "fr_stop\n");
#endif
if (hdlc->state.fr.settings.lmi != LMI_NONE)
del_timer_sync(&hdlc->state.fr.timer);
- fr_set_link_state(0, hdlc);
+ fr_set_link_state(0, dev);
}
-static void fr_close(hdlc_device *hdlc)
+static void fr_close(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
pvc_device *pvc = hdlc->state.fr.first_pvc;
while (pvc) { /* Shutdown all PVCs for this FRAD */
}
}
+static void dlci_setup(struct net_device *dev)
+{
+ dev->type = ARPHRD_DLCI;
+ dev->flags = IFF_POINTOPOINT;
+ dev->hard_header_len = 10;
+ dev->addr_len = 2;
+}
-
-static int fr_add_pvc(hdlc_device *hdlc, unsigned int dlci, int type)
+static int fr_add_pvc(struct net_device *master, unsigned int dlci, int type)
{
+ hdlc_device *hdlc = dev_to_hdlc(master);
pvc_device *pvc = NULL;
struct net_device *dev;
int result, used;
if (type == ARPHRD_ETHER)
prefix = "pvceth%d";
- if ((pvc = add_pvc(hdlc, dlci)) == NULL) {
+ if ((pvc = add_pvc(master, dlci)) == NULL) {
printk(KERN_WARNING "%s: Memory squeeze on fr_add_pvc()\n",
- hdlc_to_name(hdlc));
+ master->name);
return -ENOBUFS;
}
used = pvc_is_used(pvc);
- dev = kmalloc(sizeof(struct net_device) +
- sizeof(struct net_device_stats), GFP_KERNEL);
+ if (type == ARPHRD_ETHER)
+ dev = alloc_netdev(sizeof(struct net_device_stats),
+ "pvceth%d", ether_setup);
+ else
+ dev = alloc_netdev(sizeof(struct net_device_stats),
+ "pvc%d", dlci_setup);
+
if (!dev) {
printk(KERN_WARNING "%s: Memory squeeze on fr_pvc()\n",
- hdlc_to_name(hdlc));
+ master->name);
delete_unused_pvcs(hdlc);
return -ENOBUFS;
}
- memset(dev, 0, sizeof(struct net_device) +
- sizeof(struct net_device_stats));
if (type == ARPHRD_ETHER) {
- ether_setup(dev);
memcpy(dev->dev_addr, "\x00\x01", 2);
get_random_bytes(dev->dev_addr + 2, ETH_ALEN - 2);
} else {
- dev->type = ARPHRD_DLCI;
- dev->flags = IFF_POINTOPOINT;
- dev->hard_header_len = 10;
- dev->addr_len = 2;
*(u16*)dev->dev_addr = htons(dlci);
dlci_to_q922(dev->broadcast, dlci);
}
dev->tx_queue_len = 0;
dev->priv = pvc;
- result = dev_alloc_name(dev, prefix);
+ result = dev_alloc_name(dev, dev->name);
if (result < 0) {
- kfree(dev);
+ free_netdev(dev);
delete_unused_pvcs(hdlc);
return result;
}
if (register_netdevice(dev) != 0) {
- kfree(dev);
+ free_netdev(dev);
delete_unused_pvcs(hdlc);
return -EIO;
}
if (dev->flags & IFF_UP)
return -EBUSY; /* PVC in use */
- unregister_netdevice(dev); /* the destructor will kfree(dev) */
+ unregister_netdevice(dev); /* the destructor will free_netdev(dev) */
*get_dev_p(pvc, type) = NULL;
if (!pvc_is_used(pvc)) {
while (pvc) {
pvc_device *next = pvc->next;
- if (pvc->main) /* the destructor will kfree(main + ether) */
+ /* destructors will free_netdev() main and ether */
+ if (pvc->main)
unregister_netdevice(pvc->main);
if (pvc->ether)
-int hdlc_fr_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_fr_ioctl(struct net_device *dev, struct ifreq *ifr)
{
fr_proto *fr_s = ifr->ifr_settings.ifs_ifsu.fr;
const size_t size = sizeof(fr_proto);
fr_proto new_settings;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
fr_proto_pvc pvc;
int result;
new_settings.dce != 1))
return -EINVAL;
- result=hdlc->attach(hdlc, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
+ result=hdlc->attach(dev, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
if (result)
return result;
if (ifr->ifr_settings.type == IF_PROTO_FR_ADD_PVC ||
ifr->ifr_settings.type == IF_PROTO_FR_ADD_ETH_PVC)
- return fr_add_pvc(hdlc, pvc.dlci, result);
+ return fr_add_pvc(dev, pvc.dlci, result);
else
return fr_del_pvc(hdlc, pvc.dlci, result);
}
static struct net_device_stats *hdlc_get_stats(struct net_device *dev)
{
- return &dev_to_hdlc(dev)->stats;
+ return hdlc_stats(dev);
}
-void hdlc_set_carrier(int on, hdlc_device *hdlc)
+void hdlc_set_carrier(int on, struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
on = on ? 1 : 0;
#ifdef DEBUG_LINK
if (hdlc->carrier == on)
goto carrier_exit; /* no change in DCD line level */
- printk(KERN_INFO "%s: carrier %s\n", hdlc_to_name(hdlc),
+ printk(KERN_INFO "%s: carrier %s\n", dev->name,
on ? "ON" : "off");
hdlc->carrier = on;
if (hdlc->carrier) {
if (hdlc->proto.start)
- hdlc->proto.start(hdlc);
- else if (!netif_carrier_ok(&hdlc->netdev))
- netif_carrier_on(&hdlc->netdev);
+ hdlc->proto.start(dev);
+ else if (!netif_carrier_ok(dev))
+ netif_carrier_on(dev);
} else { /* no carrier */
if (hdlc->proto.stop)
- hdlc->proto.stop(hdlc);
- else if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ hdlc->proto.stop(dev);
+ else if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
}
carrier_exit:
/* Must be called by hardware driver when HDLC device is being opened */
-int hdlc_open(hdlc_device *hdlc)
+int hdlc_open(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
#ifdef DEBUG_LINK
printk(KERN_DEBUG "hdlc_open carrier %i open %i\n",
hdlc->carrier, hdlc->open);
return -ENOSYS; /* no protocol attached */
if (hdlc->proto.open) {
- int result = hdlc->proto.open(hdlc);
+ int result = hdlc->proto.open(dev);
if (result)
return result;
}
if (hdlc->carrier) {
if (hdlc->proto.start)
- hdlc->proto.start(hdlc);
- else if (!netif_carrier_ok(&hdlc->netdev))
- netif_carrier_on(&hdlc->netdev);
+ hdlc->proto.start(dev);
+ else if (!netif_carrier_ok(dev))
+ netif_carrier_on(dev);
- } else if (netif_carrier_ok(&hdlc->netdev))
- netif_carrier_off(&hdlc->netdev);
+ } else if (netif_carrier_ok(dev))
+ netif_carrier_off(dev);
hdlc->open = 1;
/* Must be called by hardware driver when HDLC device is being closed */
-void hdlc_close(hdlc_device *hdlc)
+void hdlc_close(struct net_device *dev)
{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
#ifdef DEBUG_LINK
printk(KERN_DEBUG "hdlc_close carrier %i open %i\n",
hdlc->carrier, hdlc->open);
hdlc->open = 0;
if (hdlc->carrier && hdlc->proto.stop)
- hdlc->proto.stop(hdlc);
+ hdlc->proto.stop(dev);
spin_unlock_irq(&hdlc->state_lock);
if (hdlc->proto.close)
- hdlc->proto.close(hdlc);
+ hdlc->proto.close(dev);
}
#ifndef CONFIG_HDLC_RAW
-#define hdlc_raw_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_raw_ioctl(dev, ifr) -ENOSYS
#endif
#ifndef CONFIG_HDLC_RAW_ETH
-#define hdlc_raw_eth_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_raw_eth_ioctl(dev, ifr) -ENOSYS
#endif
#ifndef CONFIG_HDLC_PPP
-#define hdlc_ppp_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_ppp_ioctl(dev, ifr) -ENOSYS
#endif
#ifndef CONFIG_HDLC_CISCO
-#define hdlc_cisco_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_cisco_ioctl(dev, ifr) -ENOSYS
#endif
#ifndef CONFIG_HDLC_FR
-#define hdlc_fr_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_fr_ioctl(dev, ifr) -ENOSYS
#endif
#ifndef CONFIG_HDLC_X25
-#define hdlc_x25_ioctl(hdlc, ifr) -ENOSYS
+#define hdlc_x25_ioctl(dev, ifr) -ENOSYS
#endif
}
switch(proto) {
- case IF_PROTO_HDLC: return hdlc_raw_ioctl(hdlc, ifr);
- case IF_PROTO_HDLC_ETH: return hdlc_raw_eth_ioctl(hdlc, ifr);
- case IF_PROTO_PPP: return hdlc_ppp_ioctl(hdlc, ifr);
- case IF_PROTO_CISCO: return hdlc_cisco_ioctl(hdlc, ifr);
- case IF_PROTO_FR: return hdlc_fr_ioctl(hdlc, ifr);
- case IF_PROTO_X25: return hdlc_x25_ioctl(hdlc, ifr);
+ case IF_PROTO_HDLC: return hdlc_raw_ioctl(dev, ifr);
+ case IF_PROTO_HDLC_ETH: return hdlc_raw_eth_ioctl(dev, ifr);
+ case IF_PROTO_PPP: return hdlc_ppp_ioctl(dev, ifr);
+ case IF_PROTO_CISCO: return hdlc_cisco_ioctl(dev, ifr);
+ case IF_PROTO_FR: return hdlc_fr_ioctl(dev, ifr);
+ case IF_PROTO_X25: return hdlc_x25_ioctl(dev, ifr);
default: return -EINVAL;
}
}
+static void hdlc_setup(struct net_device *dev)
+{
+ hdlc_device *hdlc = dev_to_hdlc(dev);
+
+ dev->get_stats = hdlc_get_stats;
+ dev->change_mtu = hdlc_change_mtu;
+ dev->mtu = HDLC_MAX_MTU;
+
+ dev->type = ARPHRD_RAWHDLC;
+ dev->hard_header_len = 16;
+
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP;
+
+ hdlc->proto.id = -1;
+ hdlc->proto.detach = NULL;
+ hdlc->carrier = 1;
+ hdlc->open = 0;
+ spin_lock_init(&hdlc->state_lock);
+}
+struct net_device *alloc_hdlcdev(void *priv)
+{
+ struct net_device *dev;
+ dev = alloc_netdev(sizeof(hdlc_device), "hdlc%d", hdlc_setup);
+ if (dev)
+ dev_to_hdlc(dev)->priv = priv;
+ return dev;
+}
-int register_hdlc_device(hdlc_device *hdlc)
+int register_hdlc_device(struct net_device *dev)
{
int result;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
dev->get_stats = hdlc_get_stats;
dev->change_mtu = hdlc_change_mtu;
-void unregister_hdlc_device(hdlc_device *hdlc)
+void unregister_hdlc_device(struct net_device *dev)
{
rtnl_lock();
- hdlc_proto_detach(hdlc);
- unregister_netdevice(hdlc_to_dev(hdlc));
+ hdlc_proto_detach(dev_to_hdlc(dev));
+ unregister_netdevice(dev);
rtnl_unlock();
}
EXPORT_SYMBOL(hdlc_close);
EXPORT_SYMBOL(hdlc_set_carrier);
EXPORT_SYMBOL(hdlc_ioctl);
+EXPORT_SYMBOL(alloc_hdlcdev);
EXPORT_SYMBOL(register_hdlc_device);
EXPORT_SYMBOL(unregister_hdlc_device);
#include <linux/hdlc.h>
-static int ppp_open(hdlc_device *hdlc)
+static int ppp_open(struct net_device *dev)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
void *old_ioctl;
int result;
-static void ppp_close(hdlc_device *hdlc)
+static void ppp_close(struct net_device *dev)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
sppp_close(dev);
sppp_detach(dev);
-int hdlc_ppp_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_ppp_ioctl(struct net_device *dev, struct ifreq *ifr)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
switch (ifr->ifr_settings.type) {
/* no settable parameters */
- result=hdlc->attach(hdlc, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
+ result=hdlc->attach(dev, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
if (result)
return result;
-int hdlc_raw_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_raw_ioctl(struct net_device *dev, struct ifreq *ifr)
{
raw_hdlc_proto *raw_s = ifr->ifr_settings.ifs_ifsu.raw_hdlc;
const size_t size = sizeof(raw_hdlc_proto);
raw_hdlc_proto new_settings;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
switch (ifr->ifr_settings.type) {
if (new_settings.parity == PARITY_DEFAULT)
new_settings.parity = PARITY_CRC16_PR1_CCITT;
- result = hdlc->attach(hdlc, new_settings.encoding,
+ result = hdlc->attach(dev, new_settings.encoding,
new_settings.parity);
if (result)
return result;
int len = skb->len;
if (skb_tailroom(skb) < pad)
if (pskb_expand_head(skb, 0, pad, GFP_ATOMIC)) {
- dev_to_hdlc(dev)->stats.tx_dropped++;
+ hdlc_stats(dev)->tx_dropped++;
dev_kfree_skb(skb);
return 0;
}
}
-int hdlc_raw_eth_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_raw_eth_ioctl(struct net_device *dev, struct ifreq *ifr)
{
raw_hdlc_proto *raw_s = ifr->ifr_settings.ifs_ifsu.raw_hdlc;
const size_t size = sizeof(raw_hdlc_proto);
raw_hdlc_proto new_settings;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
void *old_ch_mtu;
int old_qlen;
if (new_settings.parity == PARITY_DEFAULT)
new_settings.parity = PARITY_CRC16_PR1_CCITT;
- result = hdlc->attach(hdlc, new_settings.encoding,
+ result = hdlc->attach(dev, new_settings.encoding,
new_settings.parity);
if (result)
return result;
/* These functions are callbacks called by LAPB layer */
-static void x25_connect_disconnect(void *token, int reason, int code)
+static void x25_connect_disconnect(struct net_device *dev, int reason, int code)
{
- hdlc_device *hdlc = token;
struct sk_buff *skb;
unsigned char *ptr;
if ((skb = dev_alloc_skb(1)) == NULL) {
- printk(KERN_ERR "%s: out of memory\n", hdlc_to_name(hdlc));
+ printk(KERN_ERR "%s: out of memory\n", dev->name);
return;
}
ptr = skb_put(skb, 1);
*ptr = code;
- skb->dev = hdlc_to_dev(hdlc);
+ skb->dev = dev;
skb->protocol = htons(ETH_P_X25);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
-static void x25_connected(void *token, int reason)
+static void x25_connected(struct net_device *dev, int reason)
{
- x25_connect_disconnect(token, reason, 1);
+ x25_connect_disconnect(dev, reason, 1);
}
-static void x25_disconnected(void *token, int reason)
+static void x25_disconnected(struct net_device *dev, int reason)
{
- x25_connect_disconnect(token, reason, 2);
+ x25_connect_disconnect(dev, reason, 2);
}
-static int x25_data_indication(void *token, struct sk_buff *skb)
+static int x25_data_indication(struct net_device *dev, struct sk_buff *skb)
{
- hdlc_device *hdlc = token;
unsigned char *ptr;
skb_push(skb, 1);
ptr = skb->data;
*ptr = 0;
- skb->dev = hdlc_to_dev(hdlc);
+ skb->dev = dev;
skb->protocol = htons(ETH_P_X25);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
-static void x25_data_transmit(void *token, struct sk_buff *skb)
+static void x25_data_transmit(struct net_device *dev, struct sk_buff *skb)
{
- hdlc_device *hdlc = token;
- hdlc->xmit(skb, hdlc_to_dev(hdlc)); /* Ignore return value :-( */
+ hdlc_device *hdlc = dev_to_hdlc(dev);
+ hdlc->xmit(skb, dev); /* Ignore return value :-( */
}
static int x25_xmit(struct sk_buff *skb, struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
switch (skb->data[0]) {
case 0: /* Data to be transmitted */
skb_pull(skb, 1);
- if ((result = lapb_data_request(hdlc, skb)) != LAPB_OK)
+ if ((result = lapb_data_request(dev, skb)) != LAPB_OK)
dev_kfree_skb(skb);
return 0;
case 1:
- if ((result = lapb_connect_request(hdlc))!= LAPB_OK) {
+ if ((result = lapb_connect_request(dev))!= LAPB_OK) {
if (result == LAPB_CONNECTED)
/* Send connect confirm. msg to level 3 */
- x25_connected(hdlc, 0);
+ x25_connected(dev, 0);
else
printk(KERN_ERR "%s: LAPB connect request "
"failed, error code = %i\n",
- hdlc_to_name(hdlc), result);
+ dev->name, result);
}
break;
case 2:
- if ((result = lapb_disconnect_request(hdlc)) != LAPB_OK) {
+ if ((result = lapb_disconnect_request(dev)) != LAPB_OK) {
if (result == LAPB_NOTCONNECTED)
/* Send disconnect confirm. msg to level 3 */
- x25_disconnected(hdlc, 0);
+ x25_disconnected(dev, 0);
else
printk(KERN_ERR "%s: LAPB disconnect request "
"failed, error code = %i\n",
- hdlc_to_name(hdlc), result);
+ dev->name, result);
}
break;
-static int x25_open(hdlc_device *hdlc)
+static int x25_open(struct net_device *dev)
{
struct lapb_register_struct cb;
int result;
cb.data_indication = x25_data_indication;
cb.data_transmit = x25_data_transmit;
- result = lapb_register(hdlc, &cb);
+ result = lapb_register(dev, &cb);
if (result != LAPB_OK)
return result;
return 0;
-static void x25_close(hdlc_device *hdlc)
+static void x25_close(struct net_device *dev)
{
- lapb_unregister(hdlc);
+ lapb_unregister(dev);
}
return NET_RX_DROP;
}
- if (lapb_data_received(hdlc, skb) == LAPB_OK)
+ if (lapb_data_received(skb->dev, skb) == LAPB_OK)
return NET_RX_SUCCESS;
hdlc->stats.rx_errors++;
-int hdlc_x25_ioctl(hdlc_device *hdlc, struct ifreq *ifr)
+int hdlc_x25_ioctl(struct net_device *dev, struct ifreq *ifr)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
int result;
switch (ifr->ifr_settings.type) {
if(dev->flags & IFF_UP)
return -EBUSY;
- result=hdlc->attach(hdlc, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
+ result=hdlc->attach(dev, ENCODING_NRZ,PARITY_CRC16_PR1_CCITT);
if (result)
return result;
*/
netif_start_queue(d);
- MOD_INC_USE_COUNT;
return 0;
}
z8530_sync_txdma_close(d, &sv11->sync.chanA);
break;
}
- MOD_DEC_USE_COUNT;
return 0;
}
return 0;
}
+static void sv11_setup(struct net_device *dev)
+{
+ dev->open = hostess_open;
+ dev->stop = hostess_close;
+ dev->hard_start_xmit = hostess_queue_xmit;
+ dev->get_stats = hostess_get_stats;
+ dev->do_ioctl = hostess_ioctl;
+ dev->neigh_setup = hostess_neigh_setup_dev;
+}
+
/*
* Description block for a Comtrol Hostess SV11 card
*/
memset(sv, 0, sizeof(*sv));
sv->if_ptr=&sv->netdev;
- sv->netdev.dev=(struct net_device *)kmalloc(sizeof(struct net_device), GFP_KERNEL);
+ sv->netdev.dev = alloc_netdev(0, "hdlc%d", sv11_setup);
if(!sv->netdev.dev)
goto fail2;
+ SET_MODULE_OWNER(sv->netdev.dev);
+
dev=&sv->sync;
/*
d->base_addr = iobase;
d->irq = irq;
d->priv = sv;
- d->init = NULL;
-
- d->open = hostess_open;
- d->stop = hostess_close;
- d->hard_start_xmit = hostess_queue_xmit;
- d->get_stats = hostess_get_stats;
- d->set_multicast_list = NULL;
- d->do_ioctl = hostess_ioctl;
- d->neigh_setup = hostess_neigh_setup_dev;
- d->set_mac_address = NULL;
if(register_netdev(d))
{
printk(KERN_ERR "%s: unable to register device.\n",
d->name);
- goto fail;
- }
+ sppp_detach(d);
+ goto dmafail2;
+ }
z8530_describe(dev, "I/O", iobase);
dev->active=1;
fail:
free_irq(irq, dev);
fail1:
- kfree(sv->netdev.dev);
+ free_netdev(sv->netdev.dev);
fail2:
kfree(sv);
fail3:
static void sv11_shutdown(struct sv11_device *dev)
{
sppp_detach(dev->netdev.dev);
- z8530_shutdown(&dev->sync);
unregister_netdev(dev->netdev.dev);
+ z8530_shutdown(&dev->sync);
free_irq(dev->sync.irq, dev);
if(dma)
{
free_dma(dev->sync.chanA.txdma);
}
release_region(dev->sync.chanA.ctrlio-1, 8);
+ free_netdev(dev->netdev.dev);
+ kfree(dev);
}
#ifdef MODULE
skb_pull(skb, 2); /* Remove the length bytes */
skb_trim(skb, len); /* Set the length of the data */
- if ((err = lapb_data_received(lapbeth, skb)) != LAPB_OK) {
+ if ((err = lapb_data_received(lapbeth->axdev, skb)) != LAPB_OK) {
printk(KERN_DEBUG "lapbether: lapb_data_received err - %d\n", err);
goto drop_unlock;
}
return 0;
}
-static int lapbeth_data_indication(void *token, struct sk_buff *skb)
+static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)token;
unsigned char *ptr;
skb_push(skb, 1);
ptr = skb->data;
*ptr = 0x00;
- skb->dev = lapbeth->axdev;
+ skb->dev = dev;
skb->protocol = htons(ETH_P_X25);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
*/
static int lapbeth_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)dev->priv;
int err = -ENODEV;
/*
err = 0;
break;
case 0x01:
- if ((err = lapb_connect_request(lapbeth)) != LAPB_OK)
+ if ((err = lapb_connect_request(dev)) != LAPB_OK)
printk(KERN_ERR "lapbeth: lapb_connect_request "
"error: %d\n", err);
goto drop_ok;
case 0x02:
- if ((err = lapb_disconnect_request(lapbeth)) != LAPB_OK)
+ if ((err = lapb_disconnect_request(dev)) != LAPB_OK)
printk(KERN_ERR "lapbeth: lapb_disconnect_request "
"err: %d\n", err);
/* Fall thru */
skb_pull(skb, 1);
- if ((err = lapb_data_request(lapbeth, skb)) != LAPB_OK) {
+ if ((err = lapb_data_request(dev, skb)) != LAPB_OK) {
printk(KERN_ERR "lapbeth: lapb_data_request error - %d\n", err);
err = -ENOMEM;
goto drop;
goto out;
}
-static void lapbeth_data_transmit(void *token, struct sk_buff *skb)
+static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)token;
+ struct lapbethdev *lapbeth = ndev->priv;
unsigned char *ptr;
struct net_device *dev;
int size = skb->len;
dev_queue_xmit(skb);
}
-static void lapbeth_connected(void *token, int reason)
+static void lapbeth_connected(struct net_device *dev, int reason)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)token;
unsigned char *ptr;
struct sk_buff *skb = dev_alloc_skb(1);
ptr = skb_put(skb, 1);
*ptr = 0x01;
- skb->dev = lapbeth->axdev;
+ skb->dev = dev;
skb->protocol = htons(ETH_P_X25);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
netif_rx(skb);
}
-static void lapbeth_disconnected(void *token, int reason)
+static void lapbeth_disconnected(struct net_device *dev, int reason)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)token;
unsigned char *ptr;
struct sk_buff *skb = dev_alloc_skb(1);
ptr = skb_put(skb, 1);
*ptr = 0x02;
- skb->dev = lapbeth->axdev;
+ skb->dev = dev;
skb->protocol = htons(ETH_P_X25);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
*/
static int lapbeth_open(struct net_device *dev)
{
- struct lapbethdev *lapbeth;
int err;
- lapbeth = (struct lapbethdev *)dev->priv;
- if ((err = lapb_register(lapbeth, &lapbeth_callbacks)) != LAPB_OK) {
+ if ((err = lapb_register(dev, &lapbeth_callbacks)) != LAPB_OK) {
printk(KERN_ERR "lapbeth: lapb_register error - %d\n", err);
return -ENODEV;
}
static int lapbeth_close(struct net_device *dev)
{
- struct lapbethdev *lapbeth = (struct lapbethdev *)dev->priv;
int err;
netif_stop_queue(dev);
- if ((err = lapb_unregister(lapbeth)) != LAPB_OK)
+ if ((err = lapb_unregister(dev)) != LAPB_OK)
printk(KERN_ERR "lapbeth: lapb_unregister error - %d\n", err);
return 0;
return rc;
fail:
dev_put(dev);
+ free_netdev(ndev);
kfree(lapbeth);
goto out;
}
typedef struct port_s {
- hdlc_device hdlc; /* HDLC device struct - must be first */
+ struct net_device *dev;
struct card_s *card;
spinlock_t lock; /* TX lock */
sync_serial_settings settings;
static int n2_open(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
int io = port->card->io;
u8 mcr = inb(io + N2_MCR) | (port->phy_node ? TX422_PORT1:TX422_PORT0);
int result;
- result = hdlc_open(hdlc);
+ result = hdlc_open(dev);
if (result)
return result;
outb(inb(io + N2_PCR) | PCR_ENWIN, io + N2_PCR); /* open window */
outb(inb(io + N2_PSR) | PSR_DMAEN, io + N2_PSR); /* enable dma */
- sca_open(hdlc);
+ sca_open(dev);
n2_set_iface(port);
return 0;
}
static int n2_close(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
int io = port->card->io;
u8 mcr = inb(io+N2_MCR) | (port->phy_node ? TX422_PORT1 : TX422_PORT0);
- sca_close(hdlc);
+ sca_close(dev);
mcr |= port->phy_node ? DTR_PORT1 : DTR_PORT0; /* set DTR OFF */
outb(mcr, io + N2_MCR);
- hdlc_close(hdlc);
+ hdlc_close(dev);
return 0;
}
{
const size_t size = sizeof(sync_serial_settings);
sync_serial_settings new_line, *line = ifr->ifr_settings.ifs_ifsu.sync;
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
#ifdef DEBUG_RINGS
if (cmd == SIOCDEVPRIVATE) {
- sca_dump_rings(hdlc);
+ sca_dump_rings(dev);
return 0;
}
#endif
int cnt;
for (cnt = 0; cnt < 2; cnt++)
- if (card->ports[cnt].card)
- unregister_hdlc_device(&card->ports[cnt].hdlc);
+ if (card->ports[cnt].card) {
+ struct net_device *dev = port_to_dev(&card->ports[cnt]);
+ unregister_hdlc_device(dev);
+ }
if (card->irq)
free_irq(card->irq, card);
if (card->io)
release_region(card->io, N2_IOPORTS);
+ if (card->ports[0].dev)
+ free_netdev(card->ports[0].dev);
+ if (card->ports[1].dev)
+ free_netdev(card->ports[1].dev);
kfree(card);
}
}
memset(card, 0, sizeof(card_t));
+ card->ports[0].dev = alloc_hdlcdev(&card->ports[0]);
+ card->ports[1].dev = alloc_hdlcdev(&card->ports[1]);
+ if (!card->ports[0].dev || !card->ports[1].dev) {
+ printk(KERN_ERR "n2: unable to allocate memory\n");
+ n2_destroy_card(card);
+ return -ENOMEM;
+ }
+
if (!request_region(io, N2_IOPORTS, devname)) {
printk(KERN_ERR "n2: I/O port region in use\n");
n2_destroy_card(card);
sca_init(card, 0);
for (cnt = 0; cnt < 2; cnt++) {
port_t *port = &card->ports[cnt];
- struct net_device *dev = hdlc_to_dev(&port->hdlc);
+ struct net_device *dev = port_to_dev(port);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
if ((cnt == 0 && !valid0) || (cnt == 1 && !valid1))
continue;
dev->do_ioctl = n2_ioctl;
dev->open = n2_open;
dev->stop = n2_close;
- port->hdlc.attach = sca_attach;
- port->hdlc.xmit = sca_xmit;
+ hdlc->attach = sca_attach;
+ hdlc->xmit = sca_xmit;
port->settings.clock_type = CLOCK_EXT;
+ port->card = card;
- if (register_hdlc_device(&port->hdlc)) {
+ if (register_hdlc_device(dev)) {
printk(KERN_WARNING "n2: unable to register hdlc "
"device\n");
+ port->card = NULL;
n2_destroy_card(card);
return -ENOBUFS;
}
- port->card = card;
sca_init_sync_port(port); /* Set up SCA memory */
printk(KERN_INFO "%s: RISCom/N2 node %d\n",
- hdlc_to_name(&port->hdlc), port->phy_node);
+ dev->name, port->phy_node);
}
*new_card = card;
uclong line_off;
#ifdef __KERNEL__
char name[16];
- hdlc_device *hdlc;
+ struct net_device *dev;
void *private;
struct sk_buff *tx_skb;
void tx_dma_stop(pc300_t *, int);
void rx_dma_stop(pc300_t *, int);
int cpc_queue_xmit(struct sk_buff *, struct net_device *);
-void cpc_net_rx(hdlc_device *);
+void cpc_net_rx(struct net_device *);
void cpc_sca_status(pc300_t *, int);
int cpc_change_mtu(struct net_device *, int);
int cpc_ioctl(struct net_device *, struct ifreq *, int);
static uclong detect_ram(pc300_t *);
static void plx_init(pc300_t *);
static void cpc_trace(struct net_device *, struct sk_buff *, char);
-static int cpc_attach(hdlc_device *, unsigned short, unsigned short);
+static int cpc_attach(struct net_device *, unsigned short, unsigned short);
#ifdef CONFIG_PC300_MLPPP
void cpc_tty_init(pc300dev_t * dev);
pc300dev_t *d = (pc300dev_t *) dev->priv;
pc300ch_t *chan = (pc300ch_t *) d->chan;
pc300_t *card = (pc300_t *) chan->card;
- struct net_device_stats *stats = &d->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
int ch = chan->channel;
uclong flags;
ucchar ilar;
pc300dev_t *d = (pc300dev_t *) dev->priv;
pc300ch_t *chan = (pc300ch_t *) d->chan;
pc300_t *card = (pc300_t *) chan->card;
- struct net_device_stats *stats = &d->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
int ch = chan->channel;
uclong flags;
#ifdef PC300_DEBUG_TX
return 0;
}
-void cpc_net_rx(hdlc_device * hdlc)
+void cpc_net_rx(struct net_device *dev)
{
- struct net_device *dev = hdlc_to_dev(hdlc);
pc300dev_t *d = (pc300dev_t *) dev->priv;
pc300ch_t *chan = (pc300ch_t *) d->chan;
pc300_t *card = (pc300_t *) chan->card;
- struct net_device_stats *stats = &d->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
int ch = chan->channel;
#ifdef PC300_DEBUG_RX
int i;
pc300_t *card = (pc300_t *)chan->card;
int ch = chan->channel;
volatile pcsca_bd_t * ptdescr;
- struct net_device_stats *stats = &dev->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev->dev);
/* Clean up descriptors from previous transmission */
ptdescr = (pcsca_bd_t *)(card->hw.rambase +
} else {
#endif
/* Tell the upper layer we are ready to transmit more packets */
- netif_wake_queue((struct net_device*)dev->hdlc);
+ netif_wake_queue(dev->dev);
#ifdef CONFIG_PC300_MLPPP
}
#endif
for (ch = 0; ch < card->hw.nchan; ch++) {
pc300ch_t *chan = &card->chan[ch];
pc300dev_t *d = &chan->d;
- hdlc_device *hdlc = d->hdlc;
- struct net_device *dev = hdlc_to_dev(hdlc);
+ struct net_device *dev = d->dev;
+ hdlc_device *hdlc = dev_to_hdlc(dev);
spin_lock(&card->card_lock);
if ((cpc_readb(scabase + DSR_RX(ch)) & DSR_DE)) {
rx_dma_stop(card, ch);
}
- cpc_net_rx(hdlc);
+ cpc_net_rx(dev);
/* Discard invalid frames */
hdlc->stats.rx_errors++;
hdlc->stats.rx_over_errors++;
/* verify if driver is TTY */
cpc_tty_receive(d);
} else {
- cpc_net_rx(hdlc);
+ cpc_net_rx(dev);
}
#else
- cpc_net_rx(hdlc);
+ cpc_net_rx(dev);
#endif
if (card->hw.type == PC300_TE) {
cpc_writeb(card->hw.falcbase +
static struct net_device_stats *cpc_get_stats(struct net_device *dev)
{
- pc300dev_t *d = (pc300dev_t *) dev->priv;
-
- if (d)
- return &d->hdlc->stats;
- else
- return NULL;
+ return hdlc_stats(dev);
}
static int clock_rate_calc(uclong rate, uclong clock, int *br_io)
return 0;
}
-static int cpc_attach(hdlc_device * hdlc, unsigned short encoding,
+static int cpc_attach(struct net_device *dev, unsigned short encoding,
unsigned short parity)
{
- struct net_device * dev = hdlc_to_dev(hdlc);
pc300dev_t *d = (pc300dev_t *)dev->priv;
pc300ch_t *chan = (pc300ch_t *)d->chan;
pc300_t *card = (pc300_t *)chan->card;
d->if_ptr = &hdlc->state.ppp.pppdev;
}
- result = hdlc_open(hdlc);
+ result = hdlc_open(dev);
if (hdlc->proto.id == IF_PROTO_PPP) {
dev->priv = d;
}
cpc_closech(d);
CPC_UNLOCK(card, flags);
- hdlc_close(hdlc);
+ hdlc_close(dev);
if (hdlc->proto.id == IF_PROTO_PPP) {
d->if_ptr = NULL;
}
d->line_on = 0;
d->line_off = 0;
- d->hdlc = (hdlc_device *) kmalloc(sizeof(hdlc_device), GFP_KERNEL);
- if (d->hdlc == NULL)
+ dev = alloc_hdlcdev(NULL);
+ if (dev == NULL)
continue;
- memset(d->hdlc, 0, sizeof(hdlc_device));
- hdlc = d->hdlc;
+ hdlc = dev_to_hdlc(dev);
hdlc->xmit = cpc_queue_xmit;
hdlc->attach = cpc_attach;
-
- dev = hdlc_to_dev(hdlc);
-
+ d->dev = dev;
dev->mem_start = card->hw.ramphys;
dev->mem_end = card->hw.ramphys + card->hw.ramsize - 1;
dev->irq = card->hw.irq;
dev->change_mtu = cpc_change_mtu;
dev->do_ioctl = cpc_ioctl;
- if (register_hdlc_device(hdlc) == 0) {
+ if (register_hdlc_device(dev) == 0) {
dev->priv = d; /* We need 'priv', hdlc doesn't */
printk("%s: Cyclades-PC300/", dev->name);
switch (card->hw.type) {
} else {
printk ("Dev%d on card(0x%08lx): unable to allocate i/f name.\n",
i + 1, card->hw.ramphys);
- *(dev->name) = 0;
- kfree(d->hdlc);
+ free_netdev(dev);
continue;
}
}
cpc_readw(card->hw.plxbase + card->hw.intctl_reg) & ~(0x0040));
for (i = 0; i < card->hw.nchan; i++) {
- unregister_hdlc_device(card->chan[i].d.hdlc);
+ unregister_hdlc_device(card->chan[i].d.dev);
}
iounmap((void *) card->hw.plxbase);
iounmap((void *) card->hw.scabase);
iounmap((void *) card->hw.falcbase);
release_mem_region(card->hw.falcphys, card->hw.falcsize);
}
+ for (i = 0; i < card->hw.nchan; i++)
+ if (card->chan[i].d.dev)
+ free_netdev(card->chan[i].d.dev);
if (card->hw.irq)
free_irq(card->hw.irq, card);
kfree(card);
unsigned long flags;
CPC_TTY_DBG("%s-tty: Clear signal %x\n",
- ((struct net_device*)(pc300dev->hdlc))->name, signal);
+ pc300dev->dev->name, signal);
CPC_TTY_LOCK(card, flags);
cpc_writeb(card->hw.scabase + M_REG(CTL,ch),
cpc_readb(card->hw.scabase+M_REG(CTL,ch))& signal);
unsigned long flags;
CPC_TTY_DBG("%s-tty: Set signal %x\n",
- ((struct net_device*)(pc300dev->hdlc))->name, signal);
+ pc300dev->dev->name, signal);
CPC_TTY_LOCK(card, flags);
cpc_writeb(card->hw.scabase + M_REG(CTL,ch),
cpc_readb(card->hw.scabase+M_REG(CTL,ch))& ~signal);
st_cpc_tty_area * cpc_tty;
/* hdlcX - X=interface number */
- port = ((struct net_device*)(pc300dev->hdlc))->name[4] - '0';
+ port = pc300dev->dev->name[4] - '0';
if (port >= CPC_TTY_NPORTS) {
printk("%s-tty: invalid interface selected (0-%i): %i",
- ((struct net_device*)(pc300dev->hdlc))->name,
+ pc300dev->dev->name,
CPC_TTY_NPORTS-1,port);
return;
}
if (cpc_tty_cnt == 0) { /* first TTY connection -> register driver */
CPC_TTY_DBG("%s-tty: driver init, major:%i, minor range:%i=%i\n",
- ((struct net_device*)(pc300dev->hdlc))->name,
+ pc300dev->dev->name,
CPC_TTY_MAJOR, CPC_TTY_MINOR_START,
CPC_TTY_MINOR_START+CPC_TTY_NPORTS);
/* initialize tty driver struct */
/* register the TTY driver */
if (tty_register_driver(&serial_drv)) {
printk("%s-tty: Failed to register serial driver! ",
- ((struct net_device*)(pc300dev->hdlc))->name);
+ pc300dev->dev->name);
return;
}
if (cpc_tty->state != CPC_TTY_ST_IDLE) {
CPC_TTY_DBG("%s-tty: TTY port %i, already in use.\n",
- ((struct net_device*)(pc300dev->hdlc))->name,port);
+ pc300dev->dev->name, port);
return;
}
pc300dev->cpc_tty = (void *)cpc_tty;
- aux = strlen(((struct net_device*)(pc300dev->hdlc))->name);
- memcpy(cpc_tty->name,((struct net_device*)(pc300dev->hdlc))->name,aux);
+ aux = strlen(pc300dev->dev->name);
+ memcpy(cpc_tty->name, pc300dev->dev->name, aux);
memcpy(&cpc_tty->name[aux], "-tty", 5);
- cpc_open((struct net_device *)pc300dev->hdlc);
+ cpc_open(pc300dev->dev);
cpc_tty_signal_off(pc300dev, CTL_DTR);
CPC_TTY_DBG("%s: Initializing TTY Sync Driver, tty major#%d minor#%i\n",
(from_user)?"from user" : "from kernel",count);
pc300chan = (pc300ch_t *)((pc300dev_t*)cpc_tty->pc300dev)->chan;
- stats = &((pc300dev_t*)cpc_tty->pc300dev)->hdlc->stats;
+ stats = hdlc_stats(((pc300dev_t*)cpc_tty->pc300dev)->dev);
card = (pc300_t *) pc300chan->card;
ch = pc300chan->channel;
pc300_t *card = (pc300_t *)pc300chan->card;
int ch = pc300chan->channel;
volatile pcsca_bd_t * ptdescr;
- struct net_device_stats *stats = &pc300dev->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(pc300dev->dev);
int rx_len, rx_aux;
volatile unsigned char status;
unsigned short first_bd = pc300chan->rx_first_bd;
pc300ch_t *chan = (pc300ch_t *)dev->chan;
pc300_t *card = (pc300_t *)chan->card;
int ch = chan->channel;
- struct net_device_stats *stats = &dev->hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev->dev);
unsigned long flags;
volatile pcsca_bd_t * ptdescr;
int i, nchar;
if ((skb = dev_alloc_skb(10 + len)) == NULL) {
/* out of memory */
- CPC_TTY_DBG("%s: tty_trace - out of memory\n",
- ((struct net_device *)(dev->hdlc))->name);
+ CPC_TTY_DBG("%s: tty_trace - out of memory\n", dev->dev->name);
return;
}
skb_put (skb, 10 + len);
- skb->dev = (struct net_device *) dev->hdlc;
+ skb->dev = dev->dev;
skb->protocol = htons(ETH_P_CUST);
skb->mac.raw = skb->data;
skb->pkt_type = PACKET_HOST;
skb->len = 10 + len;
- memcpy(skb->data,((struct net_device *)(dev->hdlc))->name,5);
+ memcpy(skb->data,dev->dev->name,5);
skb->data[5] = '[';
skb->data[6] = rxtx;
skb->data[7] = ']';
int res;
if ((cpc_tty= (st_cpc_tty_area *) pc300dev->cpc_tty) == 0) {
- CPC_TTY_DBG("%s: interface is not TTY\n",
- ((struct net_device *)(pc300dev->hdlc))->name);
+ CPC_TTY_DBG("%s: interface is not TTY\n", pc300dev->dev->name);
return;
}
CPC_TTY_DBG("%s: cpc_tty_unregister_service", cpc_tty->name);
if (cpc_tty->pc300dev != pc300dev) {
CPC_TTY_DBG("%s: invalid tty ptr=%s\n",
- ((struct net_device *)(pc300dev->hdlc))->name, cpc_tty->name);
+ pc300dev->dev->name, cpc_tty->name);
return;
}
typedef struct port_s {
- hdlc_device hdlc; /* HDLC device struct - must be first */
+ struct net_device *dev;
struct card_s *card;
spinlock_t lock; /* TX lock */
sync_serial_settings settings;
static int pci200_open(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
- int result = hdlc_open(hdlc);
+ int result = hdlc_open(dev);
if (result)
return result;
- sca_open(hdlc);
+ sca_open(dev);
pci200_set_iface(port);
sca_flush(port_to_card(port));
return 0;
static int pci200_close(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- sca_close(hdlc);
+ sca_close(dev);
sca_flush(port_to_card(dev_to_port(dev)));
- hdlc_close(hdlc);
+ hdlc_close(dev);
return 0;
}
{
const size_t size = sizeof(sync_serial_settings);
sync_serial_settings new_line, *line = ifr->ifr_settings.ifs_ifsu.sync;
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
#ifdef DEBUG_RINGS
if (cmd == SIOCDEVPRIVATE) {
- sca_dump_rings(hdlc);
+ sca_dump_rings(dev);
return 0;
}
#endif
card_t *card = pci_get_drvdata(pdev);
for(i = 0; i < 2; i++)
- if (card->ports[i].card)
- unregister_hdlc_device(&card->ports[i].hdlc);
+ if (card->ports[i].card) {
+ struct net_device *dev = port_to_dev(&card->ports[i]);
+ unregister_hdlc_device(dev);
+ }
if (card->irq)
free_irq(card->irq, card);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
+ if (card->ports[0].dev)
+ free_netdev(card->ports[0].dev);
+ if (card->ports[1].dev)
+ free_netdev(card->ports[1].dev);
kfree(card);
}
}
memset(card, 0, sizeof(card_t));
pci_set_drvdata(pdev, card);
+ card->ports[0].dev = alloc_hdlcdev(&card->ports[0]);
+ card->ports[1].dev = alloc_hdlcdev(&card->ports[1]);
+ if (!card->ports[0].dev || !card->ports[1].dev) {
+ printk(KERN_ERR "pci200syn: unable to allocate memory\n");
+ pci200_pci_remove_one(pdev);
+ return -ENOMEM;
+ }
pci_read_config_byte(pdev, PCI_REVISION_ID, &rev_id);
if (pci_resource_len(pdev, 0) != PCI200SYN_PLX_SIZE ||
for(i = 0; i < 2; i++) {
port_t *port = &card->ports[i];
- struct net_device *dev = hdlc_to_dev(&port->hdlc);
+ struct net_device *dev = port_to_dev(port);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
port->phy_node = i;
spin_lock_init(&port->lock);
dev->do_ioctl = pci200_ioctl;
dev->open = pci200_open;
dev->stop = pci200_close;
- port->hdlc.attach = sca_attach;
- port->hdlc.xmit = sca_xmit;
+ hdlc->attach = sca_attach;
+ hdlc->xmit = sca_xmit;
port->settings.clock_type = CLOCK_EXT;
- if(register_hdlc_device(&port->hdlc)) {
+ port->card = card;
+ if(register_hdlc_device(dev)) {
printk(KERN_ERR "pci200syn: unable to register hdlc "
"device\n");
+ port->card = NULL;
pci200_pci_remove_one(pdev);
return -ENOBUFS;
}
- port->card = card;
sca_init_sync_port(port); /* Set up SCA memory */
printk(KERN_INFO "%s: PCI200SYN node %d\n",
- hdlc_to_name(&port->hdlc), port->phy_node);
+ dev->name, port->phy_node);
}
sca_flush(card);
static void __init sbni_devsetup(struct net_device *dev)
{
ether_setup( dev );
- dev->init = &sbni_init;
dev->open = &sbni_open;
dev->stop = &sbni_close;
dev->hard_start_xmit = &sbni_start_xmit;
sprintf(dev->name, "sbni%d", unit);
netdev_boot_setup_check(dev);
+ err = sbni_init(dev);
+ if (err) {
+ free_netdev(dev);
+ return err;
+ }
+
err = register_netdev(dev);
if (err) {
+ release_region( dev->base_addr, SBNI_IO_EXTENT );
free_netdev(dev);
return err;
}
/* Avoid already found cards from previous calls */
if( !request_region( pci_ioaddr, SBNI_IO_EXTENT, dev->name ) ) {
pci_read_config_word( pdev, PCI_SUBSYSTEM_ID, &subsys );
- if( subsys != 2 || /* Dual adapter is present */
- check_region( pci_ioaddr += 4, SBNI_IO_EXTENT ) )
+
+ if (subsys != 2)
+ continue;
+
+ /* Dual adapter is present */
+ if (!request_region(pci_ioaddr += 4, SBNI_IO_EXTENT,
+ dev->name ) )
continue;
}
pci_irq_line );
/* avoiding re-enable dual adapters */
- if( (pci_ioaddr & 7) == 0 && pci_enable_device( pdev ) )
+ if( (pci_ioaddr & 7) == 0 && pci_enable_device( pdev ) ) {
+ release_region( pci_ioaddr, SBNI_IO_EXTENT );
return -EIO;
+ }
if( sbni_probe1( dev, pci_ioaddr, pci_irq_line ) )
return 0;
}
init_module( void )
{
struct net_device *dev;
+ int err;
while( num < SBNI_MAX_NUM_CARDS ) {
dev = alloc_netdev(sizeof(struct net_local),
"sbni%d", sbni_devsetup);
- if( !dev) {
- printk( KERN_ERR "sbni: unable to allocate device!\n" );
- return -ENOMEM;
- }
+ if( !dev)
+ break;
sprintf( dev->name, "sbni%d", num );
+ err = sbni_init(dev);
+ if (err) {
+ free_netdev(dev);
+ break;
+ }
+
if( register_netdev( dev ) ) {
- kfree( dev );
+ release_region( dev->base_addr, SBNI_IO_EXTENT );
+ free_netdev( dev );
break;
}
}
struct frad_local *flp;
int i;
char byte;
+ unsigned base;
+ int err = -EINVAL;
flp = dev->priv;
if (i == sizeof(valid_port) / sizeof(int))
return(-EINVAL);
- dev->base_addr = map->base_addr;
- if (!request_region(dev->base_addr, SDLA_IO_EXTENTS, dev->name)){
+ if (!request_region(map->base_addr, SDLA_IO_EXTENTS, dev->name)){
-		printk(KERN_WARNING "SDLA: io-port 0x%04lx in use \n", dev->base_addr);
+		printk(KERN_WARNING "SDLA: io-port 0x%04lx in use\n", map->base_addr);
return(-EINVAL);
}
+ base = map->base_addr;
+
/* test for card types, S502A, S502E, S507, S508 */
/* these tests shut down the card completely, so clear the state */
flp->type = SDLA_UNKNOWN;
flp->state = 0;
for(i=1;i<SDLA_IO_EXTENTS;i++)
- if (inb(dev->base_addr + i) != 0xFF)
+ if (inb(base + i) != 0xFF)
break;
- if (i == SDLA_IO_EXTENTS)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_Z80_CONTROL);
- if ((inb(dev->base_addr + SDLA_S502_STS) & 0x0F) == 0x08)
- {
- outb(SDLA_S502E_INTACK, dev->base_addr + SDLA_REG_CONTROL);
- if ((inb(dev->base_addr + SDLA_S502_STS) & 0x0F) == 0x0C)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_CONTROL);
+ if (i == SDLA_IO_EXTENTS) {
+ outb(SDLA_HALT, base + SDLA_REG_Z80_CONTROL);
+ if ((inb(base + SDLA_S502_STS) & 0x0F) == 0x08) {
+ outb(SDLA_S502E_INTACK, base + SDLA_REG_CONTROL);
+ if ((inb(base + SDLA_S502_STS) & 0x0F) == 0x0C) {
+ outb(SDLA_HALT, base + SDLA_REG_CONTROL);
flp->type = SDLA_S502E;
+ goto got_type;
}
}
}
- if (flp->type == SDLA_UNKNOWN)
- {
- for(byte=inb(dev->base_addr),i=0;i<SDLA_IO_EXTENTS;i++)
- if (inb(dev->base_addr + i) != byte)
- break;
+ for(byte=inb(base),i=0;i<SDLA_IO_EXTENTS;i++)
+ if (inb(base + i) != byte)
+ break;
- if (i == SDLA_IO_EXTENTS)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_CONTROL);
- if ((inb(dev->base_addr + SDLA_S502_STS) & 0x7E) == 0x30)
- {
- outb(SDLA_S507_ENABLE, dev->base_addr + SDLA_REG_CONTROL);
- if ((inb(dev->base_addr + SDLA_S502_STS) & 0x7E) == 0x32)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_CONTROL);
- flp->type = SDLA_S507;
- }
+ if (i == SDLA_IO_EXTENTS) {
+ outb(SDLA_HALT, base + SDLA_REG_CONTROL);
+ if ((inb(base + SDLA_S502_STS) & 0x7E) == 0x30) {
+ outb(SDLA_S507_ENABLE, base + SDLA_REG_CONTROL);
+ if ((inb(base + SDLA_S502_STS) & 0x7E) == 0x32) {
+ outb(SDLA_HALT, base + SDLA_REG_CONTROL);
+ flp->type = SDLA_S507;
+ goto got_type;
}
}
}
- if (flp->type == SDLA_UNKNOWN)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_CONTROL);
- if ((inb(dev->base_addr + SDLA_S508_STS) & 0x3F) == 0x00)
- {
- outb(SDLA_S508_INTEN, dev->base_addr + SDLA_REG_CONTROL);
- if ((inb(dev->base_addr + SDLA_S508_STS) & 0x3F) == 0x10)
- {
- outb(SDLA_HALT, dev->base_addr + SDLA_REG_CONTROL);
- flp->type = SDLA_S508;
- }
+ outb(SDLA_HALT, base + SDLA_REG_CONTROL);
+ if ((inb(base + SDLA_S508_STS) & 0x3F) == 0x00) {
+ outb(SDLA_S508_INTEN, base + SDLA_REG_CONTROL);
+ if ((inb(base + SDLA_S508_STS) & 0x3F) == 0x10) {
+ outb(SDLA_HALT, base + SDLA_REG_CONTROL);
+ flp->type = SDLA_S508;
+ goto got_type;
}
}
- if (flp->type == SDLA_UNKNOWN)
- {
- outb(SDLA_S502A_HALT, dev->base_addr + SDLA_REG_CONTROL);
- if (inb(dev->base_addr + SDLA_S502_STS) == 0x40)
- {
- outb(SDLA_S502A_START, dev->base_addr + SDLA_REG_CONTROL);
- if (inb(dev->base_addr + SDLA_S502_STS) == 0x40)
- {
- outb(SDLA_S502A_INTEN, dev->base_addr + SDLA_REG_CONTROL);
- if (inb(dev->base_addr + SDLA_S502_STS) == 0x44)
- {
- outb(SDLA_S502A_START, dev->base_addr + SDLA_REG_CONTROL);
- flp->type = SDLA_S502A;
- }
+ outb(SDLA_S502A_HALT, base + SDLA_REG_CONTROL);
+ if (inb(base + SDLA_S502_STS) == 0x40) {
+ outb(SDLA_S502A_START, base + SDLA_REG_CONTROL);
+ if (inb(base + SDLA_S502_STS) == 0x40) {
+ outb(SDLA_S502A_INTEN, base + SDLA_REG_CONTROL);
+ if (inb(base + SDLA_S502_STS) == 0x44) {
+ outb(SDLA_S502A_START, base + SDLA_REG_CONTROL);
+ flp->type = SDLA_S502A;
+ goto got_type;
}
}
}
- if (flp->type == SDLA_UNKNOWN)
- {
- printk(KERN_NOTICE "%s: Unknown card type\n", dev->name);
- return(-ENODEV);
- }
+ printk(KERN_NOTICE "%s: Unknown card type\n", dev->name);
+ err = -ENODEV;
+ goto fail;
- switch(dev->base_addr)
- {
+got_type:
+ switch(base) {
case 0x270:
case 0x280:
case 0x380:
case 0x390:
- if ((flp->type != SDLA_S508) && (flp->type != SDLA_S507))
- return(-EINVAL);
+ if (flp->type != SDLA_S508 && flp->type != SDLA_S507)
+ goto fail;
}
- switch (map->irq)
- {
+ switch (map->irq) {
case 2:
if (flp->type != SDLA_S502E)
- return(-EINVAL);
+ goto fail;
break;
case 10:
case 12:
case 15:
case 4:
- if ((flp->type != SDLA_S508) && (flp->type != SDLA_S507))
- return(-EINVAL);
-
+ if (flp->type != SDLA_S508 && flp->type != SDLA_S507)
+ goto fail;
+ break;
case 3:
case 5:
case 7:
if (flp->type == SDLA_S502A)
- return(-EINVAL);
+ goto fail;
break;
default:
- return(-EINVAL);
+ goto fail;
}
- dev->irq = map->irq;
+ err = -EAGAIN;
if (request_irq(dev->irq, &sdla_isr, 0, dev->name, dev))
- return(-EAGAIN);
+ goto fail;
- if (flp->type == SDLA_S507)
- {
- switch(dev->irq)
- {
+ if (flp->type == SDLA_S507) {
+ switch(dev->irq) {
case 3:
flp->state = SDLA_S507_IRQ3;
break;
if (valid_mem[i] == map->mem_start)
break;
+ err = -EINVAL;
if (i == sizeof(valid_mem) / sizeof(int))
- /*
- * FIXME:
- * BUG BUG BUG: MUST RELEASE THE IRQ WE ALLOCATED IN
- * ALL THESE CASES
- *
- */
- return(-EINVAL);
+ goto fail2;
- if ((flp->type == SDLA_S502A) && (((map->mem_start & 0xF000) >> 12) == 0x0E))
- return(-EINVAL);
+ if (flp->type == SDLA_S502A && (map->mem_start & 0xF000) >> 12 == 0x0E)
+ goto fail2;
- if ((flp->type != SDLA_S507) && ((map->mem_start >> 16) == 0x0B))
- return(-EINVAL);
+ if (flp->type != SDLA_S507 && map->mem_start >> 16 == 0x0B)
+ goto fail2;
- if ((flp->type == SDLA_S507) && ((map->mem_start >> 16) == 0x0D))
- return(-EINVAL);
-
- dev->mem_start = map->mem_start;
- dev->mem_end = dev->mem_start + 0x2000;
+ if (flp->type == SDLA_S507 && map->mem_start >> 16 == 0x0D)
+ goto fail2;
byte = flp->type != SDLA_S508 ? SDLA_8K_WINDOW : 0;
byte |= (map->mem_start & 0xF000) >> (12 + (flp->type == SDLA_S508 ? 1 : 0));
- switch(flp->type)
- {
+ switch(flp->type) {
case SDLA_S502A:
case SDLA_S502E:
- switch (map->mem_start >> 16)
- {
+ switch (map->mem_start >> 16) {
case 0x0A:
byte |= SDLA_S502_SEG_A;
break;
}
break;
case SDLA_S507:
- switch (map->mem_start >> 16)
- {
+ switch (map->mem_start >> 16) {
case 0x0A:
byte |= SDLA_S507_SEG_A;
break;
}
break;
case SDLA_S508:
- switch (map->mem_start >> 16)
- {
+ switch (map->mem_start >> 16) {
case 0x0A:
byte |= SDLA_S508_SEG_A;
break;
}
/* set the memory bits, and enable access */
- outb(byte, dev->base_addr + SDLA_REG_PC_WINDOW);
+ outb(byte, base + SDLA_REG_PC_WINDOW);
switch(flp->type)
{
flp->state = SDLA_MEMEN;
break;
}
- outb(flp->state, dev->base_addr + SDLA_REG_CONTROL);
+ outb(flp->state, base + SDLA_REG_CONTROL);
+ dev->irq = map->irq;
+ dev->base_addr = base;
+ dev->mem_start = map->mem_start;
+ dev->mem_end = dev->mem_start + 0x2000;
flp->initialized = 1;
- return(0);
+ return 0;
+
+fail2:
+ free_irq(map->irq, dev);
+fail:
+ release_region(base, SDLA_IO_EXTENTS);
+ return err;
}
static struct net_device_stats *sdla_stats(struct net_device *dev)
static void __exit exit_sdla(void)
{
- struct frad_local *flp;
+ struct frad_local *flp = sdla->priv;
unregister_netdev(sdla);
- if (sdla->irq)
+ if (flp->initialized) {
free_irq(sdla->irq, sdla);
-
- flp = sdla->priv;
+ release_region(sdla->base_addr, SDLA_IO_EXTENTS);
+ }
del_timer_sync(&flp->timer);
free_netdev(sdla);
}
/* DMA off on the card, drop DTR */
outb(0, b->iobase);
release_region(b->iobase, 8);
+ kfree(b);
}
typedef struct {
- hdlc_device hdlc; /* HDLC device struct - must be first */
+ struct net_device *dev;
struct card_t *card;
spinlock_t lock; /* for wanxl_xmit */
int node; /* physical port #0 - 3 */
struct sk_buff *rx_skbs[RX_QUEUE_LENGTH];
card_status_t *status; /* shared between host and card */
dma_addr_t status_address;
+ port_t __ports[0];
}card_t;
-static inline port_t* hdlc_to_port(hdlc_device *hdlc)
-{
- return (port_t*)hdlc;
-}
-
-
static inline port_t* dev_to_port(struct net_device *dev)
{
- return hdlc_to_port(dev_to_hdlc(dev));
+ return (port_t *)dev_to_hdlc(dev)->priv;
}
static inline struct net_device *port_to_dev(port_t* port)
{
- return hdlc_to_dev(&port->hdlc);
+ return port->dev;
}
static inline const char* port_name(port_t *port)
{
- return hdlc_to_name((hdlc_device*)port);
+ return port_to_dev(port)->name;
}
printk(KERN_INFO "%s: %s%s module, %s cable%s%s\n",
port_name(port), pm, dte, cable, dsr, dcd);
- hdlc_set_carrier(value & STATUS_CABLE_DCD, &port->hdlc);
+ hdlc_set_carrier(value & STATUS_CABLE_DCD, port_to_dev(port));
}
/* Transmit complete interrupt service */
static inline void wanxl_tx_intr(port_t *port)
{
+ struct net_device *dev = port_to_dev(port);
+ struct net_device_stats *stats = hdlc_stats(dev);
while (1) {
desc_t *desc = &get_status(port)->tx_descs[port->tx_in];
struct sk_buff *skb = port->tx_skbs[port->tx_in];
switch (desc->stat) {
case PACKET_FULL:
case PACKET_EMPTY:
- netif_wake_queue(port_to_dev(port));
+ netif_wake_queue(dev);
return;
case PACKET_UNDERRUN:
- port->hdlc.stats.tx_errors++;
- port->hdlc.stats.tx_fifo_errors++;
+ stats->tx_errors++;
+ stats->tx_fifo_errors++;
break;
default:
- port->hdlc.stats.tx_packets++;
- port->hdlc.stats.tx_bytes += skb->len;
+ stats->tx_packets++;
+ stats->tx_bytes += skb->len;
}
desc->stat = PACKET_EMPTY; /* Free descriptor */
pci_unmap_single(port->card->pdev, desc->address, skb->len,
struct sk_buff *skb = card->rx_skbs[card->rx_in];
port_t *port = card->ports[desc->stat & PACKET_PORT_MASK];
struct net_device *dev = port_to_dev(port);
+ struct net_device_stats *stats = hdlc_stats(dev);
if ((desc->stat & PACKET_PORT_MASK) > card->n_ports)
printk(KERN_CRIT "wanXL %s: received packet for"
" nonexistent port\n", card_name(card->pdev));
else if (!skb)
- port->hdlc.stats.rx_dropped++;
+ stats->rx_dropped++;
else {
pci_unmap_single(card->pdev, desc->address,
skb->len);
debug_frame(skb);
#endif
- port->hdlc.stats.rx_packets++;
- port->hdlc.stats.rx_bytes += skb->len;
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
skb->mac.raw = skb->data;
skb->dev = dev;
dev->last_rx = jiffies;
static int wanxl_xmit(struct sk_buff *skb, struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
desc_t *desc;
spin_lock(&port->lock);
-static int wanxl_attach(hdlc_device *hdlc, unsigned short encoding,
+static int wanxl_attach(struct net_device *dev, unsigned short encoding,
unsigned short parity)
{
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
if (encoding != ENCODING_NRZ &&
encoding != ENCODING_NRZI)
{
const size_t size = sizeof(sync_serial_settings);
sync_serial_settings line;
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
if (cmd != SIOCWANDEV)
return hdlc_ioctl(dev, ifr, cmd);
static int wanxl_open(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
u8 *dbr = port->card->plx + PLX_DOORBELL_TO_CARD;
unsigned long timeout;
int i;
printk(KERN_ERR "%s: port already open\n", port_name(port));
return -EIO;
}
- if ((i = hdlc_open(hdlc)) != 0)
+ if ((i = hdlc_open(dev)) != 0)
return i;
port->tx_in = port->tx_out = 0;
static int wanxl_close(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
+ port_t *port = dev_to_port(dev);
unsigned long timeout;
int i;
- hdlc_close(hdlc);
+ hdlc_close(dev);
/* signal the card */
writel(1 << (DOORBELL_TO_CARD_CLOSE_0 + port->node),
port->card->plx + PLX_DOORBELL_TO_CARD);
static struct net_device_stats *wanxl_get_stats(struct net_device *dev)
{
- hdlc_device *hdlc = dev_to_hdlc(dev);
- port_t *port = hdlc_to_port(hdlc);
-
- hdlc->stats.rx_over_errors = get_status(port)->rx_overruns;
- hdlc->stats.rx_frame_errors = get_status(port)->rx_frame_errors;
- hdlc->stats.rx_errors = hdlc->stats.rx_over_errors +
- hdlc->stats.rx_frame_errors;
- return &hdlc->stats;
+ struct net_device_stats *stats = hdlc_stats(dev);
+ port_t *port = dev_to_port(dev);
+
+ stats->rx_over_errors = get_status(port)->rx_overruns;
+ stats->rx_frame_errors = get_status(port)->rx_frame_errors;
+ stats->rx_errors = stats->rx_over_errors + stats->rx_frame_errors;
+ return stats;
}
card_t *card = pci_get_drvdata(pdev);
int i;
+ for (i = 0; i < 4; i++)
+ if (card->ports[i]) {
+ struct net_device *dev = port_to_dev(card->ports[i]);
+ unregister_hdlc_device(dev);
+ }
+
/* unregister and free all host resources */
if (card->irq)
free_irq(card->irq, card);
- for (i = 0; i < 4; i++)
- if (card->ports[i])
- unregister_hdlc_device(&card->ports[i]->hdlc);
-
wanxl_reset(card);
for (i = 0; i < RX_QUEUE_LENGTH; i++)
pci_free_consistent(pdev, sizeof(card_status_t),
card->status, card->status_address);
+ for (i = 0; i < card->n_ports; i++)
+ if (card->__ports[i].dev)
+ free_netdev(card->__ports[i].dev);
+
pci_set_drvdata(pdev, NULL);
kfree(card);
pci_release_regions(pdev);
card->pdev = pdev;
card->n_ports = ports;
+ for (i = 0; i < ports; i++) {
+ card->__ports[i].dev = alloc_hdlcdev(&card->__ports[i]);
+ if (!card->__ports[i].dev) {
+ printk(KERN_ERR "wanXL %s: unable to allocate memory\n",
+ card_name(pdev));
+ wanxl_pci_remove_one(pdev);
+ return -ENOMEM;
+ }
+ }
+
card->status = pci_alloc_consistent(pdev, sizeof(card_status_t),
&card->status_address);
if (card->status == NULL) {
return -ENODEV;
}
- for (i = 0; i < ports; i++) {
- port_t *port = (void *)card + sizeof(card_t) +
- i * sizeof(port_t);
- struct net_device *dev = hdlc_to_dev(&port->hdlc);
- spin_lock_init(&port->lock);
- SET_MODULE_OWNER(dev);
- dev->tx_queue_len = 50;
- dev->do_ioctl = wanxl_ioctl;
- dev->open = wanxl_open;
- dev->stop = wanxl_close;
- port->hdlc.attach = wanxl_attach;
- port->hdlc.xmit = wanxl_xmit;
- if(register_hdlc_device(&port->hdlc)) {
- printk(KERN_ERR "wanXL %s: unable to register hdlc"
- " device\n", card_name(pdev));
- wanxl_pci_remove_one(pdev);
- return -ENOBUFS;
- }
- card->ports[i] = port;
- dev->get_stats = wanxl_get_stats;
- port->card = card;
- port->node = i;
- get_status(port)->clocking = CLOCK_EXT;
- }
-
for (i = 0; i < RX_QUEUE_LENGTH; i++) {
struct sk_buff *skb = dev_alloc_skb(BUFFER_LENGTH);
card->rx_skbs[i] = skb;
}
card->irq = pdev->irq;
+ for (i = 0; i < ports; i++) {
+ port_t *port = &card->__ports[i];
+ struct net_device *dev = port_to_dev(port);
+ hdlc_device *hdlc = dev_to_hdlc(dev);
+ spin_lock_init(&port->lock);
+ SET_MODULE_OWNER(dev);
+ dev->tx_queue_len = 50;
+ dev->do_ioctl = wanxl_ioctl;
+ dev->open = wanxl_open;
+ dev->stop = wanxl_close;
+ hdlc->attach = wanxl_attach;
+ hdlc->xmit = wanxl_xmit;
+ card->ports[i] = port;
+ dev->get_stats = wanxl_get_stats;
+ port->card = card;
+ port->node = i;
+ get_status(port)->clocking = CLOCK_EXT;
+ if (register_hdlc_device(dev)) {
+ printk(KERN_ERR "wanXL %s: unable to register hdlc"
+ " device\n", card_name(pdev));
+ card->ports[i] = NULL;
+ wanxl_pci_remove_one(pdev);
+ return -ENOBUFS;
+ }
+ }
+
return 0;
}
memcpy(skb_put(skb,count), sl->rbuff, count);
skb->mac.raw=skb->data;
skb->protocol=htons(ETH_P_X25);
- if((err=lapb_data_received(sl,skb))!=LAPB_OK)
+ if((err=lapb_data_received(skb->dev, skb))!=LAPB_OK)
{
kfree_skb(skb);
printk(KERN_DEBUG "x25_asy: data received err - %d\n",err);
{
case 0x00:break;
case 0x01: /* Connection request .. do nothing */
- if((err=lapb_connect_request(sl))!=LAPB_OK)
+ if((err=lapb_connect_request(dev))!=LAPB_OK)
printk(KERN_ERR "x25_asy: lapb_connect_request error - %d\n", err);
kfree_skb(skb);
return 0;
case 0x02: /* Disconnect request .. do nothing - hang up ?? */
- if((err=lapb_disconnect_request(sl))!=LAPB_OK)
+ if((err=lapb_disconnect_request(dev))!=LAPB_OK)
printk(KERN_ERR "x25_asy: lapb_disconnect_request error - %d\n", err);
default:
kfree_skb(skb);
* 14 Oct 1994 Dmitry Gorodchanin.
*/
- if((err=lapb_data_request(sl,skb))!=LAPB_OK)
+ if((err=lapb_data_request(dev,skb))!=LAPB_OK)
{
printk(KERN_ERR "lapbeth: lapb_data_request error - %d\n", err);
kfree_skb(skb);
* at the net layer.
*/
-static int x25_asy_data_indication(void *token, struct sk_buff *skb)
+static int x25_asy_data_indication(struct net_device *dev, struct sk_buff *skb)
{
skb->dev->last_rx = jiffies;
return netif_rx(skb);
* perhaps lapb should allow us to bounce this ?
*/
-static void x25_asy_data_transmit(void *token, struct sk_buff *skb)
+static void x25_asy_data_transmit(struct net_device *dev, struct sk_buff *skb)
{
- struct x25_asy *sl=token;
+ struct x25_asy *sl=dev->priv;
spin_lock(&sl->lock);
if (netif_queue_stopped(sl->dev) || sl->tty == NULL)
* LAPB connection establish/down information.
*/
-static void x25_asy_connected(void *token, int reason)
+static void x25_asy_connected(struct net_device *dev, int reason)
{
- struct x25_asy *sl = token;
+ struct x25_asy *sl = dev->priv;
struct sk_buff *skb;
unsigned char *ptr;
sl->dev->last_rx = jiffies;
}
-static void x25_asy_disconnected(void *token, int reason)
+static void x25_asy_disconnected(struct net_device *dev, int reason)
{
- struct x25_asy *sl = token;
+ struct x25_asy *sl = dev->priv;
struct sk_buff *skb;
unsigned char *ptr;
/*
* Now attach LAPB
*/
- if((err=lapb_register(sl, &x25_asy_callbacks))==LAPB_OK)
+ if((err=lapb_register(dev, &x25_asy_callbacks))==LAPB_OK)
return 0;
/* Cleanup */
netif_stop_queue(dev);
sl->rcount = 0;
sl->xleft = 0;
- if((err=lapb_unregister(sl))!=LAPB_OK)
+ if((err=lapb_unregister(dev))!=LAPB_OK)
printk(KERN_ERR "x25_asy_close: lapb_unregister error -%d\n",err);
spin_unlock(&sl->lock);
return 0;
config ATMEL
tristate "Atmel at76c50x chipset 802.11b support"
depends on NET_RADIO && EXPERIMENTAL
- enable FW_LOADER
- enable CRC32
+ select FW_LOADER
+ select CRC32
---help---
	  A driver for 802.11b wireless cards based on the Atmel fast-vnet
chips. This driver supports standard Linux wireless extensions.
return rc;
}
-static void wifi_setup(struct net_device *dev, struct net_device *ethdev)
+static void wifi_setup(struct net_device *dev)
{
- struct airo_info *ai = ethdev->priv;
- dev->priv = ai;
dev->hard_header = 0;
dev->rebuild_header = 0;
dev->hard_header_cache = 0;
dev->change_mtu = &airo_change_mtu;
dev->open = &airo_open;
dev->stop = &airo_close;
- dev->irq = ethdev->irq;
- dev->base_addr = ethdev->base_addr;
dev->type = ARPHRD_IEEE80211;
dev->hard_header_len = ETH_HLEN;
dev->mtu = 2312;
dev->addr_len = ETH_ALEN;
- memcpy(dev->dev_addr, ethdev->dev_addr, dev->addr_len);
dev->tx_queue_len = 100;
memset(dev->broadcast,0xFF, ETH_ALEN);
struct net_device *ethdev)
{
int err;
- struct net_device *dev = (struct net_device*)kmalloc(sizeof *dev,GFP_KERNEL);
- if (!dev) return 0;
- memset(dev, 0, sizeof(*dev));
-
- strcpy(dev->name, "wifi%d");
- dev->priv = ai;
- wifi_setup(dev, ethdev);
+ struct net_device *dev = alloc_netdev(0, "wifi%d", wifi_setup);
+ if (!dev)
+ return NULL;
+ dev->priv = ethdev->priv;
+ dev->irq = ethdev->irq;
+ dev->base_addr = ethdev->base_addr;
+ memcpy(dev->dev_addr, ethdev->dev_addr, dev->addr_len);
err = register_netdev(dev);
if (err<0) {
- kfree(dev);
- return 0;
+ free_netdev(dev);
+ return NULL;
}
return dev;
}
kill_proc(ai->thr_pid, SIGTERM, 1);
wait_for_completion(&ai->thr_exited);
err_out_free:
- kfree(dev);
+ free_netdev(dev);
return NULL;
}
struct orinoco_private *priv;
dev = alloc_etherdev(sizeof(struct orinoco_private) + sizeof_card);
+ if (!dev)
+ return NULL;
priv = (struct orinoco_private *)dev->priv;
priv->ndev = dev;
if (sizeof_card)
*linkp = link->next;
if (link->priv)
- kfree(link->priv);
+ free_netdev(link->priv);
kfree(link);
out:
return;
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/smp.h>
-#include <linux/irq.h>
#include <linux/oprofile.h>
#include <linux/profile.h>
#include <linux/init.h>
res = &dino_dev->hba.lmmio_space;
res->flags = IORESOURCE_MEM;
- size = snprintf(name, sizeof(name), "Dino LMMIO (%s)", bus->bridge->bus_id);
+ size = scnprintf(name, sizeof(name), "Dino LMMIO (%s)", bus->bridge->bus_id);
res->name = kmalloc(size+1, GFP_KERNEL);
if(res->name)
strcpy((char *)res->name, name);
/* stuff we want to pass to /sbin/hotplug */
envp[i++] = scratch;
- length += snprintf (scratch, buffer_size - length, "PCI_CLASS=%04X",
+ length += scnprintf (scratch, buffer_size - length, "PCI_CLASS=%04X",
pdev->class);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
scratch += length;
envp[i++] = scratch;
- length += snprintf (scratch, buffer_size - length, "PCI_ID=%04X:%04X",
+ length += scnprintf (scratch, buffer_size - length, "PCI_ID=%04X:%04X",
pdev->vendor, pdev->device);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
scratch += length;
envp[i++] = scratch;
- length += snprintf (scratch, buffer_size - length,
+ length += scnprintf (scratch, buffer_size - length,
"PCI_SUBSYS_ID=%04X:%04X", pdev->subsystem_vendor,
pdev->subsystem_device);
if ((buffer_size - length <= 0) || (i >= num_envp))
scratch += length;
envp[i++] = scratch;
- length += snprintf (scratch, buffer_size - length, "PCI_SLOT_NAME=%s",
+ length += scnprintf (scratch, buffer_size - length, "PCI_SLOT_NAME=%s",
pci_name(pdev));
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
cpumask_t cpumask = pcibus_to_cpumask((to_pci_bus(class_dev))->number);
int ret;
- ret = cpumask_snprintf(buf, PAGE_SIZE, cpumask);
+ ret = cpumask_scnprintf(buf, PAGE_SIZE, cpumask);
if (ret < PAGE_SIZE)
buf[ret++] = '\n';
return ret;
/* See fs/partition/check.c:register_disk,rescan_partitions */
bdev = bdget_disk(device->gdp, 0);
if (bdev) {
- if (blkdev_get(bdev, FMODE_READ, 1, BDEV_RAW) >= 0) {
+ if (blkdev_get(bdev, FMODE_READ, 1) >= 0) {
/* Can't call rescan_partitions directly. Use ioctl. */
ioctl_by_bdev(bdev, BLKRRPART, 0);
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
}
}
}
struct tape_device *tdev;
tdev = (struct tape_device *) dev->driver_data;
- return snprintf(buf, PAGE_SIZE, "%i\n", tdev->medium_state);
+ return scnprintf(buf, PAGE_SIZE, "%i\n", tdev->medium_state);
}
static
struct tape_device *tdev;
tdev = (struct tape_device *) dev->driver_data;
- return snprintf(buf, PAGE_SIZE, "%i\n", tdev->first_minor);
+ return scnprintf(buf, PAGE_SIZE, "%i\n", tdev->first_minor);
}
static
struct tape_device *tdev;
tdev = (struct tape_device *) dev->driver_data;
- return snprintf(buf, PAGE_SIZE, "%s\n", (tdev->first_minor < 0) ?
+ return scnprintf(buf, PAGE_SIZE, "%s\n", (tdev->first_minor < 0) ?
"OFFLINE" : tape_state_verbose[tdev->tape_state]);
}
tdev = (struct tape_device *) dev->driver_data;
if (tdev->first_minor < 0)
- return snprintf(buf, PAGE_SIZE, "N/A\n");
+ return scnprintf(buf, PAGE_SIZE, "N/A\n");
spin_lock_irq(get_ccwdev_lock(tdev->cdev));
if (list_empty(&tdev->req_queue))
- rc = snprintf(buf, PAGE_SIZE, "---\n");
+ rc = scnprintf(buf, PAGE_SIZE, "---\n");
else {
struct tape_request *req;
req = list_entry(tdev->req_queue.next, struct tape_request,
list);
- rc = snprintf(buf, PAGE_SIZE, "%s\n", tape_op_verbose[req->op]);
+ rc = scnprintf(buf,PAGE_SIZE, "%s\n", tape_op_verbose[req->op]);
}
spin_unlock_irq(get_ccwdev_lock(tdev->cdev));
return rc;
tdev = (struct tape_device *) dev->driver_data;
- return snprintf(buf, PAGE_SIZE, "%i\n", tdev->char_data.block_size);
+ return scnprintf(buf, PAGE_SIZE, "%i\n", tdev->char_data.block_size);
}
static
/* what we want to pass to /sbin/hotplug */
envp[i++] = buffer;
- length += snprintf(buffer, buffer_size - length, "CU_TYPE=%04X",
+ length += scnprintf(buffer, buffer_size - length, "CU_TYPE=%04X",
cdev->id.cu_type);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
buffer += length;
envp[i++] = buffer;
- length += snprintf(buffer, buffer_size - length, "CU_MODEL=%02X",
+ length += scnprintf(buffer, buffer_size - length, "CU_MODEL=%02X",
cdev->id.cu_model);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
/* The next two can be zero, that's ok for us */
envp[i++] = buffer;
- length += snprintf(buffer, buffer_size - length, "DEV_TYPE=%04X",
+ length += scnprintf(buffer, buffer_size - length, "DEV_TYPE=%04X",
cdev->id.dev_type);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
buffer += length;
envp[i++] = buffer;
- length += snprintf(buffer, buffer_size - length, "DEV_MODEL=%02X",
+ length += scnprintf(buffer, buffer_size - length, "DEV_MODEL=%02X",
cdev->id.dev_model);
if ((buffer_size - length <= 0) || (i >= num_envp))
return -ENOMEM;
kfree(ipm_list);
}
#endif
- kfree(card->dev);
+ free_netdev(card->dev);
/* Cleanup channels. */
lcs_cleanup_channel(&card->write);
lcs_cleanup_channel(&card->read);
lcs_stopcard(card);
return 0;
out:
- lcs_cleanup_channel(&card->read);
- lcs_cleanup_channel(&card->write);
+ lcs_cleanup_card(card);
lcs_free_card(card);
return -ENODEV;
}
sysfs_remove_group(&dev->kobj, &netiucv_attr_group);
}
+/*
+ * NOTE: this driver registers sysfs objects embedded in netiucv_priv,
+ * but netiucv_priv is kfreed without any regard to outstanding sysfs
+ * references. As a result, these sysfs exports are a set of
+ * user-triggerable use-after-free holes. Do not add further sysfs
+ * attributes here without fixing the object lifetime handling first.
+ */
+
static int
netiucv_register_device(struct net_device *ndev, int ifno)
{
}
}
+static void setup_netiucv(struct net_device *dev)
+{
+ dev->mtu = NETIUCV_MTU_DEFAULT;
+ dev->hard_start_xmit = netiucv_tx;
+ dev->open = netiucv_open;
+ dev->stop = netiucv_close;
+ dev->get_stats = netiucv_stats;
+ dev->change_mtu = netiucv_change_mtu;
+ dev->hard_header_len = NETIUCV_HDRLEN;
+ dev->addr_len = 0;
+ dev->type = ARPHRD_SLIP;
+ dev->tx_queue_len = NETIUCV_QUEUELEN_DEFAULT;
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP;
+ SET_MODULE_OWNER(dev);
+}
+
/**
* Allocate and initialize everything of a net device.
*/
struct netiucv_priv *privptr;
int priv_size;
- struct net_device *dev = kmalloc(sizeof(struct net_device), GFP_KERNEL);
+ struct net_device *dev = alloc_netdev(0, "", setup_netiucv);
if (!dev)
return NULL;
- memset(dev, 0, sizeof(struct net_device));
sprintf(dev->name, "iucv%d", ifno);
priv_size = sizeof(struct netiucv_priv);
dev->priv = kmalloc(priv_size, GFP_KERNEL);
if (dev->priv == NULL) {
- kfree(dev);
+ free_netdev(dev);
return NULL;
}
memset(dev->priv, 0, priv_size);
dev_fsm, DEV_FSM_LEN, GFP_KERNEL);
if (privptr->fsm == NULL) {
kfree(privptr);
- kfree(dev);
+ free_netdev(dev);
return NULL;
}
privptr->conn = netiucv_new_connection(dev, username);
if (!privptr->conn) {
kfree_fsm(privptr->fsm);
kfree(privptr);
- kfree(dev);
+ free_netdev(dev);
return NULL;
}
fsm_newstate(privptr->fsm, DEV_STATE_STOPPED);
- dev->mtu = NETIUCV_MTU_DEFAULT;
- dev->hard_start_xmit = netiucv_tx;
- dev->open = netiucv_open;
- dev->stop = netiucv_close;
- dev->get_stats = netiucv_stats;
- dev->change_mtu = netiucv_change_mtu;
- dev->hard_header_len = NETIUCV_HDRLEN;
- dev->addr_len = 0;
- dev->type = ARPHRD_SLIP;
- dev->tx_queue_len = NETIUCV_QUEUELEN_DEFAULT;
- dev->flags = IFF_POINTOPOINT | IFF_NOARP;
- SET_MODULE_OWNER(dev);
return dev;
}
}
static void
-qeth_destructor(struct net_device *dev)
-{
- struct qeth_card *card;
-
- card = (struct qeth_card *) (dev->priv);
- QETH_DBF_CARD2(0, trace, "dstr", card);
-}
-
-static void
qeth_set_multicast_list(struct net_device *dev)
{
struct qeth_card *card = dev->priv;
QETH_DBF_CARD3(0, trace, "inid", card);
- dev->tx_timeout = &qeth_tx_timeout;
- dev->watchdog_timeo = QETH_TX_TIMEOUT;
- dev->open = qeth_open;
- dev->stop = qeth_stop;
- dev->set_config = qeth_set_config;
- dev->hard_start_xmit = qeth_hard_start_xmit;
- dev->do_ioctl = qeth_do_ioctl;
- dev->get_stats = qeth_get_stats;
- dev->change_mtu = qeth_change_mtu;
-#ifdef QETH_VLAN
- dev->vlan_rx_register = qeth_vlan_rx_register;
- dev->vlan_rx_kill_vid = qeth_vlan_rx_kill_vid;
-#endif
dev->rebuild_header = __qeth_rebuild_header_func(card);
dev->hard_header = __qeth_hard_header_func(card);
dev->header_cache_update = __qeth_header_cache_update_func(card);
dev->hard_header_cache = __qeth_hard_header_cache_func(card);
dev->hard_header_parse = NULL;
- dev->destructor = qeth_destructor;
- dev->set_multicast_list = qeth_set_multicast_list;
- dev->set_mac_address = qeth_set_mac_address;
- dev->neigh_setup = qeth_neigh_setup;
dev->flags |= qeth_get_additional_dev_flags(card->type);
dev->tx_queue_len = qeth_get_device_tx_q_len(card->type);
dev->hard_header_len =
qeth_get_hlen(card->link_type) + card->options.add_hhlen;
- dev->addr_len = OSA_ADDR_LEN; /* is ok for eth, tr, atm lane */
- SET_MODULE_OWNER(dev);
netif_start_queue(dev);
dev->mtu = card->initial_mtu;
card->options.fake_ll = DONT_FAKE_LL;
}
+static void qeth_setup(struct net_device *dev)
+{
+ dev->tx_timeout = &qeth_tx_timeout;
+ dev->watchdog_timeo = QETH_TX_TIMEOUT;
+ dev->open = qeth_open;
+ dev->stop = qeth_stop;
+ dev->set_config = qeth_set_config;
+ dev->hard_start_xmit = qeth_hard_start_xmit;
+ dev->do_ioctl = qeth_do_ioctl;
+ dev->get_stats = qeth_get_stats;
+ dev->change_mtu = qeth_change_mtu;
+#ifdef QETH_VLAN
+ dev->vlan_rx_register = qeth_vlan_rx_register;
+ dev->vlan_rx_kill_vid = qeth_vlan_rx_kill_vid;
+#endif
+ dev->set_multicast_list = qeth_set_multicast_list;
+ dev->set_mac_address = qeth_set_mac_address;
+ dev->neigh_setup = qeth_neigh_setup;
+ dev->addr_len = OSA_ADDR_LEN; /* is ok for eth, tr, atm lane */
+ SET_MODULE_OWNER(dev);
+}
+
static int
qeth_alloc_card_stuff(struct qeth_card *card)
{
goto exit_dma2;
memset(card->dma_stuff->sendbuf, 0, QETH_BUFSIZE);
- card->dev = (struct net_device *) kmalloc(sizeof (struct net_device),
- GFP_KERNEL);
+ card->dev = alloc_netdev(0, "", qeth_setup);
if (!card->dev)
goto exit_dev;
- memset(card->dev, 0, sizeof (struct net_device));
card->stats =
(struct net_device_stats *)
" seltime:<int> Selection Timeout:\n"
" (0/256ms,1/128ms,2/64ms,3/32ms)\n"
"\n"
-" Sample /etc/modules.conf line:\n"
+" Sample /etc/modprobe.conf line:\n"
" Enable verbose logging\n"
" Set tag depth on Controller 2/Target 2 to 10 tags\n"
" Shorten the selection timeout to 128ms\n"
"\n"
" options aic79xx 'aic79xx=verbose.tag_info:{{}.{}.{..10}}.seltime:1'\n"
"\n"
-" Sample /etc/modules.conf line:\n"
+" Sample /etc/modprobe.conf line:\n"
" Change Read Streaming for Controller's 2 and 3\n"
"\n"
" options aic79xx 'aic79xx=rd_strm:{..0xFFF0.0xC0F0}'");
" seltime:<int> Selection Timeout\n"
" (0/256ms,1/128ms,2/64ms,3/32ms)\n"
"\n"
-" Sample /etc/modules.conf line:\n"
+" Sample /etc/modprobe.conf line:\n"
" Toggle EISA/VLB probing\n"
" Set tag depth on Controller 1/Target 1 to 10 tags\n"
" Shorten the selection timeout to 128ms\n"
drive->driver_data = host;
idescsi = scsihost_to_idescsi(host);
idescsi->drive = drive;
- err = ide_register_subdriver (drive, &idescsi_driver,
- IDE_SUBDRIVER_VERSION);
+ err = ide_register_subdriver(drive, &idescsi_driver);
if (!err) {
idescsi_setup (drive, idescsi);
drive->disk->fops = &idescsi_ops;
int displayConfig;
module_param(displayConfig, int, 0);
MODULE_PARM_DESC(displayConfig,
- "If 1 then display the configuration used in /etc/modules.conf.");
+ "If 1 then display the configuration used in /etc/modprobe.conf.");
int ql2xplogiabsentdevice;
module_param(ql2xplogiabsentdevice, int, 0);
* support added by Michael Neuffer <mike@i-connect.net>
*
* Added request_module("scsi_hostadapter") for kerneld:
- * (Put an "alias scsi_hostadapter your_hostadapter" in /etc/modules.conf)
+ * (Put an "alias scsi_hostadapter your_hostadapter" in /etc/modprobe.conf)
* Bjorn Ekwall <bj0rn@blox.se>
* (changed to kmod)
*
dev_id_num = ((devip->sdbg_host->shost->host_no + 1) * 2000) +
(devip->target * 1000) + devip->lun;
- len = snprintf(dev_id_str, 6, "%d", dev_id_num);
- len = (len > 6) ? 6 : len;
+ len = scnprintf(dev_id_str, 6, "%d", dev_id_num);
if (0 == cmd[2]) { /* supported vital product data pages */
arr[3] = 3;
arr[4] = 0x0; /* this page */
static ssize_t sdebug_delay_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_delay);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_delay);
}
static ssize_t sdebug_delay_store(struct device_driver * ddp,
static ssize_t sdebug_opts_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "0x%x\n", scsi_debug_opts);
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", scsi_debug_opts);
}
static ssize_t sdebug_opts_store(struct device_driver * ddp,
static ssize_t sdebug_num_tgts_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_num_tgts);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_num_tgts);
}
static ssize_t sdebug_num_tgts_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t sdebug_dev_size_mb_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dev_size_mb);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dev_size_mb);
}
DRIVER_ATTR(dev_size_mb, S_IRUGO, sdebug_dev_size_mb_show, NULL)
static ssize_t sdebug_every_nth_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_every_nth);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_every_nth);
}
static ssize_t sdebug_every_nth_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t sdebug_max_luns_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_max_luns);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_max_luns);
}
static ssize_t sdebug_max_luns_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t sdebug_scsi_level_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_scsi_level);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_scsi_level);
}
DRIVER_ATTR(scsi_level, S_IRUGO, sdebug_scsi_level_show, NULL)
static ssize_t sdebug_add_host_show(struct device_driver * ddp, char * buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_add_host);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_add_host);
}
static ssize_t sdebug_add_host_store(struct device_driver * ddp,
help
::: To be written :::
+config SERIAL_DZ
+ bool "DECstation DZ serial driver"
+ depends on DECSTATION
+ select SERIAL_CORE
+ help
+ DZ11-family serial controllers for VAXstations, including the
+ DC7085, M7814, and M7819.
+
+config SERIAL_DZ_CONSOLE
+ bool "Support console on DECstation DZ serial driver"
+ depends on SERIAL_DZ=y
+ select SERIAL_CORE_CONSOLE
+ help
+ If you say Y here, it will be possible to use a serial port as the
+ system console (the system console is the device which receives all
+ kernel messages and warnings and which allows logins in single user
+ mode). Note that the firmware uses ttyS0 as the serial console on
+ the Maxine and ttyS2 on the others.
+
+ If unsure, say Y.
+
config SERIAL_21285
tristate "DC21285 serial port support"
depends on ARM && FOOTBRIDGE
on your Sparc system as the console, you can do so by answering
Y to this option.
+config SERIAL_IP22_ZILOG
+ tristate "IP22 Zilog8530 serial support"
+ depends on SGI_IP22
+ select SERIAL_CORE
+ help
+ This driver supports the Zilog8530 serial ports found on SGI IP22
+ systems. Say Y or M if you want to be able to use these serial ports.
+
+config SERIAL_IP22_ZILOG_CONSOLE
+ bool "Console on IP22 Zilog8530 serial port"
+ depends on SERIAL_IP22_ZILOG=y
+ select SERIAL_CORE_CONSOLE
+ help
+ Say Y here to be able to use a Zilog8530 serial port on an SGI
+ IP22 system as the system console.
+
config V850E_UART
bool "NEC V850E on-chip UART support"
depends on V850E_MA1 || V850E_ME2 || V850E_TEG || V850E2_ANNA || V850E_AS85EP1
depends on SERIAL98=y
select SERIAL_CORE_CONSOLE
+config SERIAL_AU1X00
+ bool "Enable Au1x00 UART Support"
+ depends on MIPS && SOC_AU1X00
+ select SERIAL_CORE
+ help
+ If you have an Alchemy AU1X00 processor (MIPS based) and you want
+ to use serial ports, say Y. Otherwise, say N.
+
+config SERIAL_AU1X00_CONSOLE
+ bool "Enable Au1x00 serial console"
+ depends on SERIAL_AU1X00
+ select SERIAL_CORE_CONSOLE
+ help
+ If you have an Alchemy AU1X00 processor (MIPS based) and you want
+ to use a console on a serial port, say Y. Otherwise, say N.
+
config SERIAL_CORE
tristate
obj-$(CONFIG_SERIAL_UART00) += uart00.o
obj-$(CONFIG_SERIAL_SUNCORE) += suncore.o
obj-$(CONFIG_SERIAL_SUNZILOG) += sunzilog.o
+obj-$(CONFIG_SERIAL_IP22_ZILOG) += ip22zilog.o
obj-$(CONFIG_SERIAL_SUNSU) += sunsu.o
obj-$(CONFIG_SERIAL_SUNSAB) += sunsab.o
obj-$(CONFIG_SERIAL_MUX) += mux.o
obj-$(CONFIG_V850E_UART) += v850e_uart.o
obj-$(CONFIG_SERIAL98) += serial98.o
obj-$(CONFIG_SERIAL_PMACZILOG) += pmac_zilog.o
+obj-$(CONFIG_SERIAL_AU1X00) += au1x00_uart.o
+obj-$(CONFIG_SERIAL_DZ) += dz.o
--- /dev/null
+/*
+ * Driver for 8250/16550-type serial ports
+ *
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ *
+ * Copyright (C) 2001 Russell King.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * A note about mapbase / membase
+ *
+ * mapbase is the physical address of the IO port. Currently, we don't
+ * support this very well, and it may well be dropped from this driver
+ * in future. As such, mapbase should be NULL.
+ *
+ * membase is an 'ioremapped' cookie. This is compatible with the old
+ * serial.c driver, and is currently the preferred form.
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/serial.h>
+#include <linux/serialP.h>
+#include <linux/delay.h>
+
+#include <asm/serial.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach-au1x00/au1000.h>
+
+#if defined(CONFIG_SERIAL_AU1X00_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+#include "8250.h"
+
+/*
+ * Debugging.
+ */
+#if 0
+#define DEBUG_AUTOCONF(fmt...) printk(fmt)
+#else
+#define DEBUG_AUTOCONF(fmt...) do { } while (0)
+#endif
+
+#if 0
+#define DEBUG_INTR(fmt...) printk(fmt)
+#else
+#define DEBUG_INTR(fmt...) do { } while (0)
+#endif
+
+#define PASS_LIMIT 256
+
+/*
+ * We default to IRQ0 for the "no irq" hack. Some
+ * machine types want others as well - they're free
+ * to redefine this in their header file.
+ */
+#define is_real_interrupt(irq) ((irq) != 0)
+
+static struct old_serial_port old_serial_port[] = {
+ { .baud_base = 0,
+ .iomem_base = (u8 *)UART0_ADDR,
+ .irq = AU1000_UART0_INT,
+ .flags = STD_COM_FLAGS,
+ .iomem_reg_shift = 2,
+ }, {
+ .baud_base = 0,
+ .iomem_base = (u8 *)UART1_ADDR,
+ .irq = AU1000_UART1_INT,
+ .flags = STD_COM_FLAGS,
+ .iomem_reg_shift = 2
+ }, {
+ .baud_base = 0,
+ .iomem_base = (u8 *)UART2_ADDR,
+ .irq = AU1000_UART2_INT,
+ .flags = STD_COM_FLAGS,
+ .iomem_reg_shift = 2
+ }, {
+ .baud_base = 0,
+ .iomem_base = (u8 *)UART3_ADDR,
+ .irq = AU1000_UART3_INT,
+ .flags = STD_COM_FLAGS,
+ .iomem_reg_shift = 2
+ }
+};
+
+#define UART_NR ARRAY_SIZE(old_serial_port)
+
+struct uart_8250_port {
+ struct uart_port port;
+ struct timer_list timer; /* "no irq" timer */
+ struct list_head list; /* ports on this IRQ */
+ unsigned short rev;
+ unsigned char acr;
+ unsigned char ier;
+ unsigned char lcr;
+ unsigned char mcr_mask; /* mask of user bits */
+ unsigned char mcr_force; /* mask of forced bits */
+ unsigned char lsr_break_flag;
+
+ /*
+ * We provide a per-port pm hook.
+ */
+ void (*pm)(struct uart_port *port,
+ unsigned int state, unsigned int old);
+};
+
+struct irq_info {
+ spinlock_t lock;
+ struct list_head *head;
+};
+
+static struct irq_info irq_lists[NR_IRQS];
+
+/*
+ * Here we define the default xmit fifo size used for each type of UART.
+ */
+static const struct serial_uart_config uart_config[PORT_MAX_8250+1] = {
+ { "unknown", 1, 0 },
+ { "8250", 1, 0 },
+ { "16450", 1, 0 },
+ { "16550", 1, 0 },
+ /* PORT_16550A */
+ { "AU1X00_UART",16, UART_CLEAR_FIFO | UART_USE_FIFO },
+};
+
+static _INLINE_ unsigned int serial_in(struct uart_8250_port *up, int offset)
+{
+ return au_readl((unsigned long)up->port.membase + offset);
+}
+
+static _INLINE_ void
+serial_out(struct uart_8250_port *up, int offset, int value)
+{
+ au_writel(value, (unsigned long)up->port.membase + offset);
+}
+
+#define serial_inp(up, offset) serial_in(up, offset)
+#define serial_outp(up, offset, value) serial_out(up, offset, value)
+
+/*
+ * This routine is called by rs_init() to initialize a specific serial
+ * port. It determines what type of UART chip this serial port is
+ * using: 8250, 16450, 16550, 16550A. The important question is
+ * whether or not this UART is a 16550A or not, since this will
+ * determine whether or not we can use its FIFO features or not.
+ */
+static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
+{
+ unsigned char save_lcr, save_mcr;
+ unsigned long flags;
+
+ if (!up->port.iobase && !up->port.mapbase && !up->port.membase)
+ return;
+
+ DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04x, 0x%08lx): ",
+ up->port.line, up->port.iobase, up->port.membase);
+
+ /*
+ * We really do need global IRQs disabled here - we're going to
+ * be frobbing the chip's IRQ enable register to see if it exists.
+ */
+ spin_lock_irqsave(&up->port.lock, flags);
+// save_flags(flags); cli();
+
+ save_mcr = serial_in(up, UART_MCR);
+ save_lcr = serial_in(up, UART_LCR);
+
+ up->port.type = PORT_16550A;
+ serial_outp(up, UART_LCR, save_lcr);
+
+ up->port.fifosize = uart_config[up->port.type].dfl_xmit_fifo_size;
+
+ if (up->port.type == PORT_UNKNOWN)
+ goto out;
+
+ /*
+ * Reset the UART.
+ */
+ serial_outp(up, UART_MCR, save_mcr);
+ serial_outp(up, UART_FCR, (UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR |
+ UART_FCR_CLEAR_XMIT));
+ serial_outp(up, UART_FCR, 0);
+ (void)serial_in(up, UART_RX);
+ serial_outp(up, UART_IER, 0);
+
+ out:
+ spin_unlock_irqrestore(&up->port.lock, flags);
+// restore_flags(flags);
+ DEBUG_AUTOCONF("type=%s\n", uart_config[up->port.type].name);
+}
+
+static void serial8250_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+
+ if (up->ier & UART_IER_THRI) {
+ up->ier &= ~UART_IER_THRI;
+ serial_out(up, UART_IER, up->ier);
+ }
+}
+
+static void serial8250_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+
+ if (!(up->ier & UART_IER_THRI)) {
+ up->ier |= UART_IER_THRI;
+ serial_out(up, UART_IER, up->ier);
+ }
+}
+
+static void serial8250_stop_rx(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+
+ up->ier &= ~UART_IER_RLSI;
+ up->port.read_status_mask &= ~UART_LSR_DR;
+ serial_out(up, UART_IER, up->ier);
+}
+
+static void serial8250_enable_ms(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+
+ up->ier |= UART_IER_MSI;
+ serial_out(up, UART_IER, up->ier);
+}
+
+static _INLINE_ void
+receive_chars(struct uart_8250_port *up, int *status, struct pt_regs *regs)
+{
+ struct tty_struct *tty = up->port.info->tty;
+ unsigned char ch;
+ int max_count = 256;
+
+ do {
+ if (unlikely(tty->flip.count >= TTY_FLIPBUF_SIZE)) {
+ tty->flip.work.func((void *)tty);
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE)
+ return; // if TTY_DONT_FLIP is set
+ }
+ ch = serial_inp(up, UART_RX);
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = TTY_NORMAL;
+ up->port.icount.rx++;
+
+ if (unlikely(*status & (UART_LSR_BI | UART_LSR_PE |
+ UART_LSR_FE | UART_LSR_OE))) {
+ /*
+ * For statistics only
+ */
+ if (*status & UART_LSR_BI) {
+ *status &= ~(UART_LSR_FE | UART_LSR_PE);
+ up->port.icount.brk++;
+ /*
+ * We do the SysRQ and SAK checking
+ * here because otherwise the break
+ * may get masked by ignore_status_mask
+ * or read_status_mask.
+ */
+ if (uart_handle_break(&up->port))
+ goto ignore_char;
+ } else if (*status & UART_LSR_PE)
+ up->port.icount.parity++;
+ else if (*status & UART_LSR_FE)
+ up->port.icount.frame++;
+ if (*status & UART_LSR_OE)
+ up->port.icount.overrun++;
+
+ /*
+ * Mask off conditions which should be ignored.
+ */
+ *status &= up->port.read_status_mask;
+
+#ifdef CONFIG_SERIAL_AU1X00_CONSOLE
+ if (up->port.line == up->port.cons->index) {
+ /* Recover the break flag from console xmit */
+ *status |= up->lsr_break_flag;
+ up->lsr_break_flag = 0;
+ }
+#endif
+ if (*status & UART_LSR_BI) {
+ DEBUG_INTR("handling break....");
+ *tty->flip.flag_buf_ptr = TTY_BREAK;
+ } else if (*status & UART_LSR_PE)
+ *tty->flip.flag_buf_ptr = TTY_PARITY;
+ else if (*status & UART_LSR_FE)
+ *tty->flip.flag_buf_ptr = TTY_FRAME;
+ }
+ if (uart_handle_sysrq_char(&up->port, ch, regs))
+ goto ignore_char;
+ if ((*status & up->port.ignore_status_mask) == 0) {
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ if ((*status & UART_LSR_OE) &&
+ tty->flip.count < TTY_FLIPBUF_SIZE) {
+ /*
+ * Overrun is special, since it's reported
+ * immediately, and doesn't affect the current
+ * character.
+ */
+ *tty->flip.flag_buf_ptr = TTY_OVERRUN;
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ ignore_char:
+ *status = serial_inp(up, UART_LSR);
+ } while ((*status & UART_LSR_DR) && (max_count-- > 0));
+ tty_flip_buffer_push(tty);
+}
+
+static _INLINE_ void transmit_chars(struct uart_8250_port *up)
+{
+ struct circ_buf *xmit = &up->port.info->xmit;
+ int count;
+
+ if (up->port.x_char) {
+ serial_outp(up, UART_TX, up->port.x_char);
+ up->port.icount.tx++;
+ up->port.x_char = 0;
+ return;
+ }
+ if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
+ serial8250_stop_tx(&up->port, 0);
+ return;
+ }
+
+ count = up->port.fifosize;
+ do {
+ serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ up->port.icount.tx++;
+ if (uart_circ_empty(xmit))
+ break;
+ } while (--count > 0);
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&up->port);
+
+ DEBUG_INTR("THRE...");
+
+ if (uart_circ_empty(xmit))
+ serial8250_stop_tx(&up->port, 0);
+}
+
+static _INLINE_ void check_modem_status(struct uart_8250_port *up)
+{
+ int status;
+
+ status = serial_in(up, UART_MSR);
+
+ if ((status & UART_MSR_ANY_DELTA) == 0)
+ return;
+
+ if (status & UART_MSR_TERI)
+ up->port.icount.rng++;
+ if (status & UART_MSR_DDSR)
+ up->port.icount.dsr++;
+ if (status & UART_MSR_DDCD)
+ uart_handle_dcd_change(&up->port, status & UART_MSR_DCD);
+ if (status & UART_MSR_DCTS)
+ uart_handle_cts_change(&up->port, status & UART_MSR_CTS);
+
+ wake_up_interruptible(&up->port.info->delta_msr_wait);
+}
+
+/*
+ * This handles the interrupt from one port.
+ */
+static inline void
+serial8250_handle_port(struct uart_8250_port *up, struct pt_regs *regs)
+{
+ unsigned int status = serial_inp(up, UART_LSR);
+
+ DEBUG_INTR("status = %x...", status);
+
+ if (status & UART_LSR_DR)
+ receive_chars(up, &status, regs);
+ check_modem_status(up);
+ if (status & UART_LSR_THRE)
+ transmit_chars(up);
+}
+
+/*
+ * This is the serial driver's interrupt routine.
+ *
+ * Arjan thinks the old way was overly complex, so it got simplified.
+ * Alan disagrees, saying that we need the complexity to handle the weird
+ * nature of ISA shared interrupts. (This is a special exception.)
+ *
+ * In order to handle ISA shared interrupts properly, we need to check
+ * that all ports have been serviced, and therefore the ISA interrupt
+ * line has been de-asserted.
+ *
+ * This means we need to loop through all ports, checking that they
+ * don't have an interrupt pending.
+ */
+static irqreturn_t serial8250_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct irq_info *i = dev_id;
+ struct list_head *l, *end = NULL;
+ int pass_counter = 0;
+
+ DEBUG_INTR("serial8250_interrupt(%d)...", irq);
+
+ spin_lock(&i->lock);
+
+ l = i->head;
+ do {
+ struct uart_8250_port *up;
+ unsigned int iir;
+
+ up = list_entry(l, struct uart_8250_port, list);
+
+ iir = serial_in(up, UART_IIR);
+ if (!(iir & UART_IIR_NO_INT)) {
+ spin_lock(&up->port.lock);
+ serial8250_handle_port(up, regs);
+ spin_unlock(&up->port.lock);
+
+ end = NULL;
+ } else if (end == NULL)
+ end = l;
+
+ l = l->next;
+
+ if (l == i->head && pass_counter++ > PASS_LIMIT) {
+ /* If we hit this, we're dead. */
+ printk(KERN_ERR "serial8250: too much work for "
+ "irq%d\n", irq);
+ break;
+ }
+ } while (l != end);
+
+ spin_unlock(&i->lock);
+
+ DEBUG_INTR("end.\n");
+ /* FIXME! Was it really ours? */
+ return IRQ_HANDLED;
+}
+
+/*
+ * To support ISA shared interrupts, we need to have one interrupt
+ * handler that ensures that the IRQ line has been deasserted
+ * before returning. Failing to do this will result in the IRQ
+ * line being stuck active, and, since ISA irqs are edge triggered,
+ * no more IRQs will be seen.
+ */
+static void serial_do_unlink(struct irq_info *i, struct uart_8250_port *up)
+{
+ spin_lock_irq(&i->lock);
+
+ if (!list_empty(i->head)) {
+ if (i->head == &up->list)
+ i->head = i->head->next;
+ list_del(&up->list);
+ } else {
+ BUG_ON(i->head != &up->list);
+ i->head = NULL;
+ }
+
+ spin_unlock_irq(&i->lock);
+}
+
+static int serial_link_irq_chain(struct uart_8250_port *up)
+{
+ struct irq_info *i = irq_lists + up->port.irq;
+ int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? SA_SHIRQ : 0;
+
+ spin_lock_irq(&i->lock);
+
+ if (i->head) {
+ list_add(&up->list, i->head);
+ spin_unlock_irq(&i->lock);
+
+ ret = 0;
+ } else {
+ INIT_LIST_HEAD(&up->list);
+ i->head = &up->list;
+ spin_unlock_irq(&i->lock);
+
+ ret = request_irq(up->port.irq, serial8250_interrupt,
+ irq_flags, "serial", i);
+ if (ret < 0)
+ serial_do_unlink(i, up);
+ }
+
+ return ret;
+}
+
+static void serial_unlink_irq_chain(struct uart_8250_port *up)
+{
+ struct irq_info *i = irq_lists + up->port.irq;
+
+ BUG_ON(i->head == NULL);
+
+ if (list_empty(i->head))
+ free_irq(up->port.irq, i);
+
+ serial_do_unlink(i, up);
+}
+
+/*
+ * This function is used to handle ports that do not have an
+ * interrupt. This doesn't work very well for 16450's, but gives
+ * barely passable results for a 16550A, although at the expense
+ * of much CPU overhead.
+ */
+static void serial8250_timeout(unsigned long data)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)data;
+ unsigned int timeout;
+ unsigned int iir;
+
+ iir = serial_in(up, UART_IIR);
+ if (!(iir & UART_IIR_NO_INT)) {
+ spin_lock(&up->port.lock);
+ serial8250_handle_port(up, NULL);
+ spin_unlock(&up->port.lock);
+ }
+
+ timeout = up->port.timeout;
+ timeout = timeout > 6 ? (timeout / 2 - 2) : 1;
+ mod_timer(&up->timer, jiffies + timeout);
+}
+
+static unsigned int serial8250_tx_empty(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long flags;
+ unsigned int ret;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ ret = serial_in(up, UART_LSR) & UART_LSR_TEMT ? TIOCSER_TEMT : 0;
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ return ret;
+}
+
+static unsigned int serial8250_get_mctrl(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long flags;
+ unsigned char status;
+ unsigned int ret;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ status = serial_in(up, UART_MSR);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ ret = 0;
+ if (status & UART_MSR_DCD)
+ ret |= TIOCM_CAR;
+ if (status & UART_MSR_RI)
+ ret |= TIOCM_RNG;
+ if (status & UART_MSR_DSR)
+ ret |= TIOCM_DSR;
+ if (status & UART_MSR_CTS)
+ ret |= TIOCM_CTS;
+ return ret;
+}
+
+static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned char mcr = 0;
+
+ if (mctrl & TIOCM_RTS)
+ mcr |= UART_MCR_RTS;
+ if (mctrl & TIOCM_DTR)
+ mcr |= UART_MCR_DTR;
+ if (mctrl & TIOCM_OUT1)
+ mcr |= UART_MCR_OUT1;
+ if (mctrl & TIOCM_OUT2)
+ mcr |= UART_MCR_OUT2;
+ if (mctrl & TIOCM_LOOP)
+ mcr |= UART_MCR_LOOP;
+
+ mcr = (mcr & up->mcr_mask) | up->mcr_force;
+
+ serial_out(up, UART_MCR, mcr);
+}
+
+static void serial8250_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ if (break_state == -1)
+ up->lcr |= UART_LCR_SBC;
+ else
+ up->lcr &= ~UART_LCR_SBC;
+ serial_out(up, UART_LCR, up->lcr);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static int serial8250_startup(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long flags;
+ int retval;
+
+ /*
+ * Clear the FIFO buffers and disable them.
+ * (they will be re-enabled in set_termios())
+ */
+ if (uart_config[up->port.type].flags & UART_CLEAR_FIFO) {
+ serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ serial_outp(up, UART_FCR, 0);
+ }
+
+ /*
+ * Clear the interrupt registers.
+ */
+ (void) serial_inp(up, UART_LSR);
+ (void) serial_inp(up, UART_RX);
+ (void) serial_inp(up, UART_IIR);
+ (void) serial_inp(up, UART_MSR);
+
+ /*
+ * At this point, there's no way the LSR could still be 0xff;
+ * if it is, then bail out, because there's likely no UART
+ * here.
+ */
+ if (!(up->port.flags & UPF_BUGGY_UART) &&
+ (serial_inp(up, UART_LSR) == 0xff)) {
+ printk("ttyS%d: LSR safety check engaged!\n", up->port.line);
+ return -ENODEV;
+ }
+
+ retval = serial_link_irq_chain(up);
+ if (retval)
+ return retval;
+
+ /*
+ * Now, initialize the UART
+ */
+ serial_outp(up, UART_LCR, UART_LCR_WLEN8);
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ if (up->port.flags & UPF_FOURPORT) {
+ if (!is_real_interrupt(up->port.irq))
+ up->port.mctrl |= TIOCM_OUT1;
+ } else
+ /*
+ * Most PC uarts need OUT2 raised to enable interrupts.
+ */
+ if (is_real_interrupt(up->port.irq))
+ up->port.mctrl |= TIOCM_OUT2;
+
+ serial8250_set_mctrl(&up->port, up->port.mctrl);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /*
+ * Finally, enable interrupts. Note: Modem status interrupts
+ * are set via set_termios(), which will be occurring imminently
+ * anyway, so we don't enable them here.
+ */
+ up->ier = UART_IER_RLSI | UART_IER_RDI;
+ serial_outp(up, UART_IER, up->ier);
+
+ if (up->port.flags & UPF_FOURPORT) {
+ unsigned int icp;
+ /*
+ * Enable interrupts on the AST Fourport board
+ */
+ icp = (up->port.iobase & 0xfe0) | 0x01f;
+ outb_p(0x80, icp);
+ (void) inb_p(icp);
+ }
+
+ /*
+ * And clear the interrupt registers again for luck.
+ */
+ (void) serial_inp(up, UART_LSR);
+ (void) serial_inp(up, UART_RX);
+ (void) serial_inp(up, UART_IIR);
+ (void) serial_inp(up, UART_MSR);
+
+ return 0;
+}
+
+static void serial8250_shutdown(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long flags;
+
+ /*
+ * Disable interrupts from this port
+ */
+ up->ier = 0;
+ serial_outp(up, UART_IER, 0);
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ if (up->port.flags & UPF_FOURPORT) {
+ /* reset interrupts on the AST Fourport board */
+ inb((up->port.iobase & 0xfe0) | 0x1f);
+ up->port.mctrl |= TIOCM_OUT1;
+ } else
+ up->port.mctrl &= ~TIOCM_OUT2;
+
+ serial8250_set_mctrl(&up->port, up->port.mctrl);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /*
+ * Disable break condition and FIFOs
+ */
+ serial_out(up, UART_LCR, serial_inp(up, UART_LCR) & ~UART_LCR_SBC);
+ serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR |
+ UART_FCR_CLEAR_XMIT);
+ serial_outp(up, UART_FCR, 0);
+
+ /*
+ * Read data port to reset things, and then unlink from
+ * the IRQ chain.
+ */
+ (void) serial_in(up, UART_RX);
+
+ if (!is_real_interrupt(up->port.irq))
+ del_timer_sync(&up->timer);
+ else
+ serial_unlink_irq_chain(up);
+}
+
+static unsigned int serial8250_get_divisor(struct uart_port *port, unsigned int baud)
+{
+ unsigned int quot;
+
+ /*
+ * Handle magic divisors for baud rates above baud_base on
+ * SMSC SuperIO chips.
+ */
+ if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/4))
+ quot = 0x8001;
+ else if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/8))
+ quot = 0x8002;
+ else
+ quot = uart_get_divisor(port, baud);
+
+ return quot;
+}
+
+static void
+serial8250_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned char cval, fcr = 0;
+ unsigned long flags;
+ unsigned int baud, quot;
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ cval = 0x00;
+ break;
+ case CS6:
+ cval = 0x01;
+ break;
+ case CS7:
+ cval = 0x02;
+ break;
+ default:
+ case CS8:
+ cval = 0x03;
+ break;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ cval |= 0x04;
+ if (termios->c_cflag & PARENB)
+ cval |= UART_LCR_PARITY;
+ if (!(termios->c_cflag & PARODD))
+ cval |= UART_LCR_EPAR;
+#ifdef CMSPAR
+ if (termios->c_cflag & CMSPAR)
+ cval |= UART_LCR_SPAR;
+#endif
+
+ /*
+ * Ask the core to calculate the divisor for us.
+ */
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16);
+ quot = serial8250_get_divisor(port, baud);
+ quot = 0x35; /* FIXME */
+
+ /*
+ * Work around a bug in the Oxford Semiconductor 952 rev B
+ * chip which causes it to seriously miscalculate baud rates
+ * when DLL is 0.
+ */
+ if ((quot & 0xff) == 0 && up->port.type == PORT_16C950 &&
+ up->rev == 0x5201)
+ quot ++;
+
+ if (uart_config[up->port.type].flags & UART_USE_FIFO) {
+ if (baud < 2400)
+ fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIGGER_1;
+ else
+ fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIGGER_8;
+ }
+
+ /*
+ * Ok, we're now changing the port state. Do it with
+ * interrupts disabled.
+ */
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ /*
+ * Update the per-port timeout.
+ */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ up->port.read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
+ if (termios->c_iflag & INPCK)
+ up->port.read_status_mask |= UART_LSR_FE | UART_LSR_PE;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ up->port.read_status_mask |= UART_LSR_BI;
+
+ /*
+	 * Characters to ignore
+ */
+ up->port.ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ up->port.ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
+ if (termios->c_iflag & IGNBRK) {
+ up->port.ignore_status_mask |= UART_LSR_BI;
+ /*
+ * If we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ up->port.ignore_status_mask |= UART_LSR_OE;
+ }
+
+ /*
+ * ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ up->port.ignore_status_mask |= UART_LSR_DR;
+
+ /*
+ * CTS flow control flag and modem status interrupts
+ */
+ up->ier &= ~UART_IER_MSI;
+ if (UART_ENABLE_MS(&up->port, termios->c_cflag))
+ up->ier |= UART_IER_MSI;
+
+ serial_out(up, UART_IER, up->ier);
+ serial_outp(up, 0x28, quot & 0xffff);
+ up->lcr = cval; /* Save LCR */
+ if (up->port.type != PORT_16750) {
+ if (fcr & UART_FCR_ENABLE_FIFO) {
+ /* emulated UARTs (Lucent Venus 167x) need two steps */
+ serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ }
+ serial_outp(up, UART_FCR, fcr); /* set fcr */
+ }
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void
+serial8250_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ if (state) {
+ /* sleep */
+ if (up->pm)
+ up->pm(port, state, oldstate);
+ } else {
+ /* wake */
+ if (up->pm)
+ up->pm(port, state, oldstate);
+ }
+}
+
+/*
+ * Resource handling. This is complicated by the fact that resources
+ * depend on the port type. Maybe we should be claiming the standard
+ * 8250 ports, and then trying to get other resources as necessary?
+ */
+static int
+serial8250_request_std_resource(struct uart_8250_port *up, struct resource **res)
+{
+ unsigned int size = 8 << up->port.regshift;
+ int ret = 0;
+
+ switch (up->port.iotype) {
+ case SERIAL_IO_MEM:
+ if (up->port.mapbase) {
+ *res = request_mem_region(up->port.mapbase, size, "serial");
+ if (!*res)
+ ret = -EBUSY;
+ }
+ break;
+
+ case SERIAL_IO_HUB6:
+ case SERIAL_IO_PORT:
+ *res = request_region(up->port.iobase, size, "serial");
+ if (!*res)
+ ret = -EBUSY;
+ break;
+ }
+ return ret;
+}
+
+
+static void serial8250_release_port(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ unsigned long start, offset = 0, size = 0;
+
+ size <<= up->port.regshift;
+
+ switch (up->port.iotype) {
+ case SERIAL_IO_MEM:
+ if (up->port.mapbase) {
+ /*
+ * Unmap the area.
+ */
+ iounmap(up->port.membase);
+ up->port.membase = NULL;
+
+ start = up->port.mapbase;
+
+ if (size)
+ release_mem_region(start + offset, size);
+ release_mem_region(start, 8 << up->port.regshift);
+ }
+ break;
+
+ case SERIAL_IO_HUB6:
+ case SERIAL_IO_PORT:
+ start = up->port.iobase;
+
+ if (size)
+ release_region(start + offset, size);
+ release_region(start + offset, 8 << up->port.regshift);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static int serial8250_request_port(struct uart_port *port)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ struct resource *res = NULL, *res_rsa = NULL;
+ int ret = 0;
+
+ if (up->port.flags & UPF_RESOURCES) {
+ ret = serial8250_request_std_resource(up, &res);
+ }
+
+ /*
+ * If we have a mapbase, then request that as well.
+ */
+ if (ret == 0 && up->port.flags & UPF_IOREMAP) {
+ int size = res->end - res->start + 1;
+
+ up->port.membase = ioremap(up->port.mapbase, size);
+ if (!up->port.membase)
+ ret = -ENOMEM;
+ }
+
+ if (ret < 0) {
+ if (res_rsa)
+ release_resource(res_rsa);
+ if (res)
+ release_resource(res);
+ }
+ return ret;
+}
+
+static void serial8250_config_port(struct uart_port *port, int flags)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)port;
+ struct resource *res_std = NULL, *res_rsa = NULL;
+ int probeflags = PROBE_ANY;
+
+ probeflags &= ~PROBE_RSA;
+
+ if (flags & UART_CONFIG_TYPE)
+ autoconfig(up, probeflags);
+
+ /*
+ * If the port wasn't an RSA port, release the resource.
+ */
+ if (up->port.type != PORT_RSA && res_rsa)
+ release_resource(res_rsa);
+
+ if (up->port.type == PORT_UNKNOWN && res_std)
+ release_resource(res_std);
+}
+
+static int
+serial8250_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ if (ser->irq >= NR_IRQS || ser->irq < 0 ||
+ ser->baud_base < 9600 || ser->type < PORT_UNKNOWN ||
+ ser->type > PORT_MAX_8250 || ser->type == PORT_CIRRUS ||
+ ser->type == PORT_STARTECH)
+ return -EINVAL;
+ return 0;
+}
+
+static const char *
+serial8250_type(struct uart_port *port)
+{
+ int type = port->type;
+
+ if (type >= ARRAY_SIZE(uart_config))
+ type = 0;
+ return uart_config[type].name;
+}
+
+static struct uart_ops serial8250_pops = {
+ .tx_empty = serial8250_tx_empty,
+ .set_mctrl = serial8250_set_mctrl,
+ .get_mctrl = serial8250_get_mctrl,
+ .stop_tx = serial8250_stop_tx,
+ .start_tx = serial8250_start_tx,
+ .stop_rx = serial8250_stop_rx,
+ .enable_ms = serial8250_enable_ms,
+ .break_ctl = serial8250_break_ctl,
+ .startup = serial8250_startup,
+ .shutdown = serial8250_shutdown,
+ .set_termios = serial8250_set_termios,
+ .pm = serial8250_pm,
+ .type = serial8250_type,
+ .release_port = serial8250_release_port,
+ .request_port = serial8250_request_port,
+ .config_port = serial8250_config_port,
+ .verify_port = serial8250_verify_port,
+};
+
+static struct uart_8250_port serial8250_ports[UART_NR];
+
+static void __init serial8250_isa_init_ports(void)
+{
+ struct uart_8250_port *up;
+ static int first = 1;
+ int i;
+
+ if (!first)
+ return;
+ first = 0;
+
+ for (i = 0, up = serial8250_ports; i < ARRAY_SIZE(old_serial_port);
+ i++, up++) {
+ up->port.iobase = old_serial_port[i].port;
+ up->port.irq = old_serial_port[i].irq;
+ up->port.uartclk = get_au1x00_uart_baud_base();
+ up->port.flags = old_serial_port[i].flags |
+ UPF_RESOURCES;
+ up->port.hub6 = old_serial_port[i].hub6;
+ up->port.membase = old_serial_port[i].iomem_base;
+ up->port.iotype = old_serial_port[i].io_type;
+ up->port.regshift = old_serial_port[i].iomem_reg_shift;
+ up->port.ops = &serial8250_pops;
+ }
+}
+
+static void __init serial8250_register_ports(struct uart_driver *drv)
+{
+ int i;
+
+ serial8250_isa_init_ports();
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ up->port.line = i;
+ up->port.ops = &serial8250_pops;
+ init_timer(&up->timer);
+ up->timer.function = serial8250_timeout;
+
+ /*
+ * ALPHA_KLUDGE_MCR needs to be killed.
+ */
+ up->mcr_mask = ~ALPHA_KLUDGE_MCR;
+ up->mcr_force = ALPHA_KLUDGE_MCR;
+
+ uart_add_one_port(drv, &up->port);
+ }
+}
+
+#ifdef CONFIG_SERIAL_AU1X00_CONSOLE
+
+#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
+
+/*
+ * Wait for transmitter & holding register to empty
+ */
+static inline void wait_for_xmitr(struct uart_8250_port *up)
+{
+ unsigned int status, tmout = 10000;
+
+ /* Wait up to 10ms for the character(s) to be sent. */
+ do {
+ status = serial_in(up, UART_LSR);
+
+ if (status & UART_LSR_BI)
+ up->lsr_break_flag = UART_LSR_BI;
+
+ if (--tmout == 0)
+ break;
+ udelay(1);
+ } while ((status & BOTH_EMPTY) != BOTH_EMPTY);
+
+ /* Wait up to 1s for flow control if necessary */
+ if (up->port.flags & UPF_CONS_FLOW) {
+ tmout = 1000000;
+ while (--tmout &&
+ ((serial_in(up, UART_MSR) & UART_MSR_CTS) == 0))
+ udelay(1);
+ }
+}
+
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * The console_lock must be held when we get here.
+ */
+static void
+serial8250_console_write(struct console *co, const char *s, unsigned int count)
+{
+ struct uart_8250_port *up = &serial8250_ports[co->index];
+ unsigned int ier;
+ int i;
+
+ /*
+	 * First save the IER, then disable the interrupts
+ */
+ ier = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
+
+ /*
+ * Now, do each character
+ */
+ for (i = 0; i < count; i++, s++) {
+ wait_for_xmitr(up);
+
+ /*
+ * Send the character out.
+		 * If it's an LF, also send a CR...
+ */
+ serial_out(up, UART_TX, *s);
+ if (*s == 10) {
+ wait_for_xmitr(up);
+ serial_out(up, UART_TX, 13);
+ }
+ }
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up);
+ serial_out(up, UART_IER, ier);
+}
+
+static int __init serial8250_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ /*
+ * Check whether an invalid uart number has been specified, and
+ * if so, search for the first available port that does have
+ * console support.
+ */
+ if (co->index >= UART_NR)
+ co->index = 0;
+ port = &serial8250_ports[co->index].port;
+
+ /*
+ * Temporary fix.
+ */
+ spin_lock_init(&port->lock);
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(port, co, baud, parity, bits, flow);
+}
+
+extern struct uart_driver serial8250_reg;
+static struct console serial8250_console = {
+ .name = "ttyS",
+ .write = serial8250_console_write,
+ .device = uart_console_device,
+ .setup = serial8250_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &serial8250_reg,
+};
+
+static int __init serial8250_console_init(void)
+{
+ serial8250_isa_init_ports();
+ register_console(&serial8250_console);
+ return 0;
+}
+console_initcall(serial8250_console_init);
+
+#define SERIAL8250_CONSOLE &serial8250_console
+#else
+#define SERIAL8250_CONSOLE NULL
+#endif
+
+static struct uart_driver serial8250_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial",
+ .devfs_name = "tts/",
+ .dev_name = "ttyS",
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .nr = UART_NR,
+ .cons = SERIAL8250_CONSOLE,
+};
+
+/*
+ * register_serial and unregister_serial allows for 16x50 serial ports to be
+ * configured at run-time, to support PCMCIA modems.
+ */
+
+static int __register_serial(struct serial_struct *req, int line)
+{
+ struct uart_port port;
+
+ port.iobase = req->port;
+ port.membase = req->iomem_base;
+ port.irq = req->irq;
+ port.uartclk = req->baud_base * 16;
+ port.fifosize = req->xmit_fifo_size;
+ port.regshift = req->iomem_reg_shift;
+ port.iotype = req->io_type;
+ port.flags = req->flags | UPF_BOOT_AUTOCONF;
+ port.mapbase = req->iomap_base;
+ port.line = line;
+
+ if (HIGH_BITS_OFFSET)
+ port.iobase |= (long) req->port_high << HIGH_BITS_OFFSET;
+
+ /*
+ * If a clock rate wasn't specified by the low level
+ * driver, then default to the standard clock rate.
+ */
+ if (port.uartclk == 0)
+ port.uartclk = BASE_BAUD * 16;
+
+ return uart_register_port(&serial8250_reg, &port);
+}
+
+/**
+ * register_serial - configure a 16x50 serial port at runtime
+ * @req: request structure
+ *
+ * Configure the serial port specified by the request. If the
+ * port exists and is in use an error is returned. If the port
+ * is not currently in the table it is added.
+ *
+ * The port is then probed and if necessary the IRQ is autodetected
+ * If this fails an error is returned.
+ *
+ * On success the port is ready to use and the line number is returned.
+ */
+int register_serial(struct serial_struct *req)
+{
+ return __register_serial(req, -1);
+}
+
+int __init early_serial_setup(struct uart_port *port)
+{
+ serial8250_isa_init_ports();
+ serial8250_ports[port->line].port = *port;
+ serial8250_ports[port->line].port.ops = &serial8250_pops;
+ return 0;
+}
+
+/**
+ * unregister_serial - remove a 16x50 serial port at runtime
+ * @line: serial line number
+ *
+ * Remove one serial port. This may be called from interrupt
+ * context.
+ */
+void unregister_serial(int line)
+{
+ uart_unregister_port(&serial8250_reg, line);
+}
+
+/*
+ * This is for ISAPNP only.
+ */
+void serial8250_get_irq_map(unsigned int *map)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ if (serial8250_ports[i].port.type != PORT_UNKNOWN &&
+ serial8250_ports[i].port.irq < 16)
+ *map |= 1 << serial8250_ports[i].port.irq;
+ }
+}
+
+/**
+ * serial8250_suspend_port - suspend one serial port
+ * @line: serial line number
+ * @level: the level of port suspension, as per uart_suspend_port
+ *
+ * Suspend one serial port.
+ */
+void serial8250_suspend_port(int line)
+{
+ uart_suspend_port(&serial8250_reg, &serial8250_ports[line].port);
+}
+
+/**
+ * serial8250_resume_port - resume one serial port
+ * @line: serial line number
+ * @level: the level of port resumption, as per uart_resume_port
+ *
+ * Resume one serial port.
+ */
+void serial8250_resume_port(int line)
+{
+ uart_resume_port(&serial8250_reg, &serial8250_ports[line].port);
+}
+
+static int __init serial8250_init(void)
+{
+ int ret, i;
+
+ printk(KERN_INFO "Serial: Au1x00 driver\n");
+
+ for (i = 0; i < NR_IRQS; i++)
+ spin_lock_init(&irq_lists[i].lock);
+
+ ret = uart_register_driver(&serial8250_reg);
+ if (ret >= 0)
+ serial8250_register_ports(&serial8250_reg);
+
+ return ret;
+}
+
+static void __exit serial8250_exit(void)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++)
+ uart_remove_one_port(&serial8250_reg, &serial8250_ports[i].port);
+
+ uart_unregister_driver(&serial8250_reg);
+}
+
+module_init(serial8250_init);
+module_exit(serial8250_exit);
+
+EXPORT_SYMBOL(register_serial);
+EXPORT_SYMBOL(unregister_serial);
+EXPORT_SYMBOL(serial8250_get_irq_map);
+EXPORT_SYMBOL(serial8250_suspend_port);
+EXPORT_SYMBOL(serial8250_resume_port);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Au1x00 serial driver");
--- /dev/null
+/*
+ * dz.c: Serial port driver for DECstations equipped
+ * with the DZ chipset.
+ *
+ * Copyright (C) 1998 Olivier A. D. Lebaillif
+ *
+ * Email: olivier.lebaillif@ifrsys.com
+ *
+ * [31-AUG-98] triemer
+ * Changed IRQ to use Harald's dec internals interrupts.h
+ * removed base_addr code - moving address assignment to setup.c
+ * Changed name of dz_init to rs_init to be consistent with tc code
+ * [13-NOV-98] triemer fixed code to receive characters
+ * after patches by harald to irq code.
+ * [09-JAN-99] triemer minor fix for schedule - due to removal of timeout
+ * field from "current" - somewhere between 2.1.121 and 2.1.131
+ * Wed Jun 27 15:02:26 BRT 2001
+ * [27-JUN-2001] Arnaldo Carvalho de Melo <acme@conectiva.com.br> - cleanups
+ *
+ * Parts (C) 1999 David Airlie, airlied@linux.ie
+ * [07-SEP-99] Bugfixes
+ *
+ * [06-Jan-2002] Russell King <rmk@arm.linux.org.uk>
+ * Converted to new serial core
+ */
+
+#undef DEBUG_DZ
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+
+#include <asm/bootinfo.h>
+#include <asm/dec/interrupts.h>
+#include <asm/dec/kn01.h>
+#include <asm/dec/kn02.h>
+#include <asm/dec/machtype.h>
+#include <asm/dec/prom.h>
+#include <asm/irq.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+
+#define CONSOLE_LINE (3) /* for definition of struct console */
+
+#include "dz.h"
+
+#define DZ_INTR_DEBUG 1
+
+static char *dz_name = "DECstation DZ serial driver version ";
+static char *dz_version = "1.02";
+
+struct dz_port {
+ struct uart_port port;
+ unsigned int cflag;
+};
+
+static struct dz_port dz_ports[DZ_NB_PORT];
+
+#ifdef DEBUG_DZ
+/*
+ * debugging code to send out chars via prom
+ */
+static void debug_console(const char *s, int count)
+{
+ unsigned i;
+
+ for (i = 0; i < count; i++) {
+ if (*s == 10)
+ prom_printf("%c", 13);
+ prom_printf("%c", *s++);
+ }
+}
+#endif
+
+/*
+ * ------------------------------------------------------------
+ * dz_in () and dz_out ()
+ *
+ * These routines are used to access the registers of the DZ
+ * chip, hiding relocation differences between implementations.
+ * ------------------------------------------------------------
+ */
+
+static inline unsigned short dz_in(struct dz_port *dport, unsigned offset)
+{
+ volatile unsigned short *addr =
+ (volatile unsigned short *) (dport->port.membase + offset);
+ return *addr;
+}
+
+static inline void dz_out(struct dz_port *dport, unsigned offset,
+ unsigned short value)
+{
+ volatile unsigned short *addr =
+ (volatile unsigned short *) (dport->port.membase + offset);
+ *addr = value;
+}
+
+/*
+ * ------------------------------------------------------------
+ * rs_stop () and rs_start ()
+ *
+ * These routines are called before setting or resetting
+ * tty->stopped. They enable or disable transmitter interrupts,
+ * as necessary.
+ * ------------------------------------------------------------
+ */
+
+static void dz_stop_tx(struct uart_port *uport, unsigned int tty_stop)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned short tmp, mask = 1 << dport->port.line;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+ tmp = dz_in(dport, DZ_TCR); /* read the TX flag */
+ tmp &= ~mask; /* clear the TX flag */
+ dz_out(dport, DZ_TCR, tmp);
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+}
+
+static void dz_start_tx(struct uart_port *uport, unsigned int tty_start)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned short tmp, mask = 1 << dport->port.line;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+ tmp = dz_in(dport, DZ_TCR); /* read the TX flag */
+ tmp |= mask; /* set the TX flag */
+ dz_out(dport, DZ_TCR, tmp);
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+}
+
+static void dz_stop_rx(struct uart_port *uport)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+ dport->cflag &= ~DZ_CREAD;
+ dz_out(dport, DZ_LPR, dport->cflag);
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+}
+
+static void dz_enable_ms(struct uart_port *port)
+{
+ /* nothing to do */
+}
+
+/*
+ * ------------------------------------------------------------
+ * Here starts the interrupt handling routines. All of the
+ * following subroutines are declared as inline and are folded
+ * into dz_interrupt. They were separated out for readability's
+ * sake.
+ *
+ * Note: rs_interrupt() is a "fast" interrupt, which means that it
+ * runs with interrupts turned off. People who may want to modify
+ * rs_interrupt() should try to keep the interrupt handler as fast as
+ * possible. After you are done making modifications, it is not a bad
+ * idea to do:
+ *
+ * make drivers/serial/dz.s
+ *
+ * and look at the resulting assembly code in dz.s.
+ *
+ * ------------------------------------------------------------
+ */
+
+/*
+ * ------------------------------------------------------------
+ * receive_char ()
+ *
+ * This routine deals with inputs from any lines.
+ * ------------------------------------------------------------
+ */
+static inline void dz_receive_chars(struct dz_port *dport)
+{
+ struct tty_struct *tty = NULL;
+ struct uart_icount *icount;
+ int ignore = 0;
+ unsigned short status, tmp;
+ unsigned char ch;
+
+ /* this code is going to be a problem...
+ the call to tty_flip_buffer is going to need
+ to be rethought...
+ */
+ do {
+ status = dz_in(dport, DZ_RBUF);
+
+ /* punt so we don't get duplicate characters */
+ if (!(status & DZ_DVAL))
+ goto ignore_char;
+
+
+ ch = UCHAR(status); /* grab the char */
+
+#if 0
+ if (info->is_console) {
+ if (ch == 0)
+ return; /* it's a break ... */
+ }
+#endif
+
+ tty = dport->port.info->tty;/* now tty points to the proper dev */
+ icount = &dport->port.icount;
+
+ if (!tty)
+ break;
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE)
+ break;
+
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = 0;
+ icount->rx++;
+
+ /* keep track of the statistics */
+ if (status & (DZ_OERR | DZ_FERR | DZ_PERR)) {
+ if (status & DZ_PERR) /* parity error */
+ icount->parity++;
+ else if (status & DZ_FERR) /* frame error */
+ icount->frame++;
+ if (status & DZ_OERR) /* overrun error */
+ icount->overrun++;
+
+ /* check to see if we should ignore the character
+ and mask off conditions that should be ignored
+ */
+
+ if (status & dport->port.ignore_status_mask) {
+ if (++ignore > 100)
+ break;
+ goto ignore_char;
+ }
+ /* mask off the error conditions we want to ignore */
+ tmp = status & dport->port.read_status_mask;
+
+ if (tmp & DZ_PERR) {
+ *tty->flip.flag_buf_ptr = TTY_PARITY;
+#ifdef DEBUG_DZ
+ debug_console("PERR\n", 5);
+#endif
+ } else if (tmp & DZ_FERR) {
+ *tty->flip.flag_buf_ptr = TTY_FRAME;
+#ifdef DEBUG_DZ
+ debug_console("FERR\n", 5);
+#endif
+ }
+ if (tmp & DZ_OERR) {
+#ifdef DEBUG_DZ
+ debug_console("OERR\n", 5);
+#endif
+ if (tty->flip.count < TTY_FLIPBUF_SIZE) {
+ tty->flip.count++;
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ *tty->flip.flag_buf_ptr = TTY_OVERRUN;
+ }
+ }
+ }
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ ignore_char:
+ } while (status & DZ_DVAL);
+
+ if (tty)
+ tty_flip_buffer_push(tty);
+}
+
+/*
+ * ------------------------------------------------------------
+ * transmit_char ()
+ *
+ * This routine deals with outputs to any lines.
+ * ------------------------------------------------------------
+ */
+static inline void dz_transmit_chars(struct dz_port *dport)
+{
+ struct circ_buf *xmit = &dport->port.info->xmit;
+ unsigned char tmp;
+
+ if (dport->port.x_char) { /* XON/XOFF chars */
+ dz_out(dport, DZ_TDR, dport->port.x_char);
+ dport->port.icount.tx++;
+ dport->port.x_char = 0;
+ return;
+ }
+ /* if nothing to do or stopped or hardware stopped */
+ if (uart_circ_empty(xmit) || uart_tx_stopped(&dport->port)) {
+ dz_stop_tx(&dport->port, 0);
+ return;
+ }
+
+ /*
+	 * if there's something to do... (remember the DZ has no output
+	 * FIFO, so we go one char at a time :-<)
+ */
+ tmp = xmit->buf[xmit->tail];
+ xmit->tail = (xmit->tail + 1) & (DZ_XMIT_SIZE - 1);
+ dz_out(dport, DZ_TDR, tmp);
+ dport->port.icount.tx++;
+
+ if (uart_circ_chars_pending(xmit) < DZ_WAKEUP_CHARS)
+ uart_write_wakeup(&dport->port);
+
+ /* Are we done */
+ if (uart_circ_empty(xmit))
+ dz_stop_tx(&dport->port, 0);
+}
+
+/*
+ * ------------------------------------------------------------
+ * check_modem_status ()
+ *
+ * Only valid for the modem (DZ_MODEM) line.
+ * ------------------------------------------------------------
+ */
+static inline void check_modem_status(struct dz_port *dport)
+{
+ unsigned short status;
+
+	/* if not the modem line, just return */
+ if (dport->port.line != DZ_MODEM)
+ return;
+
+ status = dz_in(dport, DZ_MSR);
+
+ /* it's easy, since DSR2 is the only bit in the register */
+ if (status)
+ dport->port.icount.dsr++;
+}
+
+/*
+ * ------------------------------------------------------------
+ * dz_interrupt ()
+ *
+ * this is the main interrupt routine for the DZ chip.
+ * It deals with the multiple ports.
+ * ------------------------------------------------------------
+ */
+static irqreturn_t dz_interrupt(int irq, void *dev, struct pt_regs *regs)
+{
+ struct dz_port *dport;
+ unsigned short status;
+
+ /* get the reason why we just got an irq */
+ status = dz_in((struct dz_port *)dev, DZ_CSR);
+ dport = &dz_ports[LINE(status)];
+
+ if (status & DZ_RDONE)
+ dz_receive_chars(dport);
+
+ if (status & DZ_TRDY)
+ dz_transmit_chars(dport);
+
+ /* FIXME: what about check modem status??? --rmk */
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * -------------------------------------------------------------------
+ * Here ends the DZ interrupt routines.
+ * -------------------------------------------------------------------
+ */
+
+static unsigned int dz_get_mctrl(struct uart_port *uport)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned int mctrl = TIOCM_CAR | TIOCM_DSR | TIOCM_CTS;
+
+ if (dport->port.line == DZ_MODEM) {
+ /*
+ * CHECKME: This is a guess from the other code... --rmk
+ */
+ if (dz_in(dport, DZ_MSR) & DZ_MODEM_DSR)
+ mctrl &= ~TIOCM_DSR;
+ }
+
+ return mctrl;
+}
+
+static void dz_set_mctrl(struct uart_port *uport, unsigned int mctrl)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned short tmp;
+
+ if (dport->port.line == DZ_MODEM) {
+ tmp = dz_in(dport, DZ_TCR);
+ if (mctrl & TIOCM_DTR)
+ tmp &= ~DZ_MODEM_DTR;
+ else
+ tmp |= DZ_MODEM_DTR;
+ dz_out(dport, DZ_TCR, tmp);
+ }
+}
+
+/*
+ * -------------------------------------------------------------------
+ * startup ()
+ *
+ * various initialization tasks
+ * -------------------------------------------------------------------
+ */
+static int dz_startup(struct uart_port *uport)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned long flags;
+ unsigned short tmp;
+
+ /* The dz lines for the mouse/keyboard must be
+ * opened using their respective drivers.
+ */
+ if ((dport->port.line == DZ_KEYBOARD) ||
+ (dport->port.line == DZ_MOUSE))
+ return -ENODEV;
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+
+ /* enable the interrupt and the scanning */
+ tmp = dz_in(dport, DZ_CSR);
+ tmp |= DZ_RIE | DZ_TIE | DZ_MSE;
+ dz_out(dport, DZ_CSR, tmp);
+
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+
+ return 0;
+}
+
+/*
+ * -------------------------------------------------------------------
+ * shutdown ()
+ *
+ * This routine will shutdown a serial port; interrupts are disabled, and
+ * DTR is dropped if the hangup on close termio flag is on.
+ * -------------------------------------------------------------------
+ */
+static void dz_shutdown(struct uart_port *uport)
+{
+ dz_stop_tx(uport, 0);
+}
+
+/*
+ * get_lsr_info - get line status register info
+ *
+ * Purpose: Let user call ioctl() to get info when the UART physically
+ * is emptied. On bus types like RS485, the transmitter must
+ * release the bus after transmitting. This must be done when
+ *	the transmit shift register is empty, not when the
+ * transmit holding register is empty. This functionality
+ * allows an RS485 driver to be written in user space.
+ */
+static unsigned int dz_tx_empty(struct uart_port *uport)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned short status = dz_in(dport, DZ_LPR);
+
+ /* FIXME: this appears to be obviously broken --rmk. */
+ return status ? TIOCSER_TEMT : 0;
+}
+
+static void dz_break_ctl(struct uart_port *uport, int break_state)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned long flags;
+ unsigned short tmp, mask = 1 << uport->line;
+
+ spin_lock_irqsave(&uport->lock, flags);
+ tmp = dz_in(dport, DZ_TCR);
+ if (break_state)
+ tmp |= mask;
+ else
+ tmp &= ~mask;
+ dz_out(dport, DZ_TCR, tmp);
+ spin_unlock_irqrestore(&uport->lock, flags);
+}
+
+static void dz_set_termios(struct uart_port *uport, struct termios *termios,
+ struct termios *old_termios)
+{
+ struct dz_port *dport = (struct dz_port *)uport;
+ unsigned long flags;
+ unsigned int cflag, baud;
+
+ cflag = dport->port.line;
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ cflag |= DZ_CS5;
+ break;
+ case CS6:
+ cflag |= DZ_CS6;
+ break;
+ case CS7:
+ cflag |= DZ_CS7;
+ break;
+ case CS8:
+ default:
+ cflag |= DZ_CS8;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ cflag |= DZ_CSTOPB;
+ if (termios->c_cflag & PARENB)
+ cflag |= DZ_PARENB;
+ if (termios->c_cflag & PARODD)
+ cflag |= DZ_PARODD;
+
+ baud = uart_get_baud_rate(uport, termios, old_termios, 50, 9600);
+ switch (baud) {
+ case 50:
+ cflag |= DZ_B50;
+ break;
+ case 75:
+ cflag |= DZ_B75;
+ break;
+ case 110:
+ cflag |= DZ_B110;
+ break;
+ case 134:
+ cflag |= DZ_B134;
+ break;
+ case 150:
+ cflag |= DZ_B150;
+ break;
+ case 300:
+ cflag |= DZ_B300;
+ break;
+ case 600:
+ cflag |= DZ_B600;
+ break;
+ case 1200:
+ cflag |= DZ_B1200;
+ break;
+ case 1800:
+ cflag |= DZ_B1800;
+ break;
+ case 2000:
+ cflag |= DZ_B2000;
+ break;
+ case 2400:
+ cflag |= DZ_B2400;
+ break;
+ case 3600:
+ cflag |= DZ_B3600;
+ break;
+ case 4800:
+ cflag |= DZ_B4800;
+ break;
+ case 7200:
+ cflag |= DZ_B7200;
+ break;
+ case 9600:
+ default:
+ cflag |= DZ_B9600;
+ }
+
+ if (termios->c_cflag & CREAD)
+ cflag |= DZ_RXENAB;
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+
+ dz_out(dport, DZ_LPR, cflag);
+ dport->cflag = cflag;
+
+ /* setup accept flag */
+ dport->port.read_status_mask = DZ_OERR;
+ if (termios->c_iflag & INPCK)
+ dport->port.read_status_mask |= DZ_FERR | DZ_PERR;
+
+ /* characters to ignore */
+ uport->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ dport->port.ignore_status_mask |= DZ_FERR | DZ_PERR;
+
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+}
+
+static const char *dz_type(struct uart_port *port)
+{
+ return "DZ";
+}
+
+static void dz_release_port(struct uart_port *port)
+{
+ /* nothing to do */
+}
+
+static int dz_request_port(struct uart_port *port)
+{
+ return 0;
+}
+
+static void dz_config_port(struct uart_port *port, int flags)
+{
+ if (flags & UART_CONFIG_TYPE)
+ port->type = PORT_DZ;
+}
+
+/*
+ * verify the new serial_struct (for TIOCSSERIAL).
+ */
+static int dz_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ int ret = 0;
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_DZ)
+ ret = -EINVAL;
+ if (ser->irq != port->irq)
+ ret = -EINVAL;
+ return ret;
+}
+
+static struct uart_ops dz_ops = {
+ .tx_empty = dz_tx_empty,
+ .get_mctrl = dz_get_mctrl,
+ .set_mctrl = dz_set_mctrl,
+ .stop_tx = dz_stop_tx,
+ .start_tx = dz_start_tx,
+ .stop_rx = dz_stop_rx,
+ .enable_ms = dz_enable_ms,
+ .break_ctl = dz_break_ctl,
+ .startup = dz_startup,
+ .shutdown = dz_shutdown,
+ .set_termios = dz_set_termios,
+ .type = dz_type,
+ .release_port = dz_release_port,
+ .request_port = dz_request_port,
+ .config_port = dz_config_port,
+ .verify_port = dz_verify_port,
+};
+
+static void __init dz_init_ports(void)
+{
+ static int first = 1;
+ struct dz_port *dport;
+ unsigned long base;
+ int i;
+
+ if (!first)
+ return;
+ first = 0;
+
+ if (mips_machtype == MACH_DS23100 ||
+ mips_machtype == MACH_DS5100)
+ base = (unsigned long) KN01_DZ11_BASE;
+ else
+ base = (unsigned long) KN02_DZ11_BASE;
+
+ for (i = 0, dport = dz_ports; i < DZ_NB_PORT; i++, dport++) {
+ spin_lock_init(&dport->port.lock);
+ dport->port.membase = (char *) base;
+ dport->port.iotype = SERIAL_IO_PORT;
+ dport->port.irq = dec_interrupt[DEC_IRQ_DZ11];
+ dport->port.line = i;
+ dport->port.fifosize = 1;
+ dport->port.ops = &dz_ops;
+ dport->port.flags = UPF_BOOT_AUTOCONF;
+ }
+}
+
+static void dz_reset(struct dz_port *dport)
+{
+ dz_out(dport, DZ_CSR, DZ_CLR);
+
+ while (dz_in(dport, DZ_CSR) & DZ_CLR);
+ /* FIXME: cpu_relax? */
+
+ iob();
+
+ /* enable scanning */
+ dz_out(dport, DZ_CSR, DZ_MSE);
+}
+
+#ifdef CONFIG_SERIAL_DZ_CONSOLE
+static void dz_console_put_char(struct dz_port *dport, unsigned char ch)
+{
+ unsigned long flags;
+ int loops = 2500;
+ unsigned short tmp = ch;
+	/* This code busy-waits until the transmitter is ready, then
+	   sends the character out to the serial device. */
+
+ spin_lock_irqsave(&dport->port.lock, flags);
+
+ /* spin our wheels */
+ while (((dz_in(dport, DZ_CSR) & DZ_TRDY) != DZ_TRDY) && loops--)
+ /* FIXME: cpu_relax, udelay? --rmk */
+ ;
+
+ /* Actually transmit the character. */
+ dz_out(dport, DZ_TDR, tmp);
+
+ spin_unlock_irqrestore(&dport->port.lock, flags);
+}
+/*
+ * -------------------------------------------------------------------
+ * dz_console_print ()
+ *
+ * dz_console_print is registered for printk.
+ * The console must be locked when we get here.
+ * -------------------------------------------------------------------
+ */
+static void dz_console_print(struct console *cons,
+ const char *str,
+ unsigned int count)
+{
+ struct dz_port *dport = &dz_ports[CONSOLE_LINE];
+#ifdef DEBUG_DZ
+ prom_printf((char *) str);
+#endif
+ while (count--) {
+ if (*str == '\n')
+ dz_console_put_char(dport, '\r');
+ dz_console_put_char(dport, *str++);
+ }
+}
+
+static int __init dz_console_setup(struct console *co, char *options)
+{
+ struct dz_port *dport = &dz_ports[CONSOLE_LINE];
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+ int ret;
+ unsigned short mask, tmp;
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ dz_reset(dport);
+
+ ret = uart_set_options(&dport->port, co, baud, parity, bits, flow);
+ if (ret == 0) {
+ mask = 1 << dport->port.line;
+ tmp = dz_in(dport, DZ_TCR); /* read the TX flag */
+ if (!(tmp & mask)) {
+ tmp |= mask; /* set the TX flag */
+ dz_out(dport, DZ_TCR, tmp);
+ }
+ }
+
+ return ret;
+}
+
+static struct console dz_sercons =
+{
+ .name = "ttyS",
+ .write = dz_console_print,
+ .device = uart_console_device,
+ .setup = dz_console_setup,
+ .flags = CON_CONSDEV | CON_PRINTBUFFER,
+ .index = CONSOLE_LINE,
+};
+
+void __init dz_serial_console_init(void)
+{
+ dz_init_ports();
+
+ register_console(&dz_sercons);
+}
+
+#define SERIAL_DZ_CONSOLE &dz_sercons
+#else
+#define SERIAL_DZ_CONSOLE NULL
+#endif /* CONFIG_SERIAL_DZ_CONSOLE */
+
+static struct uart_driver dz_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial",
+#ifdef CONFIG_DEVFS_FS
+ .dev_name = "tts/%d",
+#else
+ .dev_name = "ttyS%d",
+#endif
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .nr = DZ_NB_PORT,
+ .cons = SERIAL_DZ_CONSOLE,
+};
+
+int __init dz_init(void)
+{
+ unsigned long flags;
+ int ret, i;
+
+ printk(KERN_INFO "%s%s\n", dz_name, dz_version);
+
+ dz_init_ports();
+
+ save_flags(flags);
+ cli();
+
+#ifndef CONFIG_SERIAL_DZ_CONSOLE
+ /* reset the chip */
+ dz_reset(&dz_ports[0]);
+#endif
+
+ /* Order matters here: flags is updated in request_irq(), so
+ obliterating it immediately would be unwise. */
+ restore_flags(flags);
+
+ if (request_irq(dz_ports[0].port.irq, dz_interrupt,
+ SA_INTERRUPT, "DZ", &dz_ports[0]))
+ panic("Unable to register DZ interrupt");
+
+ ret = uart_register_driver(&dz_reg);
+ if (ret != 0)
+ return ret;
+
+ for (i = 0; i < DZ_NB_PORT; i++)
+ uart_add_one_port(&dz_reg, &dz_ports[i].port);
+
+ return ret;
+}
+
+MODULE_DESCRIPTION("DECstation DZ serial driver");
+MODULE_LICENSE("GPL");
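The console path above (dz_console_put_char) relies on a bounded busy-wait: it polls the DZ_TRDY bit in the CSR, but gives up after a fixed number of iterations so a wedged transmitter can never hang console output, and then writes the character regardless. A host-side C sketch of that pattern follows; names like `wait_for_trdy` and `poll_ready` are illustrative stand-ins, not part of the driver.

```c
#include <assert.h>

/* Host-side sketch of the bounded busy-wait used by dz_console_put_char().
 * poll_ready() stands in for "dz_in(dport, DZ_CSR) & DZ_TRDY"; the counter
 * simulates how long the hardware takes to raise the ready bit. */

static int polls_until_ready;	/* simulated: polls before TRDY appears */

static int poll_ready(void)
{
	if (polls_until_ready > 0) {
		polls_until_ready--;
		return 0;	/* transmitter still busy */
	}
	return 1;		/* transmitter ready */
}

/* Returns 1 if the device became ready within max_loops polls, 0 on
 * timeout.  On timeout the real driver transmits anyway, as a last
 * resort, rather than hang the console. */
static int wait_for_trdy(int max_loops)
{
	while (max_loops-- > 0) {
		if (poll_ready())
			return 1;
		/* the real driver would cpu_relax()/udelay() here */
	}
	return 0;
}
```

The 2500-iteration budget in dz_console_put_char plays the role of `max_loops` here: plenty of headroom at console speeds, yet a hard ceiling if the chip never signals ready.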
--- /dev/null
+/*
+ * dz.h: Serial port driver for DECstations equipped
+ * with the DZ chipset.
+ *
+ * Copyright (C) 1998 Olivier A. D. Lebaillif
+ *
+ * Email: olivier.lebaillif@ifrsys.com
+ *
+ */
+#ifndef DZ_SERIAL_H
+#define DZ_SERIAL_H
+
+/*
+ * Definitions for the Control and Status Register (CSR).
+ */
+#define DZ_TRDY 0x8000 /* Transmitter empty */
+#define DZ_TIE 0x4000 /* Transmitter Interrupt Enable */
+#define DZ_RDONE 0x0080 /* Receiver data ready */
+#define DZ_RIE 0x0040 /* Receive Interrupt Enable */
+#define DZ_MSE 0x0020 /* Master Scan Enable */
+#define DZ_CLR 0x0010 /* Master reset */
+#define DZ_MAINT 0x0008 /* Loop Back Mode */
+
+/*
+ * Definitions for the Receive Buffer (RBUF).
+ */
+#define DZ_RBUF_MASK 0x00FF /* Data Mask in the Receive Buffer */
+#define DZ_LINE_MASK 0x0300 /* Line Mask in the Receive Buffer */
+#define DZ_DVAL 0x8000 /* Valid Data indicator */
+#define DZ_OERR 0x4000 /* Overrun error indicator */
+#define DZ_FERR 0x2000 /* Frame error indicator */
+#define DZ_PERR 0x1000 /* Parity error indicator */
+
+#define LINE(x) (((x) & DZ_LINE_MASK) >> 8) /* Get the line number from the input buffer */
+#define UCHAR(x) ((unsigned char)((x) & DZ_RBUF_MASK))
+
+/*
+ * Definitions for the Transmitter Control Register.
+ */
+#define DZ_LINE_KEYBOARD 0x0001
+#define DZ_LINE_MOUSE 0x0002
+#define DZ_LINE_MODEM 0x0004
+#define DZ_LINE_PRINTER 0x0008
+
+#define DZ_MODEM_DTR 0x0400 /* DTR for the modem line (2) */
+
+/*
+ * Definitions for the Modem Status Register.
+ */
+#define DZ_MODEM_DSR 0x0200 /* DSR for the modem line (2) */
+
+/*
+ * Definitions for the Transmit Data Register.
+ */
+#define DZ_BRK0 0x0100 /* Break assertion for line 0 */
+#define DZ_BRK1 0x0200 /* Break assertion for line 1 */
+#define DZ_BRK2 0x0400 /* Break assertion for line 2 */
+#define DZ_BRK3 0x0800 /* Break assertion for line 3 */
+
+/*
+ * Definitions for the Line Parameter Register.
+ */
+#define DZ_KEYBOARD 0x0000 /* line 0 = keyboard */
+#define DZ_MOUSE 0x0001 /* line 1 = mouse */
+#define DZ_MODEM 0x0002 /* line 2 = modem */
+#define DZ_PRINTER 0x0003 /* line 3 = printer */
+
+#define DZ_CSIZE 0x0018 /* Number of bits per byte (mask) */
+#define DZ_CS5 0x0000 /* 5 bits per byte */
+#define DZ_CS6 0x0008 /* 6 bits per byte */
+#define DZ_CS7 0x0010 /* 7 bits per byte */
+#define DZ_CS8 0x0018 /* 8 bits per byte */
+
+#define DZ_CSTOPB 0x0020 /* 2 stop bits instead of one */
+
+#define DZ_PARENB 0x0040 /* Parity enable */
+#define DZ_PARODD 0x0080 /* Odd parity instead of even */
+
+#define DZ_CBAUD 0x0E00 /* Baud Rate (mask) */
+#define DZ_B50 0x0000
+#define DZ_B75 0x0100
+#define DZ_B110 0x0200
+#define DZ_B134 0x0300
+#define DZ_B150 0x0400
+#define DZ_B300 0x0500
+#define DZ_B600 0x0600
+#define DZ_B1200 0x0700
+#define DZ_B1800 0x0800
+#define DZ_B2000 0x0900
+#define DZ_B2400 0x0A00
+#define DZ_B3600 0x0B00
+#define DZ_B4800 0x0C00
+#define DZ_B7200 0x0D00
+#define DZ_B9600 0x0E00
+
+#define DZ_CREAD 0x1000 /* Enable receiver */
+#define DZ_RXENAB 0x1000 /* enable receive char */
+/*
+ * Addresses for the DZ registers
+ */
+#define DZ_CSR 0x00 /* Control and Status Register */
+#define DZ_RBUF 0x08 /* Receive Buffer */
+#define DZ_LPR 0x08 /* Line Parameters Register */
+#define DZ_TCR 0x10 /* Transmitter Control Register */
+#define DZ_MSR 0x18 /* Modem Status Register */
+#define DZ_TDR 0x18 /* Transmit Data Register */
+
+#define DZ_NB_PORT 4
+
+#define DZ_XMIT_SIZE 4096 /* buffer size */
+#define DZ_WAKEUP_CHARS (DZ_XMIT_SIZE / 4)
+
+#ifdef MODULE
+int init_module(void);
+void cleanup_module(void);
+#endif
+
+#endif /* DZ_SERIAL_H */
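To make the RBUF layout in the header above concrete, here is a small host-side sketch (constants copied from dz.h so it stands alone; this is not driver code) that unpacks a received word into its line number and character. The macros are fully parenthesized so they bind correctly inside larger expressions such as `LINE(x) + 1`.

```c
#include <assert.h>

/* Masks and field accessors mirrored from dz.h.  A receive-buffer word
 * packs the valid-data flag (bit 15), the line number (bits 9:8) and the
 * received character (bits 7:0) into one 16-bit value. */

#define DZ_RBUF_MASK	0x00FF	/* Data mask in the Receive Buffer */
#define DZ_LINE_MASK	0x0300	/* Line mask in the Receive Buffer */
#define DZ_DVAL		0x8000	/* Valid Data indicator */

#define LINE(x)		(((x) & DZ_LINE_MASK) >> 8)
#define UCHAR(x)	((unsigned char)((x) & DZ_RBUF_MASK))
```

For example, a valid word from line 2 carrying the character 'A' is `DZ_DVAL | (2 << 8) | 'A'`, and the two macros recover the 2 and the 'A' independently.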
--- /dev/null
+/*
+ * Driver for Zilog serial chips found on SGI workstations and
+ * servers. This driver could actually be made more generic.
+ *
+ * This is based on the drivers/serial/sunzilog.c code as of 2.6.0-test7 and the
+ * old drivers/sgi/char/sgiserial.c code, which itself is based on the original
+ * drivers/sbus/char/zs.c code. A lot of code has been simply moved over
+ * directly from there but much has been rewritten. Credits therefore go out
+ * to David S. Miller, Eddie C. Dost, Pete Zaitcev, Ted Ts'o and Alex Buell
+ * for their work there.
+ *
+ * Copyright (C) 2002 Ralf Baechle (ralf@linux-mips.org)
+ * Copyright (C) 2002 David S. Miller (davem@redhat.com)
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/circ_buf.h>
+#include <linux/serial.h>
+#include <linux/sysrq.h>
+#include <linux/console.h>
+#include <linux/spinlock.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/sgialib.h>
+#include <asm/sgi/ioc.h>
+#include <asm/sgi/hpc3.h>
+#include <asm/sgi/ip22.h>
+
+#if defined(CONFIG_SERIAL_IP22_ZILOG_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+
+#include "ip22zilog.h"
+
+int ip22serial_current_minor = 64;
+
+void ip22_do_break(void);
+
+/*
+ * On IP22 we need to delay after register accesses but we do not need to
+ * flush writes.
+ */
+#define ZSDELAY() udelay(5)
+#define ZSDELAY_LONG() udelay(20)
+#define ZS_WSYNC(channel) do { } while (0)
+
+#define NUM_IP22ZILOG 1
+#define NUM_CHANNELS (NUM_IP22ZILOG * 2)
+
+#define ZS_CLOCK 4915200 /* Zilog input clock rate. */
+#define ZS_CLOCK_DIVISOR 16 /* Divisor this driver uses. */
+
+/*
+ * We wrap our port structure around the generic uart_port.
+ */
+struct uart_ip22zilog_port {
+ struct uart_port port;
+
+ /* IRQ servicing chain. */
+ struct uart_ip22zilog_port *next;
+
+ /* Current values of Zilog write registers. */
+ unsigned char curregs[NUM_ZSREGS];
+
+ unsigned int flags;
+#define IP22ZILOG_FLAG_IS_CONS 0x00000004
+#define IP22ZILOG_FLAG_IS_KGDB 0x00000008
+#define IP22ZILOG_FLAG_MODEM_STATUS 0x00000010
+#define IP22ZILOG_FLAG_IS_CHANNEL_A 0x00000020
+#define IP22ZILOG_FLAG_REGS_HELD 0x00000040
+#define IP22ZILOG_FLAG_TX_STOPPED 0x00000080
+#define IP22ZILOG_FLAG_TX_ACTIVE 0x00000100
+
+ unsigned int cflag;
+
+ /* L1-A keyboard break state. */
+ int kbd_id;
+ int l1_down;
+
+ unsigned char parity_mask;
+ unsigned char prev_status;
+};
+
+#define ZILOG_CHANNEL_FROM_PORT(PORT) ((struct zilog_channel *)((PORT)->membase))
+#define UART_ZILOG(PORT) ((struct uart_ip22zilog_port *)(PORT))
+#define IP22ZILOG_GET_CURR_REG(PORT, REGNUM) \
+ (UART_ZILOG(PORT)->curregs[REGNUM])
+#define IP22ZILOG_SET_CURR_REG(PORT, REGNUM, REGVAL) \
+ ((UART_ZILOG(PORT)->curregs[REGNUM]) = (REGVAL))
+#define ZS_IS_CONS(UP) ((UP)->flags & IP22ZILOG_FLAG_IS_CONS)
+#define ZS_IS_KGDB(UP) ((UP)->flags & IP22ZILOG_FLAG_IS_KGDB)
+#define ZS_WANTS_MODEM_STATUS(UP) ((UP)->flags & IP22ZILOG_FLAG_MODEM_STATUS)
+#define ZS_IS_CHANNEL_A(UP) ((UP)->flags & IP22ZILOG_FLAG_IS_CHANNEL_A)
+#define ZS_REGS_HELD(UP) ((UP)->flags & IP22ZILOG_FLAG_REGS_HELD)
+#define ZS_TX_STOPPED(UP) ((UP)->flags & IP22ZILOG_FLAG_TX_STOPPED)
+#define ZS_TX_ACTIVE(UP) ((UP)->flags & IP22ZILOG_FLAG_TX_ACTIVE)
+
+/* Reading and writing Zilog 8530 registers.  The delays are to make this
+ * driver work on the IP22, which needs a settling delay after each chip
+ * register access; other machines handle this in hardware via auxiliary
+ * flip-flops that implement the settle time we do here in software.
+ *
+ * The port lock must be held and local IRQs must be disabled
+ * when {read,write}_zsreg is invoked.
+ */
+static unsigned char read_zsreg(struct zilog_channel *channel,
+ unsigned char reg)
+{
+ unsigned char retval;
+
+ writeb(reg, &channel->control);
+ ZSDELAY();
+ retval = readb(&channel->control);
+ ZSDELAY();
+
+ return retval;
+}
+
+static void write_zsreg(struct zilog_channel *channel,
+ unsigned char reg, unsigned char value)
+{
+ writeb(reg, &channel->control);
+ ZSDELAY();
+ writeb(value, &channel->control);
+ ZSDELAY();
+}
+
+static void ip22zilog_clear_fifo(struct zilog_channel *channel)
+{
+ int i;
+
+ for (i = 0; i < 32; i++) {
+ unsigned char regval;
+
+ regval = readb(&channel->control);
+ ZSDELAY();
+ if (regval & Rx_CH_AV)
+ break;
+
+ regval = read_zsreg(channel, R1);
+ readb(&channel->data);
+ ZSDELAY();
+
+ if (regval & (PAR_ERR | Rx_OVR | CRC_ERR)) {
+ writeb(ERR_RES, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+ }
+ }
+}
+
+/* This function must only be called when the TX is not busy. The UART
+ * port lock must be held and local interrupts disabled.
+ */
+static void __load_zsregs(struct zilog_channel *channel, unsigned char *regs)
+{
+ int i;
+
+ /* Let pending transmits finish. */
+ for (i = 0; i < 1000; i++) {
+ unsigned char stat = read_zsreg(channel, R1);
+ if (stat & ALL_SNT)
+ break;
+ udelay(100);
+ }
+
+ writeb(ERR_RES, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ ip22zilog_clear_fifo(channel);
+
+ /* Disable all interrupts. */
+ write_zsreg(channel, R1,
+ regs[R1] & ~(RxINT_MASK | TxINT_ENAB | EXT_INT_ENAB));
+
+ /* Set parity, sync config, stop bits, and clock divisor. */
+ write_zsreg(channel, R4, regs[R4]);
+
+ /* Set misc. TX/RX control bits. */
+ write_zsreg(channel, R10, regs[R10]);
+
+ /* Set TX/RX controls sans the enable bits. */
+ write_zsreg(channel, R3, regs[R3] & ~RxENAB);
+ write_zsreg(channel, R5, regs[R5] & ~TxENAB);
+
+ /* Synchronous mode config. */
+ write_zsreg(channel, R6, regs[R6]);
+ write_zsreg(channel, R7, regs[R7]);
+
+ /* Don't mess with the interrupt vector (R2, unused by us) and
+ * master interrupt control (R9).  We make sure this is set up
+ * properly at probe time and then never touch it again.
+ */
+
+ /* Disable baud generator. */
+ write_zsreg(channel, R14, regs[R14] & ~BRENAB);
+
+ /* Clock mode control. */
+ write_zsreg(channel, R11, regs[R11]);
+
+ /* Lower and upper byte of baud rate generator divisor. */
+ write_zsreg(channel, R12, regs[R12]);
+ write_zsreg(channel, R13, regs[R13]);
+
+ /* Now rewrite R14, with BRENAB (if set). */
+ write_zsreg(channel, R14, regs[R14]);
+
+ /* External status interrupt control. */
+ write_zsreg(channel, R15, regs[R15]);
+
+ /* Reset external status interrupts. */
+ write_zsreg(channel, R0, RES_EXT_INT);
+ write_zsreg(channel, R0, RES_EXT_INT);
+
+ /* Rewrite R3/R5, this time without enables masked. */
+ write_zsreg(channel, R3, regs[R3]);
+ write_zsreg(channel, R5, regs[R5]);
+
+ /* Rewrite R1, this time without IRQ enabled masked. */
+ write_zsreg(channel, R1, regs[R1]);
+}
+
+/* Reprogram the Zilog channel HW registers with the copies found in the
+ * software state struct. If the transmitter is busy, we defer this update
+ * until the next TX complete interrupt. Else, we do it right now.
+ *
+ * The UART port lock must be held and local interrupts disabled.
+ */
+static void ip22zilog_maybe_update_regs(struct uart_ip22zilog_port *up,
+ struct zilog_channel *channel)
+{
+ if (!ZS_REGS_HELD(up)) {
+ if (ZS_TX_ACTIVE(up)) {
+ up->flags |= IP22ZILOG_FLAG_REGS_HELD;
+ } else {
+ __load_zsregs(channel, up->curregs);
+ }
+ }
+}
+
+static void ip22zilog_receive_chars(struct uart_ip22zilog_port *up,
+ struct zilog_channel *channel,
+ struct pt_regs *regs)
+{
+ struct tty_struct *tty = up->port.info->tty; /* XXX info==NULL? */
+
+ while (1) {
+ unsigned char ch, r1;
+
+ if (unlikely(tty->flip.count >= TTY_FLIPBUF_SIZE)) {
+ tty->flip.work.func((void *)tty);
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE)
+ return; /* XXX Ignores SysRq when we need it most. Fix. */
+ }
+
+ r1 = read_zsreg(channel, R1);
+ if (r1 & (PAR_ERR | Rx_OVR | CRC_ERR)) {
+ writeb(ERR_RES, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+ }
+
+ ch = readb(&channel->control);
+ ZSDELAY();
+
+ /* This funny hack depends upon BRK_ABRT not interfering
+ * with the other bits we care about in R1.
+ */
+ if (ch & BRK_ABRT)
+ r1 |= BRK_ABRT;
+
+ ch = readb(&channel->data);
+ ZSDELAY();
+
+ ch &= up->parity_mask;
+
+ if (ZS_IS_CONS(up) && (r1 & BRK_ABRT)) {
+ /* Wait for BREAK to deassert to avoid potentially
+ * confusing the PROM.
+ */
+ while (1) {
+ ch = readb(&channel->control);
+ ZSDELAY();
+ if (!(ch & BRK_ABRT))
+ break;
+ }
+ ip22_do_break();
+ return;
+ }
+
+ /* A real serial line, record the character and status. */
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = TTY_NORMAL;
+ up->port.icount.rx++;
+ if (r1 & (BRK_ABRT | PAR_ERR | Rx_OVR | CRC_ERR)) {
+ if (r1 & BRK_ABRT) {
+ r1 &= ~(PAR_ERR | CRC_ERR);
+ up->port.icount.brk++;
+ if (uart_handle_break(&up->port))
+ goto next_char;
+ }
+ else if (r1 & PAR_ERR)
+ up->port.icount.parity++;
+ else if (r1 & CRC_ERR)
+ up->port.icount.frame++;
+ if (r1 & Rx_OVR)
+ up->port.icount.overrun++;
+ r1 &= up->port.read_status_mask;
+ if (r1 & BRK_ABRT)
+ *tty->flip.flag_buf_ptr = TTY_BREAK;
+ else if (r1 & PAR_ERR)
+ *tty->flip.flag_buf_ptr = TTY_PARITY;
+ else if (r1 & CRC_ERR)
+ *tty->flip.flag_buf_ptr = TTY_FRAME;
+ }
+ if (uart_handle_sysrq_char(&up->port, ch, regs))
+ goto next_char;
+
+ if (up->port.ignore_status_mask == 0xff ||
+ (r1 & up->port.ignore_status_mask) == 0) {
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ if ((r1 & Rx_OVR) &&
+ tty->flip.count < TTY_FLIPBUF_SIZE) {
+ *tty->flip.flag_buf_ptr = TTY_OVERRUN;
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ next_char:
+ ch = readb(&channel->control);
+ ZSDELAY();
+ if (!(ch & Rx_CH_AV))
+ break;
+ }
+
+ tty_flip_buffer_push(tty);
+}
+
+static void ip22zilog_status_handle(struct uart_ip22zilog_port *up,
+ struct zilog_channel *channel,
+ struct pt_regs *regs)
+{
+ unsigned char status;
+
+ status = readb(&channel->control);
+ ZSDELAY();
+
+ writeb(RES_EXT_INT, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ if (ZS_WANTS_MODEM_STATUS(up)) {
+ if (status & SYNC)
+ up->port.icount.dsr++;
+
+ /* The Zilog just gives us an interrupt when DCD/CTS/etc.
+ * changes, but it does not tell us which bit changed; we
+ * have to keep track of that ourselves.
+ */
+ if ((status ^ up->prev_status) & DCD)
+ uart_handle_dcd_change(&up->port,
+ (status & DCD));
+ if ((status ^ up->prev_status) & CTS)
+ uart_handle_cts_change(&up->port,
+ (status & CTS));
+
+ wake_up_interruptible(&up->port.info->delta_msr_wait);
+ }
+
+ up->prev_status = status;
+}
+
+static void ip22zilog_transmit_chars(struct uart_ip22zilog_port *up,
+ struct zilog_channel *channel)
+{
+ struct circ_buf *xmit;
+
+ if (ZS_IS_CONS(up)) {
+ unsigned char status = readb(&channel->control);
+ ZSDELAY();
+
+ /* TX still busy? Just wait for the next TX done interrupt.
+ *
+ * It can occur because of how we do serial console writes.  It would
+ * be nice to transmit console writes just like we normally would for
+ * a TTY line, i.e. buffered and TX interrupt driven.  That is not
+ * easy because console writes cannot sleep.  One solution might be
+ * to poll on enough port->xmit space becoming free.  -DaveM
+ */
+ if (!(status & Tx_BUF_EMP))
+ return;
+ }
+
+ up->flags &= ~IP22ZILOG_FLAG_TX_ACTIVE;
+
+ if (ZS_REGS_HELD(up)) {
+ __load_zsregs(channel, up->curregs);
+ up->flags &= ~IP22ZILOG_FLAG_REGS_HELD;
+ }
+
+ if (ZS_TX_STOPPED(up)) {
+ up->flags &= ~IP22ZILOG_FLAG_TX_STOPPED;
+ goto ack_tx_int;
+ }
+
+ if (up->port.x_char) {
+ up->flags |= IP22ZILOG_FLAG_TX_ACTIVE;
+ writeb(up->port.x_char, &channel->data);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ up->port.icount.tx++;
+ up->port.x_char = 0;
+ return;
+ }
+
+ if (up->port.info == NULL)
+ goto ack_tx_int;
+ xmit = &up->port.info->xmit;
+ if (uart_circ_empty(xmit)) {
+ uart_write_wakeup(&up->port);
+ goto ack_tx_int;
+ }
+ if (uart_tx_stopped(&up->port))
+ goto ack_tx_int;
+
+ up->flags |= IP22ZILOG_FLAG_TX_ACTIVE;
+ writeb(xmit->buf[xmit->tail], &channel->data);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ up->port.icount.tx++;
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&up->port);
+
+ return;
+
+ack_tx_int:
+ writeb(RES_Tx_P, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+}
+
+static irqreturn_t ip22zilog_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct uart_ip22zilog_port *up = dev_id;
+
+ while (up) {
+ struct zilog_channel *channel
+ = ZILOG_CHANNEL_FROM_PORT(&up->port);
+ unsigned char r3;
+
+ spin_lock(&up->port.lock);
+ r3 = read_zsreg(channel, R3);
+
+ /* Channel A */
+ if (r3 & (CHAEXT | CHATxIP | CHARxIP)) {
+ writeb(RES_H_IUS, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ if (r3 & CHARxIP)
+ ip22zilog_receive_chars(up, channel, regs);
+ if (r3 & CHAEXT)
+ ip22zilog_status_handle(up, channel, regs);
+ if (r3 & CHATxIP)
+ ip22zilog_transmit_chars(up, channel);
+ }
+ spin_unlock(&up->port.lock);
+
+ /* Channel B */
+ up = up->next;
+ channel = ZILOG_CHANNEL_FROM_PORT(&up->port);
+
+ spin_lock(&up->port.lock);
+ if (r3 & (CHBEXT | CHBTxIP | CHBRxIP)) {
+ writeb(RES_H_IUS, &channel->control);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ if (r3 & CHBRxIP)
+ ip22zilog_receive_chars(up, channel, regs);
+ if (r3 & CHBEXT)
+ ip22zilog_status_handle(up, channel, regs);
+ if (r3 & CHBTxIP)
+ ip22zilog_transmit_chars(up, channel);
+ }
+ spin_unlock(&up->port.lock);
+
+ up = up->next;
+ }
+
+ return IRQ_HANDLED;
+}
+
+/* A convenient way to quickly get R0 status.  The caller must _not_ hold
+ * the port lock; it is acquired here.
+ */
+static __inline__ unsigned char ip22zilog_read_channel_status(struct uart_port *port)
+{
+ struct zilog_channel *channel;
+ unsigned long flags;
+ unsigned char status;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ channel = ZILOG_CHANNEL_FROM_PORT(port);
+ status = readb(&channel->control);
+ ZSDELAY();
+
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ return status;
+}
+
+/* The port lock is not held. */
+static unsigned int ip22zilog_tx_empty(struct uart_port *port)
+{
+ unsigned char status;
+ unsigned int ret;
+
+ status = ip22zilog_read_channel_status(port);
+ if (status & Tx_BUF_EMP)
+ ret = TIOCSER_TEMT;
+ else
+ ret = 0;
+
+ return ret;
+}
+
+/* The port lock is not held. */
+static unsigned int ip22zilog_get_mctrl(struct uart_port *port)
+{
+ unsigned char status;
+ unsigned int ret;
+
+ status = ip22zilog_read_channel_status(port);
+
+ ret = 0;
+ if (status & DCD)
+ ret |= TIOCM_CAR;
+ if (status & SYNC)
+ ret |= TIOCM_DSR;
+ if (status & CTS)
+ ret |= TIOCM_CTS;
+
+ return ret;
+}
+
+/* The port lock is held and interrupts are disabled. */
+static void ip22zilog_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(port);
+ unsigned char set_bits, clear_bits;
+
+ set_bits = clear_bits = 0;
+
+ if (mctrl & TIOCM_RTS)
+ set_bits |= RTS;
+ else
+ clear_bits |= RTS;
+ if (mctrl & TIOCM_DTR)
+ set_bits |= DTR;
+ else
+ clear_bits |= DTR;
+
+ /* NOTE: Not subject to 'transmitter active' rule. */
+ up->curregs[R5] |= set_bits;
+ up->curregs[R5] &= ~clear_bits;
+ write_zsreg(channel, R5, up->curregs[R5]);
+}
+
+/* The port lock is held and interrupts are disabled. */
+static void ip22zilog_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+
+ up->flags |= IP22ZILOG_FLAG_TX_STOPPED;
+}
+
+/* The port lock is held and interrupts are disabled. */
+static void ip22zilog_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(port);
+ unsigned char status;
+
+ up->flags |= IP22ZILOG_FLAG_TX_ACTIVE;
+ up->flags &= ~IP22ZILOG_FLAG_TX_STOPPED;
+
+ status = readb(&channel->control);
+ ZSDELAY();
+
+ /* TX busy? Just wait for the TX done interrupt. */
+ if (!(status & Tx_BUF_EMP))
+ return;
+
+ /* Send the first character to jump-start the TX done
+ * IRQ sending engine.
+ */
+ if (port->x_char) {
+ writeb(port->x_char, &channel->data);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ port->icount.tx++;
+ port->x_char = 0;
+ } else {
+ struct circ_buf *xmit = &port->info->xmit;
+
+ writeb(xmit->buf[xmit->tail], &channel->data);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&up->port);
+ }
+}
+
+/* The port lock is not held. */
+static void ip22zilog_stop_rx(struct uart_port *port)
+{
+ struct uart_ip22zilog_port *up = UART_ZILOG(port);
+ struct zilog_channel *channel;
+ unsigned long flags;
+
+ if (ZS_IS_CONS(up))
+ return;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ channel = ZILOG_CHANNEL_FROM_PORT(port);
+
+ /* Disable all RX interrupts. */
+ up->curregs[R1] &= ~RxINT_MASK;
+ ip22zilog_maybe_update_regs(up, channel);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+/* The port lock is not held. */
+static void ip22zilog_enable_ms(struct uart_port *port)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(port);
+ unsigned char new_reg;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ new_reg = up->curregs[R15] | (DCDIE | SYNCIE | CTSIE);
+ if (new_reg != up->curregs[R15]) {
+ up->curregs[R15] = new_reg;
+
+ /* NOTE: Not subject to 'transmitter active' rule. */
+ write_zsreg(channel, R15, up->curregs[R15]);
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+/* The port lock is not held. */
+static void ip22zilog_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(port);
+ unsigned char set_bits, clear_bits, new_reg;
+ unsigned long flags;
+
+ set_bits = clear_bits = 0;
+
+ if (break_state)
+ set_bits |= SND_BRK;
+ else
+ clear_bits |= SND_BRK;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ new_reg = (up->curregs[R5] | set_bits) & ~clear_bits;
+ if (new_reg != up->curregs[R5]) {
+ up->curregs[R5] = new_reg;
+
+ /* NOTE: Not subject to 'transmitter active' rule. */
+ write_zsreg(channel, R5, up->curregs[R5]);
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void __ip22zilog_startup(struct uart_ip22zilog_port *up)
+{
+ struct zilog_channel *channel;
+
+ channel = ZILOG_CHANNEL_FROM_PORT(&up->port);
+ up->prev_status = readb(&channel->control);
+
+ /* Enable receiver and transmitter. */
+ up->curregs[R3] |= RxENAB;
+ up->curregs[R5] |= TxENAB;
+
+ up->curregs[R1] |= EXT_INT_ENAB | INT_ALL_Rx | TxINT_ENAB;
+ ip22zilog_maybe_update_regs(up, channel);
+}
+
+static int ip22zilog_startup(struct uart_port *port)
+{
+ struct uart_ip22zilog_port *up = UART_ZILOG(port);
+ unsigned long flags;
+
+ if (ZS_IS_CONS(up))
+ return 0;
+
+ spin_lock_irqsave(&port->lock, flags);
+ __ip22zilog_startup(up);
+ spin_unlock_irqrestore(&port->lock, flags);
+ return 0;
+}
+
+/*
+ * The test for ZS_IS_CONS is explained by the following e-mail:
+ *****
+ * From: Russell King <rmk@arm.linux.org.uk>
+ * Date: Sun, 8 Dec 2002 10:18:38 +0000
+ *
+ * On Sun, Dec 08, 2002 at 02:43:36AM -0500, Pete Zaitcev wrote:
+ * > I boot my 2.5 boxes using "console=ttyS0,9600" argument,
+ * > and I noticed that something is not right with reference
+ * > counting in this case. It seems that when the console
+ * > is open by kernel initially, this is not accounted
+ * > as an open, and uart_startup is not called.
+ *
+ * That is correct. We are unable to call uart_startup when the serial
+ * console is initialised because it may need to allocate memory (as
+ * request_irq does) and the memory allocators may not have been
+ * initialised.
+ *
+ * 1. initialise the port into a state where it can send characters in the
+ * console write method.
+ *
+ * 2. don't do the actual hardware shutdown in your shutdown() method (but
+ * do the normal software shutdown - ie, free irqs etc)
+ *****
+ */
+static void ip22zilog_shutdown(struct uart_port *port)
+{
+ struct uart_ip22zilog_port *up = UART_ZILOG(port);
+ struct zilog_channel *channel;
+ unsigned long flags;
+
+ if (ZS_IS_CONS(up))
+ return;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ channel = ZILOG_CHANNEL_FROM_PORT(port);
+
+ /* Disable receiver and transmitter. */
+ up->curregs[R3] &= ~RxENAB;
+ up->curregs[R5] &= ~TxENAB;
+
+ /* Disable all interrupts and BRK assertion. */
+ up->curregs[R1] &= ~(EXT_INT_ENAB | TxINT_ENAB | RxINT_MASK);
+ up->curregs[R5] &= ~SND_BRK;
+ ip22zilog_maybe_update_regs(up, channel);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+/* Shared by TTY driver and serial console setup. The port lock is held
+ * and local interrupts are disabled.
+ */
+static void
+ip22zilog_convert_to_zs(struct uart_ip22zilog_port *up, unsigned int cflag,
+ unsigned int iflag, int brg)
+{
+
+ up->curregs[R10] = NRZ;
+ up->curregs[R11] = TCBR | RCBR;
+
+ /* Program BAUD and clock source. */
+ up->curregs[R4] &= ~XCLK_MASK;
+ up->curregs[R4] |= X16CLK;
+ up->curregs[R12] = brg & 0xff;
+ up->curregs[R13] = (brg >> 8) & 0xff;
+ up->curregs[R14] = BRSRC | BRENAB;
+
+ /* Character size, stop bits, and parity. */
+ up->curregs[3] &= ~RxN_MASK;
+ up->curregs[5] &= ~TxN_MASK;
+ switch (cflag & CSIZE) {
+ case CS5:
+ up->curregs[3] |= Rx5;
+ up->curregs[5] |= Tx5;
+ up->parity_mask = 0x1f;
+ break;
+ case CS6:
+ up->curregs[3] |= Rx6;
+ up->curregs[5] |= Tx6;
+ up->parity_mask = 0x3f;
+ break;
+ case CS7:
+ up->curregs[3] |= Rx7;
+ up->curregs[5] |= Tx7;
+ up->parity_mask = 0x7f;
+ break;
+ case CS8:
+ default:
+ up->curregs[3] |= Rx8;
+ up->curregs[5] |= Tx8;
+ up->parity_mask = 0xff;
+ break;
+ }
+ up->curregs[4] &= ~0x0c;
+ if (cflag & CSTOPB)
+ up->curregs[4] |= SB2;
+ else
+ up->curregs[4] |= SB1;
+ if (cflag & PARENB)
+ up->curregs[4] |= PAR_ENAB;
+ else
+ up->curregs[4] &= ~PAR_ENAB;
+ if (!(cflag & PARODD))
+ up->curregs[4] |= PAR_EVEN;
+ else
+ up->curregs[4] &= ~PAR_EVEN;
+
+ up->port.read_status_mask = Rx_OVR;
+ if (iflag & INPCK)
+ up->port.read_status_mask |= CRC_ERR | PAR_ERR;
+ if (iflag & (BRKINT | PARMRK))
+ up->port.read_status_mask |= BRK_ABRT;
+
+ up->port.ignore_status_mask = 0;
+ if (iflag & IGNPAR)
+ up->port.ignore_status_mask |= CRC_ERR | PAR_ERR;
+ if (iflag & IGNBRK) {
+ up->port.ignore_status_mask |= BRK_ABRT;
+ if (iflag & IGNPAR)
+ up->port.ignore_status_mask |= Rx_OVR;
+ }
+
+ if ((cflag & CREAD) == 0)
+ up->port.ignore_status_mask = 0xff;
+}
+
+/* The port lock is not held. */
+static void
+ip22zilog_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+ struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
+ unsigned long flags;
+ int baud, brg;
+
+ baud = uart_get_baud_rate(port, termios, old, 1200, 76800);
+
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ brg = BPS_TO_BRG(baud, ZS_CLOCK / ZS_CLOCK_DIVISOR);
+
+ ip22zilog_convert_to_zs(up, termios->c_cflag, termios->c_iflag, brg);
+
+ if (UART_ENABLE_MS(&up->port, termios->c_cflag))
+ up->flags |= IP22ZILOG_FLAG_MODEM_STATUS;
+ else
+ up->flags &= ~IP22ZILOG_FLAG_MODEM_STATUS;
+
+ up->cflag = termios->c_cflag;
+
+ ip22zilog_maybe_update_regs(up, ZILOG_CHANNEL_FROM_PORT(port));
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static const char *ip22zilog_type(struct uart_port *port)
+{
+ return "IP22-Zilog";
+}
+
+/* We do not request/release mappings of the registers here; this
+ * happens at early serial probe time.
+ */
+static void ip22zilog_release_port(struct uart_port *port)
+{
+}
+
+static int ip22zilog_request_port(struct uart_port *port)
+{
+ return 0;
+}
+
+/* These do not need to do anything interesting either. */
+static void ip22zilog_config_port(struct uart_port *port, int flags)
+{
+}
+
+/* We do not support letting the user mess with the divisor, IRQ, etc. */
+static int ip22zilog_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ return -EINVAL;
+}
+
+static struct uart_ops ip22zilog_pops = {
+ .tx_empty = ip22zilog_tx_empty,
+ .set_mctrl = ip22zilog_set_mctrl,
+ .get_mctrl = ip22zilog_get_mctrl,
+ .stop_tx = ip22zilog_stop_tx,
+ .start_tx = ip22zilog_start_tx,
+ .stop_rx = ip22zilog_stop_rx,
+ .enable_ms = ip22zilog_enable_ms,
+ .break_ctl = ip22zilog_break_ctl,
+ .startup = ip22zilog_startup,
+ .shutdown = ip22zilog_shutdown,
+ .set_termios = ip22zilog_set_termios,
+ .type = ip22zilog_type,
+ .release_port = ip22zilog_release_port,
+ .request_port = ip22zilog_request_port,
+ .config_port = ip22zilog_config_port,
+ .verify_port = ip22zilog_verify_port,
+};
+
+static struct uart_ip22zilog_port *ip22zilog_port_table;
+static struct zilog_layout **ip22zilog_chip_regs;
+
+static struct uart_ip22zilog_port *ip22zilog_irq_chain;
+static int zilog_irq = -1;
+
+static struct uart_driver ip22zilog_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "ttyS",
+ .devfs_name = "tty/",
+ .major = TTY_MAJOR,
+};
+
+static void * __init alloc_one_table(unsigned long size)
+{
+ void *ret;
+
+ ret = kmalloc(size, GFP_KERNEL);
+ if (ret != NULL)
+ memset(ret, 0, size);
+
+ return ret;
+}
+
+static void __init ip22zilog_alloc_tables(void)
+{
+ ip22zilog_port_table = (struct uart_ip22zilog_port *)
+ alloc_one_table(NUM_CHANNELS * sizeof(struct uart_ip22zilog_port));
+ ip22zilog_chip_regs = (struct zilog_layout **)
+ alloc_one_table(NUM_IP22ZILOG * sizeof(struct zilog_layout *));
+
+ if (ip22zilog_port_table == NULL || ip22zilog_chip_regs == NULL) {
+ panic("IP22-Zilog: Cannot allocate IP22-Zilog tables.");
+ }
+}
+
+/* Get the address of the registers for IP22-Zilog instance CHIP. */
+static struct zilog_layout * __init get_zs(int chip)
+{
+ unsigned long base;
+
+ if (chip < 0 || chip >= NUM_IP22ZILOG) {
+ panic("IP22-Zilog: Illegal chip number %d in get_zs.", chip);
+ }
+
+ /* Not probe-able, hard code it. */
+ base = (unsigned long) &sgioc->serport;
+
+ zilog_irq = SGI_SERIAL_IRQ;
+ request_mem_region(base, 8, "IP22-Zilog");
+
+ return (struct zilog_layout *) base;
+}
+
+#define ZS_PUT_CHAR_MAX_DELAY 2000 /* 10 ms */
+
+#ifdef CONFIG_SERIAL_IP22_ZILOG_CONSOLE
+static void ip22zilog_put_char(struct zilog_channel *channel, unsigned char ch)
+{
+ int loops = ZS_PUT_CHAR_MAX_DELAY;
+
+ /* This is a timed polling loop so do not switch the explicit
+ * udelay with ZSDELAY as that is a NOP on some platforms. -DaveM
+ */
+ do {
+ unsigned char val = readb(&channel->control);
+ if (val & Tx_BUF_EMP) {
+ ZSDELAY();
+ break;
+ }
+ udelay(5);
+ } while (--loops);
+
+ writeb(ch, &channel->data);
+ ZSDELAY();
+ ZS_WSYNC(channel);
+}
+
+static void
+ip22zilog_console_write(struct console *con, const char *s, unsigned int count)
+{
+ struct uart_ip22zilog_port *up = &ip22zilog_port_table[con->index];
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(&up->port);
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ for (i = 0; i < count; i++, s++) {
+ ip22zilog_put_char(channel, *s);
+ if (*s == 10)
+ ip22zilog_put_char(channel, 13);
+ }
+ udelay(2);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+void
+ip22serial_console_termios(struct console *con, char *options)
+{
+ int baud = 9600, bits = 8, cflag;
+ int parity = 'n';
+ int flow = 'n';
+
+ if (!serial_console)
+ return;
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ cflag = CREAD | HUPCL | CLOCAL;
+
+ switch (baud) {
+ case 150: cflag |= B150; break;
+ case 300: cflag |= B300; break;
+ case 600: cflag |= B600; break;
+ case 1200: cflag |= B1200; break;
+ case 2400: cflag |= B2400; break;
+ case 4800: cflag |= B4800; break;
+ case 9600: cflag |= B9600; break;
+ case 19200: cflag |= B19200; break;
+ case 38400: cflag |= B38400; break;
+ default: baud = 9600; cflag |= B9600; break;
+ }
+
+ con->cflag = cflag | CS8; /* 8N1 */
+}
+
+static int __init ip22zilog_console_setup(struct console *con, char *options)
+{
+ struct uart_ip22zilog_port *up = &ip22zilog_port_table[con->index];
+ unsigned long flags;
+ int baud, brg;
+
+	printk(KERN_INFO "Console: ttyS%d (IP22-Zilog)\n",
+ (ip22zilog_reg.minor - 64) + con->index);
+
+ /* Get firmware console settings. */
+ ip22serial_console_termios(con, options);
+
+ /* Firmware console speed is limited to 150-->38400 baud so
+ * this hackish cflag thing is OK.
+ */
+ switch (con->cflag & CBAUD) {
+ case B150: baud = 150; break;
+ case B300: baud = 300; break;
+ case B600: baud = 600; break;
+ case B1200: baud = 1200; break;
+ case B2400: baud = 2400; break;
+ case B4800: baud = 4800; break;
+ default: case B9600: baud = 9600; break;
+ case B19200: baud = 19200; break;
+ case B38400: baud = 38400; break;
+	}
+
+ brg = BPS_TO_BRG(baud, ZS_CLOCK / ZS_CLOCK_DIVISOR);
+
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ up->curregs[R15] = BRKIE;
+ ip22zilog_convert_to_zs(up, con->cflag, 0, brg);
+
+ __ip22zilog_startup(up);
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ return 0;
+}
+
+static struct console ip22zilog_console = {
+ .name = "ttyS",
+ .write = ip22zilog_console_write,
+ .device = uart_console_device,
+ .setup = ip22zilog_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &ip22zilog_reg,
+};
+#define IP22ZILOG_CONSOLE (&ip22zilog_console)
+
+static int __init ip22zilog_console_init(void)
+{
+ int i;
+
+ if (con_is_present())
+ return 0;
+
+ for (i = 0; i < NUM_CHANNELS; i++) {
+ int this_minor = ip22zilog_reg.minor + i;
+
+ if ((this_minor - 64) == (serial_console - 1))
+ break;
+ }
+ if (i == NUM_CHANNELS)
+ return 0;
+
+ ip22zilog_console.index = i;
+ register_console(&ip22zilog_console);
+ return 0;
+}
+#else /* CONFIG_SERIAL_IP22_ZILOG_CONSOLE */
+#define IP22ZILOG_CONSOLE (NULL)
+#define ip22zilog_console_init() do { } while (0)
+#endif
+
+static void __init ip22zilog_prepare(void)
+{
+ struct uart_ip22zilog_port *up;
+ struct zilog_layout *rp;
+ int channel, chip;
+
+ /*
+ * Temporary fix.
+ */
+ for (channel = 0; channel < NUM_CHANNELS; channel++)
+ spin_lock_init(&ip22zilog_port_table[channel].port.lock);
+
+ ip22zilog_irq_chain = up = &ip22zilog_port_table[0];
+ for (channel = 0; channel < NUM_CHANNELS - 1; channel++)
+ up[channel].next = &up[channel + 1];
+ up[channel].next = NULL;
+
+ for (chip = 0; chip < NUM_IP22ZILOG; chip++) {
+ if (!ip22zilog_chip_regs[chip]) {
+ ip22zilog_chip_regs[chip] = rp = get_zs(chip);
+
+ up[(chip * 2) + 0].port.membase = (char *) &rp->channelA;
+ up[(chip * 2) + 1].port.membase = (char *) &rp->channelB;
+ }
+
+ /* Channel A */
+ up[(chip * 2) + 0].port.iotype = UPIO_MEM;
+ up[(chip * 2) + 0].port.irq = zilog_irq;
+ up[(chip * 2) + 0].port.uartclk = ZS_CLOCK;
+ up[(chip * 2) + 0].port.fifosize = 1;
+ up[(chip * 2) + 0].port.ops = &ip22zilog_pops;
+ up[(chip * 2) + 0].port.type = PORT_IP22ZILOG;
+ up[(chip * 2) + 0].port.flags = 0;
+ up[(chip * 2) + 0].port.line = (chip * 2) + 0;
+ up[(chip * 2) + 0].flags |= IP22ZILOG_FLAG_IS_CHANNEL_A;
+
+ /* Channel B */
+ up[(chip * 2) + 1].port.iotype = UPIO_MEM;
+ up[(chip * 2) + 1].port.irq = zilog_irq;
+ up[(chip * 2) + 1].port.uartclk = ZS_CLOCK;
+ up[(chip * 2) + 1].port.fifosize = 1;
+ up[(chip * 2) + 1].port.ops = &ip22zilog_pops;
+ up[(chip * 2) + 1].port.type = PORT_IP22ZILOG;
+ up[(chip * 2) + 1].port.flags = 0;
+ up[(chip * 2) + 1].port.line = (chip * 2) + 1;
+ up[(chip * 2) + 1].flags |= 0;
+ }
+}
+
+static void __init ip22zilog_init_hw(void)
+{
+ int i;
+
+ for (i = 0; i < NUM_CHANNELS; i++) {
+ struct uart_ip22zilog_port *up = &ip22zilog_port_table[i];
+ struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(&up->port);
+ unsigned long flags;
+ int baud, brg;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ if (ZS_IS_CHANNEL_A(up)) {
+ write_zsreg(channel, R9, FHWRES);
+ ZSDELAY_LONG();
+ (void) read_zsreg(channel, R0);
+ }
+
+ /* Normal serial TTY. */
+ up->parity_mask = 0xff;
+ up->curregs[R1] = EXT_INT_ENAB | INT_ALL_Rx | TxINT_ENAB;
+ up->curregs[R4] = PAR_EVEN | X16CLK | SB1;
+ up->curregs[R3] = RxENAB | Rx8;
+ up->curregs[R5] = TxENAB | Tx8;
+ up->curregs[R9] = NV | MIE;
+ up->curregs[R10] = NRZ;
+ up->curregs[R11] = TCBR | RCBR;
+ baud = 9600;
+ brg = BPS_TO_BRG(baud, ZS_CLOCK / ZS_CLOCK_DIVISOR);
+ up->curregs[R12] = (brg & 0xff);
+ up->curregs[R13] = (brg >> 8) & 0xff;
+ up->curregs[R14] = BRSRC | BRENAB;
+ __load_zsregs(channel, up->curregs);
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+}
+
+static int __init ip22zilog_ports_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO "Serial: IP22 Zilog driver (%d chips).\n", NUM_IP22ZILOG);
+
+ ip22zilog_prepare();
+
+ if (request_irq(zilog_irq, ip22zilog_interrupt, 0,
+ "IP22-Zilog", ip22zilog_irq_chain)) {
+ panic("IP22-Zilog: Unable to register zs interrupt handler.\n");
+ }
+
+ ip22zilog_init_hw();
+
+ /* We can only init this once we have probed the Zilogs
+ * in the system.
+ */
+ ip22zilog_reg.nr = NUM_CHANNELS;
+ ip22zilog_reg.cons = IP22ZILOG_CONSOLE;
+
+ ip22zilog_reg.minor = ip22serial_current_minor;
+ ip22serial_current_minor += NUM_CHANNELS;
+
+ ret = uart_register_driver(&ip22zilog_reg);
+ if (ret == 0) {
+ int i;
+
+ for (i = 0; i < NUM_CHANNELS; i++) {
+ struct uart_ip22zilog_port *up = &ip22zilog_port_table[i];
+
+ uart_add_one_port(&ip22zilog_reg, &up->port);
+ }
+ }
+
+ return ret;
+}
+
+static int __init ip22zilog_init(void)
+{
+ /* IP22 Zilog setup is hard coded, no probing to do. */
+
+ ip22zilog_alloc_tables();
+
+ ip22zilog_ports_init();
+ ip22zilog_console_init();
+
+ return 0;
+}
+
+static void __exit ip22zilog_exit(void)
+{
+ int i;
+
+ for (i = 0; i < NUM_CHANNELS; i++) {
+ struct uart_ip22zilog_port *up = &ip22zilog_port_table[i];
+
+ uart_remove_one_port(&ip22zilog_reg, &up->port);
+ }
+
+ uart_unregister_driver(&ip22zilog_reg);
+}
+
+module_init(ip22zilog_init);
+module_exit(ip22zilog_exit);
+
+/* David wrote it but I'm to blame for the bugs ... */
+MODULE_AUTHOR("Ralf Baechle <ralf@linux-mips.org>");
+MODULE_DESCRIPTION("SGI Zilog serial port driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+#ifndef _IP22_ZILOG_H
+#define _IP22_ZILOG_H
+
+#include <asm/byteorder.h>
+
+struct zilog_channel {
+#ifdef __BIG_ENDIAN
+ volatile unsigned char unused0[3];
+ volatile unsigned char control;
+ volatile unsigned char unused1[3];
+ volatile unsigned char data;
+#else /* __LITTLE_ENDIAN */
+ volatile unsigned char control;
+ volatile unsigned char unused0[3];
+ volatile unsigned char data;
+ volatile unsigned char unused1[3];
+#endif
+};
+
+struct zilog_layout {
+ struct zilog_channel channelB;
+ struct zilog_channel channelA;
+};
+
+#define NUM_ZSREGS 16
+
+/* Conversion routines to/from brg time constants from/to bits
+ * per second.
+ */
+#define BRG_TO_BPS(brg, freq) ((freq) / 2 / ((brg) + 2))
+#define BPS_TO_BRG(bps, freq) ((((freq) + (bps)) / (2 * (bps))) - 2)
+
+/* The Zilog register set */
+
+#define FLAG 0x7e
+
+/* Write Register 0 */
+#define R0 0 /* Register selects */
+#define R1 1
+#define R2 2
+#define R3 3
+#define R4 4
+#define R5 5
+#define R6 6
+#define R7 7
+#define R8 8
+#define R9 9
+#define R10 10
+#define R11 11
+#define R12 12
+#define R13 13
+#define R14 14
+#define R15 15
+
+#define NULLCODE 0 /* Null Code */
+#define POINT_HIGH 0x8 /* Select upper half of registers */
+#define RES_EXT_INT 0x10 /* Reset Ext. Status Interrupts */
+#define SEND_ABORT 0x18 /* HDLC Abort */
+#define RES_RxINT_FC 0x20 /* Reset RxINT on First Character */
+#define RES_Tx_P 0x28 /* Reset TxINT Pending */
+#define ERR_RES 0x30 /* Error Reset */
+#define RES_H_IUS 0x38 /* Reset highest IUS */
+
+#define RES_Rx_CRC 0x40 /* Reset Rx CRC Checker */
+#define RES_Tx_CRC 0x80 /* Reset Tx CRC Checker */
+#define RES_EOM_L 0xC0 /* Reset EOM latch */
+
+/* Write Register 1 */
+
+#define EXT_INT_ENAB 0x1 /* Ext Int Enable */
+#define TxINT_ENAB 0x2 /* Tx Int Enable */
+#define PAR_SPEC 0x4 /* Parity is special condition */
+
+#define RxINT_DISAB 0 /* Rx Int Disable */
+#define RxINT_FCERR 0x8 /* Rx Int on First Character Only or Error */
+#define INT_ALL_Rx 0x10 /* Int on all Rx Characters or error */
+#define INT_ERR_Rx 0x18 /* Int on error only */
+#define RxINT_MASK 0x18
+
+#define WT_RDY_RT 0x20 /* Wait/Ready on R/T */
+#define WT_FN_RDYFN 0x40 /* Wait/FN/Ready FN */
+#define WT_RDY_ENAB 0x80 /* Wait/Ready Enable */
+
+/* Write Register #2 (Interrupt Vector) */
+
+/* Write Register 3 */
+
+#define RxENAB 0x1 /* Rx Enable */
+#define SYNC_L_INH 0x2 /* Sync Character Load Inhibit */
+#define ADD_SM 0x4 /* Address Search Mode (SDLC) */
+#define RxCRC_ENAB 0x8 /* Rx CRC Enable */
+#define ENT_HM 0x10 /* Enter Hunt Mode */
+#define AUTO_ENAB 0x20 /* Auto Enables */
+#define Rx5 0x0 /* Rx 5 Bits/Character */
+#define Rx7 0x40 /* Rx 7 Bits/Character */
+#define Rx6 0x80 /* Rx 6 Bits/Character */
+#define Rx8 0xc0 /* Rx 8 Bits/Character */
+#define RxN_MASK 0xc0
+
+/* Write Register 4 */
+
+#define PAR_ENAB 0x1 /* Parity Enable */
+#define PAR_EVEN 0x2 /* Parity Even/Odd* */
+
+#define SYNC_ENAB 0 /* Sync Modes Enable */
+#define SB1 0x4 /* 1 stop bit/char */
+#define SB15 0x8 /* 1.5 stop bits/char */
+#define SB2 0xc /* 2 stop bits/char */
+
+#define MONSYNC 0 /* 8 Bit Sync character */
+#define BISYNC 0x10 /* 16 bit sync character */
+#define SDLC 0x20 /* SDLC Mode (01111110 Sync Flag) */
+#define EXTSYNC 0x30 /* External Sync Mode */
+
+#define X1CLK 0x0 /* x1 clock mode */
+#define X16CLK 0x40 /* x16 clock mode */
+#define X32CLK 0x80 /* x32 clock mode */
+#define X64CLK 0xC0 /* x64 clock mode */
+#define XCLK_MASK 0xC0
+
+/* Write Register 5 */
+
+#define TxCRC_ENAB 0x1 /* Tx CRC Enable */
+#define RTS 0x2 /* RTS */
+#define SDLC_CRC 0x4 /* SDLC/CRC-16 */
+#define TxENAB 0x8 /* Tx Enable */
+#define SND_BRK 0x10 /* Send Break */
+#define Tx5 0x0 /* Tx 5 bits (or less)/character */
+#define Tx7 0x20 /* Tx 7 bits/character */
+#define Tx6 0x40 /* Tx 6 bits/character */
+#define Tx8 0x60 /* Tx 8 bits/character */
+#define TxN_MASK 0x60
+#define DTR 0x80 /* DTR */
+
+/* Write Register 6 (Sync bits 0-7/SDLC Address Field) */
+
+/* Write Register 7 (Sync bits 8-15/SDLC 01111110) */
+
+/* Write Register 8 (transmit buffer) */
+
+/* Write Register 9 (Master interrupt control) */
+#define VIS 1 /* Vector Includes Status */
+#define NV 2 /* No Vector */
+#define DLC 4 /* Disable Lower Chain */
+#define MIE 8 /* Master Interrupt Enable */
+#define STATHI 0x10 /* Status high */
+#define NORESET 0 /* No reset on write to R9 */
+#define CHRB 0x40 /* Reset channel B */
+#define CHRA 0x80 /* Reset channel A */
+#define FHWRES 0xc0 /* Force hardware reset */
+
+/* Write Register 10 (misc control bits) */
+#define BIT6 1 /* 6 bit/8bit sync */
+#define LOOPMODE 2 /* SDLC Loop mode */
+#define ABUNDER 4 /* Abort/flag on SDLC xmit underrun */
+#define MARKIDLE 8 /* Mark/flag on idle */
+#define GAOP 0x10 /* Go active on poll */
+#define NRZ 0 /* NRZ mode */
+#define NRZI 0x20 /* NRZI mode */
+#define FM1 0x40 /* FM1 (transition = 1) */
+#define FM0 0x60 /* FM0 (transition = 0) */
+#define CRCPS 0x80 /* CRC Preset I/O */
+
+/* Write Register 11 (Clock Mode control) */
+#define TRxCXT 0 /* TRxC = Xtal output */
+#define TRxCTC 1 /* TRxC = Transmit clock */
+#define TRxCBR 2 /* TRxC = BR Generator Output */
+#define TRxCDP 3 /* TRxC = DPLL output */
+#define TRxCOI 4 /* TRxC O/I */
+#define TCRTxCP 0 /* Transmit clock = RTxC pin */
+#define TCTRxCP 8 /* Transmit clock = TRxC pin */
+#define TCBR 0x10 /* Transmit clock = BR Generator output */
+#define TCDPLL 0x18 /* Transmit clock = DPLL output */
+#define RCRTxCP 0 /* Receive clock = RTxC pin */
+#define RCTRxCP 0x20 /* Receive clock = TRxC pin */
+#define RCBR 0x40 /* Receive clock = BR Generator output */
+#define RCDPLL 0x60 /* Receive clock = DPLL output */
+#define RTxCX 0x80 /* RTxC Xtal/No Xtal */
+
+/* Write Register 12 (lower byte of baud rate generator time constant) */
+
+/* Write Register 13 (upper byte of baud rate generator time constant) */
+
+/* Write Register 14 (Misc control bits) */
+#define BRENAB 1 /* Baud rate generator enable */
+#define BRSRC 2 /* Baud rate generator source */
+#define DTRREQ 4 /* DTR/Request function */
+#define AUTOECHO 8 /* Auto Echo */
+#define LOOPBAK 0x10 /* Local loopback */
+#define SEARCH 0x20 /* Enter search mode */
+#define RMC 0x40 /* Reset missing clock */
+#define DISDPLL 0x60 /* Disable DPLL */
+#define SSBR 0x80 /* Set DPLL source = BR generator */
+#define SSRTxC 0xa0 /* Set DPLL source = RTxC */
+#define SFMM 0xc0 /* Set FM mode */
+#define SNRZI 0xe0 /* Set NRZI mode */
+
+/* Write Register 15 (external/status interrupt control) */
+#define ZCIE 2 /* Zero count IE */
+#define DCDIE 8 /* DCD IE */
+#define SYNCIE 0x10 /* Sync/hunt IE */
+#define CTSIE 0x20 /* CTS IE */
+#define TxUIE 0x40 /* Tx Underrun/EOM IE */
+#define BRKIE 0x80 /* Break/Abort IE */
+
+
+/* Read Register 0 */
+#define Rx_CH_AV 0x1 /* Rx Character Available */
+#define ZCOUNT 0x2 /* Zero count */
+#define Tx_BUF_EMP 0x4 /* Tx Buffer empty */
+#define DCD 0x8 /* DCD */
+#define SYNC 0x10 /* Sync/hunt */
+#define CTS 0x20 /* CTS */
+#define TxEOM 0x40 /* Tx underrun */
+#define BRK_ABRT 0x80 /* Break/Abort */
+
+/* Read Register 1 */
+#define ALL_SNT 0x1 /* All sent */
+/* Residue Data for 8 Rx bits/char programmed */
+#define RES3 0x8 /* 0/3 */
+#define RES4 0x4 /* 0/4 */
+#define RES5 0xc /* 0/5 */
+#define RES6 0x2 /* 0/6 */
+#define RES7 0xa /* 0/7 */
+#define RES8 0x6 /* 0/8 */
+#define RES18 0xe /* 1/8 */
+#define RES28 0x0 /* 2/8 */
+/* Special Rx Condition Interrupts */
+#define PAR_ERR 0x10 /* Parity error */
+#define Rx_OVR 0x20 /* Rx Overrun Error */
+#define CRC_ERR 0x40 /* CRC/Framing Error */
+#define END_FR 0x80 /* End of Frame (SDLC) */
+
+/* Read Register 2 (channel b only) - Interrupt vector */
+#define CHB_Tx_EMPTY 0x00
+#define CHB_EXT_STAT 0x02
+#define CHB_Rx_AVAIL 0x04
+#define CHB_SPECIAL 0x06
+#define CHA_Tx_EMPTY 0x08
+#define CHA_EXT_STAT 0x0a
+#define CHA_Rx_AVAIL 0x0c
+#define CHA_SPECIAL 0x0e
+#define STATUS_MASK 0x0e
+
+/* Read Register 3 (interrupt pending register) ch a only */
+#define CHBEXT 0x1 /* Channel B Ext/Stat IP */
+#define CHBTxIP 0x2 /* Channel B Tx IP */
+#define CHBRxIP 0x4 /* Channel B Rx IP */
+#define CHAEXT 0x8 /* Channel A Ext/Stat IP */
+#define CHATxIP 0x10 /* Channel A Tx IP */
+#define CHARxIP 0x20 /* Channel A Rx IP */
+
+/* Read Register 8 (receive data register) */
+
+/* Read Register 10 (misc status bits) */
+#define ONLOOP 2 /* On loop */
+#define LOOPSEND 0x10 /* Loop sending */
+#define CLK2MIS 0x40 /* Two clocks missing */
+#define CLK1MIS 0x80 /* One clock missing */
+
+/* Read Register 12 (lower byte of baud rate generator constant) */
+
+/* Read Register 13 (upper byte of baud rate generator constant) */
+
+/* Read Register 15 (value of WR 15) */
+
+/* Misc macros */
+#define ZS_CLEARERR(channel) do { writeb(ERR_RES, &channel->control); \
+ udelay(5); } while(0)
+
+#define ZS_CLEARSTAT(channel) do { writeb(RES_EXT_INT, &channel->control); \
+ udelay(5); } while(0)
+
+#define ZS_CLEARFIFO(channel) do { readb(&channel->data); \
+ udelay(2); \
+ readb(&channel->data); \
+ udelay(2); \
+ readb(&channel->data); \
+ udelay(2); } while(0)
+
+#endif /* _IP22_ZILOG_H */
{
DECLARE_WAITQUEUE(wait, current);
struct usblp *usblp = file->private_data;
- int timeout, err = 0;
+ int timeout, err = 0, transfer_length;
size_t writecount = 0;
while (writecount < count) {
continue;
}
- writecount += usblp->writeurb->transfer_buffer_length;
- usblp->writeurb->transfer_buffer_length = 0;
+		transfer_length = count - writecount;
+ if (transfer_length > USBLP_BUF_SIZE)
+ transfer_length = USBLP_BUF_SIZE;
- if (writecount == count) {
- up (&usblp->sem);
- break;
- }
+ usblp->writeurb->transfer_buffer_length = transfer_length;
- usblp->writeurb->transfer_buffer_length = (count - writecount) < USBLP_BUF_SIZE ?
- (count - writecount) : USBLP_BUF_SIZE;
-
- if (copy_from_user(usblp->writeurb->transfer_buffer, buffer + writecount,
- usblp->writeurb->transfer_buffer_length)) {
+ if (copy_from_user(usblp->writeurb->transfer_buffer, buffer + writecount, transfer_length)) {
up(&usblp->sem);
return writecount ? writecount : -EFAULT;
}
break;
}
up (&usblp->sem);
+
+ writecount += transfer_length;
}
return count;
* DMA memory management for framework level HCD code (hc_driver)
*
* This implementation plugs in through generic "usb_bus" level methods,
- * and works with real PCI, or when "pci device == null" makes sense.
 * and should work with all USB controllers, regardless of bus type.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
-#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <asm/io.h>
+#include <asm/scatterlist.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
#ifdef CONFIG_USB_DEBUG
if (!(size = pool_max [i]))
continue;
snprintf (name, sizeof name, "buffer-%d", size);
- hcd->pool [i] = pci_pool_create (name, hcd->pdev,
+ hcd->pool [i] = dma_pool_create (name, hcd->self.controller,
size, size, 0);
if (!hcd->pool [i]) {
hcd_buffer_destroy (hcd);
int i;
for (i = 0; i < HCD_BUFFER_POOLS; i++) {
- struct pci_pool *pool = hcd->pool [i];
+ struct dma_pool *pool = hcd->pool [i];
if (pool) {
- pci_pool_destroy (pool);
+ dma_pool_destroy (pool);
hcd->pool [i] = 0;
}
}
for (i = 0; i < HCD_BUFFER_POOLS; i++) {
if (size <= pool_max [i])
- return pci_pool_alloc (hcd->pool [i], mem_flags, dma);
+ return dma_pool_alloc (hcd->pool [i], mem_flags, dma);
}
- return pci_alloc_consistent (hcd->pdev, size, dma);
+ return dma_alloc_coherent (hcd->self.controller, size, dma, 0);
}
void hcd_buffer_free (
return;
for (i = 0; i < HCD_BUFFER_POOLS; i++) {
if (size <= pool_max [i]) {
- pci_pool_free (hcd->pool [i], addr, dma);
+ dma_pool_free (hcd->pool [i], addr, dma);
return;
}
}
- pci_free_consistent (hcd->pdev, size, addr, dma);
+ dma_free_coherent (hcd->self.controller, size, addr, dma);
}
#include <linux/usb.h>
#include "hcd.h"
-#ifdef CONFIG_PPC_PMAC
-#include <asm/machdep.h>
-#include <asm/pmac_feature.h>
-#include <asm/pci-bridge.h>
-#include <asm/prom.h>
-#endif /* CONFIG_PPC_PMAC */
/* PCI-based HCs are normal, but custom bus glue should be ok */
pci_set_drvdata (dev, hcd);
hcd->driver = driver;
hcd->description = driver->description;
- hcd->pdev = dev;
hcd->self.bus_name = pci_name(dev);
if (hcd->product_desc == NULL)
hcd->product_desc = "USB Host Controller";
hcd->self.controller = &dev->dev;
- hcd->controller = hcd->self.controller;
if ((retval = hcd_buffer_create (hcd)) != 0) {
clean_3:
goto clean_2;
}
- dev_info (hcd->controller, "%s\n", hcd->product_desc);
+ dev_info (hcd->self.controller, "%s\n", hcd->product_desc);
/* till now HC has been in an indeterminate state ... */
if (driver->reset && (retval = driver->reset (hcd)) < 0) {
- dev_err (hcd->controller, "can't reset\n");
+ dev_err (hcd->self.controller, "can't reset\n");
goto clean_3;
}
hcd->state = USB_STATE_HALT;
retval = request_irq (dev->irq, usb_hcd_irq, SA_SHIRQ,
hcd->description, hcd);
if (retval != 0) {
- dev_err (hcd->controller,
+ dev_err (hcd->self.controller,
"request interrupt %s failed\n", bufp);
goto clean_3;
}
hcd->irq = dev->irq;
- dev_info (hcd->controller, "irq %s, %s %p\n", bufp,
+ dev_info (hcd->self.controller, "irq %s, %s %p\n", bufp,
(driver->flags & HCD_MEMORY) ? "pci mem" : "io base",
base);
hcd = pci_get_drvdata(dev);
if (!hcd)
return;
- dev_info (hcd->controller, "remove, state %x\n", hcd->state);
+ dev_info (hcd->self.controller, "remove, state %x\n", hcd->state);
if (in_interrupt ())
BUG ();
if (HCD_IS_RUNNING (hcd->state))
hcd->state = USB_STATE_QUIESCING;
- dev_dbg (hcd->controller, "roothub graceful disconnect\n");
+ dev_dbg (hcd->self.controller, "roothub graceful disconnect\n");
usb_disconnect (&hub);
hcd->driver->stop (hcd);
int retval = 0;
hcd = pci_get_drvdata(dev);
- dev_dbg (hcd->controller, "suspend D%d --> D%d\n",
+ dev_dbg (hcd->self.controller, "suspend D%d --> D%d\n",
dev->current_state, state);
switch (hcd->state) {
case USB_STATE_HALT:
- dev_dbg (hcd->controller, "halted; hcd not suspended\n");
+ dev_dbg (hcd->self.controller, "halted; hcd not suspended\n");
break;
case USB_STATE_SUSPENDED:
- dev_dbg (hcd->controller, "hcd already suspended\n");
+ dev_dbg (hcd->self.controller, "hcd already suspended\n");
break;
default:
/* remote wakeup needs hub->suspend() cooperation */
hcd->state = USB_STATE_QUIESCING;
retval = hcd->driver->suspend (hcd, state);
if (retval)
- dev_dbg (hcd->controller, "suspend fail, retval %d\n",
+ dev_dbg (hcd->self.controller,
+ "suspend fail, retval %d\n",
retval);
- else {
-#ifdef CONFIG_PPC_PMAC
- struct device_node *of_node;
-
- /* Disable USB PAD & cell clock for Keylargo built-in controller */
- of_node = pci_device_to_OF_node (dev);
- if (of_node)
- pmac_call_feature(PMAC_FTR_USB_ENABLE, of_node, 0, 0);
-#endif /* CONFIG_PPC_PMAC */
+ else
hcd->state = USB_STATE_SUSPENDED;
- }
}
pci_set_power_state (dev, state);
int retval;
hcd = pci_get_drvdata(dev);
- dev_dbg (hcd->controller, "resume from state D%d\n",
+ dev_dbg (hcd->self.controller, "resume from state D%d\n",
dev->current_state);
if (hcd->state != USB_STATE_SUSPENDED) {
- dev_dbg (hcd->controller, "can't resume, not suspended!\n");
+ dev_dbg (hcd->self.controller,
+ "can't resume, not suspended!\n");
return -EL3HLT;
}
hcd->state = USB_STATE_RESUMING;
pci_set_power_state (dev, 0);
-#ifdef CONFIG_PPC_PMAC
- {
- struct device_node *of_node;
-
- /* Re-enable USB PAD & cell clock for Keykargo built-in controller */
- of_node = pci_device_to_OF_node (dev);
- if (of_node)
- pmac_call_feature (PMAC_FTR_USB_ENABLE, of_node, 0, 1);
- }
-#endif /* CONFIG_PPC_PMAC */
-
pci_restore_state (dev, hcd->pci_state);
/* remote wakeup needs hub->suspend() cooperation */
retval = hcd->driver->resume (hcd);
if (!HCD_IS_RUNNING (hcd->state)) {
- dev_dbg (hcd->controller, "resume fail, retval %d\n", retval);
+ dev_dbg (hcd->self.controller,
+ "resume fail, retval %d\n", retval);
usb_hc_died (hcd);
}
/* FALLTHROUGH */
case DeviceOutRequest | USB_REQ_CLEAR_FEATURE:
case DeviceOutRequest | USB_REQ_SET_FEATURE:
- dev_dbg (hcd->controller, "no device features yet yet\n");
+ dev_dbg (hcd->self.controller, "no device features yet yet\n");
break;
case DeviceRequest | USB_REQ_GET_CONFIGURATION:
ubuf [0] = 1;
break;
case DeviceOutRequest | USB_REQ_SET_ADDRESS:
// wValue == urb->dev->devaddr
- dev_dbg (hcd->controller, "root hub device address %d\n",
+ dev_dbg (hcd->self.controller, "root hub device address %d\n",
wValue);
break;
/* FALLTHROUGH */
case EndpointOutRequest | USB_REQ_CLEAR_FEATURE:
case EndpointOutRequest | USB_REQ_SET_FEATURE:
- dev_dbg (hcd->controller, "no endpoint features yet\n");
+ dev_dbg (hcd->self.controller, "no endpoint features yet\n");
break;
/* CLASS REQUESTS (and errors) */
error:
/* "protocol stall" on error */
urb->status = -EPIPE;
- dev_dbg (hcd->controller, "unsupported hub control message (maxchild %d)\n",
+ dev_dbg (hcd->self.controller, "unsupported hub control message (maxchild %d)\n",
urb->dev->maxchild);
}
if (urb->status) {
urb->actual_length = 0;
- dev_dbg (hcd->controller, "CTRL: TypeReq=0x%x val=0x%x idx=0x%x len=%d ==> %d\n",
+ dev_dbg (hcd->self.controller, "CTRL: TypeReq=0x%x val=0x%x idx=0x%x len=%d ==> %d\n",
typeReq, wValue, wIndex, wLength, urb->status);
}
if (bufp) {
|| urb->status != -EINPROGRESS
|| urb->transfer_buffer_length < len
|| !HCD_IS_RUNNING (hcd->state)) {
- dev_dbg (hcd->controller,
+ dev_dbg (hcd->self.controller,
"not queuing rh status urb, stat %d\n",
urb->status);
return -EINVAL;
/* lower level hcd code should use *_dma exclusively,
* unless it uses pio or talks to another transport.
*/
- if (hcd->controller->dma_mask) {
+ if (hcd->self.controller->dma_mask) {
if (usb_pipecontrol (urb->pipe)
&& !(urb->transfer_flags & URB_NO_SETUP_DMA_MAP))
urb->setup_dma = dma_map_single (
- hcd->controller,
+ hcd->self.controller,
urb->setup_packet,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
if (urb->transfer_buffer_length != 0
&& !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP))
urb->transfer_dma = dma_map_single (
- hcd->controller,
+ hcd->self.controller,
urb->transfer_buffer,
urb->transfer_buffer_length,
usb_pipein (urb->pipe)
/* failures "should" be harmless */
value = hcd->driver->urb_dequeue (hcd, urb);
if (value != 0)
- dev_dbg (hcd->controller,
+ dev_dbg (hcd->self.controller,
"dequeue %p --> %d\n",
urb, value);
}
* finish unlinking the initial failed usb_set_address().
*/
if (!hcd->saw_irq) {
- dev_warn (hcd->controller, "Unlink after no-IRQ? "
+ dev_warn (hcd->self.controller, "Unlink after no-IRQ? "
"Different ACPI or APIC settings may help."
"\n");
hcd->saw_irq = 1;
*/
if (!(urb->transfer_flags & URB_ASYNC_UNLINK)) {
if (in_interrupt ()) {
- dev_dbg (hcd->controller, "non-async unlink in_interrupt");
+ dev_dbg (hcd->self.controller,
+ "non-async unlink in_interrupt");
retval = -EWOULDBLOCK;
goto done;
}
if (tmp == -EINPROGRESS) {
tmp = urb->pipe;
unlink1 (hcd, urb);
- dev_dbg (hcd->controller,
+ dev_dbg (hcd->self.controller,
"shutdown urb %p pipe %08x ep%d%s%s\n",
urb, tmp, usb_pipeendpoint (tmp),
(tmp & USB_DIR_IN) ? "in" : "out",
/* device driver problem with refcounts? */
if (!list_empty (&dev->urb_list)) {
- dev_dbg (hcd->controller, "free busy dev, %s devnum %d (bug!)\n",
+ dev_dbg (hcd->self.controller, "free busy dev, %s devnum %d (bug!)\n",
hcd->self.bus_name, udev->devnum);
return -EINVAL;
}
// It would catch exit/unlink paths for all urbs.
/* lower level hcd code should use *_dma exclusively */
- if (hcd->controller->dma_mask) {
+ if (hcd->self.controller->dma_mask) {
if (usb_pipecontrol (urb->pipe)
&& !(urb->transfer_flags & URB_NO_SETUP_DMA_MAP))
- dma_unmap_single (hcd->controller, urb->setup_dma,
+ dma_unmap_single (hcd->self.controller, urb->setup_dma,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
if (urb->transfer_buffer_length != 0
&& !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP))
- dma_unmap_single (hcd->controller, urb->transfer_dma,
+ dma_unmap_single (hcd->self.controller,
+ urb->transfer_dma,
urb->transfer_buffer_length,
usb_pipein (urb->pipe)
? DMA_FROM_DEVICE
*/
void usb_hc_died (struct usb_hcd *hcd)
{
- dev_err (hcd->controller, "HC died; cleaning up\n");
+ dev_err (hcd->self.controller, "HC died; cleaning up\n");
/* clean up old urbs and devices; needs a task context */
INIT_WORK (&hcd->work, hcd_panic, hcd);
unsigned saw_irq : 1;
int irq; /* irq allocated */
void *regs; /* device memory/io */
- struct device *controller; /* handle to hardware */
- /* a few non-PCI controllers exist, mostly for OHCI */
- struct pci_dev *pdev; /* pci is typical */
#ifdef CONFIG_PCI
int region; /* pci region for regs */
u32 pci_state [16]; /* for PM state save */
#endif
#define HCD_BUFFER_POOLS 4
- struct pci_pool *pool [HCD_BUFFER_POOLS];
+ struct dma_pool *pool [HCD_BUFFER_POOLS];
int state;
# define __ACTIVE 0x01
static inline int hcd_register_root (struct usb_hcd *hcd)
{
return usb_register_root_hub (
- hcd_to_bus (hcd)->root_hub, hcd->controller);
+ hcd_to_bus (hcd)->root_hub, hcd->self.controller);
}
/*-------------------------------------------------------------------------*/
hub->tt.hub = dev;
break;
case 2:
- dev_dbg(hub_dev, "TT per port\n");
+ ret = usb_set_interface(dev, 0, 1);
+ if (ret == 0) {
+ dev_dbg(hub_dev, "TT per port\n");
+ hub->tt.multi = 1;
+ } else
+ dev_err(hub_dev, "Using single TT (err %d)\n",
+ ret);
hub->tt.hub = dev;
- hub->tt.multi = 1;
break;
default:
dev_dbg(hub_dev, "Unrecognized hub protocol %d\n",
static void sg_complete (struct urb *urb, struct pt_regs *regs)
{
struct usb_sg_request *io = (struct usb_sg_request *) urb->context;
- unsigned long flags;
- spin_lock_irqsave (&io->lock, flags);
+ spin_lock (&io->lock);
/* In 2.5 we require hcds' endpoint queues not to progress after fault
* reports, until the completion callback (this!) returns. That lets
if (!io->count)
complete (&io->complete);
- spin_unlock_irqrestore (&io->lock, flags);
+ spin_unlock (&io->lock);
}
*/
void usb_sg_wait (struct usb_sg_request *io)
{
- int i;
- unsigned long flags;
+ int i, entries = io->entries;
/* queue the urbs. */
- spin_lock_irqsave (&io->lock, flags);
- for (i = 0; i < io->entries && !io->status; i++) {
+ spin_lock_irq (&io->lock);
+ for (i = 0; i < entries && !io->status; i++) {
int retval;
io->urbs [i]->dev = io->dev;
/* after we submit, let completions or cancelations fire;
* we handshake using io->status.
*/
- spin_unlock_irqrestore (&io->lock, flags);
+ spin_unlock_irq (&io->lock);
switch (retval) {
/* maybe we retrying will recover */
case -ENXIO: // hc didn't queue this one
/* fail any uncompleted urbs */
default:
+ spin_lock_irq (&io->lock);
+ io->count -= entries - i;
+ if (io->status == -EINPROGRESS)
+ io->status = retval;
+ if (io->count == 0)
+ complete (&io->complete);
+ spin_unlock_irq (&io->lock);
+
io->urbs [i]->dev = 0;
io->urbs [i]->status = retval;
dev_dbg (&io->dev->dev, "%s, submit --> %d\n",
__FUNCTION__, retval);
usb_sg_cancel (io);
}
- spin_lock_irqsave (&io->lock, flags);
+ spin_lock_irq (&io->lock);
if (retval && io->status == -ECONNRESET)
io->status = retval;
}
- spin_unlock_irqrestore (&io->lock, flags);
+ spin_unlock_irq (&io->lock);
/* OK, yes, this could be packaged as non-blocking.
* So could the submit loop above ... but it's easier to
config = dev->config[0].desc.bConfigurationValue;
if (dev->descriptor.bNumConfigurations != 1) {
for (i = 0; i < dev->descriptor.bNumConfigurations; i++) {
+ struct usb_interface_descriptor *desc;
+
/* heuristic: Linux is more likely to have class
* drivers, so avoid vendor-specific interfaces.
*/
- if (dev->config[i].interface[0]->altsetting
- ->desc.bInterfaceClass
- == USB_CLASS_VENDOR_SPEC)
+ desc = &dev->config[i].interface[0]
+ ->altsetting->desc;
+ if (desc->bInterfaceClass == USB_CLASS_VENDOR_SPEC)
+ continue;
+ /* COMM/2/all is CDC ACM, except 0xff is MSFT RNDIS */
+ if (desc->bInterfaceClass == USB_CLASS_COMM
+ && desc->bInterfaceSubClass == 2
+ && desc->bInterfaceProtocol == 0xff)
continue;
config = dev->config[i].desc.bConfigurationValue;
break;
int t;
/* int_status is the same format ... */
- t = snprintf(*next, *size,
+ t = scnprintf(*next, *size,
"%s %05X =" FOURBITS EIGHTBITS EIGHTBITS "\n",
label, mask,
(mask & INT_PWRDETECT) ? " power" : "",
/* basic device status */
tmp = readl(®s->power_detect);
is_usb_connected = tmp & PW_DETECT;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"%s - %s\n"
"%s version: %s %s\n"
"Gadget driver: %s\n"
goto done;
/* registers for (active) device and ep0 */
- t = snprintf(next, size, "\nirqs %lu\ndataset %02x "
+ t = scnprintf(next, size, "\nirqs %lu\ndataset %02x "
"single.bcs %02x.%02x state %x addr %u\n",
dev->irqs, readl(&regs->DataSet),
readl(&regs->EPxSingle), readl(&regs->EPxBCS),
next += t;
tmp = readl(&regs->dma_master);
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"dma %03X =" EIGHTBITS "%s %s\n", tmp,
(tmp & MST_EOPB_DIS) ? " eopb-" : "",
(tmp & MST_EOPB_ENA) ? " eopb+" : "",
continue;
tmp = readl(ep->reg_status);
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"%s %s max %u %s, irqs %lu, "
"status %02x (%s) " FOURBITS "\n",
ep->ep.name,
next += t;
if (list_empty(&ep->queue)) {
- t = snprintf(next, size, "\t(nothing queued)\n");
+ t = scnprintf(next, size, "\t(nothing queued)\n");
if (t <= 0 || t > size)
goto done;
size -= t;
} else
tmp = req->req.actual;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"\treq %p len %u/%u buf %p\n",
&req->req, tmp, req->req.length,
req->req.buf);
INFO(dev, "%s\n", driver_desc);
INFO(dev, "version: " DRIVER_VERSION " %s\n", dmastr());
#ifndef __sparc__
- snprintf(buf, sizeof buf, "%d", pdev->irq);
+ scnprintf(buf, sizeof buf, "%d", pdev->irq);
bufp = buf;
#else
bufp = __irq_itoa(pdev->irq);
|| !dev->driver->function
|| strlen (dev->driver->function) > PAGE_SIZE)
return 0;
- return snprintf (buf, PAGE_SIZE, "%s\n", dev->driver->function);
+ return scnprintf (buf, PAGE_SIZE, "%s\n", dev->driver->function);
}
static DEVICE_ATTR (function, S_IRUGO, show_function, NULL);
s = "(none)";
/* Main Control Registers */
- t = snprintf (next, size, "%s version " DRIVER_VERSION
+ t = scnprintf (next, size, "%s version " DRIVER_VERSION
", chiprev %04x, dma %s\n\n"
"devinit %03x fifoctl %08x gadget '%s'\n"
"pci irqenb0 %02x irqenb1 %08x "
/* full speed bit (6) not working?? */
} else
s = "not attached";
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"stdrsp %08x usbctl %08x usbstat %08x "
"addr 0x%02x (%s)\n",
readl (&dev->usb->stdrsp), t1, t2,
t1 = readl (&ep->regs->ep_cfg);
t2 = readl (&ep->regs->ep_rsp) & 0xff;
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"\n%s\tcfg %05x rsp (%02x) %s%s%s%s%s%s%s%s"
"irqenb %02x\n",
ep->ep.name, t1, t2,
size -= t;
next += t;
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"\tstat %08x avail %04x "
"(ep%d%s-%s)%s\n",
readl (&ep->regs->ep_stat),
if (!ep->dma)
continue;
- t = snprintf (next, size,
+ t = scnprintf (next, size,
" dma\tctl %08x stat %08x count %08x\n"
"\taddr %08x desc %08x\n",
readl (&ep->dma->dmactl),
// none yet
/* Statistics */
- t = snprintf (next, size, "\nirqs: ");
+ t = scnprintf (next, size, "\nirqs: ");
size -= t;
next += t;
for (i = 0; i < 7; i++) {
ep = &dev->ep [i];
if (i && !ep->irqs)
continue;
- t = snprintf (next, size, " %s/%lu", ep->ep.name, ep->irqs);
+ t = scnprintf (next, size, " %s/%lu", ep->ep.name, ep->irqs);
size -= t;
next += t;
}
- t = snprintf (next, size, "\n");
+ t = scnprintf (next, size, "\n");
size -= t;
next += t;
if (!d)
continue;
t = d->bEndpointAddress;
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"\n%s (ep%d%s-%s) max %04x %s fifo %d\n",
ep->ep.name, t & USB_ENDPOINT_NUMBER_MASK,
(t & USB_DIR_IN) ? "in" : "out",
ep->dma ? "dma" : "pio", ep->fifo_size
);
} else /* ep0 should only have one transfer queued */
- t = snprintf (next, size, "ep0 max 64 pio %s\n",
+ t = scnprintf (next, size, "ep0 max 64 pio %s\n",
ep->is_in ? "in" : "out");
if (t <= 0 || t > size)
goto done;
next += t;
if (list_empty (&ep->queue)) {
- t = snprintf (next, size, "\t(nothing queued)\n");
+ t = scnprintf (next, size, "\t(nothing queued)\n");
if (t <= 0 || t > size)
goto done;
size -= t;
}
list_for_each_entry (req, &ep->queue, queue) {
if (ep->dma && req->td_dma == readl (&ep->dma->dmadesc))
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"\treq %p len %d/%d "
"buf %p (dmacount %08x)\n",
&req->req, req->req.actual,
req->req.length, req->req.buf,
readl (&ep->dma->dmacount));
else
- t = snprintf (next, size,
+ t = scnprintf (next, size,
"\treq %p len %d/%d buf %p\n",
&req->req, req->req.actual,
req->req.length, req->req.buf);
struct net2280_dma *td;
td = req->td;
- t = snprintf (next, size, "\t td %08x "
+ t = scnprintf (next, size, "\t td %08x "
" count %08x buf %08x desc %08x\n",
req->td_dma, td->dmacount,
td->dmaaddr, td->dmadesc);
goto done;
}
#ifndef __sparc__
- snprintf (buf, sizeof buf, "%d", pdev->irq);
+ scnprintf (buf, sizeof buf, "%d", pdev->irq);
bufp = buf;
#else
bufp = __irq_itoa(pdev->irq);
local_irq_save(flags);
/* basic device status */
- t = snprintf(next, size, DRIVER_DESC "\n"
+ t = scnprintf(next, size, DRIVER_DESC "\n"
"%s version: %s\nGadget driver: %s\nHost %s\n\n",
driver_name, DRIVER_VERSION SIZE_STR DMASTR,
dev->driver ? dev->driver->driver.name : "(none)",
next += t;
/* registers for device and ep0 */
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"uicr %02X.%02X, usir %02X.%02x, ufnr %02X.%02X\n",
UICR1, UICR0, USIR1, USIR0, UFNRH, UFNRL);
size -= t;
next += t;
tmp = UDCCR;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"udccr %02X =%s%s%s%s%s%s%s%s\n", tmp,
(tmp & UDCCR_REM) ? " rem" : "",
(tmp & UDCCR_RSTIR) ? " rstir" : "",
next += t;
tmp = UDCCS0;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"udccs0 %02X =%s%s%s%s%s%s%s%s\n", tmp,
(tmp & UDCCS0_SA) ? " sa" : "",
(tmp & UDCCS0_RNE) ? " rne" : "",
if (dev->has_cfr) {
tmp = UDCCFR;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"udccfr %02X =%s%s\n", tmp,
(tmp & UDCCFR_AREN) ? " aren" : "",
(tmp & UDCCFR_ACM) ? " acm" : "");
if (!is_usb_connected() || !dev->driver)
goto done;
- t = snprintf(next, size, "ep0 IN %lu/%lu, OUT %lu/%lu\nirqs %lu\n\n",
+ t = scnprintf(next, size, "ep0 IN %lu/%lu, OUT %lu/%lu\nirqs %lu\n\n",
dev->stats.write.bytes, dev->stats.write.ops,
dev->stats.read.bytes, dev->stats.read.ops,
dev->stats.irqs);
if (!d)
continue;
tmp = *dev->ep [i].reg_udccs;
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"%s max %d %s udccs %02x irqs %lu/%lu\n",
ep->ep.name, le16_to_cpu (d->wMaxPacketSize),
(ep->dma >= 0) ? "dma" : "pio", tmp,
/* TODO translate all five groups of udccs bits! */
} else /* ep0 should only have one transfer queued */
- t = snprintf(next, size, "ep0 max 16 pio irqs %lu\n",
+ t = scnprintf(next, size, "ep0 max 16 pio irqs %lu\n",
ep->pio_irqs);
if (t <= 0 || t > size)
goto done;
next += t;
if (list_empty(&ep->queue)) {
- t = snprintf(next, size, "\t(nothing queued)\n");
+ t = scnprintf(next, size, "\t(nothing queued)\n");
if (t <= 0 || t > size)
goto done;
size -= t;
list_for_each_entry(req, &ep->queue, queue) {
#ifdef USE_DMA
if (ep->dma >= 0 && req->queue.prev == &ep->queue)
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"\treq %p len %d/%d "
"buf %p (dma%d dcmd %08x)\n",
&req->req, req->req.actual,
);
else
#endif
- t = snprintf(next, size,
+ t = scnprintf(next, size,
"\treq %p len %d/%d buf %p\n",
&req->req, req->req.actual,
req->req.length, req->req.buf);
|| !dev->driver->function
|| strlen (dev->driver->function) > PAGE_SIZE)
return 0;
- return snprintf (buf, PAGE_SIZE, "%s\n", dev->driver->function);
+ return scnprintf (buf, PAGE_SIZE, "%s\n", dev->driver->function);
}
static DEVICE_ATTR (function, S_IRUGO, show_function, NULL);
/* this file is part of ehci-hcd.c */
#define ehci_dbg(ehci, fmt, args...) \
- dev_dbg ((ehci)->hcd.controller , fmt , ## args )
+ dev_dbg ((ehci)->hcd.self.controller , fmt , ## args )
#define ehci_err(ehci, fmt, args...) \
- dev_err ((ehci)->hcd.controller , fmt , ## args )
+ dev_err ((ehci)->hcd.self.controller , fmt , ## args )
#define ehci_info(ehci, fmt, args...) \
- dev_info ((ehci)->hcd.controller , fmt , ## args )
+ dev_info ((ehci)->hcd.self.controller , fmt , ## args )
#define ehci_warn(ehci, fmt, args...) \
- dev_warn ((ehci)->hcd.controller , fmt , ## args )
+ dev_warn ((ehci)->hcd.self.controller , fmt , ## args )
#ifdef EHCI_VERBOSE_DEBUG
# define vdbg dbg
static int __attribute__((__unused__))
dbg_status_buf (char *buf, unsigned len, char *label, u32 status)
{
- return snprintf (buf, len,
+ return scnprintf (buf, len,
"%s%sstatus %04x%s%s%s%s%s%s%s%s%s%s",
label, label [0] ? " " : "", status,
(status & STS_ASS) ? " Async" : "",
static int __attribute__((__unused__))
dbg_intr_buf (char *buf, unsigned len, char *label, u32 enable)
{
- return snprintf (buf, len,
+ return scnprintf (buf, len,
"%s%sintrenable %02x%s%s%s%s%s%s",
label, label [0] ? " " : "", enable,
(enable & STS_IAA) ? " IAA" : "",
static int dbg_command_buf (char *buf, unsigned len, char *label, u32 command)
{
- return snprintf (buf, len,
+ return scnprintf (buf, len,
"%s%scommand %06x %s=%d ithresh=%d%s%s%s%s period=%s%s %s",
label, label [0] ? " " : "", command,
(command & CMD_PARK) ? "park" : "(park)",
default: sig = "?"; break;
}
- return snprintf (buf, len,
+ return scnprintf (buf, len,
"%s%sport %d status %06x%s%s sig=%s %s%s%s%s%s%s%s%s%s",
label, label [0] ? " " : "", port, status,
(status & PORT_POWER) ? " POWER" : "",
}
scratch = cpu_to_le32p (&qh->hw_info1);
hw_curr = (mark == '*') ? cpu_to_le32p (&qh->hw_current) : 0;
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
"qh/%p dev%d %cs ep%d %08x %08x (%08x%c %s nak%d)",
qh, scratch & 0x007f,
speed_char (scratch),
scratch, cpu_to_le32p (&qh->hw_info2),
cpu_to_le32p (&qh->hw_token), mark,
(__constant_cpu_to_le32 (QTD_TOGGLE) & qh->hw_token)
- ? "data0" : "data1",
+ ? "data1" : "data0",
(cpu_to_le32p (&qh->hw_alt_next) >> 1) & 0x0f);
size -= temp;
next += temp;
for (qh = ehci->async->qh_next.qh; size > 0 && qh; qh = qh->qh_next.qh)
qh_lines (ehci, qh, &next, &size);
if (ehci->reclaim && size > 0) {
- temp = snprintf (next, size, "\nreclaim =\n");
+ temp = scnprintf (next, size, "\nreclaim =\n");
size -= temp;
next += temp;
next = buf;
size = PAGE_SIZE;
- temp = snprintf (next, size, "size = %d\n", ehci->periodic_size);
+ temp = scnprintf (next, size, "size = %d\n", ehci->periodic_size);
size -= temp;
next += temp;
continue;
tag = Q_NEXT_TYPE (ehci->periodic [i]);
- temp = snprintf (next, size, "%4d: ", i);
+ temp = scnprintf (next, size, "%4d: ", i);
size -= temp;
next += temp;
do {
switch (tag) {
case Q_TYPE_QH:
- temp = snprintf (next, size, " qh%d-%04x/%p",
+ temp = scnprintf (next, size, " qh%d-%04x/%p",
p.qh->period,
le32_to_cpup (&p.qh->hw_info2)
/* uframe masks */
if (seen [temp].ptr != p.ptr)
continue;
if (p.qh->qh_next.ptr)
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" ...");
p.ptr = 0;
break;
}
}
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" (%c%d ep%d%s "
"[%d/%d] q%d p%d)",
speed_char (scratch),
}
break;
case Q_TYPE_FSTN:
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" fstn-%8x/%p", p.fstn->hw_prev,
p.fstn);
tag = Q_NEXT_TYPE (p.fstn->hw_next);
p = p.fstn->fstn_next;
break;
case Q_TYPE_ITD:
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" itd/%p", p.itd);
tag = Q_NEXT_TYPE (p.itd->hw_next);
p = p.itd->itd_next;
break;
case Q_TYPE_SITD:
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" sitd/%p", p.sitd);
tag = Q_NEXT_TYPE (p.sitd->hw_next);
p = p.sitd->sitd_next;
next += temp;
} while (p.ptr);
- temp = snprintf (next, size, "\n");
+ temp = scnprintf (next, size, "\n");
size -= temp;
next += temp;
}
/* Capability Registers */
i = HC_VERSION(readl (&ehci->caps->hc_capbase));
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
"PCI device %s\nEHCI %x.%02x, hcd state %d (driver " DRIVER_VERSION ")\n",
- pci_name(hcd->pdev),
+ pci_name(to_pci_dev(hcd->self.controller)),
i >> 8, i & 0x0ff, ehci->hcd.state);
size -= temp;
next += temp;
// FIXME interpret both types of params
i = readl (&ehci->caps->hcs_params);
- temp = snprintf (next, size, "structural params 0x%08x\n", i);
+ temp = scnprintf (next, size, "structural params 0x%08x\n", i);
size -= temp;
next += temp;
i = readl (&ehci->caps->hcc_params);
- temp = snprintf (next, size, "capability params 0x%08x\n", i);
+ temp = scnprintf (next, size, "capability params 0x%08x\n", i);
size -= temp;
next += temp;
/* Operational Registers */
temp = dbg_status_buf (scratch, sizeof scratch, label,
readl (&ehci->regs->status));
- temp = snprintf (next, size, fmt, temp, scratch);
+ temp = scnprintf (next, size, fmt, temp, scratch);
size -= temp;
next += temp;
temp = dbg_command_buf (scratch, sizeof scratch, label,
readl (&ehci->regs->command));
- temp = snprintf (next, size, fmt, temp, scratch);
+ temp = scnprintf (next, size, fmt, temp, scratch);
size -= temp;
next += temp;
temp = dbg_intr_buf (scratch, sizeof scratch, label,
readl (&ehci->regs->intr_enable));
- temp = snprintf (next, size, fmt, temp, scratch);
+ temp = scnprintf (next, size, fmt, temp, scratch);
size -= temp;
next += temp;
- temp = snprintf (next, size, "uframe %04x\n",
+ temp = scnprintf (next, size, "uframe %04x\n",
readl (&ehci->regs->frame_index));
size -= temp;
next += temp;
for (i = 0; i < HCS_N_PORTS (ehci->hcs_params); i++) {
temp = dbg_port_buf (scratch, sizeof scratch, label, i,
readl (&ehci->regs->port_status [i]));
- temp = snprintf (next, size, fmt, temp, scratch);
+ temp = scnprintf (next, size, fmt, temp, scratch);
size -= temp;
next += temp;
}
if (ehci->reclaim) {
- temp = snprintf (next, size, "reclaim qh %p%s\n",
+ temp = scnprintf (next, size, "reclaim qh %p%s\n",
ehci->reclaim,
ehci->reclaim_ready ? " ready" : "");
size -= temp;
}
#ifdef EHCI_STATS
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
"irq normal %ld err %ld reclaim %ld (lost %ld)\n",
ehci->stats.normal, ehci->stats.error, ehci->stats.reclaim,
ehci->stats.lost_iaa);
size -= temp;
next += temp;
- temp = snprintf (next, size, "complete %ld unlink %ld\n",
+ temp = scnprintf (next, size, "complete %ld unlink %ld\n",
ehci->stats.complete, ehci->stats.unlink);
size -= temp;
next += temp;
#include <linux/module.h>
#include <linux/pci.h>
+#include <linux/dmapool.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/reboot.h>
#include <linux/usb.h>
#include <linux/moduleparam.h>
+#include <linux/dma-mapping.h>
#include "../core/hcd.h"
*
* HISTORY:
*
+ * 2004-02-24 Replace pci_* with generic dma_* API calls (dsaxena@plexity.net)
* 2003-12-29 Rewritten high speed iso transfer support (by Michal Sojka,
* <sojkam@centrum.cz>, updates by DB).
*
/* request handoff to OS */
cap &= 1 << 24;
- pci_write_config_dword (ehci->hcd.pdev, where, cap);
+ pci_write_config_dword (to_pci_dev(ehci->hcd.self.controller), where, cap);
/* and wait a while for it to happen */
do {
wait_ms (10);
msec -= 10;
- pci_read_config_dword (ehci->hcd.pdev, where, &cap);
+ pci_read_config_dword (to_pci_dev(ehci->hcd.self.controller), where, &cap);
} while ((cap & (1 << 16)) && msec);
if (cap & (1 << 16)) {
ehci_err (ehci, "BIOS handoff failed (%d, %04x)\n",
while (temp) {
u32 cap;
- pci_read_config_dword (ehci->hcd.pdev, temp, &cap);
+ pci_read_config_dword (to_pci_dev(ehci->hcd.self.controller), temp, &cap);
ehci_dbg (ehci, "capability %04x at %02x\n", cap, temp);
switch (cap & 0xff) {
case 1: /* BIOS/SMM/... handoff */
* periodic_size can shrink by USBCMD update if hcc_params allows.
*/
ehci->periodic_size = DEFAULT_I_TDPS;
- if ((retval = ehci_mem_init (ehci, SLAB_KERNEL)) < 0)
+ if ((retval = ehci_mem_init (ehci, GFP_KERNEL)) < 0)
return retval;
/* controllers may cache some of the periodic schedule ... */
writel (0, &ehci->regs->segment);
#if 0
// this is deeply broken on almost all architectures
- if (!pci_set_dma_mask (ehci->hcd.pdev, 0xffffffffffffffffULL))
+ if (!pci_set_dma_mask (to_pci_dev(ehci->hcd.self.controller), 0xffffffffffffffffULL))
ehci_info (ehci, "enabled 64bit PCI DMA\n");
#endif
}
/* help hc dma work well with cachelines */
- pci_set_mwi (ehci->hcd.pdev);
+ pci_set_mwi (to_pci_dev(ehci->hcd.self.controller));
/* clear interrupt enables, set irq latency */
temp = readl (&ehci->regs->command) & 0x0fff;
readl (&ehci->regs->command); /* unblock posted write */
/* PCI Serial Bus Release Number is at 0x60 offset */
- pci_read_config_byte (hcd->pdev, 0x60, &tempbyte);
+ pci_read_config_byte(to_pci_dev(hcd->self.controller), 0x60, &tempbyte);
temp = HC_VERSION(readl (&ehci->caps->hc_capbase));
ehci_info (ehci,
"USB %x.%x enabled, EHCI %x.%02x, driver %s\n",
* non-error returns are a promise to giveback() the urb later
* we drop ownership so next owner (or urb unlink) can get it
*
- * urb + dev is in hcd_dev.urb_list
+ * urb + dev is in hcd.self.controller.urb_list
* we're queueing TDs onto software and hardware lists
*
* hcd-specific init for hcpriv hasn't been done yet
u16 temp;
desc->bDescriptorType = 0x29;
- desc->bPwrOn2PwrGood = 10; /* FIXME: f(system power) */
+ desc->bPwrOn2PwrGood = 10; /* ehci 1.0, 2.3.9 says 20ms max */
desc->bHubContrCurrent = 0;
desc->bNbrPorts = ports;
* There's basically three types of memory:
* - data used only by the HCD ... kmalloc is fine
* - async and periodic schedules, shared by HC and HCD ... these
- * need to use pci_pool or pci_alloc_consistent
+ * need to use dma_pool or dma_alloc_coherent
* - driver buffers, read/written by HC ... single shot DMA mapped
*
* There's also PCI "register" data, which is memory mapped.
struct ehci_qtd *qtd;
dma_addr_t dma;
- qtd = pci_pool_alloc (ehci->qtd_pool, flags, &dma);
+ qtd = dma_pool_alloc (ehci->qtd_pool, flags, &dma);
if (qtd != 0) {
ehci_qtd_init (qtd, dma);
}
static inline void ehci_qtd_free (struct ehci_hcd *ehci, struct ehci_qtd *qtd)
{
- pci_pool_free (ehci->qtd_pool, qtd, qtd->qtd_dma);
+ dma_pool_free (ehci->qtd_pool, qtd, qtd->qtd_dma);
}
dma_addr_t dma;
qh = (struct ehci_qh *)
- pci_pool_alloc (ehci->qh_pool, flags, &dma);
+ dma_pool_alloc (ehci->qh_pool, flags, &dma);
if (!qh)
return qh;
qh->dummy = ehci_qtd_alloc (ehci, flags);
if (qh->dummy == 0) {
ehci_dbg (ehci, "no dummy td\n");
- pci_pool_free (ehci->qh_pool, qh, qh->qh_dma);
+ dma_pool_free (ehci->qh_pool, qh, qh->qh_dma);
qh = 0;
}
return qh;
if (qh->dummy)
ehci_qtd_free (ehci, qh->dummy);
usb_put_dev (qh->dev);
- pci_pool_free (ehci->qh_pool, qh, qh->qh_dma);
+ dma_pool_free (ehci->qh_pool, qh, qh->qh_dma);
}
/*-------------------------------------------------------------------------*/
qh_put (ehci, ehci->async);
ehci->async = 0;
- /* PCI consistent memory and pools */
+ /* DMA consistent memory and pools */
if (ehci->qtd_pool)
- pci_pool_destroy (ehci->qtd_pool);
+ dma_pool_destroy (ehci->qtd_pool);
ehci->qtd_pool = 0;
if (ehci->qh_pool) {
- pci_pool_destroy (ehci->qh_pool);
+ dma_pool_destroy (ehci->qh_pool);
ehci->qh_pool = 0;
}
if (ehci->itd_pool)
- pci_pool_destroy (ehci->itd_pool);
+ dma_pool_destroy (ehci->itd_pool);
ehci->itd_pool = 0;
if (ehci->sitd_pool)
- pci_pool_destroy (ehci->sitd_pool);
+ dma_pool_destroy (ehci->sitd_pool);
ehci->sitd_pool = 0;
if (ehci->periodic)
- pci_free_consistent (ehci->hcd.pdev,
+ dma_free_coherent (ehci->hcd.self.controller,
ehci->periodic_size * sizeof (u32),
ehci->periodic, ehci->periodic_dma);
ehci->periodic = 0;
int i;
/* QTDs for control/bulk/intr transfers */
- ehci->qtd_pool = pci_pool_create ("ehci_qtd", ehci->hcd.pdev,
+ ehci->qtd_pool = dma_pool_create ("ehci_qtd",
+ ehci->hcd.self.controller,
sizeof (struct ehci_qtd),
32 /* byte alignment (for hw parts) */,
4096 /* can't cross 4K */);
}
/* QHs for control/bulk/intr transfers */
- ehci->qh_pool = pci_pool_create ("ehci_qh", ehci->hcd.pdev,
+ ehci->qh_pool = dma_pool_create ("ehci_qh",
+ ehci->hcd.self.controller,
sizeof (struct ehci_qh),
32 /* byte alignment (for hw parts) */,
4096 /* can't cross 4K */);
}
/* ITD for high speed ISO transfers */
- ehci->itd_pool = pci_pool_create ("ehci_itd", ehci->hcd.pdev,
+ ehci->itd_pool = dma_pool_create ("ehci_itd",
+ ehci->hcd.self.controller,
sizeof (struct ehci_itd),
32 /* byte alignment (for hw parts) */,
4096 /* can't cross 4K */);
}
/* SITD for full/low speed split ISO transfers */
- ehci->sitd_pool = pci_pool_create ("ehci_sitd", ehci->hcd.pdev,
+ ehci->sitd_pool = dma_pool_create ("ehci_sitd",
+ ehci->hcd.self.controller,
sizeof (struct ehci_sitd),
32 /* byte alignment (for hw parts) */,
4096 /* can't cross 4K */);
/* Hardware periodic table */
ehci->periodic = (u32 *)
- pci_alloc_consistent (ehci->hcd.pdev,
+ dma_alloc_coherent (ehci->hcd.self.controller,
ehci->periodic_size * sizeof (u32),
- &ehci->periodic_dma);
+ &ehci->periodic_dma, 0);
if (ehci->periodic == 0) {
goto fail;
}
qh = (struct ehci_qh *) *ptr;
if (unlikely (qh == 0)) {
/* can't sleep here, we have ehci->lock... */
- qh = qh_make (ehci, urb, SLAB_ATOMIC);
+ qh = qh_make (ehci, urb, GFP_ATOMIC);
*ptr = qh;
}
if (likely (qh != 0)) {
/* ... or C-mask? */
if (q->qh->hw_info2 & cpu_to_le32 (1 << (8 + uframe)))
usecs += q->qh->c_usecs;
+ hw_p = &q->qh->hw_next;
q = &q->qh->qh_next;
break;
case Q_TYPE_FSTN:
* bandwidth from the previous frame
*/
if (q->fstn->hw_prev != EHCI_LIST_END) {
- dbg ("not counting FSTN bandwidth yet ...");
+ ehci_dbg (ehci, "ignoring FSTN cost ...\n");
}
+ hw_p = &q->fstn->hw_next;
q = &q->fstn->fstn_next;
break;
case Q_TYPE_ITD:
usecs += q->itd->usecs [uframe];
+ hw_p = &q->itd->hw_next;
q = &q->itd->itd_next;
break;
#ifdef have_split_iso
case Q_TYPE_SITD:
- temp = q->sitd->hw_fullspeed_ep &
- __constant_cpu_to_le32 (1 << 31);
-
- // FIXME: this doesn't count data bytes right...
-
/* is it in the S-mask? (count SPLIT, DATA) */
if (q->sitd->hw_uframe & cpu_to_le32 (1 << uframe)) {
- if (temp)
- usecs += HS_USECS (188);
- else
- usecs += HS_USECS (1);
+ if (q->sitd->hw_fullspeed_ep &
+ __constant_cpu_to_le32 (1<<31))
+ usecs += q->sitd->stream->usecs;
+ else /* worst case for OUT start-split */
+ usecs += HS_USECS_ISO (188);
}
/* ... C-mask? (count CSPLIT, DATA) */
if (q->sitd->hw_uframe &
cpu_to_le32 (1 << (8 + uframe))) {
- if (temp)
- usecs += HS_USECS (0);
- else
- usecs += HS_USECS (188);
+ /* worst case for IN complete-split */
+ usecs += q->sitd->stream->c_usecs;
}
+
+ hw_p = &q->sitd->hw_next;
q = &q->sitd->sitd_next;
break;
#endif /* have_split_iso */
/*-------------------------------------------------------------------------*/
+static int same_tt (struct usb_device *dev1, struct usb_device *dev2)
+{
+ if (!dev1->tt || !dev2->tt)
+ return 0;
+ if (dev1->tt != dev2->tt)
+ return 0;
+ if (dev1->tt->multi)
+ return dev1->ttport == dev2->ttport;
+ else
+ return 1;
+}
+
+/* return true iff the device's transaction translator is available
+ * for a periodic transfer starting at the specified frame, using
+ * all the uframes in the mask.
+ */
+static int tt_no_collision (
+ struct ehci_hcd *ehci,
+ unsigned period,
+ struct usb_device *dev,
+ unsigned frame,
+ u32 uf_mask
+)
+{
+ if (period == 0) /* error */
+ return 0;
+
+ /* note bandwidth wastage: split never follows csplit
+ * (different dev or endpoint) until the next uframe.
+ * calling convention doesn't make that distinction.
+ */
+ for (; frame < ehci->periodic_size; frame += period) {
+ union ehci_shadow here;
+ u32 type;
+
+ here = ehci->pshadow [frame];
+ type = Q_NEXT_TYPE (ehci->periodic [frame]);
+ while (here.ptr) {
+ switch (type) {
+ case Q_TYPE_ITD:
+ type = Q_NEXT_TYPE (here.itd->hw_next);
+ here = here.itd->itd_next;
+ continue;
+ case Q_TYPE_QH:
+ if (same_tt (dev, here.qh->dev)) {
+ u32 mask;
+
+ mask = le32_to_cpu (here.qh->hw_info2);
+ /* "knows" no gap is needed */
+ mask |= mask >> 8;
+ if (mask & uf_mask)
+ break;
+ }
+ type = Q_NEXT_TYPE (here.qh->hw_next);
+ here = here.qh->qh_next;
+ continue;
+ case Q_TYPE_SITD:
+ if (same_tt (dev, here.sitd->urb->dev)) {
+ u16 mask;
+
+ mask = le32_to_cpu (here.sitd->hw_uframe);
+ /* FIXME assumes no gap for IN! */
+ mask |= mask >> 8;
+ if (mask & uf_mask)
+ break;
+ }
+ type = Q_NEXT_TYPE (here.sitd->hw_next);
+ here = here.sitd->sitd_next;
+ continue;
+ // case Q_TYPE_FSTN:
+ default:
+ ehci_dbg (ehci,
+ "periodic frame %d bogus type %d\n",
+ frame, type);
+ }
+
+ /* collision or error */
+ return 0;
+ }
+ }
+
+ /* no collision */
+ return 1;
+}
+
+/*-------------------------------------------------------------------------*/
+
static int enable_periodic (struct ehci_hcd *ehci)
{
u32 cmd;
return status;
}
-static unsigned
-intr_complete (
- struct ehci_hcd *ehci,
- unsigned frame,
- struct ehci_qh *qh,
- struct pt_regs *regs
-) {
- unsigned count;
-
- /* nothing to report? */
- if (likely ((qh->hw_token & __constant_cpu_to_le32 (QTD_STS_ACTIVE))
- != 0))
- return 0;
- if (unlikely (list_empty (&qh->qtd_list))) {
- dbg ("intr qh %p no TDs?", qh);
- return 0;
- }
-
- /* handle any completions */
- count = qh_completions (ehci, qh, regs);
-
- if (unlikely (list_empty (&qh->qtd_list)))
- intr_deschedule (ehci, qh, 0);
-
- return count;
-}
-
/*-------------------------------------------------------------------------*/
-static inline struct ehci_iso_stream *
+/* ehci_iso_stream ops work with both ITD and SITD */
+
+static struct ehci_iso_stream *
iso_stream_alloc (int mem_flags)
{
struct ehci_iso_stream *stream;
stream = kmalloc(sizeof *stream, mem_flags);
if (likely (stream != 0)) {
memset (stream, 0, sizeof(*stream));
- INIT_LIST_HEAD(&stream->itd_list);
- INIT_LIST_HEAD(&stream->free_itd_list);
+ INIT_LIST_HEAD(&stream->td_list);
+ INIT_LIST_HEAD(&stream->free_list);
stream->next_uframe = -1;
stream->refcount = 1;
}
return stream;
}
-static inline void
+static void
iso_stream_init (
struct ehci_iso_stream *stream,
struct usb_device *dev,
unsigned interval
)
{
+ static const u8 smask_out [] = { 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f };
+
u32 buf1;
- unsigned epnum, maxp, multi;
+ unsigned epnum, maxp;
int is_input;
long bandwidth;
buf1 = 0;
}
- multi = hb_mult(maxp);
- maxp = max_packet(maxp);
- buf1 |= maxp;
- maxp *= multi;
+ /* knows about ITD vs SITD */
+ if (dev->speed == USB_SPEED_HIGH) {
+ unsigned multi = hb_mult(maxp);
- stream->dev = (struct hcd_dev *)dev->hcpriv;
+ stream->highspeed = 1;
- stream->bEndpointAddress = is_input | epnum;
- stream->interval = interval;
- stream->maxp = maxp;
+ maxp = max_packet(maxp);
+ buf1 |= maxp;
+ maxp *= multi;
- stream->buf0 = cpu_to_le32 ((epnum << 8) | dev->devnum);
- stream->buf1 = cpu_to_le32 (buf1);
- stream->buf2 = cpu_to_le32 (multi);
+ stream->buf0 = cpu_to_le32 ((epnum << 8) | dev->devnum);
+ stream->buf1 = cpu_to_le32 (buf1);
+ stream->buf2 = cpu_to_le32 (multi);
- /* usbfs wants to report the average usecs per frame tied up
- * when transfers on this endpoint are scheduled ...
- */
- stream->usecs = HS_USECS_ISO (maxp);
- bandwidth = stream->usecs * 8;
- bandwidth /= 1 << (interval - 1);
+ /* usbfs wants to report the average usecs per frame tied up
+ * when transfers on this endpoint are scheduled ...
+ */
+ stream->usecs = HS_USECS_ISO (maxp);
+ bandwidth = stream->usecs * 8;
+ bandwidth /= 1 << (interval - 1);
+
+ } else {
+ u32 addr;
+
+ addr = dev->ttport << 24;
+ addr |= dev->tt->hub->devnum << 16;
+ addr |= epnum << 8;
+ addr |= dev->devnum;
+ stream->usecs = HS_USECS_ISO (maxp);
+ if (is_input) {
+ u32 tmp;
+
+ addr |= 1 << 31;
+ stream->c_usecs = stream->usecs;
+ stream->usecs = HS_USECS_ISO (1);
+ stream->raw_mask = 1;
+
+ /* pessimistic c-mask */
+ tmp = usb_calc_bus_time (USB_SPEED_FULL, 1, 0, maxp)
+ / (125 * 1000);
+ stream->raw_mask |= 3 << (tmp + 9);
+ } else
+ stream->raw_mask = smask_out [maxp / 188];
+ bandwidth = stream->usecs + stream->c_usecs;
+ bandwidth /= 1 << (interval + 2);
+
+ /* stream->splits gets created from raw_mask later */
+ stream->address = cpu_to_le32 (addr);
+ }
stream->bandwidth = bandwidth;
+
+ stream->udev = dev;
+
+ stream->bEndpointAddress = is_input | epnum;
+ stream->interval = interval;
+ stream->maxp = maxp;
}
static void
* not like a QH -- no persistent state (toggle, halt)
*/
if (stream->refcount == 1) {
- int is_in;
+ int is_in;
+ struct hcd_dev *dev = stream->udev->hcpriv;
- // BUG_ON (!list_empty(&stream->itd_list));
+ // BUG_ON (!list_empty(&stream->td_list));
- while (!list_empty (&stream->free_itd_list)) {
+ while (!list_empty (&stream->free_list)) {
struct ehci_itd *itd;
- itd = list_entry (stream->free_itd_list.next,
+ itd = list_entry (stream->free_list.next,
struct ehci_itd, itd_list);
list_del (&itd->itd_list);
- pci_pool_free (ehci->itd_pool, itd, itd->itd_dma);
+ dma_pool_free (ehci->itd_pool, itd, itd->itd_dma);
}
is_in = (stream->bEndpointAddress & USB_DIR_IN) ? 0x10 : 0;
stream->bEndpointAddress &= 0x0f;
- stream->dev->ep [is_in + stream->bEndpointAddress] = 0;
+ dev->ep [is_in + stream->bEndpointAddress] = 0;
if (stream->rescheduled) {
ehci_info (ehci, "ep%d%s-iso rescheduled "
/*-------------------------------------------------------------------------*/
-static inline struct ehci_itd_sched *
-itd_sched_alloc (unsigned packets, int mem_flags)
+/* ehci_iso_sched ops can be shared, ITD-only, or SITD-only */
+
+static struct ehci_iso_sched *
+iso_sched_alloc (unsigned packets, int mem_flags)
{
- struct ehci_itd_sched *itd_sched;
- int size = sizeof *itd_sched;
-
- size += packets * sizeof (struct ehci_iso_uframe);
- itd_sched = kmalloc (size, mem_flags);
- if (likely (itd_sched != 0)) {
- memset(itd_sched, 0, size);
- INIT_LIST_HEAD (&itd_sched->itd_list);
+ struct ehci_iso_sched *iso_sched;
+ int size = sizeof *iso_sched;
+
+ size += packets * sizeof (struct ehci_iso_packet);
+ iso_sched = kmalloc (size, mem_flags);
+ if (likely (iso_sched != 0)) {
+ memset(iso_sched, 0, size);
+ INIT_LIST_HEAD (&iso_sched->td_list);
}
- return itd_sched;
+ return iso_sched;
}
-static int
+static inline void
itd_sched_init (
- struct ehci_itd_sched *itd_sched,
+ struct ehci_iso_sched *iso_sched,
struct ehci_iso_stream *stream,
struct urb *urb
)
dma_addr_t dma = urb->transfer_dma;
/* how many uframes are needed for these transfers */
- itd_sched->span = urb->number_of_packets * stream->interval;
+ iso_sched->span = urb->number_of_packets * stream->interval;
/* figure out per-uframe itd fields that we'll need later
* when we fit new itds into the schedule.
*/
for (i = 0; i < urb->number_of_packets; i++) {
- struct ehci_iso_uframe *uframe = &itd_sched->packet [i];
+ struct ehci_iso_packet *uframe = &iso_sched->packet [i];
unsigned length;
dma_addr_t buf;
u32 trans;
trans = EHCI_ISOC_ACTIVE;
trans |= buf & 0x0fff;
- if (unlikely ((i + 1) == urb->number_of_packets))
+ if (unlikely (((i + 1) == urb->number_of_packets))
+ && !(urb->transfer_flags & URB_NO_INTERRUPT))
trans |= EHCI_ITD_IOC;
trans |= length << 16;
uframe->transaction = cpu_to_le32 (trans);
if (unlikely ((uframe->bufp != (buf & ~(u64)0x0fff))))
uframe->cross = 1;
}
- return 0;
}
static void
-itd_sched_free (
+iso_sched_free (
struct ehci_iso_stream *stream,
- struct ehci_itd_sched *itd_sched
+ struct ehci_iso_sched *iso_sched
)
{
- list_splice (&itd_sched->itd_list, &stream->free_itd_list);
- kfree (itd_sched);
+ if (!iso_sched)
+ return;
+ // caller must hold ehci->lock!
+ list_splice (&iso_sched->td_list, &stream->free_list);
+ kfree (iso_sched);
}
static int
)
{
struct ehci_itd *itd;
- int status;
dma_addr_t itd_dma;
int i;
unsigned num_itds;
- struct ehci_itd_sched *itd_sched;
+ struct ehci_iso_sched *sched;
- itd_sched = itd_sched_alloc (urb->number_of_packets, mem_flags);
- if (unlikely (itd_sched == 0))
+ sched = iso_sched_alloc (urb->number_of_packets, mem_flags);
+ if (unlikely (sched == 0))
return -ENOMEM;
- status = itd_sched_init (itd_sched, stream, urb);
- if (unlikely (status != 0)) {
- itd_sched_free (stream, itd_sched);
- return status;
- }
+ itd_sched_init (sched, stream, urb);
if (urb->interval < 8)
- num_itds = 1 + (itd_sched->span + 7) / 8;
+ num_itds = 1 + (sched->span + 7) / 8;
else
num_itds = urb->number_of_packets;
/* allocate/init ITDs */
for (i = 0; i < num_itds; i++) {
- /* free_itd_list.next might be cache-hot ... but maybe
+ /* free_list.next might be cache-hot ... but maybe
* the HC caches it too. avoid that issue for now.
*/
/* prefer previously-allocated itds */
- if (likely (!list_empty(&stream->free_itd_list))) {
- itd = list_entry (stream->free_itd_list.prev,
+ if (likely (!list_empty(&stream->free_list))) {
+ itd = list_entry (stream->free_list.prev,
struct ehci_itd, itd_list);
list_del (&itd->itd_list);
itd_dma = itd->itd_dma;
} else
- itd = pci_pool_alloc (ehci->itd_pool, mem_flags,
+ itd = dma_pool_alloc (ehci->itd_pool, mem_flags,
&itd_dma);
if (unlikely (0 == itd)) {
- itd_sched_free (stream, itd_sched);
+ iso_sched_free (stream, sched);
return -ENOMEM;
}
memset (itd, 0, sizeof *itd);
itd->itd_dma = itd_dma;
- list_add (&itd->itd_list, &itd_sched->itd_list);
+ list_add (&itd->itd_list, &sched->td_list);
}
/* temporarily store schedule info in hcpriv */
- urb->hcpriv = itd_sched;
+ urb->hcpriv = sched;
urb->error_count = 0;
return 0;
}
+/*-------------------------------------------------------------------------*/
+
+static inline int
+itd_slot_ok (
+ struct ehci_hcd *ehci,
+ u32 mod,
+ u32 uframe,
+ u32 end,
+ u8 usecs,
+ u32 period
+)
+{
+ do {
+ /* can't commit more than 80% periodic == 100 usec */
+ if (periodic_usecs (ehci, uframe >> 3, uframe & 0x7)
+ > (100 - usecs))
+ return 0;
+
+ /* we know urb->interval is 2^N uframes */
+ uframe += period;
+ uframe %= mod;
+ } while (uframe != end);
+ return 1;
+}
+
+static inline int
+sitd_slot_ok (
+ struct ehci_hcd *ehci,
+ u32 mod,
+ struct ehci_iso_stream *stream,
+ u32 uframe,
+ u32 end,
+ struct ehci_iso_sched *sched,
+ u32 period_uframes
+)
+{
+ u32 mask, tmp;
+ u32 frame, uf;
+
+ mask = stream->raw_mask << (uframe & 7);
+
+ /* for IN, don't wrap CSPLIT into the next frame */
+ if (mask & ~0xffff)
+ return 0;
+
+ /* this multi-pass logic is simple, but performance may
+ * suffer when the schedule data isn't cached.
+ */
+
+ /* check bandwidth */
+ do {
+ u32 max_used;
+
+ frame = uframe >> 3;
+ uf = uframe & 7;
+
+ /* check starts (OUT uses more than one) */
+ max_used = 100 - stream->usecs;
+ for (tmp = stream->raw_mask & 0xff; tmp; tmp >>= 1, uf++) {
+ if (periodic_usecs (ehci, frame, uf) > max_used)
+ return 0;
+ }
+
+ /* for IN, check CSPLIT */
+ if (stream->c_usecs) {
+ max_used = 100 - stream->c_usecs;
+ do {
+ /* tt is busy in the gap before CSPLIT */
+ tmp = 1 << uf;
+ mask |= tmp;
+ tmp <<= 8;
+ if (stream->raw_mask & tmp)
+ break;
+ } while (++uf < 8);
+ if (periodic_usecs (ehci, frame, uf) > max_used)
+ return 0;
+ }
+
+ /* we know urb->interval is 2^N uframes */
+ uframe += period_uframes;
+ uframe %= mod;
+ } while (uframe != end);
+
+ /* tt must be idle for start(s), any gap, and csplit */
+ if (!tt_no_collision (ehci, period_uframes, stream->udev, frame, mask))
+ return 0;
+
+ stream->splits = stream->raw_mask << (uframe & 7);
+ cpu_to_le32s (&stream->splits);
+ return 1;
+}
+
/*
* This scheduler plans almost as far into the future as it has actual
* periodic schedule slots. (Affected by TUNE_FLS, which defaults to
* "as small as possible" to be cache-friendlier.) That limits the size
* transfers you can stream reliably; avoid more than 64 msec per urb.
- * Also avoid queue depths of less than the system's worst irq latency.
+ * Also avoid queue depths of less than ehci's worst irq latency (affected
+ * by the per-urb URB_NO_INTERRUPT hint, the log2_irq_thresh module parameter,
+ * and other factors); or more than about 230 msec total (for portability,
+ * given EHCI_TUNE_FLS and the slop). Or, write a smarter scheduler!
*/
#define SCHEDULE_SLOP 10 /* frames */
static int
-itd_stream_schedule (
+iso_stream_schedule (
struct ehci_hcd *ehci,
struct urb *urb,
struct ehci_iso_stream *stream
)
{
- u32 now, start, end, max;
+ u32 now, start, end, max, period;
int status;
unsigned mod = ehci->periodic_size << 3;
- struct ehci_itd_sched *itd_sched = urb->hcpriv;
+ struct ehci_iso_sched *sched = urb->hcpriv;
- if (unlikely (itd_sched->span > (mod - 8 * SCHEDULE_SLOP))) {
+ if (sched->span > (mod - 8 * SCHEDULE_SLOP)) {
ehci_dbg (ehci, "iso request %p too long\n", urb);
status = -EFBIG;
goto fail;
}
+ if ((stream->depth + sched->span) > mod) {
+ ehci_dbg (ehci, "request %p would overflow (%d+%d>%d)\n",
+ urb, stream->depth, sched->span, mod);
+ status = -EFBIG;
+ goto fail;
+ }
+
now = readl (&ehci->regs->frame_index) % mod;
/* when's the last uframe this urb could start? */
max = now + mod;
- max -= itd_sched->span;
+ max -= sched->span;
max -= 8 * SCHEDULE_SLOP;
/* typical case: reuse current schedule. stream is still active,
* and no gaps from host falling behind (irq delays etc)
*/
- if (likely (!list_empty (&stream->itd_list))) {
-
+ if (likely (!list_empty (&stream->td_list))) {
start = stream->next_uframe;
if (start < now)
start += mod;
if (likely (start < max))
goto ready;
-
- /* two cases:
- * (a) we missed some uframes ... can reschedule
- * (b) trying to overcommit the schedule
- * FIXME (b) should be a hard failure
- */
+ /* else fell behind; try to reschedule */
}
/* need to schedule; when's the next (u)frame we could start?
* jump until after the queue is primed.
*/
start = SCHEDULE_SLOP * 8 + (now & ~0x07);
+ start %= mod;
end = start;
- ehci_vdbg (ehci, "%s schedule from %d (%d..%d), was %d\n",
- __FUNCTION__, now, start, max,
- stream->next_uframe);
-
/* NOTE: assumes URB_ISO_ASAP, to limit complexity/bugs */
- if (likely (max > (start + urb->interval)))
- max = start + urb->interval;
+ period = urb->interval;
+ if (!stream->highspeed)
+ period <<= 3;
+ if (max > (start + period))
+ max = start + period;
/* hack: account for itds already scheduled to this endpoint */
- if (unlikely (list_empty (&stream->itd_list)))
+ if (list_empty (&stream->td_list))
end = max;
/* within [start..max] find a uframe slot with enough bandwidth */
end %= mod;
do {
- unsigned uframe;
- int enough_space = 1;
+ int enough_space;
/* check schedule: enough space? */
- uframe = start;
- do {
- uframe %= mod;
-
- /* can't commit more than 80% periodic == 100 usec */
- if (periodic_usecs (ehci, uframe >> 3, uframe & 0x7)
- > (100 - stream->usecs)) {
- enough_space = 0;
- break;
- }
-
- /* we know urb->interval is 2^N uframes */
- uframe += urb->interval;
- } while (uframe != end);
+ if (stream->highspeed)
+ enough_space = itd_slot_ok (ehci, mod, start, end,
+ stream->usecs, period);
+ else {
+ if ((start % 8) >= 6)
+ continue;
+ enough_space = sitd_slot_ok (ehci, mod, stream,
+ start, end, sched, period);
+ }
/* (re)schedule it here if there's enough bandwidth */
if (enough_space) {
start %= mod;
- if (unlikely (!list_empty (&stream->itd_list))) {
+ if (unlikely (!list_empty (&stream->td_list))) {
/* host fell behind ... maybe irq latencies
* delayed this request queue for too long.
*/
/* no room in the schedule */
ehci_dbg (ehci, "iso %ssched full %p (now %d end %d max %d)\n",
- list_empty (&stream->itd_list) ? "" : "re",
+ list_empty (&stream->td_list) ? "" : "re",
urb, now, end, max);
status = -ENOSPC;
fail:
- itd_sched_free (stream, itd_sched);
+ iso_sched_free (stream, sched);
urb->hcpriv = 0;
return status;
static inline void
itd_patch (
struct ehci_itd *itd,
- struct ehci_itd_sched *itd_sched,
+ struct ehci_iso_sched *iso_sched,
unsigned index,
u16 uframe,
int first
)
{
- struct ehci_iso_uframe *uf = &itd_sched->packet [index];
+ struct ehci_iso_packet *uf = &iso_sched->packet [index];
unsigned pg = itd->pg;
// BUG_ON (pg == 6 && uf->cross);
{
int packet, first = 1;
unsigned next_uframe, uframe, frame;
- struct ehci_itd_sched *itd_sched = urb->hcpriv;
+ struct ehci_iso_sched *iso_sched = urb->hcpriv;
struct ehci_itd *itd;
next_uframe = stream->next_uframe % mod;
- if (unlikely (list_empty(&stream->itd_list))) {
+ if (unlikely (list_empty(&stream->td_list))) {
hcd_to_bus (&ehci->hcd)->bandwidth_allocated
+= stream->bandwidth;
ehci_vdbg (ehci,
for (packet = 0, itd = 0; packet < urb->number_of_packets; ) {
if (itd == 0) {
/* ASSERT: we have all necessary itds */
- // BUG_ON (list_empty (&itd_sched->itd_list));
+ // BUG_ON (list_empty (&iso_sched->td_list));
/* ASSERT: no itds for this endpoint in this uframe */
- itd = list_entry (itd_sched->itd_list.next,
+ itd = list_entry (iso_sched->td_list.next,
struct ehci_itd, itd_list);
- list_move_tail (&itd->itd_list, &stream->itd_list);
+ list_move_tail (&itd->itd_list, &stream->td_list);
itd->stream = iso_stream_get (stream);
itd->urb = usb_get_urb (urb);
first = 1;
frame = next_uframe >> 3;
itd->usecs [uframe] = stream->usecs;
- itd_patch (itd, itd_sched, packet, uframe, first);
+ itd_patch (itd, iso_sched, packet, uframe, first);
first = 0;
next_uframe += stream->interval;
+ stream->depth += stream->interval;
next_uframe %= mod;
packet++;
stream->next_uframe = next_uframe;
/* don't need that schedule data any more */
- itd_sched_free (stream, itd_sched);
+ iso_sched_free (stream, iso_sched);
urb->hcpriv = 0;
if (unlikely (!ehci->periodic_sched++))
t = le32_to_cpup (&itd->hw_transaction [uframe]);
itd->hw_transaction [uframe] = 0;
+ stream->depth -= stream->interval;
/* report transfer status */
if (unlikely (t & ISO_ERRS)) {
usb_put_urb (urb);
itd->urb = 0;
itd->stream = 0;
- list_move (&itd->itd_list, &stream->free_itd_list);
+ list_move (&itd->itd_list, &stream->free_list);
iso_stream_put (ehci, stream);
/* handle completion now? */
return 0;
/* ASSERT: it's really the last itd for this urb
- list_for_each_entry (itd, &stream->itd_list, itd_list)
+ list_for_each_entry (itd, &stream->td_list, itd_list)
BUG_ON (itd->urb == urb);
*/
(void) disable_periodic (ehci);
hcd_to_bus (&ehci->hcd)->bandwidth_isoc_reqs--;
- if (unlikely (list_empty (&stream->itd_list))) {
+ if (unlikely (list_empty (&stream->td_list))) {
hcd_to_bus (&ehci->hcd)->bandwidth_allocated
-= stream->bandwidth;
ehci_vdbg (ehci,
/* schedule ... need to lock */
spin_lock_irqsave (&ehci->lock, flags);
- status = itd_stream_schedule (ehci, urb, stream);
+ status = iso_stream_schedule (ehci, urb, stream);
if (likely (status == 0))
itd_link_urb (ehci, urb, ehci->periodic_size << 3, stream);
spin_unlock_irqrestore (&ehci->lock, flags);
scan_periodic (struct ehci_hcd *ehci, struct pt_regs *regs)
{
unsigned frame, clock, now_uframe, mod;
- unsigned count = 0;
+ unsigned modified;
mod = ehci->periodic_size << 3;
*/
now_uframe = ehci->next_uframe;
if (HCD_IS_RUNNING (ehci->hcd.state))
- clock = readl (&ehci->regs->frame_index) % mod;
+ clock = readl (&ehci->regs->frame_index);
else
clock = now_uframe + mod - 1;
+ clock %= mod;
for (;;) {
union ehci_shadow q, *q_p;
u32 type, *hw_p;
unsigned uframes;
+ /* don't scan past the live uframe */
frame = now_uframe >> 3;
-restart:
- /* scan schedule to _before_ current frame index */
- if ((frame == (clock >> 3))
- && HCD_IS_RUNNING (ehci->hcd.state))
+ if (frame == (clock >> 3))
uframes = now_uframe & 0x07;
- else
+ else {
+ /* safe to scan the whole frame at once */
+ now_uframe |= 0x07;
uframes = 8;
+ }
+restart:
+ /* scan each element in frame's queue for completions */
q_p = &ehci->pshadow [frame];
hw_p = &ehci->periodic [frame];
q.ptr = q_p->ptr;
type = Q_NEXT_TYPE (*hw_p);
+ modified = 0;
- /* scan each element in frame's queue for completions */
while (q.ptr != 0) {
- int last;
unsigned uf;
union ehci_shadow temp;
switch (type) {
case Q_TYPE_QH:
- last = (q.qh->hw_next == EHCI_LIST_END);
- temp = q.qh->qh_next;
+ /* handle any completions */
+ temp.qh = qh_get (q.qh);
type = Q_NEXT_TYPE (q.qh->hw_next);
- count += intr_complete (ehci, frame,
- qh_get (q.qh), regs);
- qh_put (ehci, q.qh);
- q = temp;
+ q = q.qh->qh_next;
+ modified = qh_completions (ehci, temp.qh, regs);
+ if (unlikely (list_empty (&temp.qh->qtd_list)))
+ intr_deschedule (ehci, temp.qh, 0);
+ qh_put (ehci, temp.qh);
break;
case Q_TYPE_FSTN:
- last = (q.fstn->hw_next == EHCI_LIST_END);
/* for "save place" FSTNs, look at QH entries
* in the previous frame for completions.
*/
q = q.fstn->fstn_next;
break;
case Q_TYPE_ITD:
- last = (q.itd->hw_next == EHCI_LIST_END);
-
/* skip itds for later in the frame */
rmb ();
for (uf = uframes; uf < 8; uf++) {
if (0 == (q.itd->hw_transaction [uf]
- & ISO_ACTIVE))
+ & ITD_ACTIVE))
continue;
q_p = &q.itd->itd_next;
hw_p = &q.itd->hw_next;
*/
*q_p = q.itd->itd_next;
*hw_p = q.itd->hw_next;
+ type = Q_NEXT_TYPE (q.itd->hw_next);
wmb();
-
- /* always rescan here; simpler */
- count += itd_complete (ehci, q.itd, regs);
- goto restart;
+ modified = itd_complete (ehci, q.itd, regs);
+ q = *q_p;
+ break;
#ifdef have_split_iso
case Q_TYPE_SITD:
- last = (q.sitd->hw_next == EHCI_LIST_END);
- sitd_complete (ehci, q.sitd);
+ if (q.sitd->hw_results & SITD_ACTIVE) {
+ q_p = &q.sitd->sitd_next;
+ hw_p = &q.sitd->hw_next;
+ type = Q_NEXT_TYPE (q.sitd->hw_next);
+ q = *q_p;
+ break;
+ }
+ *q_p = q.sitd->sitd_next;
+ *hw_p = q.sitd->hw_next;
type = Q_NEXT_TYPE (q.sitd->hw_next);
-
- // FIXME unlink SITD after split completes
- q = q.sitd->sitd_next;
+ wmb();
+ modified = sitd_complete (ehci, q.sitd, regs);
+ q = *q_p;
break;
#endif /* have_split_iso */
default:
dbg ("corrupt type %d frame %d shadow %p",
type, frame, q.ptr);
// BUG ();
- last = 1;
q.ptr = 0;
}
- /* did completion remove an interior q entry? */
- if (unlikely (q.ptr == 0 && !last))
+ /* assume completion callbacks modify the queue */
+ if (unlikely (modified))
goto restart;
}
/* rescan the rest of this frame, then ... */
clock = now;
} else {
- /* FIXME sometimes we can scan the next frame
- * right away, not always inching up on it ...
- */
now_uframe++;
now_uframe %= mod;
}
struct ehci_regs *regs;
u32 hcs_params; /* cached register copy */
- /* per-HC memory pools (could be per-PCI-bus, but ...) */
- struct pci_pool *qh_pool; /* qh per active urb */
- struct pci_pool *qtd_pool; /* one or more per qh */
- struct pci_pool *itd_pool; /* itd per iso urb */
- struct pci_pool *sitd_pool; /* sitd per split iso urb */
+ /* per-HC memory pools (could be per-bus, but ...) */
+ struct dma_pool *qh_pool; /* qh per active urb */
+ struct dma_pool *qtd_pool; /* one or more per qh */
+ struct dma_pool *itd_pool; /* itd per iso urb */
+ struct dma_pool *sitd_pool; /* sitd per split iso urb */
struct timer_list watchdog;
struct notifier_block reboot_notifier;
/*-------------------------------------------------------------------------*/
-/* description of one iso highspeed transaction (up to 3 KB data) */
-struct ehci_iso_uframe {
+/* description of one iso transaction (up to 3 KB data if highspeed) */
+struct ehci_iso_packet {
/* These will be copied to iTD when scheduling */
u64 bufp; /* itd->hw_bufp{,_hi}[pg] |= */
u32 transaction; /* itd->hw_transaction[i] |= */
u8 cross; /* buf crosses pages */
+ /* for full speed OUT splits */
+ u16 buf1;
};
-/* temporary schedule data for highspeed packets from iso urbs
- * each packet is one uframe's usb transactions, in some itd,
+/* temporary schedule data for packets from iso urbs (both speeds)
+ * each packet is one logical usb transaction to the device (not TT),
* beginning at stream->next_uframe
*/
-struct ehci_itd_sched {
- struct list_head itd_list;
+struct ehci_iso_sched {
+ struct list_head td_list;
unsigned span;
- struct ehci_iso_uframe packet [0];
+ struct ehci_iso_packet packet [0];
};
/*
u32 refcount;
u8 bEndpointAddress;
- struct list_head itd_list; /* queued itds */
- struct list_head free_itd_list; /* list of unused itds */
- struct hcd_dev *dev;
+ u8 highspeed;
+ u16 depth; /* depth in uframes */
+ struct list_head td_list; /* queued itds/sitds */
+ struct list_head free_list; /* list of unused itds/sitds */
+ struct usb_device *udev;
/* output of (re)scheduling */
unsigned long start; /* jiffies */
unsigned long rescheduled;
int next_uframe;
+ u32 splits;
/* the rest is derived from the endpoint descriptor,
- * trusting urb->interval == (1 << (epdesc->bInterval - 1)),
+ * trusting urb->interval == f(epdesc->bInterval) and
* including the extra info for hw_bufp[0..2]
*/
u8 interval;
- u8 usecs;
+ u8 usecs, c_usecs;
u16 maxp;
+ u16 raw_mask;
unsigned bandwidth;
/* This is used to initialize iTD's hw_bufp fields */
u32 buf1;
u32 buf2;
- /* ... sITD won't use buf[012], and needs TT access ... */
+ /* this is used to initialize sITD's tt info */
+ u32 address;
};
/*-------------------------------------------------------------------------*/
#define EHCI_ITD_LENGTH(tok) (((tok)>>16) & 0x0fff)
#define EHCI_ITD_IOC (1 << 15) /* interrupt on complete */
-#define ISO_ACTIVE __constant_cpu_to_le32(EHCI_ISOC_ACTIVE)
+#define ITD_ACTIVE __constant_cpu_to_le32(EHCI_ISOC_ACTIVE)
u32 hw_bufp [7]; /* see EHCI 3.3.3 */
u32 hw_bufp_hi [7]; /* Appendix B */
/* first part defined by EHCI spec */
u32 hw_next;
/* uses bit field macros above - see EHCI 0.95 Table 3-8 */
- u32 hw_fullspeed_ep; /* see EHCI table 3-9 */
- u32 hw_uframe; /* see EHCI table 3-10 */
- u32 hw_tx_results1; /* see EHCI table 3-11 */
- u32 hw_tx_results2; /* see EHCI table 3-12 */
- u32 hw_tx_results3; /* see EHCI table 3-12 */
- u32 hw_backpointer; /* see EHCI table 3-13 */
- u32 hw_buf_hi [2]; /* Appendix B */
+ u32 hw_fullspeed_ep; /* see EHCI table 3-9 */
+ u32 hw_uframe; /* see EHCI table 3-10 */
+ u32 hw_results; /* see EHCI table 3-11 */
+#define SITD_IOC (1 << 31) /* interrupt on completion */
+#define SITD_PAGE (1 << 30) /* buffer 0/1 */
+#define SITD_LENGTH(x) (0x3ff & ((x)>>16))
+#define SITD_STS_ACTIVE (1 << 7) /* HC may execute this */
+#define SITD_STS_ERR (1 << 6) /* error from TT */
+#define SITD_STS_DBE (1 << 5) /* data buffer error (in HC) */
+#define SITD_STS_BABBLE (1 << 4) /* device was babbling */
+#define SITD_STS_XACT (1 << 3) /* illegal IN response */
+#define SITD_STS_MMF (1 << 2) /* incomplete split transaction */
+#define SITD_STS_STS (1 << 1) /* split transaction state */
+
+#define SITD_ACTIVE __constant_cpu_to_le32(SITD_STS_ACTIVE)
+
+ u32 hw_buf [2]; /* see EHCI table 3-12 */
+ u32 hw_backpointer; /* see EHCI table 3-13 */
+ u32 hw_buf_hi [2]; /* Appendix B */
/* the rest is HCD-private */
dma_addr_t sitd_dma;
union ehci_shadow sitd_next; /* ptr to periodic q entry */
- struct urb *urb;
- dma_addr_t buf_dma; /* buffer address */
- unsigned short usecs; /* start bandwidth */
- unsigned short c_usecs; /* completion bandwidth */
+ struct urb *urb;
+ struct ehci_iso_stream *stream; /* endpoint's queue */
+ struct list_head sitd_list; /* list of stream's sitds */
+ unsigned frame;
+ unsigned index;
} __attribute__ ((aligned (32)));
/*-------------------------------------------------------------------------*/
do { \
if (next) { \
unsigned s_len; \
- s_len = snprintf (*next, *size, format, ## arg ); \
+ s_len = scnprintf (*next, *size, format, ## arg ); \
*size -= s_len; *next += s_len; \
} else \
ohci_dbg(ohci,format, ## arg ); \
struct list_head *entry;
struct td *td;
- temp = snprintf (buf, size,
+ temp = scnprintf (buf, size,
"ed/%p %cs dev%d ep%d%s max %d %08x%s%s %s",
ed,
(info & ED_LOWSPEED) ? 'l' : 'f',
scratch = cpu_to_le32p (&td->hwINFO);
cbp = le32_to_cpup (&td->hwCBP);
be = le32_to_cpup (&td->hwBE);
- temp = snprintf (buf, size,
+ temp = scnprintf (buf, size,
"\n\ttd %p %s %d cc=%x urb %p (%08x)",
td,
({ char *pid;
buf += temp;
}
- temp = snprintf (buf, size, "\n");
+ temp = scnprintf (buf, size, "\n");
size -= temp;
buf += temp;
next = buf;
size = PAGE_SIZE;
- temp = snprintf (next, size, "size = %d\n", NUM_INTS);
+ temp = scnprintf (next, size, "size = %d\n", NUM_INTS);
size -= temp;
next += temp;
if (!(ed = ohci->periodic [i]))
continue;
- temp = snprintf (next, size, "%2d [%3d]:", i, ohci->load [i]);
+ temp = scnprintf (next, size, "%2d [%3d]:", i, ohci->load [i]);
size -= temp;
next += temp;
do {
- temp = snprintf (next, size, " ed%d/%p",
+ temp = scnprintf (next, size, " ed%d/%p",
ed->interval, ed);
size -= temp;
next += temp;
list_for_each (entry, &ed->td_list)
qlen++;
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
" (%cs dev%d ep%d%s-%s qlen %u"
" max %d %08x%s%s)",
(info & ED_LOWSPEED) ? 'l' : 'f',
} while (ed);
- temp = snprintf (next, size, "\n");
+ temp = scnprintf (next, size, "\n");
size -= temp;
next += temp;
}
/* other registers mostly affect frame timings */
rdata = readl (&regs->fminterval);
- temp = snprintf (next, size,
+ temp = scnprintf (next, size,
"fmintvl 0x%08x %sFSMPS=0x%04x FI=0x%04x\n",
rdata, (rdata >> 31) ? " FIT" : "",
(rdata >> 16) & 0xefff, rdata & 0xffff);
next += temp;
rdata = readl (&regs->fmremaining);
- temp = snprintf (next, size, "fmremaining 0x%08x %sFR=0x%04x\n",
+ temp = scnprintf (next, size, "fmremaining 0x%08x %sFR=0x%04x\n",
rdata, (rdata >> 31) ? " FRT" : "",
rdata & 0x3fff);
size -= temp;
next += temp;
rdata = readl (&regs->periodicstart);
- temp = snprintf (next, size, "periodicstart 0x%04x\n",
+ temp = scnprintf (next, size, "periodicstart 0x%04x\n",
rdata & 0x3fff);
size -= temp;
next += temp;
rdata = readl (&regs->lsthresh);
- temp = snprintf (next, size, "lsthresh 0x%04x\n",
+ temp = scnprintf (next, size, "lsthresh 0x%04x\n",
rdata & 0x3fff);
size -= temp;
next += temp;
*
* History:
*
+ * 2004/02/04 use generic dma_* functions instead of pci_* (dsaxena@plexity.net)
* 2003/02/24 show registers in sysfs (Kevin Brosius)
*
* 2002/09/03 get rid of ed hashtables, rework periodic scheduling and
#include <linux/interrupt.h> /* for in_interrupt () */
#include <linux/usb.h>
#include "../core/hcd.h"
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h> /* needed by ohci-mem.c when no PCI */
#include <asm/io.h>
#include <asm/irq.h>
/* ASSERT: any requests/urbs are being unlinked */
/* ASSERT: nobody can be submitting urbs for this any more */
- if (!HCD_IS_RUNNING (ohci->hcd.state)) {
- ed->state = ED_IDLE;
- finish_unlinks (ohci, 0, 0);
- }
-
epnum <<= 1;
if (epnum != 0 && !(ep & USB_DIR_IN))
epnum |= 1;
disable (ohci);
ohci_err (ohci, "OHCI Unrecoverable Error, disabled\n");
// e.g. due to PCI Master/Target Abort
-#if 1
- if (hcd->pdev) {
- u16 status;
-
- pci_read_config_word(hcd->pdev, PCI_STATUS, &status);
- printk(KERN_ERR "OHCI PCI Status: 0x%04x\n", status);
- }
-#endif
ohci_dump (ohci, 1);
- hc_reset (ohci);
+ hc_reset (ohci);
}
if (ints & OHCI_INTR_WDH) {
remove_debug_files (ohci);
ohci_mem_cleanup (ohci);
if (ohci->hcca) {
- pci_free_consistent (ohci->hcd.pdev, sizeof *ohci->hcca,
- ohci->hcca, ohci->hcca_dma);
+ dma_free_coherent (ohci->hcd.self.controller,
+ sizeof *ohci->hcca,
+ ohci->hcca, ohci->hcca_dma);
ohci->hcca = NULL;
ohci->hcca_dma = 0;
}
struct ohci_hcd *ohci = hcd_to_ohci (hcd);
int ports, i, changed = 0, length = 1;
- if (HCD_IS_SUSPENDED(hcd->state)) {
- printk("ohci_hub_status_data() : sleeping\n");
- return 0;
- }
-
ports = roothub_a (ohci) & RH_A_NDP;
if (ports > MAX_ROOT_PORTS) {
if (!HCD_IS_RUNNING(ohci->hcd.state))
u32 temp;
int retval = 0;
- if (HCD_IS_SUSPENDED(hcd->state)) {
- printk("ohci_hub_control() : sleeping\n");
- return -ENODEV;
- }
-
switch (typeReq) {
case ClearHubFeature:
switch (wValue) {
* There's basically three types of memory:
* - data used only by the HCD ... kmalloc is fine
* - async and periodic schedules, shared by HC and HCD ... these
- * need to use pci_pool or pci_alloc_consistent
+ * need to use dma_pool or dma_alloc_coherent
* - driver buffers, read/written by HC ... the hcd glue or the
* device driver provides us with dma addresses
*
static int ohci_mem_init (struct ohci_hcd *ohci)
{
- ohci->td_cache = pci_pool_create ("ohci_td", ohci->hcd.pdev,
+ ohci->td_cache = dma_pool_create ("ohci_td", ohci->hcd.self.controller,
sizeof (struct td),
32 /* byte alignment */,
0 /* no page-crossing issues */);
if (!ohci->td_cache)
return -ENOMEM;
- ohci->ed_cache = pci_pool_create ("ohci_ed", ohci->hcd.pdev,
+ ohci->ed_cache = dma_pool_create ("ohci_ed", ohci->hcd.self.controller,
sizeof (struct ed),
16 /* byte alignment */,
0 /* no page-crossing issues */);
if (!ohci->ed_cache) {
- pci_pool_destroy (ohci->td_cache);
+ dma_pool_destroy (ohci->td_cache);
return -ENOMEM;
}
return 0;
static void ohci_mem_cleanup (struct ohci_hcd *ohci)
{
if (ohci->td_cache) {
- pci_pool_destroy (ohci->td_cache);
+ dma_pool_destroy (ohci->td_cache);
ohci->td_cache = 0;
}
if (ohci->ed_cache) {
- pci_pool_destroy (ohci->ed_cache);
+ dma_pool_destroy (ohci->ed_cache);
ohci->ed_cache = 0;
}
}
dma_addr_t dma;
struct td *td;
- td = pci_pool_alloc (hc->td_cache, mem_flags, &dma);
+ td = dma_pool_alloc (hc->td_cache, mem_flags, &dma);
if (td) {
/* in case hc fetches it, make it look dead */
memset (td, 0, sizeof *td);
*prev = td->td_hash;
else if ((td->hwINFO & TD_DONE) != 0)
ohci_dbg (hc, "no hash for td %p\n", td);
- pci_pool_free (hc->td_cache, td, td->td_dma);
+ dma_pool_free (hc->td_cache, td, td->td_dma);
}
/*-------------------------------------------------------------------------*/
dma_addr_t dma;
struct ed *ed;
- ed = pci_pool_alloc (hc->ed_cache, mem_flags, &dma);
+ ed = dma_pool_alloc (hc->ed_cache, mem_flags, &dma);
if (ed) {
memset (ed, 0, sizeof (*ed));
INIT_LIST_HEAD (&ed->td_list);
static void
ed_free (struct ohci_hcd *hc, struct ed *ed)
{
- pci_pool_free (hc->ed_cache, ed, ed->dma);
+ dma_pool_free (hc->ed_cache, ed, ed->dma);
}
hcd->description = driver->description;
hcd->irq = dev->irq[0];
hcd->regs = dev->mapbase;
- hcd->pdev = OMAP_FAKE_PCIDEV;
hcd->self.controller = &dev->dev;
- hcd->controller = hcd->self.controller;
retval = hcd_buffer_create (hcd);
if (retval != 0) {
struct ohci_hcd *ohci = hcd_to_ohci (hcd);
int ret;
- if (hcd->pdev) {
- ohci->hcca = pci_alloc_consistent (hcd->pdev,
- sizeof *ohci->hcca, &ohci->hcca_dma);
- if (!ohci->hcca)
- return -ENOMEM;
- }
+ ohci->hcca = dma_alloc_coherent (hcd->self.controller,
+ sizeof *ohci->hcca, &ohci->hcca_dma, 0);
+ if (!ohci->hcca)
+ return -ENOMEM;
memset (ohci->hcca, 0, sizeof (struct ohci_hcca));
if ((ret = ohci_mem_init (ohci)) < 0) {
*
* This file is licenced under the GPL.
*/
+
+#ifdef CONFIG_PMAC_PBOOK
+#include <asm/machdep.h>
+#include <asm/pmac_feature.h>
+#include <asm/pci-bridge.h>
+#include <asm/prom.h>
+#ifndef CONFIG_PM
+# define CONFIG_PM
+#endif
+#endif
#ifndef CONFIG_PCI
#error "This file is PCI bus glue. CONFIG_PCI must be defined."
struct ohci_hcd *ohci = hcd_to_ohci (hcd);
int ret;
- if (hcd->pdev) {
-#if 1
- u16 status;
-
- pci_read_config_word(hcd->pdev, PCI_STATUS, &status);
- printk(KERN_ERR "OHCI PCI Status: 0x%04x\n", status);
- if (status & 0xf900) {
- printk(KERN_ERR "Initial error ! clearing ...\n");
- pci_write_config_word(hcd->pdev, PCI_STATUS, status);
- }
-#endif
+ ohci->hcca = dma_alloc_coherent (hcd->self.controller,
+ sizeof *ohci->hcca, &ohci->hcca_dma, 0);
+ if (!ohci->hcca)
+ return -ENOMEM;
- ohci->hcca = pci_alloc_consistent (hcd->pdev,
- sizeof *ohci->hcca, &ohci->hcca_dma);
- if (!ohci->hcca)
- return -ENOMEM;
+ if (hcd->self.controller && hcd->self.controller->bus == &pci_bus_type) {
+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
/* AMD 756, for most chips (early revs), corrupts register
* values on read ... so enable the vendor workaround.
*/
- if (hcd->pdev->vendor == PCI_VENDOR_ID_AMD
- && hcd->pdev->device == 0x740c) {
+ if (pdev->vendor == PCI_VENDOR_ID_AMD
+ && pdev->device == 0x740c) {
ohci->flags = OHCI_QUIRK_AMD756;
ohci_info (ohci, "AMD756 erratum 4 workaround\n");
}
* for this chip. Evidently control and bulk lists
* can get confused. (B&W G3 models, and ...)
*/
- else if (hcd->pdev->vendor == PCI_VENDOR_ID_OPTI
- && hcd->pdev->device == 0xc861) {
+ else if (pdev->vendor == PCI_VENDOR_ID_OPTI
+ && pdev->device == 0xc861) {
ohci_info (ohci,
"WARNING: OPTi workarounds unavailable\n");
}
* identify the USB (fn2). This quirk might apply to more or
* even all NSC stuff.
*/
- else if (hcd->pdev->vendor == PCI_VENDOR_ID_NS) {
- struct pci_dev *b, *hc;
+ else if (pdev->vendor == PCI_VENDOR_ID_NS) {
+ struct pci_dev *b;
- hc = hcd->pdev;
- b = pci_find_slot (hc->bus->number,
- PCI_DEVFN (PCI_SLOT (hc->devfn), 1));
+ b = pci_find_slot (pdev->bus->number,
+ PCI_DEVFN (PCI_SLOT (pdev->devfn), 1));
if (b && b->device == PCI_DEVICE_ID_NS_87560_LIO
&& b->vendor == PCI_VENDOR_ID_NS) {
ohci->flags |= OHCI_QUIRK_SUPERIO;
mdelay (1);
if (!(readl (&ohci->regs->intrstatus) & OHCI_INTR_SF))
mdelay (1);
+
+#ifdef CONFIG_PMAC_PBOOK
+ if (_machine == _MACH_Pmac)
+ disable_irq ((to_pci_dev(hcd->self.controller))->irq);
+ /* else, 2.4 assumes shared irqs -- don't disable */
+#endif
/* Enable remote wakeup */
writel (readl (&ohci->regs->intrenable) | OHCI_INTR_RD,
* memory during sleep. We disable its bus master bit during
* suspend
*/
- pci_read_config_word (hcd->pdev, PCI_COMMAND, &cmd);
+ pci_read_config_word (to_pci_dev(hcd->self.controller), PCI_COMMAND,
+ &cmd);
cmd &= ~PCI_COMMAND_MASTER;
- pci_write_config_word (hcd->pdev, PCI_COMMAND, cmd);
-
+ pci_write_config_word (to_pci_dev(hcd->self.controller), PCI_COMMAND,
+ cmd);
+#ifdef CONFIG_PMAC_PBOOK
+ {
+ struct device_node *of_node;
+
+ /* Disable USB PAD & cell clock */
+ of_node = pci_device_to_OF_node (to_pci_dev(hcd->self.controller));
+ if (of_node)
+ pmac_call_feature(PMAC_FTR_USB_ENABLE, of_node, 0, 0);
+ }
+#endif
return 0;
}
int temp;
int retval = 0;
+#ifdef CONFIG_PMAC_PBOOK
+ {
+ struct device_node *of_node;
+
+ /* Re-enable USB PAD & cell clock */
+ of_node = pci_device_to_OF_node (to_pci_dev(hcd->self.controller));
+ if (of_node)
+ pmac_call_feature (PMAC_FTR_USB_ENABLE, of_node, 0, 1);
+ }
+#endif
/* did we suspend, or were we powered off? */
ohci->hc_control = readl (&ohci->regs->control);
temp = ohci->hc_control & OHCI_CTRL_HCFS;
#endif
/* Re-enable bus mastering */
- pci_set_master (ohci->hcd.pdev);
+ pci_set_master (to_pci_dev(ohci->hcd.self.controller));
switch (temp) {
(void) readl (&ohci->regs->intrdisable);
spin_unlock_irq (&ohci->lock);
+#ifdef CONFIG_PMAC_PBOOK
+ if (_machine == _MACH_Pmac)
+ enable_irq (to_pci_dev(hcd->self.controller)->irq);
+#endif
/* Check for a pending done list */
if (ohci->hcca->done_head)
if (!(ed = dev->ep [ep])) {
struct td *td;
- ed = ed_alloc (ohci, SLAB_ATOMIC);
+ ed = ed_alloc (ohci, GFP_ATOMIC);
if (!ed) {
/* out of memory */
goto done;
dev->ep [ep] = ed;
/* dummy td; end of td list for ed */
- td = td_alloc (ohci, SLAB_ATOMIC);
+ td = td_alloc (ohci, GFP_ATOMIC);
if (!td) {
/* out of memory */
ed_free (ohci, ed);
hcd->description = driver->description;
hcd->irq = dev->irq[1];
hcd->regs = dev->mapbase;
- hcd->pdev = SA1111_FAKE_PCIDEV;
hcd->self.controller = &dev->dev;
- hcd->controller = hcd->self.controller;
retval = hcd_buffer_create (hcd);
if (retval != 0) {
struct ohci_hcd *ohci = hcd_to_ohci (hcd);
int ret;
- if (hcd->pdev) {
- ohci->hcca = pci_alloc_consistent (hcd->pdev,
- sizeof *ohci->hcca, &ohci->hcca_dma);
- if (!ohci->hcca)
- return -ENOMEM;
- }
-
- memset (ohci->hcca, 0, sizeof (struct ohci_hcca));
+ ohci->hcca = dma_alloc_coherent (hcd->self.controller,
+ sizeof *ohci->hcca, &ohci->hcca_dma, 0);
+ if (!ohci->hcca)
+ return -ENOMEM;
+
+ memset (ohci->hcca, 0, sizeof (struct ohci_hcca));
if ((ret = ohci_mem_init (ohci)) < 0) {
ohci_stop (hcd);
return ret;
/*
* memory management for queue data structures
*/
- struct pci_pool *td_cache;
- struct pci_pool *ed_cache;
+ struct dma_pool *td_cache;
+ struct dma_pool *ed_cache;
struct td *td_hash [TD_HASH_SIZE];
/*
#endif /* DEBUG */
#define ohci_dbg(ohci, fmt, args...) \
- dev_dbg ((ohci)->hcd.controller , fmt , ## args )
+ dev_dbg ((ohci)->hcd.self.controller , fmt , ## args )
#define ohci_err(ohci, fmt, args...) \
- dev_err ((ohci)->hcd.controller , fmt , ## args )
+ dev_err ((ohci)->hcd.self.controller , fmt , ## args )
#define ohci_info(ohci, fmt, args...) \
- dev_info ((ohci)->hcd.controller , fmt , ## args )
+ dev_info ((ohci)->hcd.self.controller , fmt , ## args )
#define ohci_warn(ohci, fmt, args...) \
- dev_warn ((ohci)->hcd.controller , fmt , ## args )
+ dev_warn ((ohci)->hcd.self.controller , fmt , ## args )
#ifdef OHCI_VERBOSE_DEBUG
# define ohci_vdbg ohci_dbg
char *out = buf;
/* Try to make sure there's enough memory */
- if (len < 80)
+ if (len < 160)
return 0;
- out += sprintf(out, " stat%d = %04x %s%s%s%s%s%s%s%s\n",
+ out += sprintf(out, " stat%d = %04x %s%s%s%s%s%s%s%s%s%s\n",
port,
status,
- (status & USBPORTSC_SUSP) ? "PortSuspend " : "",
- (status & USBPORTSC_PR) ? "PortReset " : "",
- (status & USBPORTSC_LSDA) ? "LowSpeed " : "",
- (status & USBPORTSC_RD) ? "ResumeDetect " : "",
- (status & USBPORTSC_PEC) ? "EnableChange " : "",
- (status & USBPORTSC_PE) ? "PortEnabled " : "",
- (status & USBPORTSC_CSC) ? "ConnectChange " : "",
- (status & USBPORTSC_CCS) ? "PortConnected " : "");
+ (status & USBPORTSC_SUSP) ? " Suspend" : "",
+ (status & USBPORTSC_OCC) ? " OverCurrentChange" : "",
+ (status & USBPORTSC_OC) ? " OverCurrent" : "",
+ (status & USBPORTSC_PR) ? " Reset" : "",
+ (status & USBPORTSC_LSDA) ? " LowSpeed" : "",
+ (status & USBPORTSC_RD) ? " ResumeDetect" : "",
+ (status & USBPORTSC_PEC) ? " EnableChange" : "",
+ (status & USBPORTSC_PE) ? " Enabled" : "",
+ (status & USBPORTSC_CSC) ? " ConnectChange" : "",
+ (status & USBPORTSC_CCS) ? " Connected" : "");
return out - buf;
}
out += sprintf(out, "%s", (urbp->fsbr ? "FSBR " : ""));
out += sprintf(out, "%s", (urbp->fsbr_timeout ? "FSBR_TO " : ""));
- if (urbp->status != -EINPROGRESS)
- out += sprintf(out, "Status=%d ", urbp->status);
+ if (urbp->urb->status != -EINPROGRESS)
+ out += sprintf(out, "Status=%d ", urbp->urb->status);
//out += sprintf(out, "Inserttime=%lx ",urbp->inserttime);
//out += sprintf(out, "FSBRtime=%lx ",urbp->fsbrtime);
head = &uhci->complete_list;
tmp = head->next;
while (tmp != head) {
- struct urb_priv *urbp = list_entry(tmp, struct urb_priv, complete_list);
+ struct urb_priv *urbp = list_entry(tmp, struct urb_priv, urb_list);
out += sprintf(out, " %d: ", ++count);
out += uhci_show_urbp(uhci, urbp, out, len - (out - buf));
{
unsigned long flags;
char *out = buf;
- int i;
+ int i, j;
struct uhci_qh *qh;
struct uhci_td *td;
struct list_head *tmp, *head;
continue;
}
+ j = (i < 7) ? 7 : i+1; /* Next skeleton */
if (list_empty(&qh->list)) {
if (i < UHCI_NUM_SKELQH - 1) {
if (qh->link !=
- (cpu_to_le32(uhci->skelqh[i + 1]->dma_handle) | UHCI_PTR_QH)) {
+ (cpu_to_le32(uhci->skelqh[j]->dma_handle) | UHCI_PTR_QH)) {
show_qh_name();
out += sprintf(out, " skeleton QH not linked to next skeleton QH!\n");
}
if (i < UHCI_NUM_SKELQH - 1) {
if (qh->link !=
- (cpu_to_le32(uhci->skelqh[i + 1]->dma_handle) | UHCI_PTR_QH))
+ (cpu_to_le32(uhci->skelqh[j]->dma_handle) | UHCI_PTR_QH))
out += sprintf(out, " last QH not linked to next skeleton!\n");
}
}
* (C) Copyright 2000 Yggdrasil Computing, Inc. (port of new PCI interface
* support from usb-ohci.c by Adam Richter, adam@yggdrasil.com).
* (C) Copyright 1999 Gregory P. Smith (from usb-ohci.c)
+ * (C) Copyright 2004 Alan Stern, stern@rowland.harvard.edu
*
* Intel documents this fairly well, and as far as I know there
* are no royalties or anything like that, but even so there are
*/
#include <linux/config.h>
+#ifdef CONFIG_USB_DEBUG
+#define DEBUG
+#else
+#undef DEBUG
+#endif
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/proc_fs.h>
-#ifdef CONFIG_USB_DEBUG
-#define DEBUG
-#else
-#undef DEBUG
-#endif
+#include <linux/pm.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
#include <linux/usb.h>
+#include <asm/bitops.h>
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/irq.h>
#include "../core/hcd.h"
#include "uhci-hcd.h"
-#include <linux/pm.h>
-
/*
* Version Information
*/
-#define DRIVER_VERSION "v2.1"
-#define DRIVER_AUTHOR "Linus 'Frodo Rabbit' Torvalds, Johannes Erdfelt, Randy Dunlap, Georg Acher, Deti Fliegl, Thomas Sailer, Roman Weissgaerber"
+#define DRIVER_VERSION "v2.2"
+#define DRIVER_AUTHOR "Linus 'Frodo Rabbit' Torvalds, Johannes Erdfelt, \
+Randy Dunlap, Georg Acher, Deti Fliegl, Thomas Sailer, Roman Weissgaerber, \
+Alan Stern"
#define DRIVER_DESC "USB Universal Host Controller Interface driver"
/*
MODULE_PARM(debug, "i");
MODULE_PARM_DESC(debug, "Debug level");
static char *errbuf;
-#define ERRBUF_LEN (PAGE_SIZE * 8)
+#define ERRBUF_LEN (32 * 1024)
#include "uhci-hub.c"
#include "uhci-debug.c"
static inline void uhci_clear_next_interrupt(struct uhci_hcd *uhci)
{
- unsigned long flags;
-
- spin_lock_irqsave(&uhci->frame_list_lock, flags);
+ spin_lock(&uhci->frame_list_lock);
uhci->term_td->status &= ~cpu_to_le32(TD_CTRL_IOC);
- spin_unlock_irqrestore(&uhci->frame_list_lock, flags);
+ spin_unlock(&uhci->frame_list_lock);
}
-static inline void uhci_add_complete(struct uhci_hcd *uhci, struct urb *urb)
+static inline void uhci_moveto_complete(struct uhci_hcd *uhci,
+ struct urb_priv *urbp)
{
- struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
- unsigned long flags;
-
- spin_lock_irqsave(&uhci->complete_list_lock, flags);
- list_add_tail(&urbp->complete_list, &uhci->complete_list);
- spin_unlock_irqrestore(&uhci->complete_list_lock, flags);
+ spin_lock(&uhci->complete_list_lock);
+ list_move_tail(&urbp->urb_list, &uhci->complete_list);
+ spin_unlock(&uhci->complete_list_lock);
}
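The new uhci_moveto_complete() above collapses the old list_del_init() + list_add_tail() pair into a single list_move_tail() call under one lock. As a rough standalone sketch (a re-implementation of the circular-list idea, not the kernel's actual list.h), the primitive amounts to:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal circular doubly-linked list. list_move_tail() is just
 * "unlink from wherever it is, append to the other list" in one step. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

static void list_move_tail(struct list_head *e, struct list_head *h)
{
	list_del(e);
	list_add_tail(e, h);
}
```

Because the urbp lives on exactly one list at a time, moving it (rather than deleting from urb_list and adding to complete_list in two separately locked steps) keeps the bookkeeping atomic with respect to the complete_list lock.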
static struct uhci_td *uhci_alloc_td(struct uhci_hcd *uhci, struct usb_device *dev)
dma_addr_t dma_handle;
struct uhci_td *td;
- td = pci_pool_alloc(uhci->td_pool, GFP_ATOMIC, &dma_handle);
+ td = dma_pool_alloc(uhci->td_pool, GFP_ATOMIC, &dma_handle);
if (!td)
return NULL;
static void uhci_free_td(struct uhci_hcd *uhci, struct uhci_td *td)
{
if (!list_empty(&td->list))
- dbg("td %p is still in list!", td);
+ dev_warn(uhci_dev(uhci), "td %p still in list!\n", td);
if (!list_empty(&td->remove_list))
- dbg("td %p still in remove_list!", td);
+ dev_warn(uhci_dev(uhci), "td %p still in remove_list!\n", td);
if (!list_empty(&td->fl_list))
- dbg("td %p is still in fl_list!", td);
+ dev_warn(uhci_dev(uhci), "td %p still in fl_list!\n", td);
if (td->dev)
usb_put_dev(td->dev);
- pci_pool_free(uhci->td_pool, td, td->dma_handle);
+ dma_pool_free(uhci->td_pool, td, td->dma_handle);
}
static struct uhci_qh *uhci_alloc_qh(struct uhci_hcd *uhci, struct usb_device *dev)
dma_addr_t dma_handle;
struct uhci_qh *qh;
- qh = pci_pool_alloc(uhci->qh_pool, GFP_ATOMIC, &dma_handle);
+ qh = dma_pool_alloc(uhci->qh_pool, GFP_ATOMIC, &dma_handle);
if (!qh)
return NULL;
static void uhci_free_qh(struct uhci_hcd *uhci, struct uhci_qh *qh)
{
if (!list_empty(&qh->list))
- dbg("qh %p list not empty!", qh);
+ dev_warn(uhci_dev(uhci), "qh %p list not empty!\n", qh);
if (!list_empty(&qh->remove_list))
- dbg("qh %p still in remove_list!", qh);
+ dev_warn(uhci_dev(uhci), "qh %p still in remove_list!\n", qh);
if (qh->dev)
usb_put_dev(qh->dev);
- pci_pool_free(uhci->qh_pool, qh, qh->dma_handle);
+ dma_pool_free(uhci->qh_pool, qh, qh->dma_handle);
}
/*
struct urb_priv *urbp;
urbp = kmem_cache_alloc(uhci_up_cachep, SLAB_ATOMIC);
- if (!urbp) {
- err("uhci_alloc_urb_priv: couldn't allocate memory for urb_priv\n");
+ if (!urbp)
return NULL;
- }
memset((void *)urbp, 0, sizeof(*urbp));
INIT_LIST_HEAD(&urbp->td_list);
INIT_LIST_HEAD(&urbp->queue_list);
- INIT_LIST_HEAD(&urbp->complete_list);
INIT_LIST_HEAD(&urbp->urb_list);
list_add_tail(&urbp->urb_list, &uhci->urb_list);
return;
if (!list_empty(&urbp->urb_list))
- warn("uhci_destroy_urb_priv: urb %p still on uhci->urb_list or uhci->remove_list", urb);
-
- if (!list_empty(&urbp->complete_list))
- warn("uhci_destroy_urb_priv: urb %p still on uhci->complete_list", urb);
+ dev_warn(uhci_dev(uhci), "urb %p still on uhci->urb_list "
+ "or uhci->remove_list!\n", urb);
spin_lock_irqsave(&uhci->td_remove_list_lock, flags);
uhci_insert_tds_in_qh(qh, urb, UHCI_PTR_BREADTH);
- /* Low speed transfers get a different queue, and won't hog the bus */
+ /* Low-speed transfers get a different queue, and won't hog the bus */
if (urb->dev->speed == USB_SPEED_LOW)
skelqh = uhci->skel_ls_control_qh;
else {
}
urbp->qh = uhci_alloc_qh(uhci, urb->dev);
- if (!urbp->qh) {
- err("unable to allocate new QH for control retrigger");
+ if (!urbp->qh)
return -ENOMEM;
- }
urbp->qh->urbp = urbp;
/* One TD, who cares about Breadth first? */
uhci_insert_tds_in_qh(urbp->qh, urb, UHCI_PTR_DEPTH);
- /* Low speed transfers get a different queue */
+ /* Low-speed transfers get a different queue */
if (urb->dev->speed == USB_SPEED_LOW)
uhci_insert_qh(uhci, uhci->skel_ls_control_qh, urb);
else
err:
if ((debug == 1 && ret != -EPIPE) || debug > 1) {
/* Some debugging code */
- dbg("uhci_result_control() failed with status %x", status);
+ dev_dbg(uhci_dev(uhci), "%s: failed with status %x\n",
+ __FUNCTION__, status);
if (errbuf) {
/* Print the chain for debugging purposes */
#if 0
if ((debug == 1 && ret != -EPIPE) || debug > 1) {
/* Some debugging code */
- dbg("uhci_result_common() failed with status %x", status);
+ dev_dbg(uhci_dev(uhci), "%s: failed with status %x\n",
+ __FUNCTION__, status);
if (errbuf) {
/* Print the chain for debugging purposes */
{
int ret;
- /* Can't have low speed bulk transfers */
+ /* Can't have low-speed bulk transfers */
if (urb->dev->speed == USB_SPEED_LOW)
return -EINVAL;
spin_lock_irqsave(&uhci->urb_list_lock, flags);
+ if (urb->status != -EINPROGRESS) /* URB already unlinked! */
+ goto out;
+
eurb = uhci_find_urb_ep(uhci, urb);
if (!uhci_alloc_urb_priv(uhci, urb)) {
- spin_unlock_irqrestore(&uhci->urb_list_lock, flags);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out;
}
switch (usb_pipetype(urb->pipe)) {
return ret;
}
+ ret = 0;
+out:
spin_unlock_irqrestore(&uhci->urb_list_lock, flags);
-
- return 0;
+ return ret;
}
/*
*/
static void uhci_transfer_result(struct uhci_hcd *uhci, struct urb *urb)
{
- int ret = -EINVAL;
- unsigned long flags;
+ int ret = -EINPROGRESS;
struct urb_priv *urbp;
- spin_lock_irqsave(&urb->lock, flags);
+ spin_lock(&urb->lock);
urbp = (struct urb_priv *)urb->hcpriv;
- if (urb->status != -EINPROGRESS) {
- info("uhci_transfer_result: called for URB %p not in flight?", urb);
+ if (urb->status != -EINPROGRESS) /* URB already dequeued */
goto out;
- }
switch (usb_pipetype(urb->pipe)) {
case PIPE_CONTROL:
break;
}
- urbp->status = ret;
-
if (ret == -EINPROGRESS)
goto out;
+ urb->status = ret;
switch (usb_pipetype(urb->pipe)) {
case PIPE_CONTROL:
uhci_unlink_generic(uhci, urb);
break;
default:
- info("uhci_transfer_result: unknown pipe type %d for urb %p\n",
- usb_pipetype(urb->pipe), urb);
+ dev_info(uhci_dev(uhci), "%s: unknown pipe type %d "
+ "for urb %p\n",
+ __FUNCTION__, usb_pipetype(urb->pipe), urb);
}
- /* Remove it from uhci->urb_list */
- list_del_init(&urbp->urb_list);
-
- uhci_add_complete(uhci, urb);
+ /* Move it from uhci->urb_list to uhci->complete_list */
+ uhci_moveto_complete(uhci, urbp);
out:
- spin_unlock_irqrestore(&urb->lock, flags);
+ spin_unlock(&urb->lock);
}
/*
struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
int prevactive = 1;
- /* We can get called when urbp allocation fails, so check */
- if (!urbp)
- return;
-
uhci_dec_fsbr(uhci, urb); /* Safe since it checks */
/*
unsigned long flags;
struct urb_priv *urbp = urb->hcpriv;
- /* If this is an interrupt URB that is being killed in urb->complete, */
- /* then just set its status and return */
- if (!urbp) {
- urb->status = -ECONNRESET;
- return 0;
- }
-
spin_lock_irqsave(&uhci->urb_list_lock, flags);
list_del_init(&urbp->urb_list);
/* If we're the first, set the next interrupt bit */
if (list_empty(&uhci->urb_remove_list))
uhci_set_next_interrupt(uhci);
- list_add(&urbp->urb_list, &uhci->urb_remove_list);
+ list_add_tail(&urbp->urb_list, &uhci->urb_remove_list);
spin_unlock(&uhci->urb_remove_list_lock);
spin_unlock_irqrestore(&uhci->urb_list_lock, flags);
static void uhci_free_pending_qhs(struct uhci_hcd *uhci)
{
struct list_head *tmp, *head;
- unsigned long flags;
- spin_lock_irqsave(&uhci->qh_remove_list_lock, flags);
+ spin_lock(&uhci->qh_remove_list_lock);
head = &uhci->qh_remove_list;
tmp = head->next;
while (tmp != head) {
uhci_free_qh(uhci, qh);
}
- spin_unlock_irqrestore(&uhci->qh_remove_list_lock, flags);
+ spin_unlock(&uhci->qh_remove_list_lock);
}
static void uhci_free_pending_tds(struct uhci_hcd *uhci)
{
struct list_head *tmp, *head;
- unsigned long flags;
- spin_lock_irqsave(&uhci->td_remove_list_lock, flags);
+ spin_lock(&uhci->td_remove_list_lock);
head = &uhci->td_remove_list;
tmp = head->next;
while (tmp != head) {
uhci_free_td(uhci, td);
}
- spin_unlock_irqrestore(&uhci->td_remove_list_lock, flags);
+ spin_unlock(&uhci->td_remove_list_lock);
}
static void uhci_finish_urb(struct usb_hcd *hcd, struct urb *urb, struct pt_regs *regs)
{
- struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
- int status;
- unsigned long flags;
- spin_lock_irqsave(&urb->lock, flags);
- status = urbp->status;
+ spin_lock(&urb->lock);
uhci_destroy_urb_priv(uhci, urb);
-
- if (urb->status != -ENOENT && urb->status != -ECONNRESET)
- urb->status = status;
- spin_unlock_irqrestore(&urb->lock, flags);
+ spin_unlock(&urb->lock);
usb_hcd_giveback_urb(hcd, urb, regs);
}
{
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
struct list_head *tmp, *head;
- unsigned long flags;
- spin_lock_irqsave(&uhci->complete_list_lock, flags);
+ spin_lock(&uhci->complete_list_lock);
head = &uhci->complete_list;
tmp = head->next;
while (tmp != head) {
- struct urb_priv *urbp = list_entry(tmp, struct urb_priv, complete_list);
+ struct urb_priv *urbp = list_entry(tmp, struct urb_priv, urb_list);
struct urb *urb = urbp->urb;
- list_del_init(&urbp->complete_list);
- spin_unlock_irqrestore(&uhci->complete_list_lock, flags);
+ list_del_init(&urbp->urb_list);
+ spin_unlock(&uhci->complete_list_lock);
uhci_finish_urb(hcd, urb, regs);
- spin_lock_irqsave(&uhci->complete_list_lock, flags);
+ spin_lock(&uhci->complete_list_lock);
head = &uhci->complete_list;
tmp = head->next;
}
- spin_unlock_irqrestore(&uhci->complete_list_lock, flags);
+ spin_unlock(&uhci->complete_list_lock);
}
-static void uhci_remove_pending_qhs(struct uhci_hcd *uhci)
+static void uhci_remove_pending_urbps(struct uhci_hcd *uhci)
{
struct list_head *tmp, *head;
- unsigned long flags;
- spin_lock_irqsave(&uhci->urb_remove_list_lock, flags);
+ spin_lock(&uhci->urb_remove_list_lock);
head = &uhci->urb_remove_list;
tmp = head->next;
while (tmp != head) {
struct urb_priv *urbp = list_entry(tmp, struct urb_priv, urb_list);
- struct urb *urb = urbp->urb;
tmp = tmp->next;
-
- list_del_init(&urbp->urb_list);
-
- urbp->status = urb->status = -ECONNRESET;
-
- uhci_add_complete(uhci, urb);
+ uhci_moveto_complete(uhci, urbp);
}
- spin_unlock_irqrestore(&uhci->urb_remove_list_lock, flags);
+ spin_unlock(&uhci->urb_remove_list_lock);
}
static irqreturn_t uhci_irq(struct usb_hcd *hcd, struct pt_regs *regs)
/*
* Read the interrupt status, and write it back to clear the
- * interrupt cause
+ * interrupt cause. Contrary to the UHCI specification, the
+ * "HC Halted" status bit is persistent: it is RO, not R/WC.
*/
status = inw(io_addr + USBSTS);
- if (!status) /* shared interrupt, not mine */
+ if (!(status & ~USBSTS_HCH)) /* shared interrupt, not mine */
return IRQ_NONE;
outw(status, io_addr + USBSTS); /* Clear it */
if (status & ~(USBSTS_USBINT | USBSTS_ERROR | USBSTS_RD)) {
if (status & USBSTS_HSE)
- err("%x: host system error, PCI problems?", io_addr);
+ dev_err(uhci_dev(uhci), "host system error, "
+ "PCI problems?\n");
if (status & USBSTS_HCPE)
- err("%x: host controller process error. something bad happened", io_addr);
+ dev_err(uhci_dev(uhci), "host controller process "
+ "error, something bad happened!\n");
if ((status & USBSTS_HCH) && uhci->state > 0) {
- err("%x: host controller halted. very bad", io_addr);
+ dev_err(uhci_dev(uhci), "host controller halted, "
+ "very bad!\n");
/* FIXME: Reset the controller, fix the offending TD */
}
}
uhci_free_pending_tds(uhci);
- uhci_remove_pending_qhs(uhci);
+ uhci_remove_pending_urbps(uhci);
uhci_clear_next_interrupt(uhci);
{
unsigned int io_addr = uhci->io_addr;
- dbg("%x: suspend_hc", io_addr);
+ dev_dbg(uhci_dev(uhci), "%s\n", __FUNCTION__);
uhci->state = UHCI_SUSPENDED;
uhci->resume_detect = 0;
outw(USBCMD_EGSM, io_addr + USBCMD);
switch (uhci->state) {
case UHCI_SUSPENDED: /* Start the resume */
- dbg("%x: wakeup_hc", io_addr);
+ dev_dbg(uhci_dev(uhci), "%s\n", __FUNCTION__);
/* Global resume for >= 20ms */
outw(USBCMD_FGR | USBCMD_EGSM, io_addr + USBCMD);
unsigned int io_addr = uhci->io_addr;
int i;
- if (!uhci->hcd.pdev || uhci->hcd.pdev->vendor != PCI_VENDOR_ID_INTEL)
+ if (to_pci_dev(uhci_dev(uhci))->vendor != PCI_VENDOR_ID_INTEL)
return 1;
/* Some of Intel's USB controllers have a bug that causes false
outw(USBCMD_HCRESET, io_addr + USBCMD);
while (inw(io_addr + USBCMD) & USBCMD_HCRESET) {
if (!--timeout) {
- printk(KERN_ERR "uhci: USBCMD_HCRESET timed out!\n");
+ dev_err(uhci_dev(uhci), "USBCMD_HCRESET timed out!\n");
break;
}
}
}
if (uhci->qh_pool) {
- pci_pool_destroy(uhci->qh_pool);
+ dma_pool_destroy(uhci->qh_pool);
uhci->qh_pool = NULL;
}
if (uhci->td_pool) {
- pci_pool_destroy(uhci->td_pool);
+ dma_pool_destroy(uhci->td_pool);
uhci->td_pool = NULL;
}
if (uhci->fl) {
- pci_free_consistent(uhci->hcd.pdev, sizeof(*uhci->fl), uhci->fl, uhci->fl->dma_handle);
+ dma_free_coherent(uhci_dev(uhci), sizeof(*uhci->fl),
+ uhci->fl, uhci->fl->dma_handle);
uhci->fl = NULL;
}
* interrupts from any previous setup.
*/
reset_hc(uhci);
- pci_write_config_word(hcd->pdev, USBLEGSUP, USBLEGSUP_DEFAULT);
+ pci_write_config_word(to_pci_dev(uhci_dev(uhci)), USBLEGSUP,
+ USBLEGSUP_DEFAULT);
return 0;
}
* of the queues. We don't do that here, because
* we'll create the actual TD entries on demand.
* - The first queue is the interrupt queue.
- * - The second queue is the control queue, split into low and high speed
+ * - The second queue is the control queue, split into low- and full-speed
* - The third queue is bulk queue.
* - The fourth queue is the bandwidth reclamation queue, which loops back
- * to the high speed control queue.
+ * to the full-speed control queue.
*/
static int uhci_start(struct usb_hcd *hcd)
{
struct proc_dir_entry *ent;
#endif
- io_size = pci_resource_len(hcd->pdev, hcd->region);
+ io_size = pci_resource_len(to_pci_dev(uhci_dev(uhci)), hcd->region);
#ifdef CONFIG_PROC_FS
ent = create_proc_entry(hcd->self.bus_name, S_IFREG|S_IRUGO|S_IWUSR, uhci_proc_root);
if (!ent) {
- err("couldn't create uhci proc entry");
+ dev_err(uhci_dev(uhci), "couldn't create uhci proc entry\n");
retval = -ENOMEM;
goto err_create_proc_entry;
}
spin_lock_init(&uhci->frame_list_lock);
- uhci->fl = pci_alloc_consistent(hcd->pdev, sizeof(*uhci->fl), &dma_handle);
+ uhci->fl = dma_alloc_coherent(uhci_dev(uhci), sizeof(*uhci->fl),
+ &dma_handle, 0);
if (!uhci->fl) {
- err("unable to allocate consistent memory for frame list");
+ dev_err(uhci_dev(uhci), "unable to allocate "
+ "consistent memory for frame list\n");
goto err_alloc_fl;
}
uhci->fl->dma_handle = dma_handle;
- uhci->td_pool = pci_pool_create("uhci_td", hcd->pdev,
- sizeof(struct uhci_td), 16, 0);
+ uhci->td_pool = dma_pool_create("uhci_td", uhci_dev(uhci),
+ sizeof(struct uhci_td), 16, 0);
if (!uhci->td_pool) {
- err("unable to create td pci_pool");
+ dev_err(uhci_dev(uhci), "unable to create td dma_pool\n");
goto err_create_td_pool;
}
- uhci->qh_pool = pci_pool_create("uhci_qh", hcd->pdev,
- sizeof(struct uhci_qh), 16, 0);
+ uhci->qh_pool = dma_pool_create("uhci_qh", uhci_dev(uhci),
+ sizeof(struct uhci_qh), 16, 0);
if (!uhci->qh_pool) {
- err("unable to create qh pci_pool");
+ dev_err(uhci_dev(uhci), "unable to create qh dma_pool\n");
goto err_create_qh_pool;
}
break;
}
if (debug)
- info("detected %d ports", port);
+ dev_info(uhci_dev(uhci), "detected %d ports\n", port);
/* This is experimental so anything less than 2 or greater than 8 is */
/* something weird and we'll ignore it */
- if (port < 2 || port > 8) {
- info("port count misdetected? forcing to 2 ports");
+ if (port < 2 || port > UHCI_RH_MAXCHILD) {
+ dev_info(uhci_dev(uhci), "port count misdetected? "
+ "forcing to 2 ports\n");
port = 2;
}
hcd->self.root_hub = udev = usb_alloc_dev(NULL, &hcd->self, 0);
if (!udev) {
- err("unable to allocate root hub");
+ dev_err(uhci_dev(uhci), "unable to allocate root hub\n");
goto err_alloc_root_hub;
}
uhci->term_td = uhci_alloc_td(uhci, udev);
if (!uhci->term_td) {
- err("unable to allocate terminating TD");
+ dev_err(uhci_dev(uhci), "unable to allocate terminating TD\n");
goto err_alloc_term_td;
}
for (i = 0; i < UHCI_NUM_SKELQH; i++) {
uhci->skelqh[i] = uhci_alloc_qh(uhci, udev);
if (!uhci->skelqh[i]) {
- err("unable to allocate QH %d", i);
+ dev_err(uhci_dev(uhci), "unable to allocate QH\n");
goto err_alloc_skelqh;
}
}
/*
- * 8 Interrupt queues; link int2 to int1, int4 to int2, etc
+ * 8 Interrupt queues; link all higher int queues to int1,
* then link int1 to control and control to bulk
*/
- uhci->skel_int128_qh->link = cpu_to_le32(uhci->skel_int64_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int64_qh->link = cpu_to_le32(uhci->skel_int32_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int32_qh->link = cpu_to_le32(uhci->skel_int16_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int16_qh->link = cpu_to_le32(uhci->skel_int8_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int8_qh->link = cpu_to_le32(uhci->skel_int4_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int4_qh->link = cpu_to_le32(uhci->skel_int2_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_int2_qh->link = cpu_to_le32(uhci->skel_int1_qh->dma_handle) | UHCI_PTR_QH;
+ uhci->skel_int128_qh->link =
+ uhci->skel_int64_qh->link =
+ uhci->skel_int32_qh->link =
+ uhci->skel_int16_qh->link =
+ uhci->skel_int8_qh->link =
+ uhci->skel_int4_qh->link =
+ uhci->skel_int2_qh->link =
+ cpu_to_le32(uhci->skel_int1_qh->dma_handle) | UHCI_PTR_QH;
uhci->skel_int1_qh->link = cpu_to_le32(uhci->skel_ls_control_qh->dma_handle) | UHCI_PTR_QH;
uhci->skel_ls_control_qh->link = cpu_to_le32(uhci->skel_hs_control_qh->dma_handle) | UHCI_PTR_QH;
uhci->skel_term_qh->element = cpu_to_le32(uhci->term_td->dma_handle);
/*
- * Fill the frame list: make all entries point to
- * the proper interrupt queue.
+ * Fill the frame list: make all entries point to the proper
+ * interrupt queue.
*
- * This is probably silly, but it's a simple way to
- * scatter the interrupt queues in a way that gives
- * us a reasonable dynamic range for irq latencies.
+ * The interrupt queues will be interleaved as evenly as possible.
+ * There's not much to be done about period-1 interrupts; they have
+ * to occur in every frame. But we can schedule period-2 interrupts
+ * in odd-numbered frames, period-4 interrupts in frames congruent
+ * to 2 (mod 4), and so on. This way each frame only has two
+ * interrupt QHs, which will help spread out bandwidth utilization.
*/
for (i = 0; i < UHCI_NUMFRAMES; i++) {
- int irq = 0;
-
- if (i & 1) {
- irq++;
- if (i & 2) {
- irq++;
- if (i & 4) {
- irq++;
- if (i & 8) {
- irq++;
- if (i & 16) {
- irq++;
- if (i & 32) {
- irq++;
- if (i & 64)
- irq++;
- }
- }
- }
- }
- }
- }
+ int irq;
+
+ /*
+ * ffs (Find First bit Set) does exactly what we need:
+ * 1,3,5,... => ffs = 0 => use skel_int2_qh = skelqh[6],
+ * 2,6,10,... => ffs = 1 => use skel_int4_qh = skelqh[5], etc.
+ * ffs > 6 => not on any high-period queue, so use
+ * skel_int1_qh = skelqh[7].
+ * Add UHCI_NUMFRAMES to ensure at least one bit is set.
+ */
+ irq = 6 - (int) __ffs(i + UHCI_NUMFRAMES);
+ if (irq < 0)
+ irq = 7;
/* Only place we don't use the frame list routines */
- uhci->fl->frame[i] = cpu_to_le32(uhci->skelqh[7 - irq]->dma_handle);
+ uhci->fl->frame[i] = cpu_to_le32(uhci->skelqh[irq]->dma_handle);
}
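The __ffs() trick above replaces the old nested-if ladder: the lowest set bit of the frame number picks which interrupt skeleton QH the frame points at. A standalone sketch of the same mapping, using POSIX ffs() (1-based) instead of the kernel's 0-based __ffs(), and assuming UHCI_NUMFRAMES = 1024 and the driver's skelqh ordering (skelqh[0] = int128 ... skelqh[6] = int2, skelqh[7] = int1):

```c
#include <assert.h>
#include <strings.h>	/* POSIX ffs(), 1-based */

#define UHCI_NUMFRAMES 1024	/* assumed, as in the driver */

/* Map frame number i to a skeleton-QH index: odd frames get the period-2
 * queue, frames == 2 (mod 4) the period-4 queue, and so on; frames not on
 * any high-period queue fall through to the period-1 queue (skelqh[7]). */
static int frame_to_skelqh(int i)
{
	/* Adding UHCI_NUMFRAMES guarantees at least one bit is set,
	 * so ffs() is well-defined even for i == 0. */
	int irq = 6 - (ffs(i + UHCI_NUMFRAMES) - 1);

	if (irq < 0)
		irq = 7;	/* not on any high-period queue: use int1 */
	return irq;
}
```

Each frame thus carries at most one high-period interrupt QH plus the int1 chain, which is how the interleaving spreads bandwidth as the comment describes.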
start_hc(uhci);
udev->speed = USB_SPEED_FULL;
- if (usb_register_root_hub(udev, &hcd->pdev->dev) != 0) {
- err("unable to start root hub");
+ if (usb_register_root_hub(udev, uhci_dev(uhci)) != 0) {
+ dev_err(uhci_dev(uhci), "unable to start root hub\n");
retval = -ENOMEM;
goto err_start_root_hub;
}
hcd->self.root_hub = NULL;
err_alloc_root_hub:
- pci_pool_destroy(uhci->qh_pool);
+ dma_pool_destroy(uhci->qh_pool);
uhci->qh_pool = NULL;
err_create_qh_pool:
- pci_pool_destroy(uhci->td_pool);
+ dma_pool_destroy(uhci->td_pool);
uhci->td_pool = NULL;
err_create_td_pool:
- pci_free_consistent(hcd->pdev, sizeof(*uhci->fl), uhci->fl, uhci->fl->dma_handle);
+ dma_free_coherent(uhci_dev(uhci), sizeof(*uhci->fl),
+ uhci->fl, uhci->fl->dma_handle);
uhci->fl = NULL;
err_alloc_fl:
static void uhci_stop(struct usb_hcd *hcd)
{
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
+ unsigned long flags;
del_timer_sync(&uhci->stall_timer);
* At this point, we're guaranteed that no new connects can be made
* to this bus since there are no more parents
*/
+ local_irq_save(flags);
uhci_free_pending_qhs(uhci);
uhci_free_pending_tds(uhci);
- uhci_remove_pending_qhs(uhci);
+ uhci_remove_pending_urbps(uhci);
reset_hc(uhci);
uhci_free_pending_qhs(uhci);
uhci_free_pending_tds(uhci);
-
+ local_irq_restore(flags);
+
release_uhci(uhci);
}
{
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
- pci_set_master(uhci->hcd.pdev);
+ pci_set_master(to_pci_dev(uhci_dev(uhci)));
if (uhci->state == UHCI_SUSPENDED)
uhci->resume_detect = 1;
{
int retval = -ENOMEM;
- info(DRIVER_DESC " " DRIVER_VERSION);
+ printk(KERN_INFO DRIVER_DESC " " DRIVER_VERSION "\n");
if (usb_disabled())
return -ENODEV;
init_failed:
if (kmem_cache_destroy(uhci_up_cachep))
- printk(KERN_INFO "uhci: not all urb_priv's were freed\n");
+ warn("not all urb_priv's were freed!");
up_failed:
pci_unregister_driver(&uhci_pci_driver);
if (kmem_cache_destroy(uhci_up_cachep))
- printk(KERN_INFO "uhci: not all urb_priv's were freed\n");
+ warn("not all urb_priv's were freed!");
#ifdef CONFIG_PROC_FS
remove_proc_entry("driver/uhci", 0);
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
-
#define USBPORTSC_CSC 0x0002 /* Connect Status Change */
#define USBPORTSC_PE 0x0004 /* Port Enable */
#define USBPORTSC_PEC 0x0008 /* Port Enable Change */
-#define USBPORTSC_LS 0x0030 /* Line Status */
+#define USBPORTSC_DPLUS 0x0010 /* D+ high (line status) */
+#define USBPORTSC_DMINUS 0x0020 /* D- high (line status) */
#define USBPORTSC_RD 0x0040 /* Resume Detect */
+#define USBPORTSC_RES1 0x0080 /* reserved, always 1 */
#define USBPORTSC_LSDA 0x0100 /* Low Speed Device Attached */
#define USBPORTSC_PR 0x0200 /* Port Reset */
+/* OC and OCC from Intel 430TX and later (not UHCI 1.1d spec) */
#define USBPORTSC_OC 0x0400 /* Over Current condition */
+#define USBPORTSC_OCC 0x0800 /* Over Current Change R/WC */
#define USBPORTSC_SUSP 0x1000 /* Suspend */
+#define USBPORTSC_RES2 0x2000 /* reserved, write zeroes */
+#define USBPORTSC_RES3 0x4000 /* reserved, write zeroes */
+#define USBPORTSC_RES4 0x8000 /* reserved, write zeroes */
/* Legacy support register */
#define USBLEGSUP 0xc0
* The UHCI driver places Interrupt, Control and Bulk into QH's both
 * to group together TD's for one transfer, and also to facilitate queuing
* of URB's. To make it easy to insert entries into the schedule, we have
- * a skeleton of QH's for each predefined Interrupt latency, low speed
- * control, high speed control and terminating QH (see explanation for
+ * a skeleton of QH's for each predefined Interrupt latency, low-speed
+ * control, full-speed control and terminating QH (see explanation for
* the terminating QH below).
*
* When we want to add a new QH, we add it to the end of the list for the
* skel int32 QH
* ...
* skel int1 QH
- * skel low speed control QH
+ * skel low-speed control QH
* dev 5 control QH
- * skel high speed control QH
+ * skel full-speed control QH
* skel bulk QH
* dev 1 bulk QH
* dev 2 bulk QH
* The terminating QH is used for 2 reasons:
 * - To place a terminating TD which is used to work around a PIIX bug
* (see Intel errata for explanation)
- * - To loop back to the high speed control queue for full speed bandwidth
+ * - To loop back to the full-speed control queue for full-speed bandwidth
* reclamation
*
* Isochronous transfers are stored before the start of the skeleton
};
#define hcd_to_uhci(hcd_ptr) container_of(hcd_ptr, struct uhci_hcd, hcd)
+#define uhci_dev(u) ((u)->hcd.self.controller)
/*
* This describes the full uhci information.
/* Grabbed from PCI */
unsigned long io_addr;
- struct pci_pool *qh_pool;
- struct pci_pool *td_pool;
+ struct dma_pool *qh_pool;
+ struct dma_pool *td_pool;
struct usb_bus *bus;
spinlock_t frame_list_lock;
struct uhci_frame_list *fl; /* P: uhci->frame_list_lock */
- int fsbr; /* Full speed bandwidth reclamation */
+ int fsbr; /* Full-speed bandwidth reclamation */
unsigned long fsbrtimeout; /* FSBR delay */
enum uhci_state state; /* FIXME: needs a spinlock */
/* a control transfer, retrigger */
/* the status phase */
- int status; /* Final status */
-
unsigned long inserttime; /* In jiffies */
unsigned long fsbrtime; /* In jiffies */
struct list_head queue_list; /* P: uhci->frame_list_lock */
- struct list_head complete_list; /* P: uhci->complete_list_lock */
};
/*
*/
#endif
-
/*
* Universal Host Controller Interface driver for USB.
*
- * Maintainer: Johannes Erdfelt <johannes@erdfelt.com>
+ * Maintainer: Alan Stern <stern@rowland.harvard.edu>
*
* (C) Copyright 1999 Linus Torvalds
* (C) Copyright 1999-2002 Johannes Erdfelt, johannes@erdfelt.com
* (C) Copyright 1999 Georg Acher, acher@in.tum.de
* (C) Copyright 1999 Deti Fliegl, deti@fliegl.de
* (C) Copyright 1999 Thomas Sailer, sailer@ife.ee.ethz.ch
+ * (C) Copyright 2004 Alan Stern, stern@rowland.harvard.edu
*/
static __u8 root_hub_hub_des[] =
0x09, /* __u8 bLength; */
0x29, /* __u8 bDescriptorType; Hub-descriptor */
0x02, /* __u8 bNbrPorts; */
- 0x00, /* __u16 wHubCharacteristics; */
- 0x00,
+ 0x0a, /* __u16 wHubCharacteristics; */
+ 0x00, /* (per-port OC, no power switching) */
0x01, /* __u8 bPwrOn2pwrGood; 2ms */
0x00, /* __u8 bHubContrCurrent; 0 mA */
0x00, /* __u8 DeviceRemovable; *** 7 Ports max *** */
0xff /* __u8 PortPwrCtrlMask; *** 7 ports max *** */
};
+#define UHCI_RH_MAXCHILD 7
+
+/* must write as zeroes */
+#define WZ_BITS (USBPORTSC_RES2 | USBPORTSC_RES3 | USBPORTSC_RES4)
+
+/* status change bits: nonzero writes will clear */
+#define RWC_BITS (USBPORTSC_OCC | USBPORTSC_PEC | USBPORTSC_CSC)
+
static int uhci_hub_status_data(struct usb_hcd *hcd, char *buf)
{
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
unsigned int io_addr = uhci->io_addr;
- int i, len = 1;
+ int i;
*buf = 0;
for (i = 0; i < uhci->rh_numports; i++) {
- *buf |= ((inw(io_addr + USBPORTSC1 + i * 2) & 0xa) > 0 ? (1 << (i + 1)) : 0);
- len = (i + 1) / 8 + 1;
+ if (inw(io_addr + USBPORTSC1 + i * 2) & RWC_BITS)
+ *buf |= (1 << (i + 1));
}
-
return !!*buf;
}
#define OK(x) len = (x); break
#define CLR_RH_PORTSTAT(x) \
- status = inw(io_addr + USBPORTSC1 + 2 * (wIndex-1)); \
- status = (status & 0xfff5) & ~(x); \
- outw(status, io_addr + USBPORTSC1 + 2 * (wIndex-1))
+ status = inw(port_addr); \
+ status &= ~(RWC_BITS|WZ_BITS); \
+ status &= ~(x); \
+ status |= RWC_BITS & (x); \
+ outw(status, port_addr)
#define SET_RH_PORTSTAT(x) \
- status = inw(io_addr + USBPORTSC1 + 2 * (wIndex-1)); \
- status = (status & 0xfff5) | (x); \
- outw(status, io_addr + USBPORTSC1 + 2 * (wIndex-1))
+ status = inw(port_addr); \
+ status |= (x); \
+ status &= ~(RWC_BITS|WZ_BITS); \
+ outw(status, port_addr)
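The point of masking RWC_BITS in these macros is that the change bits (CSC, PEC, OCC) are write-1-to-clear: a naive read-modify-write would write back any pending change bit as 1 and silently acknowledge it. A toy model of the register's write side illustrates the hazard (WZ_BITS is ignored here for brevity; the USBPORTSC_* values are copied from the driver's header):

```c
#include <assert.h>
#include <stdint.h>

#define USBPORTSC_CSC 0x0002	/* connect change, R/WC */
#define USBPORTSC_PE  0x0004	/* port enable */
#define USBPORTSC_PEC 0x0008	/* enable change, R/WC */
#define USBPORTSC_OCC 0x0800	/* over-current change, R/WC */
#define RWC_BITS (USBPORTSC_OCC | USBPORTSC_PEC | USBPORTSC_CSC)

/* Ordinary bits latch the written value; R/WC bits are cleared by
 * writing 1 and left alone by writing 0. */
static uint16_t portsc_after_write(uint16_t reg, uint16_t val)
{
	uint16_t kept_changes = (reg & RWC_BITS) & ~(val & RWC_BITS);

	return (val & ~RWC_BITS) | kept_changes;
}
```

With a pending connect change latched, writing back the raw status word clears it; masking out RWC_BITS first, as SET_RH_PORTSTAT does, preserves it for the hub driver to see.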
/* size of returned buffer is part of USB spec */
u16 wIndex, char *buf, u16 wLength)
{
struct uhci_hcd *uhci = hcd_to_uhci(hcd);
- int i, status, retval = 0, len = 0;
- unsigned int io_addr = uhci->io_addr;
- __u16 cstatus;
- char c_p_r[8];
-
- for (i = 0; i < 8; i++)
- c_p_r[i] = 0;
+ int status, retval = 0, len = 0;
+ unsigned int port_addr = uhci->io_addr + USBPORTSC1 + 2 * (wIndex-1);
+ __u16 wPortChange, wPortStatus;
switch (typeReq) {
/* Request Destination:
*(__u32 *)buf = cpu_to_le32(0);
OK(4); /* hub power */
case GetPortStatus:
- status = inw(io_addr + USBPORTSC1 + 2 * (wIndex - 1));
- cstatus = ((status & USBPORTSC_CSC) >> (1 - 0)) |
- ((status & USBPORTSC_PEC) >> (3 - 1)) |
- (c_p_r[wIndex - 1] << (0 + 4));
- status = (status & USBPORTSC_CCS) |
- ((status & USBPORTSC_PE) >> (2 - 1)) |
- ((status & USBPORTSC_SUSP) >> (12 - 2)) |
- ((status & USBPORTSC_PR) >> (9 - 4)) |
- (1 << 8) | /* power on */
- ((status & USBPORTSC_LSDA) << (-8 + 9));
-
- *(__u16 *)buf = cpu_to_le16(status);
- *(__u16 *)(buf + 2) = cpu_to_le16(cstatus);
- OK(4);
- case SetHubFeature:
- switch (wValue) {
- case C_HUB_OVER_CURRENT:
- case C_HUB_LOCAL_POWER:
- break;
- default:
+ if (!wIndex || wIndex > uhci->rh_numports)
goto err;
+ status = inw(port_addr);
+
+ /* Intel controllers report the OverCurrent bit active on.
+ * VIA controllers report it active off, so we'll adjust the
+ * bit value. (It's not standardized in the UHCI spec.)
+ */
+ if (to_pci_dev(hcd->self.controller)->vendor ==
+ PCI_VENDOR_ID_VIA)
+ status ^= USBPORTSC_OC;
+
+ /* UHCI doesn't support C_SUSPEND and C_RESET (always false) */
+ wPortChange = 0;
+ if (status & USBPORTSC_CSC)
+ wPortChange |= 1 << (USB_PORT_FEAT_C_CONNECTION - 16);
+ if (status & USBPORTSC_PEC)
+ wPortChange |= 1 << (USB_PORT_FEAT_C_ENABLE - 16);
+ if (status & USBPORTSC_OCC)
+ wPortChange |= 1 << (USB_PORT_FEAT_C_OVER_CURRENT - 16);
+
+ /* UHCI has no power switching (always on) */
+ wPortStatus = 1 << USB_PORT_FEAT_POWER;
+ if (status & USBPORTSC_CCS)
+ wPortStatus |= 1 << USB_PORT_FEAT_CONNECTION;
+ if (status & USBPORTSC_PE) {
+ wPortStatus |= 1 << USB_PORT_FEAT_ENABLE;
+ if (status & (USBPORTSC_SUSP | USBPORTSC_RD))
+ wPortStatus |= 1 << USB_PORT_FEAT_SUSPEND;
}
- break;
+ if (status & USBPORTSC_OC)
+ wPortStatus |= 1 << USB_PORT_FEAT_OVER_CURRENT;
+ if (status & USBPORTSC_PR)
+ wPortStatus |= 1 << USB_PORT_FEAT_RESET;
+ if (status & USBPORTSC_LSDA)
+ wPortStatus |= 1 << USB_PORT_FEAT_LOWSPEED;
+
+ if (wPortChange)
+ dev_dbg(uhci_dev(uhci), "port %d portsc %04x\n",
+ wIndex, status);
+
+ *(__u16 *)buf = cpu_to_le16(wPortStatus);
+ *(__u16 *)(buf + 2) = cpu_to_le16(wPortChange);
+ OK(4);
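The portsc-to-(wPortStatus, wPortChange) mapping introduced above can be sketched standalone. The register bits and feature numbers below are copied from the UHCI register layout and the USB hub feature selectors for illustration; the helper name `uhci_port_status` is made up here, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* UHCI port status/control register bits */
#define USBPORTSC_CCS   0x0001  /* current connect status */
#define USBPORTSC_CSC   0x0002  /* connect status change */
#define USBPORTSC_PE    0x0004  /* port enable */
#define USBPORTSC_PEC   0x0008  /* port enable change */
#define USBPORTSC_RD    0x0040  /* resume detect */
#define USBPORTSC_LSDA  0x0100  /* low-speed device attached */
#define USBPORTSC_PR    0x0200  /* port reset */
#define USBPORTSC_OC    0x0400  /* over-current */
#define USBPORTSC_OCC   0x0800  /* over-current change */
#define USBPORTSC_SUSP  0x1000  /* suspend */

/* USB hub port feature selectors (ch11) */
enum { FEAT_CONNECTION = 0, FEAT_ENABLE = 1, FEAT_SUSPEND = 2,
       FEAT_OVER_CURRENT = 3, FEAT_RESET = 4, FEAT_POWER = 8,
       FEAT_LOWSPEED = 9, FEAT_C_CONNECTION = 16, FEAT_C_ENABLE = 17,
       FEAT_C_OVER_CURRENT = 19 };

/* Hypothetical helper: translate a raw portsc value into the pair
 * GetPortStatus returns, packed as (wPortChange << 16) | wPortStatus. */
static uint32_t uhci_port_status(uint16_t status)
{
	uint16_t wPortChange = 0, wPortStatus;

	if (status & USBPORTSC_CSC)
		wPortChange |= 1 << (FEAT_C_CONNECTION - 16);
	if (status & USBPORTSC_PEC)
		wPortChange |= 1 << (FEAT_C_ENABLE - 16);
	if (status & USBPORTSC_OCC)
		wPortChange |= 1 << (FEAT_C_OVER_CURRENT - 16);

	wPortStatus = 1 << FEAT_POWER;	/* UHCI: power always on */
	if (status & USBPORTSC_CCS)
		wPortStatus |= 1 << FEAT_CONNECTION;
	if (status & USBPORTSC_PE) {
		wPortStatus |= 1 << FEAT_ENABLE;
		if (status & (USBPORTSC_SUSP | USBPORTSC_RD))
			wPortStatus |= 1 << FEAT_SUSPEND;
	}
	if (status & USBPORTSC_OC)
		wPortStatus |= 1 << FEAT_OVER_CURRENT;
	if (status & USBPORTSC_PR)
		wPortStatus |= 1 << FEAT_RESET;
	if (status & USBPORTSC_LSDA)
		wPortStatus |= 1 << FEAT_LOWSPEED;

	return ((uint32_t)wPortChange << 16) | wPortStatus;
}
```

For example, a connected, enabled port with a fresh connect-change event (portsc = 0x0007) maps to wPortStatus 0x0103 (power, connection, enable) and wPortChange 0x0001.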
+ case SetHubFeature: /* We don't implement these */
case ClearHubFeature:
switch (wValue) {
case C_HUB_OVER_CURRENT:
- OK(0); /* hub power over current */
+ case C_HUB_LOCAL_POWER:
+ OK(0);
default:
goto err;
}
case USB_PORT_FEAT_RESET:
SET_RH_PORTSTAT(USBPORTSC_PR);
mdelay(50); /* USB v1.1 7.1.7.3 */
- c_p_r[wIndex - 1] = 1;
CLR_RH_PORTSTAT(USBPORTSC_PR);
udelay(10);
SET_RH_PORTSTAT(USBPORTSC_PE);
mdelay(10);
- SET_RH_PORTSTAT(0xa);
+ CLR_RH_PORTSTAT(USBPORTSC_PEC|USBPORTSC_CSC);
OK(0);
case USB_PORT_FEAT_POWER:
- OK(0); /* port power ** */
- case USB_PORT_FEAT_ENABLE:
- SET_RH_PORTSTAT(USBPORTSC_PE);
+ /* UHCI has no power switching */
OK(0);
default:
goto err;
CLR_RH_PORTSTAT(USBPORTSC_PE);
OK(0);
case USB_PORT_FEAT_C_ENABLE:
- SET_RH_PORTSTAT(USBPORTSC_PEC);
+ CLR_RH_PORTSTAT(USBPORTSC_PEC);
OK(0);
case USB_PORT_FEAT_SUSPEND:
CLR_RH_PORTSTAT(USBPORTSC_SUSP);
OK(0);
case USB_PORT_FEAT_C_SUSPEND:
- /*** WR_RH_PORTSTAT(RH_PS_PSSC); */
+ /* this driver won't report these */
OK(0);
case USB_PORT_FEAT_POWER:
- OK(0); /* port power */
+ /* UHCI has no power switching */
+ goto err;
case USB_PORT_FEAT_C_CONNECTION:
- SET_RH_PORTSTAT(USBPORTSC_CSC);
+ CLR_RH_PORTSTAT(USBPORTSC_CSC);
OK(0);
case USB_PORT_FEAT_C_OVER_CURRENT:
- OK(0); /* port power over current */
+ CLR_RH_PORTSTAT(USBPORTSC_OCC);
+ OK(0);
case USB_PORT_FEAT_C_RESET:
- c_p_r[wIndex - 1] = 0;
+ /* this driver won't report these */
OK(0);
default:
goto err;
}
break;
case GetHubDescriptor:
- len = min_t(unsigned int, wLength,
- min_t(unsigned int, sizeof(root_hub_hub_des), wLength));
+ len = min_t(unsigned int, sizeof(root_hub_hub_des), wLength);
memcpy(buf, root_hub_hub_des, len);
if (len > 2)
buf[2] = uhci->rh_numports;
return retval;
}
-
stv680->hue = 32767;
stv680->palette = STV_VIDEO_PALETTE;
stv680->depth = 24; /* rgb24 bits */
- swapRGB = 0;
if ((swapRGB_on == 0) && (swapRGB == 0))
PDEBUG (1, "STV(i): swapRGB is (auto) OFF");
- else if ((swapRGB_on == 1) && (swapRGB == 1))
+ else if ((swapRGB_on == 0) && (swapRGB == 1))
PDEBUG (1, "STV(i): swapRGB is (auto) ON");
else if (swapRGB_on == 1)
PDEBUG (1, "STV(i): swapRGB is (forced) ON");
/* Resubmit urb for new data */
urb->status = 0;
urb->dev = stv680->udev;
- if (usb_submit_urb (urb, GFP_KERNEL))
+ if (usb_submit_urb (urb, GFP_ATOMIC))
PDEBUG (0, "STV(e): urb burned down in video irq");
return;
} /* _video_irq */
return -EINVAL;
}
case VIDIOCSFBUF:
- return -EINVAL;
case VIDIOCGTUNER:
case VIDIOCSTUNER:
- return -EINVAL;
case VIDIOCGFREQ:
case VIDIOCSFREQ:
- return -EINVAL;
case VIDIOCGAUDIO:
case VIDIOCSAUDIO:
return -EINVAL;
if (video_register_device (stv680->vdev, VFL_TYPE_GRABBER, video_nr) == -1) {
PDEBUG (0, "STV(e): video_register_device failed");
retval = -EIO;
- goto error;
+ goto error_vdev;
}
PDEBUG (0, "STV(i): registered new video device: video%d", stv680->vdev->minor);
stv680_create_sysfs_files(stv680->vdev);
return 0;
+error_vdev:
+ video_device_release(stv680->vdev);
error:
kfree(stv680);
return retval;
kfree (stv680->sbuf[i].data);
}
for (i = 0; i < STV680_NUMSCRATCH; i++)
- if (stv680->scratch[i].data) {
- kfree (stv680->scratch[i].data);
- }
+ kfree (stv680->scratch[i].data);
PDEBUG (0, "STV(i): %s disconnected", stv680->camera_name);
/* Free the memory */
buf += i;
length -= i;
- i = snprintf (buf, length, " (");
+ i = scnprintf (buf, length, " (");
buf += i;
length -= i;
int last;
};
-#define NUM_SUBCASES 13 /* how many test subcases here? */
+#define NUM_SUBCASES 15 /* how many test subcases here? */
struct subcase {
struct usb_ctrlrequest setup;
req.wValue = cpu_to_le16 (USB_DT_STRING << 8);
// string == 0, for language IDs
len = sizeof (struct usb_interface_descriptor);
+ // may succeed when > 4 languages
expected = EREMOTEIO; // or EPIPE, if no strings
break;
+ case 13: // short read, resembling case 10
+ req.wValue = cpu_to_le16 ((USB_DT_CONFIG << 8) | 0);
+ // last data packet "should" be DATA1, not DATA0
+ len = 1024 - udev->epmaxpacketin [0];
+ expected = -EREMOTEIO;
+ break;
+ case 14: // short read; try to fill the last packet
+ req.wValue = cpu_to_le16 ((USB_DT_DEVICE << 8) | 0);
+ // device descriptor size == 18 bytes
+ len = udev->epmaxpacketin [0];
+ switch (len) {
+ case 8: len = 24; break;
+ case 16: len = 32; break;
+ }
+ expected = -EREMOTEIO;
+ break;
default:
err ("bogus number of ctrl queue testcases!");
context.status = -EINVAL;
usb_free_urb(catc->rx_urb);
if (catc->irq_urb)
usb_free_urb(catc->irq_urb);
- kfree(netdev);
+ free_netdev(netdev);
kfree(catc);
return -ENOMEM;
}
usb_free_urb(catc->tx_urb);
usb_free_urb(catc->rx_urb);
usb_free_urb(catc->irq_urb);
- kfree(netdev);
+ free_netdev(netdev);
kfree(catc);
return -EIO;
}
err_only_tx:
usb_free_urb(kaweth->tx_urb);
err_no_urb:
- kfree(netdev);
+ free_netdev(netdev);
err_no_netdev:
kfree(kaweth);
return -EIO;
usb_set_intfdata(intf, NULL);
free_skb_pool(pegasus);
out3:
- kfree(net);
+ free_netdev(net);
out2:
free_all_urbs(pegasus);
out1:
free_all_urbs(dev);
out:
kfree(dev->intr_buff);
- kfree(netdev);
+ free_netdev(netdev);
kfree(dev);
return -EIO;
}
if (dev->driver_info->unbind)
dev->driver_info->unbind (dev, intf);
- kfree(dev->net);
+ free_netdev(dev->net);
kfree (dev);
usb_put_dev (xdev);
}
if (info->unbind)
info->unbind (dev, udev);
out2:
- kfree(net);
+ free_netdev(net);
out1:
kfree(dev);
out:
 * See http://ftdi-usb-sio.sourceforge.net for up-to-date testing info
* and extra documentation
*
+ * (09/Feb/2004) Ian Abbott
+ * Changed full name of USB-UIRT device to avoid "/" character.
+ * Added FTDI's alternate PID (0x6006) for FT232/245 devices.
+ * Added PID for "ELV USB Module UO100" from Stefan Frings.
+ *
* (21/Oct/2003) Ian Abbott
* Renamed some VID/PID macros for Matrix Orbital and Perle Systems
* devices. Removed Matrix Orbital and Perle Systems devices from the
static struct usb_device_id id_table_8U232AM [] = {
{ USB_DEVICE_VER(FTDI_VID, FTDI_8U232AM_PID, 0, 0x3ff) },
+ { USB_DEVICE_VER(FTDI_VID, FTDI_8U232AM_ALT_PID, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_VID, FTDI_RELAIS_PID, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_NF_RIC_VID, FTDI_NF_RIC_PID, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_VID, FTDI_XF_632_PID, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_R2X0, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_3, 0, 0x3ff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_4, 0, 0x3ff) },
+ { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0, 0x3ff) },
{ } /* Terminating entry */
};
static struct usb_device_id id_table_FT232BM [] = {
{ USB_DEVICE_VER(FTDI_VID, FTDI_8U232AM_PID, 0x400, 0xffff) },
+ { USB_DEVICE_VER(FTDI_VID, FTDI_8U232AM_ALT_PID, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_VID, FTDI_RELAIS_PID, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_NF_RIC_VID, FTDI_NF_RIC_PID, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_VID, FTDI_XF_632_PID, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_R2X0, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_3, 0x400, 0xffff) },
{ USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_4, 0x400, 0xffff) },
+ { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0x400, 0xffff) },
{ } /* Terminating entry */
};
static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_SIO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_8U232AM_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_8U232AM_ALT_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_RELAIS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_XF_632_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_XF_634_PID) },
{ USB_DEVICE(FTDI_VID, PROTEGO_R2X0) },
{ USB_DEVICE(FTDI_VID, PROTEGO_SPECIAL_3) },
{ USB_DEVICE(FTDI_VID, PROTEGO_SPECIAL_4) },
+ { USB_DEVICE(FTDI_VID, FTDI_ELV_UO100_PID) },
{ } /* Terminating entry */
};
static struct usb_serial_device_type ftdi_USB_UIRT_device = {
.owner = THIS_MODULE,
- .name = "USB-UIRT Infrared Receiver/Transmitter",
+ .name = "USB-UIRT Infrared Transceiver",
.id_table = id_table_USB_UIRT,
.num_interrupt_in = 0,
.num_bulk_in = 1,
#define FTDI_VID 0x0403 /* Vendor Id */
#define FTDI_SIO_PID 0x8372 /* Product Id SIO application of 8U100AX */
#define FTDI_8U232AM_PID 0x6001 /* Similar device to SIO above */
+#define FTDI_8U232AM_ALT_PID 0x6006 /* FTDI's alternate PID for above */
#define FTDI_RELAIS_PID 0xFA10 /* Relais device from Rudolf Gugler */
#define FTDI_NF_RIC_VID 0x0DCD /* Vendor Id */
#define FTDI_NF_RIC_PID 0x0001 /* Product Id */
/* http://home.earthlink.net/~jrhees/USBUIRT/index.htm */
#define FTDI_USB_UIRT_PID 0xF850 /* Product Id */
+/* ELV USB Module UO100 (PID sent by Stefan Frings) */
+#define FTDI_ELV_UO100_PID 0xFB58 /* Product Id */
+
/*
* Definitions for ID TECH (www.idt-net.com) devices
*/
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+/*
+ * Known vendor commands: 12 bytes, first byte is opcode
+ *
+ * E7: read scatter gather
+ * E8: read
+ * E9: write
+ * EA: erase
+ * EB: reset
+ * EC: read status
+ * ED: read ID
+ * EE: write CIS (?)
+ * EF: compute checksum (?)
+ */
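The 12-byte command framing described by this opcode table can be sketched as follows. The field layout is taken from the read/write/erase command comments in this file (address big-endian in bytes 2-5, counting shorts; count in bytes 10-11); the helper name is illustrative, not part of the driver:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: build a 12-byte SDDR09 vendor read command.
 * Byte 0 is the opcode (E8 = read); bytes 2-5 carry the address,
 * big-endian, counting shorts; bytes 10-11 carry the sector count. */
static void sddr09_build_read(uint8_t cmd[12], uint32_t address,
			      uint16_t count)
{
	memset(cmd, 0, 12);
	cmd[0] = 0xE8;			/* read opcode */
	cmd[2] = (address >> 24) & 0xFF;
	cmd[3] = (address >> 16) & 0xFF;
	cmd[4] = (address >> 8) & 0xFF;
	cmd[5] = address & 0xFF;
	cmd[10] = (count >> 8) & 0xFF;
	cmd[11] = count & 0xFF;
}
```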
+
#include "transport.h"
#include "protocol.h"
#include "usb.h"
*
* Always precisely one block is erased; bytes 2-5 and 10-11 are ignored.
* The byte address being erased is 2*Eaddress.
+ * The CIS cannot be erased.
*/
static int
sddr09_erase(struct us_data *us, unsigned long Eaddress) {
}
/*
+ * Write CIS Command: 12 bytes.
+ * byte 0: opcode: EE
+ * bytes 2-5: write address in shorts
+ * bytes 10-11: sector count
+ *
+ * This writes at the indicated address. Don't know how it differs
+ * from E9. Maybe it does not erase? However, it will also write to
+ * the CIS.
+ *
+ * When two such commands on the same page follow each other directly,
+ * the second one is not done.
+ */
+
+/*
* Write Command: 12 bytes.
* byte 0: opcode: E9
* bytes 2-5: write address (big-endian, counting shorts, sector aligned).
"mode page 0x%x\n", modepage);
memcpy(ptr, mode_page_01, sizeof(mode_page_01));
- ((u16*)ptr)[0] = sizeof(mode_page_01) - 2;
+ ((u16*)ptr)[0] = cpu_to_be16(sizeof(mode_page_01) - 2);
ptr[3] = (info->flags & SDDR09_WP) ? 0x80 : 0;
usb_stor_set_xfer_buf(ptr, sizeof(mode_page_01), srb);
return USB_STOR_TRANSPORT_GOOD;
return;
}
+ srb->result = SAM_STAT_GOOD;
+
/* Determine if we need to auto-sense
*
* I normally don't use a flag like this, but it's almost impossible
/*
* If we're running the CB transport, which is incapable
- * of determining status on it's own, we need to auto-sense almost
- * every time.
+ * of determining status on its own, we need to auto-sense
+ * unless the operation involved a data-in transfer. Devices
+ * can signal data-in errors by stalling the bulk-in pipe.
*/
- if (us->protocol == US_PR_CB || us->protocol == US_PR_DPCM_USB) {
+ if ((us->protocol == US_PR_CB || us->protocol == US_PR_DPCM_USB) &&
+ srb->sc_data_direction != SCSI_DATA_READ) {
US_DEBUGP("-- CB transport device requiring auto-sense\n");
need_auto_sense = 1;
-
- /* There are some exceptions to this. Notably, if this is
- * a UFI device and the command is REQUEST_SENSE or INQUIRY,
- * then it is impossible to truly determine status.
- */
- if (us->subclass == US_SC_UFI &&
- ((srb->cmnd[0] == REQUEST_SENSE) ||
- (srb->cmnd[0] == INQUIRY))) {
- US_DEBUGP("** no auto-sense for a special command\n");
- need_auto_sense = 0;
- }
}
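The auto-sense rule introduced above reduces to a small predicate: CB-style transports cannot report status on their own, so auto-sense is needed unless the operation was a data-in transfer (where the device can signal an error by stalling the bulk-in pipe). A standalone sketch, with made-up enum values standing in for the kernel constants:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the US_PR_* and SCSI_DATA_* constants */
enum proto { US_PR_CB, US_PR_DPCM_USB, US_PR_BULK };
enum dir { SCSI_DATA_READ, SCSI_DATA_WRITE, SCSI_DATA_NONE };

/* Hypothetical predicate mirroring the rule above: auto-sense for
 * CB/DPCM transports on anything except a data-in transfer. */
static bool cb_needs_auto_sense(enum proto p, enum dir d)
{
	return (p == US_PR_CB || p == US_PR_DPCM_USB) &&
		d != SCSI_DATA_READ;
}
```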
/*
}
/*
- * Also, if we have a short transfer on a command that can't have
- * a short transfer, we're going to do this.
+ * A short transfer on a command where we don't expect it
+ * is unusual, but it doesn't mean we need to auto-sense.
*/
if ((srb->resid > 0) &&
!((srb->cmnd[0] == REQUEST_SENSE) ||
(srb->cmnd[0] == LOG_SENSE) ||
(srb->cmnd[0] == MODE_SENSE_10))) {
US_DEBUGP("-- unexpectedly short transfer\n");
- need_auto_sense = 1;
}
/* Now, if we need to do the auto-sense, let's do it */
unsigned char old_cmd_len;
unsigned char old_cmnd[MAX_COMMAND_SIZE];
unsigned long old_serial_number;
+ int old_resid;
US_DEBUGP("Issuing auto-REQUEST_SENSE\n");
srb->serial_number ^= 0x80000000;
/* issue the auto-sense command */
+ old_resid = srb->resid;
+ srb->resid = 0;
temp_result = us->transport(us->srb, us);
/* let's clean up right away */
+ srb->resid = old_resid;
srb->request_buffer = old_request_buffer;
srb->request_bufflen = old_request_bufflen;
srb->use_sg = old_sg;
/* set the result so the higher layers expect this data */
srb->result = SAM_STAT_CHECK_CONDITION;
- /* If things are really okay, then let's show that */
- if ((srb->sense_buffer[2] & 0xf) == 0x0)
+ /* If things are really okay, then let's show that. Zero
+ * out the sense buffer so the higher layers won't realize
+ * we did an unsolicited auto-sense. */
+ if (result == USB_STOR_TRANSPORT_GOOD &&
+ (srb->sense_buffer[2] & 0xf) == 0x0) {
srb->result = SAM_STAT_GOOD;
- } else /* if (need_auto_sense) */
- srb->result = SAM_STAT_GOOD;
-
- /* Regardless of auto-sense, if we _know_ we have an error
- * condition, show that in the result code
- */
- if (result == USB_STOR_TRANSPORT_FAILED)
- srb->result = SAM_STAT_CHECK_CONDITION;
-
- /* If we think we're good, then make sure the sense data shows it.
- * This is necessary because the auto-sense for some devices always
- * sets byte 0 == 0x70, even if there is no error
- */
- if ((us->protocol == US_PR_CB || us->protocol == US_PR_DPCM_USB) &&
- (result == USB_STOR_TRANSPORT_GOOD) &&
- ((srb->sense_buffer[2] & 0xf) == 0x0))
- srb->sense_buffer[0] = 0x0;
+ srb->sense_buffer[0] = 0x0;
+ }
+ }
return;
/* abort processing: the bulk-only transport requires a reset
srb->request_buffer, transfer_length,
srb->use_sg, &srb->resid);
US_DEBUGP("CBI data stage result is 0x%x\n", result);
+
+ /* if we stalled the data transfer it means command failed */
+ if (result == USB_STOR_XFER_STALLED)
+ return USB_STOR_TRANSPORT_FAILED;
if (result > USB_STOR_XFER_STALLED)
return USB_STOR_TRANSPORT_ERROR;
}
srb->request_buffer, transfer_length,
srb->use_sg, &srb->resid);
US_DEBUGP("CB data stage result is 0x%x\n", result);
+
+ /* if we stalled the data transfer it means command failed */
+ if (result == USB_STOR_XFER_STALLED)
+ return USB_STOR_TRANSPORT_FAILED;
if (result > USB_STOR_XFER_STALLED)
return USB_STOR_TRANSPORT_ERROR;
}
unsigned int residue;
int result;
int fake_sense = 0;
+ unsigned int cswlen;
/* set up the command wrapper */
bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN);
/* get CSW for device status */
US_DEBUGP("Attempting to get CSW...\n");
result = usb_stor_bulk_transfer_buf(us, us->recv_bulk_pipe,
- bcs, US_BULK_CS_WRAP_LEN, NULL);
+ bcs, US_BULK_CS_WRAP_LEN, &cswlen);
+
+ /* Some broken devices add unnecessary zero-length packets to the
+ * end of their data transfers. Such packets show up as 0-length
+ * CSWs. If we encounter such a thing, try to read the CSW again.
+ */
+ if (result == USB_STOR_XFER_SHORT && cswlen == 0) {
+ US_DEBUGP("Received 0-length CSW; retrying...\n");
+ result = usb_stor_bulk_transfer_buf(us, us->recv_bulk_pipe,
+ bcs, US_BULK_CS_WRAP_LEN, &cswlen);
+ }
/* did the attempt to read the CSW fail? */
if (result == USB_STOR_XFER_STALLED) {
"Finecam S5",
US_SC_DEVICE, US_PR_DEVICE, NULL, US_FL_FIX_INQUIRY),
+/* Patch for Kyocera Finecam L3
+ * Submitted by Michael Krauth <michael.krauth@web.de>
+ */
+UNUSUAL_DEV( 0x0482, 0x0105, 0x0100, 0x0100,
+ "Kyocera",
+ "Finecam L3",
+ US_SC_SCSI, US_PR_BULK, NULL,
+ US_FL_FIX_INQUIRY),
+
/* Reported by Paul Stewart <stewart@wetlogic.net>
* This entry is needed because the device reports Sub=ff */
UNUSUAL_DEV( 0x04a4, 0x0004, 0x0001, 0x0001,
UNUSUAL_DEV( 0x04cb, 0x0100, 0x0000, 0x2210,
"Fujifilm",
"FinePix 1400Zoom",
- US_SC_DEVICE, US_PR_DEVICE, NULL, US_FL_FIX_INQUIRY),
+ US_SC_UFI, US_PR_DEVICE, NULL, US_FL_FIX_INQUIRY),
/* Reported by Peter Wächtler <pwaechtler@loewe-komp.de>
* The device needs the flags only.
UNUSUAL_DEV( 0x04e6, 0x0002, 0x0100, 0x0100,
"Shuttle",
"eUSCSI Bridge",
- US_SC_SCSI, US_PR_BULK, usb_stor_euscsi_init,
+ US_SC_DEVICE, US_PR_DEVICE, usb_stor_euscsi_init,
US_FL_SCM_MULT_TARG ),
#ifdef CONFIG_USB_STORAGE_SDDR09
"Memorystick MSC-U01N",
US_SC_DEVICE, US_PR_DEVICE, NULL,
US_FL_SINGLE_LUN ),
+
+/* Submitted by Michal Mlotek <mlotek@foobar.pl> */
+UNUSUAL_DEV( 0x054c, 0x0058, 0x0000, 0x9999,
+ "Sony",
+ "PEG N760c Memorystick",
+ US_SC_DEVICE, US_PR_DEVICE, NULL,
+ US_FL_FIX_INQUIRY ),
UNUSUAL_DEV( 0x054c, 0x0069, 0x0000, 0x9999,
"Sony",
US_FL_SINGLE_LUN ),
#endif
+/* Following three Minolta cameras reported by Martin Pool
+ * <mbp@sourcefrog.net>. Originally discovered by Kedar Petankar,
+ * Matthew Geier, Mikael Lofj"ard, Marcel de Boer.
+ */
+UNUSUAL_DEV( 0x0686, 0x4006, 0x0001, 0x0001,
+ "Minolta",
+ "DiMAGE 7",
+ US_SC_SCSI, US_PR_DEVICE, NULL,
+ 0 ),
+
+UNUSUAL_DEV( 0x0686, 0x400b, 0x0001, 0x0001,
+ "Minolta",
+ "DiMAGE 7i",
+ US_SC_SCSI, US_PR_DEVICE, NULL,
+ 0 ),
+
+UNUSUAL_DEV( 0x0686, 0x400f, 0x0001, 0x0001,
+ "Minolta",
+ "DiMAGE 7Hi",
+ US_SC_SCSI, US_PR_DEVICE, NULL,
+ 0 ),
+
/* Submitted by Benny Sjostrand <benny@hostmobility.com> */
UNUSUAL_DEV( 0x0686, 0x4011, 0x0001, 0x0001,
"Minolta",
"DIMAGE E223",
US_SC_SCSI, US_PR_DEVICE, NULL, 0 ),
-/* Following three Minolta cameras reported by Martin Pool
- * <mbp@sourcefrog.net>. Originally discovered by Kedar Petankar,
- * Matthew Geier, Mikael Lofj"ard, Marcel de Boer.
- */
-UNUSUAL_DEV( 0x0686, 0x4006, 0x0001, 0x0001,
- "Minolta",
- "DiMAGE 7",
- US_SC_SCSI, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0686, 0x400b, 0x0001, 0x0001,
- "Minolta",
- "DiMAGE 7i",
- US_SC_SCSI, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0686, 0x400f, 0x0001, 0x0001,
- "Minolta",
- "DiMAGE 7Hi",
- US_SC_SCSI, US_PR_DEVICE, NULL,
- 0 ),
-
UNUSUAL_DEV( 0x0693, 0x0002, 0x0100, 0x0100,
"Hagiwara",
"FlashGate SmartMedia",
UNUSUAL_DEV( 0x07cf, 0x1001, 0x1000, 0x9009,
"Casio",
"QV DigitalCamera",
- US_SC_8070, US_PR_CB, NULL,
+ US_SC_DEVICE, US_PR_CB, NULL,
US_FL_FIX_INQUIRY ),
/* Later Casio cameras apparently tell the truth */
US_SC_DEVICE, US_PR_DEVICE, NULL,
US_FL_MODE_XLATE ),
-/*Medion 6047 Digital Camera
-Davide Andrian <_nessuno_@katamail.com>
-*/
-UNUSUAL_DEV( 0x08ca, 0x2011, 0x0001, 0x0001,
- "3MegaCam",
- "3MegaCam",
- US_SC_DEVICE, US_PR_BULK, NULL,
- US_FL_MODE_XLATE ),
-
/* Trumpion Microelectronics MP3 player (felipe_alfaro@linuxmail.org) */
UNUSUAL_DEV( 0x090a, 0x1200, 0x0000, 0x9999,
"Trumpion",
"JD 5200 z3",
US_SC_DEVICE, US_PR_DEVICE, NULL, US_FL_FIX_INQUIRY),
+/* Reported by Lubomir Blaha <tritol@trilogic.cz>
+ * I _REALLY_ don't know what the 3rd and 4th numbers and all the defines
+ * mean, but this works for me. Can anybody correct these values? (I am
+ * able to test a corrected version.)
+ */
+UNUSUAL_DEV( 0x0dd8, 0x1060, 0x0000, 0xffff,
+ "Netac",
+ "USB-CF-Card",
+ US_SC_DEVICE, US_PR_DEVICE, NULL,
+ US_FL_FIX_INQUIRY ),
+
/* Submitted by Antoine Mairesse <antoine.mairesse@free.fr> */
UNUSUAL_DEV( 0x0ed1, 0x6660, 0x0100, 0x0300,
"USB",
For most cases you probably want to say N.
+#
+# Intermezzo broke when we added the expanded NGROUPS patches
+#
config INTERMEZZO_FS
tristate "InterMezzo file system support (replicating fs) (EXPERIMENTAL)"
- depends on INET && EXPERIMENTAL
+ depends on INET && EXPERIMENTAL && BROKEN
help
InterMezzo is a networked file system with disconnected operation
and kernel level write back caching. It is most often used for
help
Support FLAT format compressed binaries
+config BINFMT_SHARED_FLAT
+ bool "Enable shared FLAT support"
+ depends on BINFMT_FLAT
+ help
+ Support FLAT shared libraries
+
config BINFMT_AOUT
tristate "Kernel support for a.out and ECOFF binaries"
depends on (X86 && !X86_64) || ALPHA || ARM || M68K || MIPS || SPARC
/* Inode stuff */
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
int adfs_get_block(struct inode *inode, sector_t block,
struct buffer_head *bh, int create);
-#else
-int adfs_bmap(struct inode *inode, int block);
-#endif
struct inode *adfs_iget(struct super_block *sb, struct object_info *obj);
void adfs_read_inode(struct inode *inode);
void adfs_write_inode(struct inode *inode,int unused);
unsigned char *buf;
z_stream strm;
loff_t fpos;
- int ret;
+ int ret, retval;
DBG_FLT("decompress_exec(offset=%x,buf=%x,len=%x)\n",(int)offset, (int)dst, (int)len);
buf = kmalloc(LBUFSIZE, GFP_KERNEL);
if (buf == NULL) {
DBG_FLT("binfmt_flat: no memory for read buffer\n");
- return -ENOMEM;
+ retval = -ENOMEM;
+ goto out_free;
}
/* Read in first chunk of data and parse gzip header. */
strm.avail_in = ret;
strm.total_in = 0;
+ retval = -ENOEXEC;
+
/* Check minimum size -- gzip header */
if (ret < 10) {
DBG_FLT("binfmt_flat: file too small?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
/* Check gzip magic number */
if ((buf[0] != 037) || ((buf[1] != 0213) && (buf[1] != 0236))) {
DBG_FLT("binfmt_flat: unknown compression magic?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
/* Check gzip method */
if (buf[2] != 8) {
DBG_FLT("binfmt_flat: unknown compression method?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
/* Check gzip flags */
if ((buf[3] & ENCRYPTED) || (buf[3] & CONTINUATION) ||
(buf[3] & RESERVED)) {
DBG_FLT("binfmt_flat: unknown flags?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
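The header checks above (minimum size, magic, method, flags) can be collected into one standalone validator. The flag bit values below follow gzip's historical header flags and may differ from the kernel's defines; treat them and the function name as illustrative:

```c
/* Illustrative gzip FLG bits (historical gzip.h values) */
#define CONTINUATION 0x02
#define ENCRYPTED    0x20
#define RESERVED     0xC0

/* Sketch of the gzip header validation sequence used by
 * decompress_exec(): returns 0 on a usable header, -1 otherwise. */
static int check_gzip_header(const unsigned char *buf, int len)
{
	if (len < 10)
		return -1;		/* too small for a gzip header */
	if (buf[0] != 037 || (buf[1] != 0213 && buf[1] != 0236))
		return -1;		/* bad magic number */
	if (buf[2] != 8)
		return -1;		/* only deflate is supported */
	if (buf[3] & (ENCRYPTED | CONTINUATION | RESERVED))
		return -1;		/* unsupported header flags */
	return 0;
}
```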
ret = 10;
ret += 2 + buf[10] + (buf[11] << 8);
if (unlikely(LBUFSIZE == ret)) {
DBG_FLT("binfmt_flat: buffer overflow (EXTRA)?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
}
if (buf[3] & ORIG_NAME) {
;
if (unlikely(LBUFSIZE == ret)) {
DBG_FLT("binfmt_flat: buffer overflow (ORIG_NAME)?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
}
if (buf[3] & COMMENT) {
;
if (unlikely(LBUFSIZE == ret)) {
DBG_FLT("binfmt_flat: buffer overflow (COMMENT)?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
}
if (zlib_inflateInit2(&strm, -MAX_WBITS) != Z_OK) {
DBG_FLT("binfmt_flat: zlib init failed?\n");
- return -ENOEXEC;
+ goto out_free_buf;
}
while ((ret = zlib_inflate(&strm, Z_NO_FLUSH)) == Z_OK) {
if (ret < 0) {
DBG_FLT("binfmt_flat: decompression failed (%d), %s\n",
ret, strm.msg);
- return -ENOEXEC;
+ goto out_zlib;
}
+ retval = 0;
+out_zlib:
zlib_inflateEnd(&strm);
+out_free_buf:
kfree(buf);
+out_free:
kfree(strm.workspace);
- return 0;
+out:
+ return retval;
}
#endif /* CONFIG_BINFMT_ZFLAT */
* to be used for internal purposes. If you ever need it - reconsider
* your API.
*/
-struct block_device *open_by_devnum(dev_t dev, unsigned mode, int kind)
+struct block_device *open_by_devnum(dev_t dev, unsigned mode)
{
struct block_device *bdev = bdget(dev);
int err = -ENOMEM;
int flags = mode & FMODE_WRITE ? O_RDWR : O_RDONLY;
if (bdev)
- err = blkdev_get(bdev, mode, flags, kind);
+ err = blkdev_get(bdev, mode, flags);
return err ? ERR_PTR(err) : bdev;
}
static void bd_set_size(struct block_device *bdev, loff_t size)
{
unsigned bsize = bdev_hardsect_size(bdev);
- i_size_write(bdev->bd_inode, size);
+
+ bdev->bd_inode->i_size = size;
while (bsize < PAGE_CACHE_SIZE) {
if (size & bsize)
break;
ret = -ENOMEM;
if (!whole)
goto out_first;
- ret = blkdev_get(whole, file->f_mode, file->f_flags, BDEV_RAW);
+ ret = blkdev_get(whole, file->f_mode, file->f_flags);
if (ret)
goto out_first;
bdev->bd_contains = whole;
bdev->bd_disk = NULL;
bdev->bd_inode->i_data.backing_dev_info = &default_backing_dev_info;
if (bdev != bdev->bd_contains)
- blkdev_put(bdev->bd_contains, BDEV_RAW);
+ blkdev_put(bdev->bd_contains);
bdev->bd_contains = NULL;
put_disk(disk);
module_put(owner);
return ret;
}
-int blkdev_get(struct block_device *bdev, mode_t mode, unsigned flags, int kind)
+int blkdev_get(struct block_device *bdev, mode_t mode, unsigned flags)
{
/*
* This crockload is due to bad choice of ->open() type.
if (!(res = bd_claim(bdev, filp)))
return 0;
- blkdev_put(bdev, BDEV_FILE);
+ blkdev_put(bdev);
return res;
}
EXPORT_SYMBOL(blkdev_open);
-int blkdev_put(struct block_device *bdev, int kind)
+int blkdev_put(struct block_device *bdev)
{
int ret = 0;
struct inode *bd_inode = bdev->bd_inode;
bdev->bd_disk = NULL;
bdev->bd_inode->i_data.backing_dev_info = &default_backing_dev_info;
if (bdev != bdev->bd_contains) {
- blkdev_put(bdev->bd_contains, BDEV_RAW);
+ blkdev_put(bdev->bd_contains);
}
bdev->bd_contains = NULL;
}
struct block_device *bdev = I_BDEV(filp->f_mapping->host);
if (bdev->bd_holder == filp)
bd_release(bdev);
- return blkdev_put(bdev, BDEV_FILE);
+ return blkdev_put(bdev);
}
static ssize_t blkdev_file_write(struct file *file, const char __user *buf,
*
* @path: special file representing the block device
* @flags: %MS_RDONLY for opening read-only
- * @kind: usage (same as the 4th paramter to blkdev_get)
* @holder: owner for exclusion
*
* Open the blockdevice described by the special file at @path, claim it
- * for the @holder and properly set it up for @kind usage.
+ * for the @holder.
*/
-struct block_device *open_bdev_excl(const char *path, int flags,
- int kind, void *holder)
+struct block_device *open_bdev_excl(const char *path, int flags, void *holder)
{
struct block_device *bdev;
mode_t mode = FMODE_READ;
if (!(flags & MS_RDONLY))
mode |= FMODE_WRITE;
- error = blkdev_get(bdev, mode, 0, kind);
+ error = blkdev_get(bdev, mode, 0);
if (error)
return ERR_PTR(error);
error = -EACCES;
return bdev;
blkdev_put:
- blkdev_put(bdev, BDEV_FS);
+ blkdev_put(bdev);
return ERR_PTR(error);
}
 * close_bdev_excl - release a blockdevice opened by open_bdev_excl()
*
* @bdev: blockdevice to close
- * @kind: usage (same as the 4th paramter to blkdev_get)
*
* This is the counterpart to open_bdev_excl().
*/
-void close_bdev_excl(struct block_device *bdev, int kind)
+void close_bdev_excl(struct block_device *bdev)
{
bd_release(bdev);
- blkdev_put(bdev, kind);
+ blkdev_put(bdev);
}
EXPORT_SYMBOL(close_bdev_excl);
}
}
-static void buffer_init_cpu(int cpu)
-{
- struct bh_accounting *bha = &per_cpu(bh_accounting, cpu);
- struct bh_lru *bhl = &per_cpu(bh_lrus, cpu);
-
- bha->nr = 0;
- bha->ratelimit = 0;
- memset(bhl, 0, sizeof(*bhl));
-}
-
-static int __devinit buffer_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
-{
- long cpu = (long)hcpu;
- switch(action) {
- case CPU_UP_PREPARE:
- buffer_init_cpu(cpu);
- break;
- default:
- break;
- }
- return NOTIFY_OK;
-}
-
-static struct notifier_block __devinitdata buffer_nb = {
- .notifier_call = buffer_cpu_notify,
-};
void __init buffer_init(void)
{
*/
nrpages = (nr_free_buffer_pages() * 10) / 100;
max_buffer_heads = nrpages * (PAGE_SIZE / sizeof(struct buffer_head));
- buffer_cpu_notify(&buffer_nb, (unsigned long)CPU_UP_PREPARE,
- (void *)(long)smp_processor_id());
- register_cpu_notifier(&buffer_nb);
}
EXPORT_SYMBOL(__bforget);
return ino;
}
+static __initdata unsigned long dhash_entries;
+static int __init set_dhash_entries(char *str)
+{
+ if (!str)
+ return 0;
+ dhash_entries = simple_strtoul(str, &str, 0);
+ return 1;
+}
+__setup("dhash_entries=", set_dhash_entries);
+
static void __init dcache_init(unsigned long mempages)
{
struct hlist_head *d;
set_shrinker(DEFAULT_SEEKS, shrink_dcache_memory);
-#if PAGE_SHIFT < 13
- mempages >>= (13 - PAGE_SHIFT);
-#endif
- mempages *= sizeof(struct hlist_head);
- for (order = 0; ((1UL << order) << PAGE_SHIFT) < mempages; order++)
+ if (!dhash_entries)
+ dhash_entries = PAGE_SHIFT < 13 ?
+ mempages >> (13 - PAGE_SHIFT) :
+ mempages << (PAGE_SHIFT - 13);
+
+ dhash_entries *= sizeof(struct hlist_head);
+ for (order = 0; ((1UL << order) << PAGE_SHIFT) < dhash_entries; order++)
;
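The sizing logic just above can be sketched in isolation: absent a `dhash_entries=` override, the table defaults to one bucket per 8 KB of memory (the `13 - PAGE_SHIFT` shift), and the byte total is then rounded up to a power-of-two page order. The constants below are illustrative (4 KB pages, 8-byte `hlist_head`), and the function is a userspace stand-in, not kernel code:

```c
#define PAGE_SHIFT 12		/* illustrative: 4 KB pages */
#define PTR_SIZE   8		/* illustrative sizeof(struct hlist_head) */

/* Mirror of the dcache_init() sizing loop: smallest page order
 * whose span covers dhash_entries bucket pointers. */
static int dhash_order(unsigned long mempages, unsigned long dhash_entries)
{
	int order;

	if (!dhash_entries)
		dhash_entries = PAGE_SHIFT < 13 ?
			mempages >> (13 - PAGE_SHIFT) :
			mempages << (PAGE_SHIFT - 13);

	dhash_entries *= PTR_SIZE;
	for (order = 0; ((1UL << order) << PAGE_SHIFT) < dhash_entries; order++)
		;
	return order;
}
```

For 1 GB of RAM (262144 pages of 4 KB) with no override, this yields 131072 buckets, 1 MB of pointers, hence order 8.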
do {
#include <linux/smp.h>
#include <linux/rwsem.h>
#include <linux/sched.h>
+#include <linux/namei.h>
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/bitops.h>
#include <asm/atomic.h>
-#include "internal.h"
-
-#define DEVFS_VERSION "1.22 (20021013)"
+#define DEVFS_VERSION "2004-01-31"
#define DEVFS_NAME "devfs"
unsigned char no_more_additions:1;
};
-struct bdev_type
-{
- dev_t dev;
-};
-
-struct cdev_type
-{
- struct file_operations *ops;
- dev_t dev;
- unsigned char autogen:1;
-};
-
struct symlink_type
{
unsigned int length; /* Not including the NULL-terminator */
union
{
struct directory_type dir;
- struct bdev_type bdev;
- struct cdev_type cdev;
+ dev_t dev;
struct symlink_type symlink;
const char *name; /* Only used for (mode == 0) */
}
struct devfs_inode inode;
umode_t mode;
unsigned short namelen; /* I think 64k+ filenames are a way off... */
- unsigned char vfs_deletable:1;/* Whether the VFS may delete the entry */
+ unsigned char vfs:1; /* Whether the VFS may delete the entry */
char name[1]; /* This is just a dummy: the allocated array
is bigger. This is NULL-terminated */
};
de->name, de, de->parent,
de->parent ? de->parent->name : "no parent");
if ( S_ISLNK (de->mode) ) kfree (de->u.symlink.linkname);
- if ( S_ISCHR (de->mode) && de->u.cdev.autogen )
- devfs_dealloc_devnum (de->mode, de->u.cdev.dev);
WRITE_ENTRY_MAGIC (de, 0);
#ifdef CONFIG_DEVFS_DEBUG
spin_lock (&stat_lock);
return retval;
} /* End Function _devfs_append_entry */
-
/**
* _devfs_get_root_entry - Get the root devfs entry.
*
* Returns the root devfs entry on success, else %NULL.
+ *
+ * TODO: this must be called asynchronously, because devfs is
+ * initialized relatively late. The proper fix is to remove
+ * module_init from init_devfs_fs and call it manually early
+ * enough during system init.
*/
-static struct devfs_entry *_devfs_get_root_entry (void)
+static struct devfs_entry *_devfs_get_root_entry(void)
{
- struct devfs_entry *new;
- static spinlock_t root_lock = SPIN_LOCK_UNLOCKED;
+ struct devfs_entry *new;
+ static spinlock_t root_lock = SPIN_LOCK_UNLOCKED;
- /* Always ensure the root is created */
- if (root_entry) return root_entry;
- if ( ( new = _devfs_alloc_entry (NULL, 0,MODE_DIR) ) == NULL ) return NULL;
- spin_lock (&root_lock);
- if (root_entry)
- {
- spin_unlock (&root_lock);
- devfs_put (new);
- return (root_entry);
- }
- root_entry = new;
- spin_unlock (&root_lock);
- /* And create the entry for ".devfsd" */
- if ( ( new = _devfs_alloc_entry (".devfsd", 0, S_IFCHR |S_IRUSR |S_IWUSR) )
- == NULL ) return NULL;
- new->u.cdev.dev = devfs_alloc_devnum (S_IFCHR |S_IRUSR |S_IWUSR);
- new->u.cdev.ops = &devfsd_fops;
- _devfs_append_entry (root_entry, new, NULL);
-#ifdef CONFIG_DEVFS_DEBUG
- if ( ( new = _devfs_alloc_entry (".stat", 0, S_IFCHR | S_IRUGO | S_IWUGO) )
- == NULL ) return NULL;
- new->u.cdev.dev = devfs_alloc_devnum (S_IFCHR | S_IRUGO | S_IWUGO);
- new->u.cdev.ops = &stat_fops;
- _devfs_append_entry (root_entry, new, NULL);
-#endif
- return root_entry;
-} /* End Function _devfs_get_root_entry */
+ if (root_entry)
+ return root_entry;
+
+ new = _devfs_alloc_entry(NULL, 0, MODE_DIR);
+ if (new == NULL)
+ return NULL;
+ spin_lock(&root_lock);
+ if (root_entry) {
+ spin_unlock(&root_lock);
+ devfs_put(new);
+ return root_entry;
+ }
+ root_entry = new;
+ spin_unlock(&root_lock);
+
+ return root_entry;
+} /* End Function _devfs_get_root_entry */
/**
* _devfs_descend - Descend down a tree using the next component name.
}
if (S_ISLNK (de->mode) && traverse_symlink)
{ /* Need to follow the link: this is a stack chomper */
+ /* FIXME: what if it points outside the mounted tree? */
link = _devfs_walk_path (dir, de->u.symlink.linkname,
de->u.symlink.length, TRUE);
devfs_put (de);
current->egid, &fs_info);
}
-int devfs_mk_bdev(dev_t dev, umode_t mode, const char *fmt, ...)
+static int devfs_mk_dev(dev_t dev, umode_t mode, const char *fmt, va_list args)
{
struct devfs_entry *dir = NULL, *de;
char buf[64];
- va_list args;
int error, n;
- va_start(args, fmt);
- n = vsnprintf(buf, 64, fmt, args);
- if (n >= 64 || !buf[0]) {
- printk(KERN_WARNING "%s: invalid format string\n",
- __FUNCTION__);
+ n = vsnprintf(buf, sizeof(buf), fmt, args);
+ if (n >= sizeof(buf) || !buf[0]) {
+ printk(KERN_WARNING "%s: invalid format string %s\n",
+ __FUNCTION__, fmt);
return -EINVAL;
}
- if (!S_ISBLK(mode)) {
- printk(KERN_WARNING "%s: invalide mode (%u) for %s\n",
- __FUNCTION__, mode, buf);
- return -EINVAL;
- }
-
de = _devfs_prepare_leaf(&dir, buf, mode);
if (!de) {
printk(KERN_WARNING "%s: could not prepare leaf for %s\n",
return -ENOMEM; /* could be more accurate... */
}
- de->u.bdev.dev = dev;
+ de->u.dev = dev;
error = _devfs_append_entry(dir, de, NULL);
if (error) {
return error;
}
+int devfs_mk_bdev(dev_t dev, umode_t mode, const char *fmt, ...)
+{
+ va_list args;
+ int error;
+
+ if (!S_ISBLK(mode)) {
+ printk(KERN_WARNING "%s: invalid mode (%u) for %s\n",
+ __FUNCTION__, mode, fmt);
+ return -EINVAL;
+ }
+
+ va_start(args, fmt);
+ error = devfs_mk_dev(dev, mode, fmt, args);
+ va_end(args);
+ return error;
+}
+
EXPORT_SYMBOL(devfs_mk_bdev);
int devfs_mk_cdev(dev_t dev, umode_t mode, const char *fmt, ...)
{
- struct devfs_entry *dir = NULL, *de;
- char buf[64];
va_list args;
- int error, n;
-
- va_start(args, fmt);
- n = vsnprintf(buf, 64, fmt, args);
- if (n >= 64 || !buf[0]) {
- printk(KERN_WARNING "%s: invalid format string\n",
- __FUNCTION__);
- return -EINVAL;
- }
if (!S_ISCHR(mode)) {
printk(KERN_WARNING "%s: invalide mode (%u) for %s\n",
- __FUNCTION__, mode, buf);
+ __FUNCTION__, mode, fmt);
return -EINVAL;
}
- de = _devfs_prepare_leaf(&dir, buf, mode);
- if (!de) {
- printk(KERN_WARNING "%s: could not prepare leaf for %s\n",
- __FUNCTION__, buf);
- return -ENOMEM; /* could be more accurate... */
- }
-
- de->u.cdev.dev = dev;
-
- error = _devfs_append_entry(dir, de, NULL);
- if (error) {
- printk(KERN_WARNING "%s: could not append to parent for %s\n",
- __FUNCTION__, buf);
- goto out;
- }
-
- devfsd_notify(de, DEVFSD_NOTIFY_REGISTERED);
- out:
- devfs_put(dir);
- return error;
+ va_start(args, fmt);
+ return devfs_mk_dev(dev, mode, fmt, args);
}
EXPORT_SYMBOL(devfs_mk_cdev);
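The hunks above funnel both variadic wrappers through one `va_list`-taking worker, the same split as `printf`/`vprintf`. A user-space sketch of that pattern, with truncation and empty-name checks as in `devfs_mk_dev()` (function names and the `-1` error code here are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Worker takes a va_list, mirroring devfs_mk_dev(): format the name
 * into a fixed buffer and reject truncated or empty results. */
static int format_name(char *buf, size_t len, const char *fmt, va_list args)
{
	int n = vsnprintf(buf, len, fmt, args);

	if (n < 0 || (size_t)n >= len || !buf[0])
		return -1;	/* stands in for -EINVAL */
	return 0;
}

/* Variadic wrapper, mirroring devfs_mk_bdev()/devfs_mk_cdev(): start
 * the va_list, delegate, then clean up with va_end(). */
static int mk_name(char *buf, size_t len, const char *fmt, ...)
{
	va_list args;
	int err;

	va_start(args, fmt);
	err = format_name(buf, len, fmt, args);
	va_end(args);
	return err;
}
```

The worker never calls `va_start`/`va_end` itself; the variadic caller owns the `va_list` lifetime.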
err = devfs_do_symlink(NULL, from, to, &de);
if (!err) {
- de->vfs_deletable = TRUE;
+ de->vfs = TRUE;
devfsd_notify(de, DEVFSD_NOTIFY_REGISTERED);
}
int n;
va_start(args, fmt);
- n = vsnprintf(buf, 64, fmt, args);
- if (n < 64 && buf[0]) {
+ n = vsnprintf(buf, sizeof(buf), fmt, args);
+ if (n < sizeof(buf) && buf[0]) {
devfs_handle_t de = _devfs_find_entry(NULL, buf, 0);
if (!de) {
return pos;
} /* End Function devfs_generate_path */
-
-/**
- * devfs_get_ops - Get the device operations for a devfs entry.
- * @de: The handle to the device entry.
- *
- * Returns a pointer to the device operations on success, else NULL.
- * The use count for the module owning the operations will be incremented.
- */
-
-static struct file_operations *devfs_get_ops (devfs_handle_t de)
-{
- struct file_operations *ops = de->u.cdev.ops;
- struct module *owner;
-
- if (!ops)
- return NULL;
- owner = ops->owner;
- read_lock (&de->parent->u.dir.lock); /* Prevent module from unloading */
- if ( (de->next == de) || !try_module_get (owner) )
- { /* Entry is already unhooked or module is unloading */
- read_unlock (&de->parent->u.dir.lock);
- return NULL;
- }
- read_unlock (&de->parent->u.dir.lock); /* Module can continue unloading*/
- return ops;
-} /* End Function devfs_get_ops */
-
/**
* devfs_setup - Process kernel boot options.
* @str: The boot options after the "devfs=".
__setup("devfs=", devfs_setup);
-EXPORT_SYMBOL(devfs_put);
EXPORT_SYMBOL(devfs_mk_symlink);
EXPORT_SYMBOL(devfs_mk_dir);
EXPORT_SYMBOL(devfs_remove);
iput (inode);
return NULL;
}
+ /* FIXME: where is the matching devfs_put? */
inode->u.generic_ip = devfs_get (de);
inode->i_ino = de->inode.ino;
DPRINTK (DEBUG_I_GET, "(%d): VFS inode: %p devfs_entry: %p\n",
inode->i_blocks = 0;
inode->i_blksize = FAKE_BLOCK_SIZE;
inode->i_op = &devfs_iops;
- inode->i_fop = &devfs_fops;
- if ( S_ISCHR (de->mode) )
- {
- inode->i_rdev = de->u.cdev.dev;
- }
- else if ( S_ISBLK (de->mode) )
- init_special_inode(inode, de->mode, de->u.bdev.dev);
- else if ( S_ISFIFO (de->mode) )
- inode->i_fop = &def_fifo_fops;
- else if ( S_ISDIR (de->mode) )
- {
- inode->i_op = &devfs_dir_iops;
- inode->i_fop = &devfs_dir_fops;
- }
- else if ( S_ISLNK (de->mode) )
- {
- inode->i_op = &devfs_symlink_iops;
- inode->i_size = de->u.symlink.length;
- }
inode->i_mode = de->mode;
+ if (S_ISDIR(de->mode)) {
+ inode->i_op = &devfs_dir_iops;
+ inode->i_fop = &devfs_dir_fops;
+ } else if (S_ISLNK(de->mode)) {
+ inode->i_op = &devfs_symlink_iops;
+ inode->i_size = de->u.symlink.length;
+ } else if (S_ISCHR(de->mode) || S_ISBLK(de->mode)) {
+ init_special_inode(inode, de->mode, de->u.dev);
+ } else if (S_ISFIFO(de->mode) || S_ISSOCK(de->mode)) {
+ init_special_inode(inode, de->mode, 0);
+ } else {
+ PRINTK("(%s): unknown mode %o de: %p\n",
+ de->name, de->mode, de);
+ iput(inode);
+ devfs_put(de);
+ return NULL;
+ }
+
inode->i_uid = de->inode.uid;
inode->i_gid = de->inode.gid;
inode->i_atime = de->inode.atime;
return stored;
} /* End Function devfs_readdir */
+/* Open devfs-specific special files */
static int devfs_open (struct inode *inode, struct file *file)
{
- int err = -ENODEV;
- struct devfs_entry *de;
- struct file_operations *ops;
+ int err;
+ int minor = MINOR(inode->i_rdev);
+ struct file_operations *old_fops, *new_fops;
- de = get_devfs_entry_from_vfs_inode (inode);
- if (de == NULL) return -ENODEV;
- if ( S_ISDIR (de->mode) ) return 0;
- file->private_data = de->info;
- if (S_ISCHR(inode->i_mode)) {
- ops = devfs_get_ops (de); /* Now have module refcount */
- file->f_op = ops;
- if (file->f_op)
- {
- lock_kernel ();
- err = file->f_op->open ? (*file->f_op->open) (inode, file) : 0;
- unlock_kernel ();
+ switch (minor) {
+ case 0: /* /dev/.devfsd */
+ new_fops = fops_get(&devfsd_fops);
+ break;
+#ifdef CONFIG_DEVFS_DEBUG
+ case 1: /* /dev/.stat */
+ new_fops = fops_get(&stat_fops);
+ break;
+#endif
+ default:
+ return -ENODEV;
}
- else
- err = chrdev_open (inode, file);
- }
- return err;
+
+ if (new_fops == NULL)
+ return -ENODEV;
+ old_fops = file->f_op;
+ file->f_op = new_fops;
+ err = new_fops->open ? new_fops->open(inode, file) : 0;
+ if (err) {
+ file->f_op = old_fops;
+ fops_put(new_fops);
+ } else
+ fops_put(old_fops);
+ return err;
} /* End Function devfs_open */
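The rewritten `devfs_open()` above selects operations by minor number, swaps them into `file->f_op`, and rolls the swap back if the new `open` fails. That get/swap/put discipline can be sketched in user space as follows (all types and names here are invented stand-ins; `refs` imitates module refcounting):

```c
#include <assert.h>
#include <stddef.h>

struct file;
typedef int (*open_fn)(struct file *);

struct fops {
	open_fn open;
	int refs;		/* stand-in for module refcounting */
};

struct file {
	struct fops *f_op;
};

static struct fops *fops_get_(struct fops *f) { if (f) f->refs++; return f; }
static void fops_put_(struct fops *f)         { if (f) f->refs--; }

/* Swap in new ops, try open(), and roll back on failure — the
 * pattern devfs_open() follows after choosing ops by minor. */
static int open_with(struct file *file, struct fops *new_fops)
{
	struct fops *old_fops;
	int err;

	if (new_fops == NULL)
		return -19;	/* -ENODEV in the kernel */
	old_fops = file->f_op;
	file->f_op = fops_get_(new_fops);
	err = new_fops->open ? new_fops->open(file) : 0;
	if (err) {
		file->f_op = old_fops;
		fops_put_(new_fops);
	} else
		fops_put_(old_fops);
	return err;
}

static int open_ok(struct file *f)   { (void)f; return 0; }
static int open_fail(struct file *f) { (void)f; return -5; }

static struct fops ok_ops   = { open_ok, 1 };
static struct fops fail_ops = { open_fail, 1 };

/* Exercise both paths; returns 0 if the rollback behaved. */
static int demo(void)
{
	struct fops base = { NULL, 1 };
	struct file fl = { &base };

	if (open_with(&fl, &fail_ops) != -5 || fl.f_op != &base)
		return 1;	/* failed open must restore old ops */
	if (open_with(&fl, &ok_ops) != 0 || fl.f_op != &ok_ops)
		return 2;	/* successful open keeps new ops */
	return 0;
}
```

On success the old reference is dropped; on failure the new one is, so the reference counts stay balanced either way.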
static struct file_operations devfs_fops =
{
.read = generic_read_dir,
.readdir = devfs_readdir,
- .open = devfs_open,
};
devfs_handle_t parent = get_devfs_entry_from_vfs_inode (dir);
struct devfs_lookup_struct *lookup_info = dentry->d_fsdata;
DECLARE_WAITQUEUE (wait, current);
+ int need_lock;
+
+ /*
+ * FIXME HACK
+ *
+ * make sure that
+ * d_instantiate always runs under lock
+ * we release i_sem lock before going to sleep
+ *
+ * unfortunately sometimes d_revalidate is called with
+ * and sometimes without i_sem lock held. The following checks
+ * attempt to deduce when we need to add (and drop resp.) lock
+ * here. This relies on current (2.6.2) calling conventions:
+ *
+ * lookup_hash is always run under i_sem and is passing NULL
+ * as nd
+ *
+ * open(..., O_CREAT, ...) calls _lookup_hash under i_sem
+ * and sets flags to LOOKUP_OPEN|LOOKUP_CREATE
+ *
+ * all other invocations of ->d_revalidate seem to happen
+ * outside of i_sem
+ */
+ need_lock = nd &&
+ (!(nd->flags & LOOKUP_CREATE) || (nd->flags & LOOKUP_PARENT));
+
+ if (need_lock)
+ down(&dir->i_sem);
if ( is_devfsd_or_child (fs_info) )
{
"(%s): dentry: %p inode: %p de: %p by: \"%s\"\n",
dentry->d_name.name, dentry, dentry->d_inode, de,
current->comm);
- if (dentry->d_inode) return 1;
+ if (dentry->d_inode)
+ goto out;
if (de == NULL)
{
read_lock (&parent->u.dir.lock);
de = _devfs_search_dir (parent, dentry->d_name.name,
dentry->d_name.len);
read_unlock (&parent->u.dir.lock);
- if (de == NULL) return 1;
+ if (de == NULL)
+ goto out;
lookup_info->de = de;
}
/* Create an inode, now that the driver information is available */
inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry);
- if (!inode) return 1;
+ if (!inode)
+ goto out;
DPRINTK (DEBUG_I_LOOKUP,
"(%s): new VFS inode(%u): %p de: %p by: \"%s\"\n",
de->name, de->inode.ino, inode, de, current->comm);
d_instantiate (dentry, inode);
- return 1;
+ goto out;
}
- if (lookup_info == NULL) return 1; /* Early termination */
+ if (lookup_info == NULL)
+ goto out; /* Early termination */
read_lock (&parent->u.dir.lock);
if (dentry->d_fsdata)
{
set_current_state (TASK_UNINTERRUPTIBLE);
add_wait_queue (&lookup_info->wait_queue, &wait);
read_unlock (&parent->u.dir.lock);
+ /* at this point it is always (hopefully) locked */
+ up(&dir->i_sem);
schedule ();
+ down(&dir->i_sem);
/*
* This does not need nor should remove wait from wait_queue.
* Wait queue head is never reused - nothing is ever added to it
}
else read_unlock (&parent->u.dir.lock);
+
+out:
+ if (need_lock)
+ up(&dir->i_sem);
return 1;
} /* End Function devfs_d_revalidate_wait */
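The FIXME comment above compresses its case analysis into a single predicate, which can be checked against the three calling conventions it names. A user-space sketch (the `LOOKUP_*` bit values below are invented for the sketch; the real ones live in the kernel headers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative flag bits only. */
#define LOOKUP_CREATE 0x1
#define LOOKUP_PARENT 0x2
#define LOOKUP_OPEN   0x4

struct nameidata { unsigned flags; };

/* The deduction from devfs_d_revalidate_wait(): take i_sem ourselves
 * unless the caller already holds it — lookup_hash passes nd == NULL,
 * and O_CREAT opens pass LOOKUP_OPEN|LOOKUP_CREATE without LOOKUP_PARENT. */
static bool need_lock(const struct nameidata *nd)
{
	return nd &&
		(!(nd->flags & LOOKUP_CREATE) || (nd->flags & LOOKUP_PARENT));
}

/* Convenience wrapper so each convention is one call. */
static bool need_lock_flags(bool have_nd, unsigned flags)
{
	struct nameidata nd = { flags };

	return need_lock(have_nd ? &nd : NULL);
}
```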
revalidation */
up (&dir->i_sem);
wait_for_devfsd_finished (fs_info); /* If I'm not devfsd, must wait */
+ down (&dir->i_sem); /* Grab it again because them's the rules */
de = lookup_info.de;
/* If someone else has been so kind as to make the inode, we go home
early */
dentry->d_fsdata = NULL;
wake_up (&lookup_info.wait_queue);
write_unlock (&parent->u.dir.lock);
- down (&dir->i_sem); /* Grab it again because them's the rules */
devfs_put (de);
return retval;
} /* End Function devfs_lookup */
de = get_devfs_entry_from_vfs_inode (inode);
DPRINTK (DEBUG_I_UNLINK, "(%s): de: %p\n", dentry->d_name.name, de);
if (de == NULL) return -ENOENT;
- if (!de->vfs_deletable) return -EPERM;
+ if (!de->vfs) return -EPERM;
write_lock (&de->parent->u.dir.lock);
unhooked = _devfs_unhook (de);
write_unlock (&de->parent->u.dir.lock);
DPRINTK (DEBUG_DISABLED, "(%s): errcode from <devfs_do_symlink>: %d\n",
dentry->d_name.name, err);
if (err < 0) return err;
- de->vfs_deletable = TRUE;
+ de->vfs = TRUE;
de->inode.uid = current->euid;
de->inode.gid = current->egid;
de->inode.atime = CURRENT_TIME;
if (parent == NULL) return -ENOENT;
de = _devfs_alloc_entry (dentry->d_name.name, dentry->d_name.len, mode);
if (!de) return -ENOMEM;
- de->vfs_deletable = TRUE;
+ de->vfs = TRUE;
if ( ( err = _devfs_append_entry (parent, de, NULL) ) != 0 )
return err;
de->inode.uid = current->euid;
de = get_devfs_entry_from_vfs_inode (inode);
if (de == NULL) return -ENOENT;
if ( !S_ISDIR (de->mode) ) return -ENOTDIR;
- if (!de->vfs_deletable) return -EPERM;
+ if (!de->vfs) return -EPERM;
/* First ensure the directory is empty and will stay that way */
write_lock (&de->u.dir.lock);
if (de->u.dir.first) err = -ENOTEMPTY;
if (parent == NULL) return -ENOENT;
de = _devfs_alloc_entry (dentry->d_name.name, dentry->d_name.len, mode);
if (!de) return -ENOMEM;
- de->vfs_deletable = TRUE;
- if (S_ISCHR (mode))
- de->u.cdev.dev = rdev;
- else if (S_ISBLK (mode))
- de->u.bdev.dev = rdev;
+ de->vfs = TRUE;
+ if (S_ISCHR(mode) || S_ISBLK(mode))
+ de->u.dev = rdev;
if ( ( err = _devfs_append_entry (parent, de, NULL) ) != 0 )
return err;
de->inode.uid = current->euid;
info->uid = entry->uid;
info->gid = entry->gid;
de = entry->de;
- if (S_ISCHR(de->mode)) {
- info->major = MAJOR(de->u.cdev.dev);
- info->minor = MINOR(de->u.cdev.dev);
- } else if (S_ISBLK (de->mode)) {
- info->major = MAJOR(de->u.bdev.dev);
- info->minor = MINOR(de->u.bdev.dev);
+ if (S_ISCHR(de->mode) || S_ISBLK(de->mode)) {
+ info->major = MAJOR(de->u.dev);
+ info->minor = MINOR(de->u.dev);
}
pos = devfs_generate_path (de, info->devname, DEVFS_PATHLEN);
if (pos < 0) return pos;
} /* End Function stat_read */
#endif
-
-static int __init init_devfs_fs (void)
+static int __init init_devfs_fs(void)
{
- int err;
+ int err;
+ int major;
+ struct devfs_entry *devfsd;
+#ifdef CONFIG_DEVFS_DEBUG
+ struct devfs_entry *stat;
+#endif
+
+ if (_devfs_get_root_entry() == NULL)
+ return -ENOMEM;
- printk (KERN_INFO "%s: v%s Richard Gooch (rgooch@atnf.csiro.au)\n",
- DEVFS_NAME, DEVFS_VERSION);
- devfsd_buf_cache = kmem_cache_create ("devfsd_event",
+ printk(KERN_INFO "%s: %s Richard Gooch (rgooch@atnf.csiro.au)\n",
+ DEVFS_NAME, DEVFS_VERSION);
+ devfsd_buf_cache = kmem_cache_create("devfsd_event",
sizeof (struct devfsd_buf_entry),
0, 0, NULL, NULL);
- if (!devfsd_buf_cache) OOPS ("(): unable to allocate event slab\n");
+ if (!devfsd_buf_cache)
+ OOPS("(): unable to allocate event slab\n");
#ifdef CONFIG_DEVFS_DEBUG
- devfs_debug = devfs_debug_init;
- printk (KERN_INFO "%s: devfs_debug: 0x%0x\n", DEVFS_NAME, devfs_debug);
+ devfs_debug = devfs_debug_init;
+ printk(KERN_INFO "%s: devfs_debug: 0x%0x\n", DEVFS_NAME, devfs_debug);
#endif
- printk (KERN_INFO "%s: boot_options: 0x%0x\n", DEVFS_NAME, boot_options);
- err = register_filesystem (&devfs_fs_type);
- if (!err)
- {
- struct vfsmount *devfs_mnt = kern_mount (&devfs_fs_type);
- err = PTR_ERR (devfs_mnt);
- if ( !IS_ERR (devfs_mnt) ) err = 0;
- }
- return err;
+ printk(KERN_INFO "%s: boot_options: 0x%0x\n", DEVFS_NAME, boot_options);
+
+ /* register special device for devfsd communication */
+ major = register_chrdev(0, "devfs", &devfs_fops);
+ if (major < 0)
+ return major;
+
+ /* And create the entry for ".devfsd" */
+ devfsd = _devfs_alloc_entry(".devfsd", 0, S_IFCHR|S_IRUSR|S_IWUSR);
+ if (devfsd == NULL)
+ return -ENOMEM;
+ devfsd->u.dev = MKDEV(major, 0);
+ _devfs_append_entry(root_entry, devfsd, NULL);
+
+#ifdef CONFIG_DEVFS_DEBUG
+ stat = _devfs_alloc_entry(".stat", 0, S_IFCHR|S_IRUGO);
+ if (stat == NULL)
+ return -ENOMEM;
+ stat->u.dev = MKDEV(major, 1);
+ _devfs_append_entry(root_entry, stat, NULL);
+#endif
+
+ err = register_filesystem(&devfs_fs_type);
+ return err;
} /* End Function init_devfs_fs */
void __init mount_devfs_fs (void)
+++ /dev/null
-
-extern dev_t devfs_alloc_devnum(umode_t mode);
-extern void devfs_dealloc_devnum(umode_t mode, dev_t devnum);
#include <linux/vmalloc.h>
#include <linux/genhd.h>
#include <asm/bitops.h>
-#include "internal.h"
int devfs_register_tape(const char *name)
}
EXPORT_SYMBOL(devfs_unregister_tape);
-
-struct major_list
-{
- spinlock_t lock;
- unsigned long bits[256 / BITS_PER_LONG];
-};
-#if BITS_PER_LONG == 32
-# define INITIALISER64(low,high) (low), (high)
-#else
-# define INITIALISER64(low,high) ( (unsigned long) (high) << 32 | (low) )
-#endif
-
-/* Block majors already assigned:
- 0-3, 7-9, 11-63, 65-99, 101-113, 120-127, 199, 201, 240-255
- Total free: 122
-*/
-static struct major_list block_major_list =
-{SPIN_LOCK_UNLOCKED,
- {INITIALISER64 (0xfffffb8f, 0xffffffff), /* Majors 0-31, 32-63 */
- INITIALISER64 (0xfffffffe, 0xff03ffef), /* Majors 64-95, 96-127 */
- INITIALISER64 (0x00000000, 0x00000000), /* Majors 128-159, 160-191 */
- INITIALISER64 (0x00000280, 0xffff0000), /* Majors 192-223, 224-255 */
- }
-};
-
-/* Char majors already assigned:
- 0-7, 9-151, 154-158, 160-211, 216-221, 224-230, 240-255
- Total free: 19
-*/
-static struct major_list char_major_list =
-{SPIN_LOCK_UNLOCKED,
- {INITIALISER64 (0xfffffeff, 0xffffffff), /* Majors 0-31, 32-63 */
- INITIALISER64 (0xffffffff, 0xffffffff), /* Majors 64-95, 96-127 */
- INITIALISER64 (0x7cffffff, 0xffffffff), /* Majors 128-159, 160-191 */
- INITIALISER64 (0x3f0fffff, 0xffff007f), /* Majors 192-223, 224-255 */
- }
-};
-
-
-/**
- * devfs_alloc_major - Allocate a major number.
- * @mode: The file mode (must be block device or character device).
- * Returns the allocated major, else -1 if none are available.
- * This routine is thread safe and does not block.
- */
-
-
-struct minor_list
-{
- int major;
- unsigned long bits[256 / BITS_PER_LONG];
- struct minor_list *next;
-};
-
-static struct device_list {
- struct minor_list *first;
- struct minor_list *last;
- int none_free;
-} block_list, char_list;
-
-static DECLARE_MUTEX(device_list_mutex);
-
-
-/**
- * devfs_alloc_devnum - Allocate a device number.
- * @mode: The file mode (must be block device or character device).
- *
- * Returns the allocated device number, else NODEV if none are available.
- * This routine is thread safe and may block.
- */
-
-dev_t devfs_alloc_devnum(umode_t mode)
-{
- struct device_list *list;
- struct major_list *major_list;
- struct minor_list *entry;
- int minor;
-
- if (S_ISCHR(mode)) {
- major_list = &char_major_list;
- list = &char_list;
- } else {
- major_list = &block_major_list;
- list = &block_list;
- }
-
- down(&device_list_mutex);
- if (list->none_free)
- goto out_unlock;
-
- for (entry = list->first; entry; entry = entry->next) {
- minor = find_first_zero_bit (entry->bits, 256);
- if (minor >= 256)
- continue;
- goto out_done;
- }
-
- /* Need to allocate a new major */
- entry = kmalloc (sizeof *entry, GFP_KERNEL);
- if (!entry)
- goto out_full;
- memset(entry, 0, sizeof *entry);
-
- spin_lock(&major_list->lock);
- entry->major = find_first_zero_bit(major_list->bits, 256);
- if (entry->major >= 256) {
- spin_unlock(&major_list->lock);
- kfree(entry);
- goto out_full;
- }
- __set_bit(entry->major, major_list->bits);
- spin_unlock(&major_list->lock);
-
- if (!list->first)
- list->first = entry;
- else
- list->last->next = entry;
- list->last = entry;
-
- minor = 0;
- out_done:
- __set_bit(minor, entry->bits);
- up(&device_list_mutex);
- return MKDEV(entry->major, minor);
- out_full:
- list->none_free = 1;
- out_unlock:
- up(&device_list_mutex);
- return 0;
-}
-
-
-/**
- * devfs_dealloc_devnum - Dellocate a device number.
- * @mode: The file mode (must be block device or character device).
- * @devnum: The device number.
- *
- * This routine is thread safe and may block.
- */
-
-void devfs_dealloc_devnum(umode_t mode, dev_t devnum)
-{
- struct device_list *list = S_ISCHR(mode) ? &char_list : &block_list;
- struct minor_list *entry;
-
- if (!devnum)
- return;
-
- down(&device_list_mutex);
- for (entry = list->first; entry; entry = entry->next) {
- if (entry->major == MAJOR(devnum)) {
- if (__test_and_clear_bit(MINOR(devnum), entry->bits))
- list->none_free = 0;
- break;
- }
- }
- up(&device_list_mutex);
-}
if (waitqueue_active(&ep->poll_wait))
pwake++;
}
- } else if (EP_IS_LINKED(&epi->rdllink))
- EP_LIST_DEL(&epi->rdllink);
+ }
}
write_unlock_irqrestore(&ep->lock, flags);
goto fail_unlock;
format_corename(corename, core_pattern, signr);
- file = filp_open(corename, O_CREAT | 2 | O_NOFOLLOW, 0600);
+ file = filp_open(corename, O_CREAT | 2 | O_NOFOLLOW | O_LARGEFILE, 0600);
if (IS_ERR(file))
goto fail_unlock;
inode = file->f_dentry->d_inode;
struct block_device *bdev;
char b[BDEVNAME_SIZE];
- bdev = open_by_devnum(dev, FMODE_READ|FMODE_WRITE, BDEV_FS);
+ bdev = open_by_devnum(dev, FMODE_READ|FMODE_WRITE);
if (IS_ERR(bdev))
goto fail;
return bdev;
static int ext3_blkdev_put(struct block_device *bdev)
{
bd_release(bdev);
- return blkdev_put(bdev, BDEV_FS);
+ return blkdev_put(bdev);
}
static int ext3_blkdev_remove(struct ext3_sb_info *sbi)
if (bd_claim(bdev, sb)) {
printk(KERN_ERR
"EXT3: failed to claim external journal device.\n");
- blkdev_put(bdev, BDEV_FS);
+ blkdev_put(bdev);
return NULL;
}
wake_up_all(wq);
}
+static __initdata unsigned long ihash_entries;
+static int __init set_ihash_entries(char *str)
+{
+ if (!str)
+ return 0;
+ ihash_entries = simple_strtoul(str, &str, 0);
+ return 1;
+}
+__setup("ihash_entries=", set_ihash_entries);
+
/*
* Initialize the waitqueues and inode hash table.
*/
for (i = 0; i < ARRAY_SIZE(i_wait_queue_heads); i++)
init_waitqueue_head(&i_wait_queue_heads[i].wqh);
- mempages >>= (14 - PAGE_SHIFT);
- mempages *= sizeof(struct hlist_head);
- for (order = 0; ((1UL << order) << PAGE_SHIFT) < mempages; order++)
+ if (!ihash_entries)
+ ihash_entries = PAGE_SHIFT < 14 ?
+ mempages >> (14 - PAGE_SHIFT) :
+ mempages << (PAGE_SHIFT - 14);
+
+ ihash_entries *= sizeof(struct hlist_head);
+ for (order = 0; ((1UL << order) << PAGE_SHIFT) < ihash_entries; order++)
;
do {
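The empty-bodied `for` loop in the hunk above finds the smallest page order whose allocation covers the requested table size. That computation in isolation, with PAGE_SHIFT fixed at 12 (4 KiB pages) for the sketch:

```c
#include <assert.h>

#define PAGE_SHIFT 12	/* 4 KiB pages, assumed for the sketch */

/* Smallest order such that (1UL << order) pages hold `bytes` bytes,
 * matching the sizing loop in the inode hash setup above. */
static int size_to_order(unsigned long bytes)
{
	int order;

	for (order = 0; ((1UL << order) << PAGE_SHIFT) < bytes; order++)
		;
	return order;
}
```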
*/
externalLog:
- bdev = open_by_devnum(JFS_SBI(sb)->logdev,
- FMODE_READ|FMODE_WRITE, BDEV_FS);
+ bdev = open_by_devnum(JFS_SBI(sb)->logdev, FMODE_READ|FMODE_WRITE);
if (IS_ERR(bdev)) {
rc = -PTR_ERR(bdev);
goto free;
bd_release(bdev);
close: /* close external log device */
- blkdev_put(bdev, BDEV_FS);
+ blkdev_put(bdev);
free: /* free log descriptor */
kfree(log);
rc = lmLogShutdown(log);
bd_release(bdev);
- blkdev_put(bdev, BDEV_FS);
+ blkdev_put(bdev);
out:
jfs_info("lmLogClose: exit(%d)", rc);
/*
- * Copyright (c) International Business Machines Corp., 2000-2002
+ * Copyright (C) International Business Machines Corp., 2000-2004
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
#include <linux/fs.h>
#include <linux/slab.h>
-#include "jfs_types.h"
+#include "jfs_incore.h"
#include "jfs_filsys.h"
#include "jfs_unicode.h"
#include "jfs_debug.h"
int i;
int outlen = 0;
- for (i = 0; (i < len) && from[i]; i++) {
- int charlen;
- charlen =
- codepage->uni2char(le16_to_cpu(from[i]), &to[outlen],
- NLS_MAX_CHARSET_SIZE);
- if (charlen > 0) {
- outlen += charlen;
- } else {
- to[outlen++] = '?';
+ if (codepage) {
+ for (i = 0; (i < len) && from[i]; i++) {
+ int charlen;
+ charlen =
+ codepage->uni2char(le16_to_cpu(from[i]),
+ &to[outlen],
+ NLS_MAX_CHARSET_SIZE);
+ if (charlen > 0)
+ outlen += charlen;
+ else
+ to[outlen++] = '?';
}
+ } else {
+ for (i = 0; (i < len) && from[i]; i++)
+ to[i] = (char) (le16_to_cpu(from[i]));
+ outlen = i;
}
to[outlen] = 0;
return outlen;
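The codepage-less branch added above simply narrows each little-endian 16-bit code unit to a byte, stopping at `len` or a NUL unit. A minimal user-space version of that fallback, with the le16 decode written out by hand (helper names are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Decode a little-endian 16-bit value from a byte buffer. */
static unsigned le16_val(const unsigned char *p)
{
	return p[0] | (p[1] << 8);
}

/* Fallback conversion as in jfs_strfromUCS_le() without an NLS table:
 * truncate each code unit to one byte and NUL-terminate the result. */
static int ucs_to_bytes(char *to, const unsigned char *from, int len)
{
	int i;

	for (i = 0; i < len && le16_val(from + 2 * i); i++)
		to[i] = (char)le16_val(from + 2 * i);
	to[i] = 0;
	return i;
}
```

Code units above 0xFF are silently truncated here, which is why the real code prefers the NLS table whenever one is loaded.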
int charlen;
int i;
- for (i = 0; len && *from; i++, from += charlen, len -= charlen) {
- charlen = codepage->char2uni(from, len, &to[i]);
- if (charlen < 1) {
- jfs_err("jfs_strtoUCS: char2uni returned %d.", charlen);
- jfs_err("charset = %s, char = 0x%x",
- codepage->charset, (unsigned char) *from);
- return charlen;
+ if (codepage) {
+ for (i = 0; len && *from; i++, from += charlen, len -= charlen)
+ {
+ charlen = codepage->char2uni(from, len, &to[i]);
+ if (charlen < 1) {
+ jfs_err("jfs_strtoUCS: char2uni returned %d.",
+ charlen);
+ jfs_err("charset = %s, char = 0x%x",
+ codepage->charset,
+ (unsigned char) *from);
+ return charlen;
+ }
}
+ } else {
+ for (i = 0; (i < len) && from[i]; i++)
+ to[i] = (wchar_t) from[i];
}
to[i] = 0;
* FUNCTION: Allocate and translate to unicode string
*
*/
-int get_UCSname(struct component_name * uniName, struct dentry *dentry,
- struct nls_table *nls_tab)
+int get_UCSname(struct component_name * uniName, struct dentry *dentry)
{
+ struct nls_table *nls_tab = JFS_SBI(dentry->d_sb)->nls_tab;
int length = dentry->d_name.len;
if (length > JFS_NAME_MAX)
extern signed char UniUpperTable[512];
extern UNICASERANGE UniUpperRange[];
-extern int get_UCSname(struct component_name *, struct dentry *,
- struct nls_table *);
+extern int get_UCSname(struct component_name *, struct dentry *);
extern int jfs_strfromUCS_le(char *, const wchar_t *, int, struct nls_table *);
#define free_UCSname(COMP) kfree((COMP)->name)
* search parent directory for entry/freespace
* (dtSearch() returns parent directory page pinned)
*/
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out1;
/*
* search parent directory for entry/freespace
* (dtSearch() returns parent directory page pinned)
*/
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out1;
/*
goto out;
}
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dip->i_sb)->nls_tab))) {
+ if ((rc = get_UCSname(&dname, dentry))) {
goto out;
}
jfs_info("jfs_unlink: dip:0x%p name:%s", dip, dentry->d_name.name);
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out;
IWRITE_LOCK(ip);
/*
* scan parent directory for entry/freespace
*/
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(ip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out;
if ((rc = dtSearch(dir, &dname, &ino, &btstack, JFS_CREATE)))
* (dtSearch() returns parent directory page pinned)
*/
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out1;
/*
old_ip = old_dentry->d_inode;
new_ip = new_dentry->d_inode;
- if ((rc = get_UCSname(&old_dname, old_dentry,
- JFS_SBI(old_dir->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&old_dname, old_dentry)))
goto out1;
- if ((rc = get_UCSname(&new_dname, new_dentry,
- JFS_SBI(old_dir->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&new_dname, new_dentry)))
goto out2;
/*
jfs_info("jfs_mknod: %s", dentry->d_name.name);
- if ((rc = get_UCSname(&dname, dentry, JFS_SBI(dir->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&dname, dentry)))
goto out;
ip = ialloc(dir, mode);
else if (strcmp(name, "..") == 0)
inum = PARENT(dip);
else {
- if ((rc =
- get_UCSname(&key, dentry, JFS_SBI(dip->i_sb)->nls_tab)))
+ if ((rc = get_UCSname(&key, dentry)))
return ERR_PTR(rc);
rc = dtSearch(dip, &key, &inum, &btstack, JFS_LOOKUP);
free_UCSname(&key);
rc = jfs_umount(sb);
if (rc)
jfs_err("jfs_umount failed with return code %d", rc);
- unload_nls(sbi->nls_tab);
+ if (sbi->nls_tab)
+ unload_nls(sbi->nls_tab);
sbi->nls_tab = NULL;
kfree(sbi);
if (!sb->s_root)
goto out_no_root;
- if (!sbi->nls_tab)
- sbi->nls_tab = load_nls_default();
-
/* logical blocks are represented by 40 bits in pxd_t, etc. */
sb->s_maxbytes = ((u64) sb->s_blocksize) << 40;
#if BITS_PER_LONG == 32
#include <linux/nfsd/nfsd.h>
#define CAP_NFSD_MASK (CAP_FS_MASK|CAP_TO_MASK(CAP_SYS_RESOURCE))
-void
-nfsd_setuser(struct svc_rqst *rqstp, struct svc_export *exp)
+
+int nfsd_setuser(struct svc_rqst *rqstp, struct svc_export *exp)
{
struct svc_cred *cred = &rqstp->rq_cred;
- int i;
+ struct group_info *group_info;
+ int ngroups;
+ int i;
+ int ret;
+
+ ngroups = 0;
+ if (!(exp->ex_flags & NFSEXP_ALLSQUASH)) {
+ for (i = 0; i < SVC_CRED_NGROUPS; i++) {
+ if (cred->cr_groups[i] == (gid_t)NOGROUP)
+ break;
+ ngroups++;
+ }
+ }
+ group_info = groups_alloc(ngroups);
+ if (group_info == NULL)
+ return -ENOMEM;
if (exp->ex_flags & NFSEXP_ALLSQUASH) {
cred->cr_uid = exp->ex_anon_uid;
cred->cr_uid = exp->ex_anon_uid;
if (!cred->cr_gid)
cred->cr_gid = exp->ex_anon_gid;
- for (i = 0; i < NGROUPS; i++)
+ for (i = 0; i < SVC_CRED_NGROUPS; i++)
if (!cred->cr_groups[i])
cred->cr_groups[i] = exp->ex_anon_gid;
}
current->fsgid = cred->cr_gid;
else
current->fsgid = exp->ex_anon_gid;
- for (i = 0; i < NGROUPS; i++) {
+
+ for (i = 0; i < SVC_CRED_NGROUPS; i++) {
gid_t group = cred->cr_groups[i];
if (group == (gid_t) NOGROUP)
break;
- current->groups[i] = group;
+ GROUP_AT(group_info, i) = group;
}
- current->ngroups = i;
- if ((cred->cr_uid)) {
- cap_t(current->cap_effective) &= ~CAP_NFSD_MASK;
- } else {
- cap_t(current->cap_effective) |= (CAP_NFSD_MASK &
- current->cap_permitted);
+ ret = set_current_groups(group_info);
+ if (ret == 0) {
+ if ((cred->cr_uid)) {
+ cap_t(current->cap_effective) &= ~CAP_NFSD_MASK;
+ } else {
+ cap_t(current->cap_effective) |= (CAP_NFSD_MASK &
+ current->cap_permitted);
+ }
}
+ put_group_info(group_info);
+ return ret;
}
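`nfsd_setuser()` above sizes the new `group_info` by scanning `cr_groups` for the `NOGROUP` sentinel before calling `groups_alloc()`. That counting step in isolation (sentinel value and array size below are chosen for the sketch, not taken from the headers):

```c
#include <assert.h>

#define SVC_CRED_NGROUPS 16		/* illustrative size */
#define NOGROUP ((unsigned)-1)

/* Count the valid gids in a NOGROUP-terminated fixed array, as the
 * ngroups loop in nfsd_setuser() does before groups_alloc(). */
static int count_groups(const unsigned *groups)
{
	int n;

	for (n = 0; n < SVC_CRED_NGROUPS; n++)
		if (groups[n] == NOGROUP)
			break;
	return n;
}
```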
target->cr_uid = source->cr_uid;
target->cr_gid = source->cr_gid;
- for(i = 0; i < NGROUPS; i++)
+ for(i = 0; i < SVC_CRED_NGROUPS; i++)
target->cr_groups[i] = source->cr_groups[i];
}
#include <linux/kernel.h>
#include <linux/time.h>
#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
#include <linux/stat.h>
#include <linux/module.h>
};
#endif
-static int
-nfsd_proc_read(char *buffer, char **start, off_t offset, int count,
- int *eof, void *data)
+static int nfsd_proc_show(struct seq_file *seq, void *v)
{
- int len;
- int i;
+ int i;
- len = sprintf(buffer, "rc %u %u %u\nfh %u %u %u %u %u\nio %u %u\n",
+ seq_printf(seq, "rc %u %u %u\nfh %u %u %u %u %u\nio %u %u\n",
nfsdstats.rchits,
nfsdstats.rcmisses,
nfsdstats.rcnocache,
nfsdstats.io_read,
nfsdstats.io_write);
/* thread usage: */
- len += sprintf(buffer+len, "th %u %u", nfsdstats.th_cnt, nfsdstats.th_fullcnt);
+ seq_printf(seq, "th %u %u", nfsdstats.th_cnt, nfsdstats.th_fullcnt);
for (i=0; i<10; i++) {
unsigned int jifs = nfsdstats.th_usage[i];
unsigned int sec = jifs / HZ, msec = (jifs % HZ)*1000/HZ;
- len += sprintf(buffer+len, " %u.%03u", sec, msec);
+ seq_printf(seq, " %u.%03u", sec, msec);
}
/* newline and ra-cache */
- len += sprintf(buffer+len, "\nra %u", nfsdstats.ra_size);
+ seq_printf(seq, "\nra %u", nfsdstats.ra_size);
for (i=0; i<11; i++)
- len += sprintf(buffer+len, " %u", nfsdstats.ra_depth[i]);
- len += sprintf(buffer+len, "\n");
+ seq_printf(seq, " %u", nfsdstats.ra_depth[i]);
+ seq_putc(seq, '\n');
+ /* show my rpc info */
+ svc_seq_show(seq, &nfsd_svcstats);
- /* Assume we haven't hit EOF yet. Will be set by svc_proc_read. */
- *eof = 0;
-
- /*
- * Append generic nfsd RPC statistics if there's room for it.
- */
- if (len <= offset) {
- len = svc_proc_read(buffer, start, offset - len, count,
- eof, data);
- return len;
- }
-
- if (len < count) {
- len += svc_proc_read(buffer + len, start, 0, count - len,
- eof, data);
- }
-
- if (offset >= len) {
- *start = buffer;
- return 0;
- }
+ return 0;
+}
- *start = buffer + offset;
- if ((len -= offset) > count)
- return count;
- return len;
+static int nfsd_proc_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, nfsd_proc_show, NULL);
}
+static struct file_operations nfsd_proc_fops = {
+ .owner = THIS_MODULE,
+ .open = nfsd_proc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
void
nfsd_stat_init(void)
{
- struct proc_dir_entry *ent;
-
- if ((ent = svc_proc_register(&nfsd_svcstats)) != 0) {
- ent->read_proc = nfsd_proc_read;
- ent->owner = THIS_MODULE;
- }
+ svc_proc_register(&nfsd_svcstats, &nfsd_proc_fops);
}
void
return;
bdev = bdget_disk(disk, 0);
- if (blkdev_get(bdev, FMODE_READ, 0, BDEV_RAW) < 0)
+ if (blkdev_get(bdev, FMODE_READ, 0) < 0)
return;
state = check_partition(disk, bdev);
if (state) {
}
kfree(state);
}
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
}
int rescan_partitions(struct gendisk *disk, struct block_device *bdev)
p->files ? p->files->max_fds : 0);
task_unlock(p);
- for (g = 0; g < p->ngroups; g++)
- buffer += sprintf(buffer, "%d ", p->groups[g]);
+ get_group_info(p->group_info);
+ for (g = 0; g < min(p->group_info->ngroups,NGROUPS_SMALL); g++)
+ buffer += sprintf(buffer, "%d ", GROUP_AT(p->group_info,g));
+ put_group_info(p->group_info);
buffer += sprintf(buffer, "\n");
return buffer;
read_unlock(&tasklist_lock);
if (!task)
goto out;
+ if (!thread_group_leader(task))
+ goto out_drop_task;
inode = proc_pid_make_inode(dir->i_sb, task, PROC_TGID_INO);
-
- if (!inode) {
- put_task_struct(task);
- goto out;
- }
+ if (!inode)
+ goto out_drop_task;
inode->i_mode = S_IFDIR|S_IRUGO|S_IXUGO;
inode->i_op = &proc_tgid_base_inode_operations;
inode->i_fop = &proc_tgid_base_operations;
goto out;
}
return NULL;
+out_drop_task:
+ put_task_struct(task);
out:
return ERR_PTR(-ENOENT);
}
static struct dentry *proc_task_lookup(struct inode *dir, struct dentry * dentry, struct nameidata *nd)
{
struct task_struct *task;
+ struct task_struct *leader = proc_task(dir);
struct inode *inode;
unsigned tid;
read_unlock(&tasklist_lock);
if (!task)
goto out;
+ if (leader->tgid != task->tgid)
+ goto out_drop_task;
inode = proc_pid_make_inode(dir->i_sb, task, PROC_TID_INO);
- if (!inode) {
- put_task_struct(task);
- goto out;
- }
+ if (!inode)
+ goto out_drop_task;
inode->i_mode = S_IFDIR|S_IRUGO|S_IXUGO;
inode->i_op = &proc_tid_base_inode_operations;
inode->i_fop = &proc_tid_base_operations;
put_task_struct(task);
return NULL;
+out_drop_task:
+ put_task_struct(task);
out:
return ERR_PTR(-ENOENT);
}
journal -> j_dev_file = NULL;
journal -> j_dev_bd = NULL;
} else if( journal -> j_dev_bd != NULL ) {
- result = blkdev_put( journal -> j_dev_bd, BDEV_FS );
+ result = blkdev_put( journal -> j_dev_bd );
journal -> j_dev_bd = NULL;
}
/* there is no "jdev" option and journal is on separate device */
if( ( !jdev_name || !jdev_name[ 0 ] ) ) {
- journal->j_dev_bd = open_by_devnum(jdev, blkdev_mode, BDEV_FS);
+ journal->j_dev_bd = open_by_devnum(jdev, blkdev_mode);
if (IS_ERR(journal->j_dev_bd)) {
result = PTR_ERR(journal->j_dev_bd);
journal->j_dev_bd = NULL;
return status;
}
+static ssize_t
+smb_file_sendfile(struct file *file, loff_t *ppos,
+ size_t count, read_actor_t actor, void __user *target)
+{
+ struct dentry *dentry = file->f_dentry;
+ ssize_t status;
+
+ VERBOSE("file %s/%s, pos=%Ld, count=%d\n",
+ DENTRY_PATH(dentry), *ppos, count);
+
+ status = smb_revalidate_inode(dentry);
+ if (status) {
+ PARANOIA("%s/%s validation failed, error=%d\n",
+ DENTRY_PATH(dentry), status);
+ goto out;
+ }
+ status = generic_file_sendfile(file, ppos, count, actor, target);
+out:
+ return status;
+}
+
/*
* This does the "real" work of the write. The generic routine has
* allocated the page, locked it, done all the page alignment stuff
.open = smb_file_open,
.release = smb_file_release,
.fsync = smb_fsync,
+ .sendfile = smb_file_sendfile,
};
struct inode_operations smb_file_inode_operations =
p += 19;
p += 8;
- /* FIXME: the request will fail if the 'tid' is changed. This
- should perhaps be set just before transmitting ... */
- WSET(req->rq_header, smb_tid, server->opt.tid);
- WSET(req->rq_header, smb_pid, 1);
- WSET(req->rq_header, smb_uid, server->opt.server_uid);
-
if (server->opt.protocol > SMB_PROTOCOL_CORE) {
int flags = SMB_FLAGS_CASELESS_PATHNAMES;
int flags2 = SMB_FLAGS2_LONG_PATH_COMPONENTS |
struct smb_sb_info *server = req->rq_server;
int result;
+ if (req->rq_bytes_sent == 0) {
+ WSET(req->rq_header, smb_tid, server->opt.tid);
+ WSET(req->rq_header, smb_pid, 1);
+ WSET(req->rq_header, smb_uid, server->opt.server_uid);
+ }
+
result = smb_send_request(req);
if (result < 0 && result != -EAGAIN)
goto out;
while (head != &server->xmitq) {
req = list_entry(head, struct smb_request, rq_queue);
head = head->next;
+
+ req->rq_bytes_sent = 0;
if (req->rq_flags & SMB_REQ_NORETRY) {
VERBOSE("aborting request %p on xmitq\n", req);
req->rq_errno = -EIO;
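The two smbfs hunks above move the tid/pid/uid stamping from request-build time into smb_request_send(), guarded by rq_bytes_sent == 0, and reset rq_bytes_sent when a request is requeued on the xmitq; a retried request therefore picks up the tid of the reconnected session instead of a stale one. A stripped-down sketch of that interaction (hypothetical miniature structs, no real SMB I/O):

```c
#include <assert.h>

/* Hypothetical miniatures of struct smb_sb_info / struct smb_request. */
struct server  { int tid; };
struct request { int header_tid; int bytes_sent; };

/* Like smb_request_send(): stamp the header only before the first byte
 * goes out, so a freshly requeued request uses the server's current tid. */
static void request_send(struct request *req, struct server *srv)
{
	if (req->bytes_sent == 0)
		req->header_tid = srv->tid;
	req->bytes_sent += 1;	/* pretend one chunk went out */
}

/* Like the xmitq walk after a reconnect: forget partial progress so the
 * next send re-stamps the header. */
static void requeue(struct request *req)
{
	req->bytes_sent = 0;
}
```

A partially transmitted request keeps its original header (changing the tid mid-wire would corrupt the stream); only a requeued one is re-stamped.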
#include <linux/config.h>
#include <linux/module.h>
#include <linux/slab.h>
+#include <linux/init.h>
#include <linux/smp_lock.h>
#include <linux/acct.h>
#include <linux/blkdev.h>
#include <linux/security.h>
#include <linux/vfs.h>
#include <linux/writeback.h> /* for the emergency remount stuff */
+#include <linux/idr.h>
#include <asm/uaccess.h>
* filesystems which don't use real block-devices. -- jrs
*/
-enum {Max_anon = 256};
-static unsigned long unnamed_dev_in_use[Max_anon/(8*sizeof(unsigned long))];
+static struct idr unnamed_dev_idr;
static spinlock_t unnamed_dev_lock = SPIN_LOCK_UNLOCKED;/* protects the above */
int set_anon_super(struct super_block *s, void *data)
{
int dev;
+
spin_lock(&unnamed_dev_lock);
- dev = find_first_zero_bit(unnamed_dev_in_use, Max_anon);
- if (dev == Max_anon) {
+ if (idr_pre_get(&unnamed_dev_idr, GFP_ATOMIC) == 0) {
spin_unlock(&unnamed_dev_lock);
- return -EMFILE;
+ return -ENOMEM;
}
- set_bit(dev, unnamed_dev_in_use);
+ dev = idr_get_new(&unnamed_dev_idr, NULL);
spin_unlock(&unnamed_dev_lock);
- s->s_dev = MKDEV(0, dev);
+
+ if ((dev & MAX_ID_MASK) == (1 << MINORBITS)) {
+ idr_remove(&unnamed_dev_idr, dev);
+ return -EMFILE;
+ }
+ s->s_dev = MKDEV(0, dev & MINORMASK);
return 0;
}
void kill_anon_super(struct super_block *sb)
{
int slot = MINOR(sb->s_dev);
+
generic_shutdown_super(sb);
spin_lock(&unnamed_dev_lock);
- clear_bit(slot, unnamed_dev_in_use);
+ idr_remove(&unnamed_dev_idr, slot);
spin_unlock(&unnamed_dev_lock);
}
EXPORT_SYMBOL(kill_anon_super);
+void __init unnamed_dev_init(void)
+{
+ idr_init(&unnamed_dev_idr);
+}
+
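set_anon_super() now draws anonymous device minors from an idr instead of a fixed 256-entry bitmap, lifting the old Max_anon limit. A toy userspace allocator (not the kernel's idr API, just its observable behaviour) showing the lowest-free-id allocation and removal that the patch relies on:

```c
#include <assert.h>
#include <string.h>

#define TOY_MAX_IDS 64

/* Hypothetical stand-in for struct idr: a simple in-use table. */
struct toy_idr { unsigned char used[TOY_MAX_IDS]; };

static void toy_idr_init(struct toy_idr *idr)
{
	memset(idr, 0, sizeof(*idr));
}

/* Hands out the lowest free id, as idr_get_new() does, or -1 when full. */
static int toy_idr_get_new(struct toy_idr *idr)
{
	int i;

	for (i = 0; i < TOY_MAX_IDS; i++) {
		if (!idr->used[i]) {
			idr->used[i] = 1;
			return i;
		}
	}
	return -1;
}

/* Releases an id for reuse, as idr_remove() does. */
static void toy_idr_remove(struct toy_idr *idr, int id)
{
	if (id >= 0 && id < TOY_MAX_IDS)
		idr->used[id] = 0;
}
```

Because freed ids are reused lowest-first, minors stay dense, which is why kill_anon_super() only needs to hand the minor back with idr_remove().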
void kill_litter_super(struct super_block *sb)
{
if (sb->s_root)
struct super_block *s;
int error = 0;
- bdev = open_bdev_excl(dev_name, flags, BDEV_FS, fs_type);
+ bdev = open_bdev_excl(dev_name, flags, fs_type);
if (IS_ERR(bdev))
return (struct super_block *)bdev;
return s;
out:
- close_bdev_excl(bdev, BDEV_FS);
+ close_bdev_excl(bdev);
return s;
}
struct block_device *bdev = sb->s_bdev;
generic_shutdown_super(sb);
set_blocksize(bdev, sb->s_old_blocksize);
- close_bdev_excl(bdev, BDEV_FS);
+ close_bdev_excl(bdev);
}
EXPORT_SYMBOL(kill_block_super);
{
int error = 0;
- *bdevp = open_bdev_excl(name, 0, BDEV_FS, mp);
+ *bdevp = open_bdev_excl(name, 0, mp);
if (IS_ERR(*bdevp)) {
error = PTR_ERR(*bdevp);
printk("XFS: Invalid device [%s], error=%d\n", name, error);
struct block_device *bdev)
{
if (bdev)
- close_bdev_excl(bdev, BDEV_FS);
+ close_bdev_excl(bdev);
}
void
#define EXEC_PAGESIZE 8192
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
check_pgt_cache();
}
+static inline unsigned int
+tlb_is_full_mm(struct mmu_gather *tlb)
+{
+ return tlb->fullmm;
+}
+
#define tlb_remove_tlb_entry(tlb,ptep,address) do { } while (0)
#define tlb_start_vma(tlb,vma) \
* This file contains the arm architecture specific module code.
*/
-#define module_map(x) vmalloc(x)
-#define module_unmap(x) vfree(x)
-#define module_arch_init(x) (0)
-#define arch_init_modules(x) do { } while (0)
-
#endif /* _ASM_ARM_MODULE_H */
# define HZ 100
#endif
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 8192
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
check_pgt_cache();
}
+static inline unsigned int
+tlb_is_full_mm(struct mmu_gather *tlb)
+{
+ return tlb->fullmm;
+}
/* tlb_remove_page
* Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while
* This file contains the H8/300 architecture specific module code.
*/
-#define module_map(x) vmalloc(x)
-#define module_unmap(x) vfree(x)
-#define module_arch_init(x) (0)
-#define arch_init_modules(x) do { } while (0)
-
#endif /* _ASM_H8/300_MODULE_H */
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#ifdef CONFIG_DISCONTIGMEM
+#ifdef CONFIG_NUMA
+ #ifdef CONFIG_X86_NUMAQ
+ #include <asm/numaq.h>
+ #else /* summit or generic arch */
+ #include <asm/srat.h>
+ #endif
+#else /* !CONFIG_NUMA */
+ #define get_memcfg_numa get_memcfg_numa_flat
+ #define get_zholes_size(n) (0)
+#endif /* CONFIG_NUMA */
+
extern struct pglist_data *node_data[];
+#define NODE_DATA(nid) (node_data[nid])
+
+/*
+ * generic node memory support, the following assumptions apply:
+ *
+ * 1) memory comes in 256Mb contiguous chunks which are either present or not
+ * 2) we will not have more than 64Gb in total
+ *
+ * for now assume that 64Gb is max amount of RAM for whole system
+ * 64Gb / 4096bytes/page = 16777216 pages
+ */
+#define MAX_NR_PAGES 16777216
+#define MAX_ELEMENTS 256
+#define PAGES_PER_ELEMENT (MAX_NR_PAGES/MAX_ELEMENTS)
+
+extern u8 physnode_map[];
+
+static inline int pfn_to_nid(unsigned long pfn)
+{
+#ifdef CONFIG_NUMA
+ return(physnode_map[(pfn) / PAGES_PER_ELEMENT]);
+#else
+ return 0;
+#endif
+}
+
+static inline struct pglist_data *pfn_to_pgdat(unsigned long pfn)
+{
+ return(NODE_DATA(pfn_to_nid(pfn)));
+}
+
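The pfn_to_nid() added above divides the pfn by PAGES_PER_ELEMENT and indexes physnode_map, i.e. one byte of table per 256MB chunk of physical memory. The arithmetic in isolation, with a made-up two-node map:

```c
#include <assert.h>

#define MAX_NR_PAGES      16777216	/* 64Gb / 4096 bytes per page */
#define MAX_ELEMENTS      256
#define PAGES_PER_ELEMENT (MAX_NR_PAGES / MAX_ELEMENTS)	/* 65536 pages = 256Mb */

/* Hypothetical map: first two 256Mb chunks on node 0, next two on node 1. */
static unsigned char physnode_map[MAX_ELEMENTS] = { 0, 0, 1, 1 };

/* Same shape as the patched pfn_to_nid(): one divide, one table load. */
static int pfn_to_nid(unsigned long pfn)
{
	return physnode_map[pfn / PAGES_PER_ELEMENT];
}
```

The coarse 256Mb granularity is what keeps the whole map at 256 bytes while still covering the assumed 64Gb maximum.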
/*
* Following are macros that are specific to this numa platform.
*/
#define kvaddr_to_nid(kaddr) pfn_to_nid(__pa(kaddr) >> PAGE_SHIFT)
-/*
- * Return a pointer to the node data for node n.
- */
-#define NODE_DATA(nid) (node_data[nid])
-
#define node_mem_map(nid) (NODE_DATA(nid)->node_mem_map)
#define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn)
#define node_end_pfn(nid) \
+ __zone->zone_start_pfn; \
})
#define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT))
-/*
- * pfn_valid should be made as fast as possible, and the current definition
- * is valid for machines that are NUMA, but still contiguous, which is what
- * is currently supported. A more generalised, but slower definition would
- * be something like this - mbligh:
- * ( pfn_to_pgdat(pfn) && ((pfn) < node_end_pfn(pfn_to_nid(pfn))) )
- */
-#define pfn_valid(pfn) ((pfn) < num_physpages)
-
-/*
- * generic node memory support, the following assumptions apply:
- *
- * 1) memory comes in 256Mb contigious chunks which are either present or not
- * 2) we will not have more than 64Gb in total
- *
- * for now assume that 64Gb is max amount of RAM for whole system
- * 64Gb / 4096bytes/page = 16777216 pages
- */
-#define MAX_NR_PAGES 16777216
-#define MAX_ELEMENTS 256
-#define PAGES_PER_ELEMENT (MAX_NR_PAGES/MAX_ELEMENTS)
-extern u8 physnode_map[];
-
-static inline int pfn_to_nid(unsigned long pfn)
-{
- return(physnode_map[(pfn) / PAGES_PER_ELEMENT]);
-}
-static inline struct pglist_data *pfn_to_pgdat(unsigned long pfn)
+#ifdef CONFIG_X86_NUMAQ /* we have contiguous memory on NUMA-Q */
+#define pfn_valid(pfn) ((pfn) < num_physpages)
+#else
+static inline int pfn_valid(int pfn)
{
- return(NODE_DATA(pfn_to_nid(pfn)));
-}
+ int nid = pfn_to_nid(pfn);
-#ifdef CONFIG_X86_NUMAQ
-#include <asm/numaq.h>
-#elif CONFIG_ACPI_SRAT
-#include <asm/srat.h>
-#elif CONFIG_X86_PC
-#define get_zholes_size(n) (0)
-#else
-#define pfn_to_nid(pfn) (0)
-#endif /* CONFIG_X86_NUMAQ */
+ if (nid >= 0)
+ return (pfn < node_end_pfn(nid));
+ return 0;
+}
+#endif
extern int get_memcfg_numa_flat(void );
/*
#define MODULE_PROC_FAMILY "PENTIUMII "
#elif defined CONFIG_MPENTIUMIII
#define MODULE_PROC_FAMILY "PENTIUMIII "
+#elif defined CONFIG_MPENTIUMM
+#define MODULE_PROC_FAMILY "PENTIUMM "
#elif defined CONFIG_MPENTIUM4
#define MODULE_PROC_FAMILY "PENTIUM4 "
#elif defined CONFIG_MK6
#define MODULE_PROC_FAMILY "WINCHIP3D "
#elif defined CONFIG_MCYRIXIII
#define MODULE_PROC_FAMILY "CYRIXIII "
-#elif CONFIG_MVIAC3_2
+#elif defined CONFIG_MVIAC3_2
#define MODULE_PROC_FAMILY "VIAC3-2 "
#else
#error unknown processor family
#endif
-#if defined(CONFIG_REGPARM) && __GNUC__ >= 3
+#ifdef CONFIG_REGPARM
#define MODULE_REGPARM "REGPARM "
#else
-#define MODULE_REGPARM ""
+#define MODULE_REGPARM ""
#endif
#define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY MODULE_REGPARM
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define init_thread_info (init_thread_union.thread_info)
#define init_stack (init_thread_union.stack)
+#define THREAD_SIZE (2*PAGE_SIZE)
+
/* how to get the thread information struct from C */
static inline struct thread_info *current_thread_info(void)
{
struct thread_info *ti;
- __asm__("andl %%esp,%0; ":"=r" (ti) : "0" (~8191UL));
+ __asm__("andl %%esp,%0; ":"=r" (ti) : "0" (~(THREAD_SIZE - 1)));
return ti;
}
/* thread information allocation */
-#define THREAD_SIZE (2*PAGE_SIZE)
-#define alloc_thread_info(task) ((struct thread_info *)kmalloc(THREAD_SIZE, GFP_KERNEL))
+#ifdef CONFIG_DEBUG_STACK_USAGE
+#define alloc_thread_info(tsk) \
+ ({ \
+ struct thread_info *ret; \
+ \
+ ret = kmalloc(THREAD_SIZE, GFP_KERNEL); \
+ if (ret) \
+ memset(ret, 0, THREAD_SIZE); \
+ ret; \
+ })
+#else
+#define alloc_thread_info(tsk) kmalloc(THREAD_SIZE, GFP_KERNEL)
+#endif
+
#define free_thread_info(info) kfree(info)
#define get_thread_info(ti) get_task_struct((ti)->task)
#define put_thread_info(ti) put_task_struct((ti)->task)
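Under CONFIG_DEBUG_STACK_USAGE the new alloc_thread_info() zero-fills the whole stack allocation so that a later scan can find the high-water mark of stack usage. A userspace sketch of the debug flavour, with malloc standing in for kmalloc:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define THREAD_SIZE 8192

/* Debug flavour of alloc_thread_info(): zero the whole allocation, as
 * the patched macro does, so untouched bytes are recognisably zero when
 * stack depth is measured later. */
static void *alloc_thread_info_debug(void)
{
	void *ret = malloc(THREAD_SIZE);

	if (ret)
		memset(ret, 0, THREAD_SIZE);
	return ret;
}
```

The non-debug build skips the memset, which is why the patch keeps two variants behind the config option rather than zeroing unconditionally.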
#else /* !__ASSEMBLY__ */
+#define THREAD_SIZE 8192
+
/* how to get the thread information struct from ASM */
#define GET_THREAD_INFO(reg) \
- movl $-8192, reg; \
+ movl $-THREAD_SIZE, reg; \
andl %esp, reg
#endif
#endif
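Both current_thread_info() and GET_THREAD_INFO above recover the thread_info at the base of the current stack by rounding the stack pointer down to a THREAD_SIZE boundary; the patch merely derives the mask from THREAD_SIZE instead of hard-coding 8191/-8192. The masking arithmetic, demonstrated on an ordinary address value:

```c
#include <assert.h>
#include <stdint.h>

#define THREAD_SIZE 8192UL	/* must be a power of two for the mask trick */

/* Round an address down to the base of its THREAD_SIZE-aligned block:
 * exactly what `andl %%esp, ~(THREAD_SIZE - 1)` computes in the patch. */
static uintptr_t thread_info_base(uintptr_t sp)
{
	return sp & ~(THREAD_SIZE - 1);
}
```

This only works because the stack is THREAD_SIZE-aligned and THREAD_SIZE is a power of two; deriving the mask from the macro keeps both definitions in one place if the stack size ever changes.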
extern unsigned long calibrate_tsc(void);
+extern void init_cpu_khz(void);
#ifdef CONFIG_HPET_TIMER
extern struct timer_opts timer_hpet;
extern unsigned long calibrate_tsc_hpet(unsigned long *tsc_hpet_quotient_ptr);
#endif
+#ifdef CONFIG_X86_PM_TIMER
+extern struct timer_opts timer_pmtmr;
+#endif
#endif
#ifdef CONFIG_X86_PC9800
extern int CLOCK_TICK_RATE;
#else
-#ifdef CONFIG_MELAN
+#ifdef CONFIG_X86_ELAN
# define CLOCK_TICK_RATE 1189200 /* AMD Elan has different frequency! */
#else
# define CLOCK_TICK_RATE 1193182 /* Underlying HZ */
--- /dev/null
+#ifndef ASM_IA64_CYCLONE_H
+#define ASM_IA64_CYCLONE_H
+
+#ifdef CONFIG_IA64_CYCLONE
+extern int use_cyclone;
+extern int __init cyclone_setup(char*);
+#else /* CONFIG_IA64_CYCLONE */
+#define use_cyclone 0
+static inline void cyclone_setup(char* s)
+{
+ printk(KERN_ERR "Cyclone Counter: System not configured"
+ " w/ CONFIG_IA64_CYCLONE.\n");
+}
+#endif /* CONFIG_IA64_CYCLONE */
+#endif /* !ASM_IA64_CYCLONE_H */
#define EXEC_PAGESIZE 65536
-#ifndef NGROUPS
-# define NGROUPS 32
-#endif
-
#ifndef NOGROUP
# define NOGROUP (-1)
#endif
unsigned int windows;
struct pci_window *window;
+
+ void *platform_data;
};
#define PCI_CONTROLLER(busdev) ((struct pci_controller *) busdev->sysdata)
#define _ASM_IA64_SN_CLKSUPPORT_H
#include <asm/sn/arch.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/sn2/addrs.h>
+#include <asm/sn/sn2/shubio.h>
+#include <asm/sn/sn2/shub_mmr.h>
typedef long clkreg_t;
extern unsigned long sn_rtc_cycles_per_second;
extern unsigned long sn_rtc_per_itc;
-
-#include <asm/sn/addrs.h>
-#include <asm/sn/sn2/addrs.h>
-#include <asm/sn/sn2/shubio.h>
-#include <asm/sn/sn2/shub_mmr.h>
#define RTC_MASK SH_RTC_MASK
#define RTC_COUNTER_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC))
#define RTC_COMPARE_A_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC))
#ifndef _ASM_IA64_SN_DMAMAP_H
#define _ASM_IA64_SN_DMAMAP_H
-#include <asm/sn/types.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
/*
* Definitions for allocating, freeing, and using DMA maps
*/
unsigned long dma_virtaddr; /* Beginning virtual address that is mapped */
} dmamap_t;
-#ifdef __cplusplus
-}
-#endif
-
/* standard flags values for pio_map routines,
* including {xtalk,pciio}_dmamap calls.
* NOTE: try to keep these in step with PIOMAP flags.
/* == Driver thread priority support == */
typedef int ilvl_t;
-#ifdef __cplusplus
-extern "C" {
-#endif
-
struct eframe_s;
struct piomap;
struct dmamap;
#ifndef _ASM_IA64_SN_INTR_H
#define _ASM_IA64_SN_INTR_H
-#include <linux/config.h>
#include <asm/sn/sn2/intr.h>
extern void sn_send_IPI_phys(long, int, int);
#ifndef _ASM_IA64_SN_IO_H
#define _ASM_IA64_SN_IO_H
-#include <linux/config.h>
-
#include <asm/sn/addrs.h>
/* Because we only have PCI I/O ports. */
#ifndef _ASM_IA64_SN_IOC4_H
#define _ASM_IA64_SN_IOC4_H
-#if 0
-
-/*
- * ioc4.h - IOC4 chip header file
- */
-
-/* Notes:
- * The IOC4 chip is a 32-bit PCI device that provides 4 serial ports,
- * an IDE bus interface, a PC keyboard/mouse interface, and a real-time
- * external interrupt interface.
- *
- * It includes an optimized DMA buffer management, and a store-and-forward
- * buffer RAM.
- *
- * All IOC4 registers are 32 bits wide.
- */
-typedef __uint32_t ioc4reg_t;
-
-/*
- * PCI Configuration Space Register Address Map, use offset from IOC4 PCI
- * configuration base such that this can be used for multiple IOC4s
- */
-#define IOC4_PCI_ID 0x0 /* ID */
-
-#define IOC4_VENDOR_ID_NUM 0x10A9
-#define IOC4_DEVICE_ID_NUM 0x100A
-#define IOC4_ADDRSPACE_MASK 0xfff00000ULL
-
-#define IOC4_PCI_SCR 0x4 /* Status/Command */
-#define IOC4_PCI_REV 0x8 /* Revision */
-#define IOC4_PCI_LAT 0xC /* Latency Timer */
-#define IOC4_PCI_BAR0 0x10 /* IOC4 base address 0 */
-#define IOC4_PCI_SIDV 0x2c /* Subsys ID and vendor */
-#define IOC4_PCI_CAP 0x34 /* Capability pointer */
-#define IOC4_PCI_LATGNTINT 0x3c /* Max_lat, min_gnt, int_pin, int_line */
-
-/*
- * PCI Memory Space Map
- */
-#define IOC4_PCI_ERR_ADDR_L 0x000 /* Low Error Address */
-#define IOC4_PCI_ERR_ADDR_VLD (0x1 << 0)
-#define IOC4_PCI_ERR_ADDR_MST_ID_MSK (0xf << 1)
-#define IOC4_PCI_ERR_ADDR_MUL_ERR (0x1 << 5)
-#define IOC4_PCI_ERR_ADDR_ADDR_MSK (0x3ffffff << 6)
-
-/* Master IDs contained in PCI_ERR_ADDR_MST_ID_MSK */
-#define IOC4_MST_ID_S0_TX 0
-#define IOC4_MST_ID_S0_RX 1
-#define IOC4_MST_ID_S1_TX 2
-#define IOC4_MST_ID_S1_RX 3
-#define IOC4_MST_ID_S2_TX 4
-#define IOC4_MST_ID_S2_RX 5
-#define IOC4_MST_ID_S3_TX 6
-#define IOC4_MST_ID_S3_RX 7
-#define IOC4_MST_ID_ATA 8
-
-#define IOC4_PCI_ERR_ADDR_H 0x004 /* High Error Address */
-
-#define IOC4_SIO_IR 0x008 /* SIO Interrupt Register */
-#define IOC4_OTHER_IR 0x00C /* Other Interrupt Register */
-
-/* These registers are read-only for general kernel code. To modify
- * them use the functions in ioc4.c
- */
-#define IOC4_SIO_IES_RO 0x010 /* SIO Interrupt Enable Set Reg */
-#define IOC4_OTHER_IES_RO 0x014 /* Other Interrupt Enable Set Reg */
-#define IOC4_SIO_IEC_RO 0x018 /* SIO Interrupt Enable Clear Reg */
-#define IOC4_OTHER_IEC_RO 0x01C /* Other Interrupt Enable Clear Reg */
-
-#define IOC4_SIO_CR 0x020 /* SIO Control Reg */
-#define IOC4_INT_OUT 0x028 /* INT_OUT Reg (realtime interrupt) */
-#define IOC4_GPCR_S 0x030 /* GenericPIO Cntrl Set Register */
-#define IOC4_GPCR_C 0x034 /* GenericPIO Cntrl Clear Register */
-#define IOC4_GPDR 0x038 /* GenericPIO Data Register */
-#define IOC4_GPPR_0 0x040 /* GenericPIO Pin Registers */
-#define IOC4_GPPR_OFF 0x4
-#define IOC4_GPPR(x) (IOC4_GPPR_0+(x)*IOC4_GPPR_OFF)
-
-/* ATAPI Registers */
-#define IOC4_ATA_0 0x100 /* Data w/timing */
-#define IOC4_ATA_1 0x104 /* Error/Features w/timing */
-#define IOC4_ATA_2 0x108 /* Sector Count w/timing */
-#define IOC4_ATA_3 0x10C /* Sector Number w/timing */
-#define IOC4_ATA_4 0x110 /* Cyliner Low w/timing */
-#define IOC4_ATA_5 0x114 /* Cylinder High w/timing */
-#define IOC4_ATA_6 0x118 /* Device/Head w/timing */
-#define IOC4_ATA_7 0x11C /* Status/Command w/timing */
-#define IOC4_ATA_0_AUX 0x120 /* Aux Status/Device Cntrl w/timing */
-#define IOC4_ATA_TIMING 0x140 /* Timing value register 0 */
-#define IOC4_ATA_DMA_PTR_L 0x144 /* Low Memory Pointer to DMA List */
-#define IOC4_ATA_DMA_PTR_H 0x148 /* High Memory Pointer to DMA List */
-#define IOC4_ATA_DMA_ADDR_L 0x14C /* Low Memory DMA Address */
-#define IOC4_ATA_DMA_ADDR_H 0x150 /* High Memory DMA Addresss */
-#define IOC4_ATA_BC_DEV 0x154 /* DMA Byte Count at Device */
-#define IOC4_ATA_BC_MEM 0x158 /* DMA Byte Count at Memory */
-#define IOC4_ATA_DMA_CTRL 0x15C /* DMA Control/Status */
-
-/* Keyboard and Mouse Registers */
-#define IOC4_KM_CSR 0x200 /* Kbd and Mouse Cntrl/Status Reg */
-#define IOC4_K_RD 0x204 /* Kbd Read Data Register */
-#define IOC4_M_RD 0x208 /* Mouse Read Data Register */
-#define IOC4_K_WD 0x20C /* Kbd Write Data Register */
-#define IOC4_M_WD 0x210 /* Mouse Write Data Register */
-
-/* Serial Port Registers used for DMA mode serial I/O */
-#define IOC4_SBBR01_H 0x300 /* Serial Port Ring Buffers
- Base Reg High for Channels 0 1*/
-#define IOC4_SBBR01_L 0x304 /* Serial Port Ring Buffers
- Base Reg Low for Channels 0 1 */
-#define IOC4_SBBR23_H 0x308 /* Serial Port Ring Buffers
- Base Reg High for Channels 2 3*/
-#define IOC4_SBBR23_L 0x30C /* Serial Port Ring Buffers
- Base Reg Low for Channels 2 3 */
-
-#define IOC4_SSCR_0 0x310 /* Serial Port 0 Control */
-#define IOC4_STPIR_0 0x314 /* Serial Port 0 TX Produce */
-#define IOC4_STCIR_0 0x318 /* Serial Port 0 TX Consume */
-#define IOC4_SRPIR_0 0x31C /* Serial Port 0 RX Produce */
-#define IOC4_SRCIR_0 0x320 /* Serial Port 0 RX Consume */
-#define IOC4_SRTR_0 0x324 /* Serial Port 0 Receive Timer Reg */
-#define IOC4_SHADOW_0 0x328 /* Serial Port 0 16550 Shadow Reg */
-
-#define IOC4_SSCR_1 0x32C /* Serial Port 1 Control */
-#define IOC4_STPIR_1 0x330 /* Serial Port 1 TX Produce */
-#define IOC4_STCIR_1 0x334 /* Serial Port 1 TX Consume */
-#define IOC4_SRPIR_1 0x338 /* Serial Port 1 RX Produce */
-#define IOC4_SRCIR_1 0x33C /* Serial Port 1 RX Consume */
-#define IOC4_SRTR_1 0x340 /* Serial Port 1 Receive Timer Reg */
-#define IOC4_SHADOW_1 0x344 /* Serial Port 1 16550 Shadow Reg */
-
-#define IOC4_SSCR_2 0x348 /* Serial Port 2 Control */
-#define IOC4_STPIR_2 0x34C /* Serial Port 2 TX Produce */
-#define IOC4_STCIR_2 0x350 /* Serial Port 2 TX Consume */
-#define IOC4_SRPIR_2 0x354 /* Serial Port 2 RX Produce */
-#define IOC4_SRCIR_2 0x358 /* Serial Port 2 RX Consume */
-#define IOC4_SRTR_2 0x35C /* Serial Port 2 Receive Timer Reg */
-#define IOC4_SHADOW_2 0x360 /* Serial Port 2 16550 Shadow Reg */
-
-#define IOC4_SSCR_3 0x364 /* Serial Port 3 Control */
-#define IOC4_STPIR_3 0x368 /* Serial Port 3 TX Produce */
-#define IOC4_STCIR_3 0x36C /* Serial Port 3 TX Consume */
-#define IOC4_SRPIR_3 0x370 /* Serial Port 3 RX Produce */
-#define IOC4_SRCIR_3 0x374 /* Serial Port 3 RX Consume */
-#define IOC4_SRTR_3 0x378 /* Serial Port 3 Receive Timer Reg */
-#define IOC4_SHADOW_3 0x37C /* Serial Port 3 16550 Shadow Reg */
-
-#define IOC4_UART0_BASE 0x380 /* UART 0 */
-#define IOC4_UART1_BASE 0x388 /* UART 1 */
-#define IOC4_UART2_BASE 0x390 /* UART 2 */
-#define IOC4_UART3_BASE 0x398 /* UART 3 */
-
-/* Private page address aliases for usermode mapping */
-#define IOC4_INT_OUT_P 0x04000 /* INT_OUT Reg */
-
-#define IOC4_SSCR_0_P 0x08000 /* Serial Port 0 */
-#define IOC4_STPIR_0_P 0x08004
-#define IOC4_STCIR_0_P 0x08008 /* (read-only) */
-#define IOC4_SRPIR_0_P 0x0800C /* (read-only) */
-#define IOC4_SRCIR_0_P 0x08010
-#define IOC4_SRTR_0_P 0x08014
-#define IOC4_UART_LSMSMCR_0_P 0x08018 /* (read-only) */
-
-#define IOC4_SSCR_1_P 0x0C000 /* Serial Port 1 */
-#define IOC4_STPIR_1_P 0x0C004
-#define IOC4_STCIR_1_P 0x0C008 /* (read-only) */
-#define IOC4_SRPIR_1_P 0x0C00C /* (read-only) */
-#define IOC4_SRCIR_1_P 0x0C010
-#define IOC4_SRTR_1_P 0x0C014
-#define IOC4_UART_LSMSMCR_1_P 0x0C018 /* (read-only) */
-
-#define IOC4_SSCR_2_P 0x10000 /* Serial Port 2 */
-#define IOC4_STPIR_2_P 0x10004
-#define IOC4_STCIR_2_P 0x10008 /* (read-only) */
-#define IOC4_SRPIR_2_P 0x1000C /* (read-only) */
-#define IOC4_SRCIR_2_P 0x10010
-#define IOC4_SRTR_2_P 0x10014
-#define IOC4_UART_LSMSMCR_2_P 0x10018 /* (read-only) */
-
-#define IOC4_SSCR_3_P 0x14000 /* Serial Port 3 */
-#define IOC4_STPIR_3_P 0x14004
-#define IOC4_STCIR_3_P 0x14008 /* (read-only) */
-#define IOC4_SRPIR_3_P 0x1400C /* (read-only) */
-#define IOC4_SRCIR_3_P 0x14010
-#define IOC4_SRTR_3_P 0x14014
-#define IOC4_UART_LSMSMCR_3_P 0x14018 /* (read-only) */
-
-#define IOC4_ALIAS_PAGE_SIZE 0x4000
-
-/* Interrupt types */
-typedef enum ioc4_intr_type_e {
- ioc4_sio_intr_type,
- ioc4_other_intr_type,
- ioc4_num_intr_types
-} ioc4_intr_type_t;
-#define ioc4_first_intr_type ioc4_sio_intr_type
-
-/* Bitmasks for IOC4_SIO_IR, IOC4_SIO_IEC, and IOC4_SIO_IES */
-#define IOC4_SIO_IR_S0_TX_MT 0x00000001 /* Serial port 0 TX empty */
-#define IOC4_SIO_IR_S0_RX_FULL 0x00000002 /* Port 0 RX buf full */
-#define IOC4_SIO_IR_S0_RX_HIGH 0x00000004 /* Port 0 RX hiwat */
-#define IOC4_SIO_IR_S0_RX_TIMER 0x00000008 /* Port 0 RX timeout */
-#define IOC4_SIO_IR_S0_DELTA_DCD 0x00000010 /* Port 0 delta DCD */
-#define IOC4_SIO_IR_S0_DELTA_CTS 0x00000020 /* Port 0 delta CTS */
-#define IOC4_SIO_IR_S0_INT 0x00000040 /* Port 0 pass-thru intr */
-#define IOC4_SIO_IR_S0_TX_EXPLICIT 0x00000080 /* Port 0 explicit TX thru */
-#define IOC4_SIO_IR_S1_TX_MT 0x00000100 /* Serial port 1 */
-#define IOC4_SIO_IR_S1_RX_FULL 0x00000200 /* */
-#define IOC4_SIO_IR_S1_RX_HIGH 0x00000400 /* */
-#define IOC4_SIO_IR_S1_RX_TIMER 0x00000800 /* */
-#define IOC4_SIO_IR_S1_DELTA_DCD 0x00001000 /* */
-#define IOC4_SIO_IR_S1_DELTA_CTS 0x00002000 /* */
-#define IOC4_SIO_IR_S1_INT 0x00004000 /* */
-#define IOC4_SIO_IR_S1_TX_EXPLICIT 0x00008000 /* */
-#define IOC4_SIO_IR_S2_TX_MT 0x00010000 /* Serial port 2 */
-#define IOC4_SIO_IR_S2_RX_FULL 0x00020000 /* */
-#define IOC4_SIO_IR_S2_RX_HIGH 0x00040000 /* */
-#define IOC4_SIO_IR_S2_RX_TIMER 0x00080000 /* */
-#define IOC4_SIO_IR_S2_DELTA_DCD 0x00100000 /* */
-#define IOC4_SIO_IR_S2_DELTA_CTS 0x00200000 /* */
-#define IOC4_SIO_IR_S2_INT 0x00400000 /* */
-#define IOC4_SIO_IR_S2_TX_EXPLICIT 0x00800000 /* */
-#define IOC4_SIO_IR_S3_TX_MT 0x01000000 /* Serial port 3 */
-#define IOC4_SIO_IR_S3_RX_FULL 0x02000000 /* */
-#define IOC4_SIO_IR_S3_RX_HIGH 0x04000000 /* */
-#define IOC4_SIO_IR_S3_RX_TIMER 0x08000000 /* */
-#define IOC4_SIO_IR_S3_DELTA_DCD 0x10000000 /* */
-#define IOC4_SIO_IR_S3_DELTA_CTS 0x20000000 /* */
-#define IOC4_SIO_IR_S3_INT 0x40000000 /* */
-#define IOC4_SIO_IR_S3_TX_EXPLICIT 0x80000000 /* */
-
-/* Per device interrupt masks */
-#define IOC4_SIO_IR_S0 (IOC4_SIO_IR_S0_TX_MT | \
- IOC4_SIO_IR_S0_RX_FULL | \
- IOC4_SIO_IR_S0_RX_HIGH | \
- IOC4_SIO_IR_S0_RX_TIMER | \
- IOC4_SIO_IR_S0_DELTA_DCD | \
- IOC4_SIO_IR_S0_DELTA_CTS | \
- IOC4_SIO_IR_S0_INT | \
- IOC4_SIO_IR_S0_TX_EXPLICIT)
-#define IOC4_SIO_IR_S1 (IOC4_SIO_IR_S1_TX_MT | \
- IOC4_SIO_IR_S1_RX_FULL | \
- IOC4_SIO_IR_S1_RX_HIGH | \
- IOC4_SIO_IR_S1_RX_TIMER | \
- IOC4_SIO_IR_S1_DELTA_DCD | \
- IOC4_SIO_IR_S1_DELTA_CTS | \
- IOC4_SIO_IR_S1_INT | \
- IOC4_SIO_IR_S1_TX_EXPLICIT)
-#define IOC4_SIO_IR_S2 (IOC4_SIO_IR_S2_TX_MT | \
- IOC4_SIO_IR_S2_RX_FULL | \
- IOC4_SIO_IR_S2_RX_HIGH | \
- IOC4_SIO_IR_S2_RX_TIMER | \
- IOC4_SIO_IR_S2_DELTA_DCD | \
- IOC4_SIO_IR_S2_DELTA_CTS | \
- IOC4_SIO_IR_S2_INT | \
- IOC4_SIO_IR_S2_TX_EXPLICIT)
-#define IOC4_SIO_IR_S3 (IOC4_SIO_IR_S3_TX_MT | \
- IOC4_SIO_IR_S3_RX_FULL | \
- IOC4_SIO_IR_S3_RX_HIGH | \
- IOC4_SIO_IR_S3_RX_TIMER | \
- IOC4_SIO_IR_S3_DELTA_DCD | \
- IOC4_SIO_IR_S3_DELTA_CTS | \
- IOC4_SIO_IR_S3_INT | \
- IOC4_SIO_IR_S3_TX_EXPLICIT)
-
-/* Bitmasks for IOC4_OTHER_IR, IOC4_OTHER_IEC, and IOC4_OTHER_IES */
-#define IOC4_OTHER_IR_ATA_INT 0x00000001 /* ATAPI intr pass-thru */
-#define IOC4_OTHER_IR_ATA_MEMERR 0x00000002 /* ATAPI DMA PCI error */
-#define IOC4_OTHER_IR_S0_MEMERR 0x00000004 /* Port 0 PCI error */
-#define IOC4_OTHER_IR_S1_MEMERR 0x00000008 /* Port 1 PCI error */
-#define IOC4_OTHER_IR_S2_MEMERR 0x00000010 /* Port 2 PCI error */
-#define IOC4_OTHER_IR_S3_MEMERR 0x00000020 /* Port 3 PCI error */
-#define IOC4_OTHER_IR_KBD_INT 0x00000040 /* Kbd/mouse intr */
-#define IOC4_OTHER_IR_ATA_DMAINT 0x00000089 /* ATAPI DMA intr */
-#define IOC4_OTHER_IR_RT_INT 0x00800000 /* RT output pulse */
-#define IOC4_OTHER_IR_GEN_INT1 0x02000000 /* RT input pulse */
-#define IOC4_OTHER_IR_GEN_INT_SHIFT 25
-
-/* Per device interrupt masks */
-#define IOC4_OTHER_IR_ATA (IOC4_OTHER_IR_ATA_INT | \
- IOC4_OTHER_IR_ATA_MEMERR | \
- IOC4_OTHER_IR_ATA_DMAINT)
-#define IOC4_OTHER_IR_RT (IOC4_OTHER_IR_RT_INT | IOC4_OTHER_IR_GEN_INT1)
-
-/* Macro to load pending interrupts */
-#define IOC4_PENDING_SIO_INTRS(mem) (PCI_INW(&((mem)->sio_ir)) & \
- PCI_INW(&((mem)->sio_ies_ro)))
-#define IOC4_PENDING_OTHER_INTRS(mem) (PCI_INW(&((mem)->other_ir)) & \
- PCI_INW(&((mem)->other_ies_ro)))
-
-/* Bitmasks for IOC4_SIO_CR */
-#define IOC4_SIO_SR_CMD_PULSE 0x00000004 /* Byte bus strobe length */
-#define IOC4_SIO_CR_CMD_PULSE_SHIFT 0
-#define IOC4_SIO_CR_ARB_DIAG 0x00000070 /* Current non-ATA PCI bus
- requester (ro) */
-#define IOC4_SIO_CR_ARB_DIAG_TX0 0x00000000
-#define IOC4_SIO_CR_ARB_DIAG_RX0 0x00000010
-#define IOC4_SIO_CR_ARB_DIAG_TX1 0x00000020
-#define IOC4_SIO_CR_ARB_DIAG_RX1 0x00000030
-#define IOC4_SIO_CR_ARB_DIAG_TX2 0x00000040
-#define IOC4_SIO_CR_ARB_DIAG_RX2 0x00000050
-#define IOC4_SIO_CR_ARB_DIAG_TX3 0x00000060
-#define IOC4_SIO_CR_ARB_DIAG_RX3 0x00000070
-#define IOC4_SIO_CR_SIO_DIAG_IDLE 0x00000080 /* 0 -> active request among
- serial ports (ro) */
-#define IOC4_SIO_CR_ATA_DIAG_IDLE 0x00000100 /* 0 -> active request from
- ATA port */
-#define IOC4_SIO_CR_ATA_DIAG_ACTIVE 0x00000200 /* 1 -> ATA request is winner */
-
-/* Bitmasks for IOC4_INT_OUT */
-#define IOC4_INT_OUT_COUNT 0x0000ffff /* Pulse interval timer */
-#define IOC4_INT_OUT_MODE 0x00070000 /* Mode mask */
-#define IOC4_INT_OUT_MODE_0 0x00000000 /* Set output to 0 */
-#define IOC4_INT_OUT_MODE_1 0x00040000 /* Set output to 1 */
-#define IOC4_INT_OUT_MODE_1PULSE 0x00050000 /* Send 1 pulse */
-#define IOC4_INT_OUT_MODE_PULSES 0x00060000 /* Send 1 pulse every interval */
-#define IOC4_INT_OUT_MODE_SQW 0x00070000 /* Toggle output every interval */
-#define IOC4_INT_OUT_DIAG 0x40000000 /* Diag mode */
-#define IOC4_INT_OUT_INT_OUT 0x80000000 /* Current state of INT_OUT */
-
-/* Time constants for IOC4_INT_OUT */
-#define IOC4_INT_OUT_NS_PER_TICK (15 * 520) /* 15 ns PCI clock, multi=520 */
-#define IOC4_INT_OUT_TICKS_PER_PULSE 3 /* Outgoing pulse lasts 3
- ticks */
-#define IOC4_INT_OUT_US_TO_COUNT(x) /* Convert uS to a count value */ \
- (((x) * 10 + IOC4_INT_OUT_NS_PER_TICK / 200) * \
- 100 / IOC4_INT_OUT_NS_PER_TICK - 1)
-#define IOC4_INT_OUT_COUNT_TO_US(x) /* Convert count value to uS */ \
- (((x) + 1) * IOC4_INT_OUT_NS_PER_TICK / 1000)
-#define IOC4_INT_OUT_MIN_TICKS 3 /* Min period is width of
- pulse in "ticks" */
-#define IOC4_INT_OUT_MAX_TICKS IOC4_INT_OUT_COUNT /* Largest possible count */
-
-/* Bitmasks for IOC4_GPCR */
-#define IOC4_GPCR_DIR 0x000000ff /* Tristate pin in or out */
-#define IOC4_GPCR_DIR_PIN(x) (1<<(x)) /* Access one of the DIR bits */
-#define IOC4_GPCR_EDGE 0x0000ff00 /* Extint edge or level
- sensitive */
-#define IOC4_GPCR_EDGE_PIN(x) (1<<((x)+7 )) /* Access one of the EDGE bits */
-
-/* Values for IOC4_GPCR */
-#define IOC4_GPCR_INT_OUT_EN 0x00100000 /* Enable INT_OUT to pin 0 */
-#define IOC4_GPCR_DIR_SER0_XCVR 0x00000010 /* Port 0 Transceiver select
- enable */
-#define IOC4_GPCR_DIR_SER1_XCVR 0x00000020 /* Port 1 Transceiver select
- enable */
-#define IOC4_GPCR_DIR_SER2_XCVR 0x00000040 /* Port 2 Transceiver select
- enable */
-#define IOC4_GPCR_DIR_SER3_XCVR 0x00000080 /* Port 3 Transceiver select
- enable */
-
-/* Defs for some of the generic I/O pins */
-#define IOC4_GPCR_UART0_MODESEL 0x10 /* Pin is output to port 0
- mode sel */
-#define IOC4_GPCR_UART1_MODESEL 0x20 /* Pin is output to port 1
- mode sel */
-#define IOC4_GPCR_UART2_MODESEL 0x40 /* Pin is output to port 2
- mode sel */
-#define IOC4_GPCR_UART3_MODESEL 0x80 /* Pin is output to port 3
- mode sel */
-
-#define IOC4_GPPR_UART0_MODESEL_PIN 4 /* GIO pin controlling
- uart 0 mode select */
-#define IOC4_GPPR_UART1_MODESEL_PIN 5 /* GIO pin controlling
- uart 1 mode select */
-#define IOC4_GPPR_UART2_MODESEL_PIN 6 /* GIO pin controlling
- uart 2 mode select */
-#define IOC4_GPPR_UART3_MODESEL_PIN 7 /* GIO pin controlling
- uart 3 mode select */
-
-/* Bitmasks for IOC4_ATA_TIMING */
-#define IOC4_ATA_TIMING_ADR_SETUP 0x00000003 /* Clocks of addr set-up */
-#define IOC4_ATA_TIMING_PULSE_WIDTH 0x000001f8 /* Clocks of read or write
- pulse width */
-#define IOC4_ATA_TIMING_RECOVERY 0x0000fe00 /* Clocks before next read
- or write */
-#define IOC4_ATA_TIMING_USE_IORDY 0x00010000 /* PIO uses IORDY */
-
-/* Bitmasks for address list elements pointed to by IOC4_ATA_DMA_PTR_<L|H> */
-#define IOC4_ATA_ALE_DMA_ADDRESS 0xfffffffffffffffe
-
-/* Bitmasks for byte count list elements pointed to by IOC4_ATA_DMA_PTR_<L|H> */
-#define IOC4_ATA_BCLE_BYTE_COUNT 0x000000000000fffe
-#define IOC4_ATA_BCLE_LIST_END 0x0000000080000000
-
-/* Bitmasks for IOC4_ATA_BC_<DEV|MEM> */
-#define IOC4_ATA_BC_BYTE_CNT 0x0001fffe /* Byte count */
-
-/* Bitmasks for IOC4_ATA_DMA_CTRL */
-#define IOC4_ATA_DMA_CTRL_STRAT 0x00000001 /* 1 -> start DMA engine */
-#define IOC4_ATA_DMA_CTRL_STOP 0x00000002 /* 1 -> stop DMA engine */
-#define IOC4_ATA_DMA_CTRL_DIR 0x00000004 /* 1 -> ATA bus data copied
- to memory */
-#define IOC4_ATA_DMA_CTRL_ACTIVE 0x00000008 /* DMA channel is active */
-#define IOC4_ATA_DMA_CTRL_MEM_ERROR 0x00000010 /* DMA engine encountered
- a PCI error */
-/* Bitmasks for IOC4_KM_CSR */
-#define IOC4_KM_CSR_K_WRT_PEND 0x00000001 /* Kbd port xmitting or resetting */
-#define IOC4_KM_CSR_M_WRT_PEND 0x00000002 /* Mouse port xmitting or resetting */
-#define IOC4_KM_CSR_K_LCB 0x00000004 /* Line Cntrl Bit for last KBD write */
-#define IOC4_KM_CSR_M_LCB 0x00000008 /* Same for mouse */
-#define IOC4_KM_CSR_K_DATA 0x00000010 /* State of kbd data line */
-#define IOC4_KM_CSR_K_CLK 0x00000020 /* State of kbd clock line */
-#define IOC4_KM_CSR_K_PULL_DATA 0x00000040 /* Pull kbd data line low */
-#define IOC4_KM_CSR_K_PULL_CLK 0x00000080 /* Pull kbd clock line low */
-#define IOC4_KM_CSR_M_DATA 0x00000100 /* State of mouse data line */
-#define IOC4_KM_CSR_M_CLK 0x00000200 /* State of mouse clock line */
-#define IOC4_KM_CSR_M_PULL_DATA 0x00000400 /* Pull mouse data line low */
-#define IOC4_KM_CSR_M_PULL_CLK 0x00000800 /* Pull mouse clock line low */
-#define IOC4_KM_CSR_EMM_MODE 0x00001000 /* Emulation mode */
-#define IOC4_KM_CSR_SIM_MODE 0x00002000 /* Clock X8 */
-#define IOC4_KM_CSR_K_SM_IDLE 0x00004000 /* Keyboard is idle */
-#define IOC4_KM_CSR_M_SM_IDLE 0x00008000 /* Mouse is idle */
-#define IOC4_KM_CSR_K_TO 0x00010000 /* Keyboard trying to send/receive */
-#define IOC4_KM_CSR_M_TO 0x00020000 /* Mouse trying to send/receive */
-#define IOC4_KM_CSR_K_TO_EN 0x00040000 /* KM_CSR_K_TO + KM_CSR_K_TO_EN =
- cause SIO_IR to assert */
-#define IOC4_KM_CSR_M_TO_EN 0x00080000 /* KM_CSR_M_TO + KM_CSR_M_TO_EN =
- cause SIO_IR to assert */
-#define IOC4_KM_CSR_K_CLAMP_ONE 0x00100000 /* Pull K_CLK low after rec. one char */
-#define IOC4_KM_CSR_M_CLAMP_ONE 0x00200000 /* Pull M_CLK low after rec. one char */
-#define IOC4_KM_CSR_K_CLAMP_THREE \
- 0x00400000 /* Pull K_CLK low after rec. three chars */
-#define IOC4_KM_CSR_M_CLAMP_THREE \
- 0x00800000 /* Pull M_CLK low after rec. three char */
-
-/* Bitmasks for IOC4_K_RD and IOC4_M_RD */
-#define IOC4_KM_RD_DATA_2 0x000000ff /* 3rd char recvd since last read */
-#define IOC4_KM_RD_DATA_2_SHIFT 0
-#define IOC4_KM_RD_DATA_1 0x0000ff00 /* 2nd char recvd since last read */
-#define IOC4_KM_RD_DATA_1_SHIFT 8
-#define IOC4_KM_RD_DATA_0 0x00ff0000 /* 1st char recvd since last read */
-#define IOC4_KM_RD_DATA_0_SHIFT 16
-#define IOC4_KM_RD_FRAME_ERR_2 0x01000000 /* Framing or parity error in byte 2 */
-#define IOC4_KM_RD_FRAME_ERR_1 0x02000000 /* Same for byte 1 */
-#define IOC4_KM_RD_FRAME_ERR_0 0x04000000 /* Same for byte 0 */
-
-#define IOC4_KM_RD_KBD_MSE 0x08000000 /* 0 if from kbd, 1 if from mouse */
-#define IOC4_KM_RD_OFLO 0x10000000 /* 4th char recvd before this read */
-#define IOC4_KM_RD_VALID_2 0x20000000 /* DATA_2 valid */
-#define IOC4_KM_RD_VALID_1 0x40000000 /* DATA_1 valid */
-#define IOC4_KM_RD_VALID_0 0x80000000 /* DATA_0 valid */
-#define IOC4_KM_RD_VALID_ALL (IOC4_KM_RD_VALID_0 | IOC4_KM_RD_VALID_1 | \
- IOC4_KM_RD_VALID_2)
-
-/* Bitmasks for IOC4_K_WD & IOC4_M_WD */
-#define IOC4_KM_WD_WRT_DATA 0x000000ff /* Write to keyboard/mouse port */
-#define IOC4_KM_WD_WRT_DATA_SHIFT 0
-
-/* Bitmasks for serial RX status byte */
-#define IOC4_RXSB_OVERRUN 0x01 /* Char(s) lost */
-#define IOC4_RXSB_PAR_ERR 0x02 /* Parity error */
-#define IOC4_RXSB_FRAME_ERR 0x04 /* Framing error */
-#define IOC4_RXSB_BREAK 0x08 /* Break character */
-#define IOC4_RXSB_CTS 0x10 /* State of CTS */
-#define IOC4_RXSB_DCD 0x20 /* State of DCD */
-#define IOC4_RXSB_MODEM_VALID 0x40 /* DCD, CTS, and OVERRUN are valid */
-#define IOC4_RXSB_DATA_VALID 0x80 /* Data byte, FRAME_ERR PAR_ERR & BREAK valid */
-
-/* Bitmasks for serial TX control byte */
-#define IOC4_TXCB_INT_WHEN_DONE 0x20 /* Interrupt after this byte is sent */
-#define IOC4_TXCB_INVALID 0x00 /* Byte is invalid */
-#define IOC4_TXCB_VALID 0x40 /* Byte is valid */
-#define IOC4_TXCB_MCR 0x80 /* Data<7:0> to modem control register */
-#define IOC4_TXCB_DELAY 0xc0 /* Delay data<7:0> mSec */
-
-/* Bitmasks for IOC4_SBBR_L */
-#define IOC4_SBBR_L_SIZE 0x00000001 /* 0 == 1KB rings, 1 == 4KB rings */
-#define IOC4_SBBR_L_BASE 0xfffff000 /* Lower serial ring base addr */
-
-/* Bitmasks for IOC4_SSCR_<3:0> */
-#define IOC4_SSCR_RX_THRESHOLD 0x000001ff /* Hiwater mark */
-#define IOC4_SSCR_TX_TIMER_BUSY 0x00010000 /* TX timer in progress */
-#define IOC4_SSCR_HFC_EN 0x00020000 /* Hardware flow control enabled */
-#define IOC4_SSCR_RX_RING_DCD 0x00040000 /* Post RX record on delta-DCD */
-#define IOC4_SSCR_RX_RING_CTS 0x00080000 /* Post RX record on delta-CTS */
-#define IOC4_SSCR_DIAG 0x00200000 /* Bypass clock divider for sim */
-#define IOC4_SSCR_RX_DRAIN 0x08000000 /* Drain RX buffer to memory */
-#define IOC4_SSCR_DMA_EN 0x10000000 /* Enable ring buffer DMA */
-#define IOC4_SSCR_DMA_PAUSE 0x20000000 /* Pause DMA */
-#define IOC4_SSCR_PAUSE_STATE 0x40000000 /* Sets when PAUSE takes effect */
-#define IOC4_SSCR_RESET 0x80000000 /* Reset DMA channels */
-
-/* All producer/comsumer pointers are the same bitfield */
-#define IOC4_PROD_CONS_PTR_4K 0x00000ff8 /* For 4K buffers */
-#define IOC4_PROD_CONS_PTR_1K 0x000003f8 /* For 1K buffers */
-#define IOC4_PROD_CONS_PTR_OFF 3
-
-/* Bitmasks for IOC4_STPIR_<3:0> */
-/* Reserved for future register definitions */
-
-/* Bitmasks for IOC4_STCIR_<3:0> */
-#define IOC4_STCIR_BYTE_CNT 0x0f000000 /* Bytes in unpacker */
-#define IOC4_STCIR_BYTE_CNT_SHIFT 24
-
-/* Bitmasks for IOC4_SRPIR_<3:0> */
-#define IOC4_SRPIR_BYTE_CNT 0x0f000000 /* Bytes in packer */
-#define IOC4_SRPIR_BYTE_CNT_SHIFT 24
-
-/* Bitmasks for IOC4_SRCIR_<3:0> */
-#define IOC4_SRCIR_ARM 0x80000000 /* Arm RX timer */
-
-/* Bitmasks for IOC4_SHADOW_<3:0> */
-#define IOC4_SHADOW_DR 0x00000001 /* Data ready */
-#define IOC4_SHADOW_OE 0x00000002 /* Overrun error */
-#define IOC4_SHADOW_PE 0x00000004 /* Parity error */
-#define IOC4_SHADOW_FE 0x00000008 /* Framing error */
-#define IOC4_SHADOW_BI 0x00000010 /* Break interrupt */
-#define IOC4_SHADOW_THRE 0x00000020 /* Xmit holding register empty */
-#define IOC4_SHADOW_TEMT 0x00000040 /* Xmit shift register empty */
-#define IOC4_SHADOW_RFCE 0x00000080 /* Char in RX fifo has an error */
-#define IOC4_SHADOW_DCTS 0x00010000 /* Delta clear to send */
-#define IOC4_SHADOW_DDCD 0x00080000 /* Delta data carrier detect */
-#define IOC4_SHADOW_CTS 0x00100000 /* Clear to send */
-#define IOC4_SHADOW_DCD 0x00800000 /* Data carrier detect */
-#define IOC4_SHADOW_DTR 0x01000000 /* Data terminal ready */
-#define IOC4_SHADOW_RTS 0x02000000 /* Request to send */
-#define IOC4_SHADOW_OUT1 0x04000000 /* 16550 OUT1 bit */
-#define IOC4_SHADOW_OUT2 0x08000000 /* 16550 OUT2 bit */
-#define IOC4_SHADOW_LOOP 0x10000000 /* Loopback enabled */
-
-/* Bitmasks for IOC4_SRTR_<3:0> */
-#define IOC4_SRTR_CNT 0x00000fff /* Reload value for RX timer */
-#define IOC4_SRTR_CNT_VAL 0x0fff0000 /* Current value of RX timer */
-#define IOC4_SRTR_CNT_VAL_SHIFT 16
-#define IOC4_SRTR_HZ 16000 /* SRTR clock frequency */
-
-/* Serial port register map used for DMA and PIO serial I/O */
-typedef volatile struct ioc4_serialregs {
- ioc4reg_t sscr;
- ioc4reg_t stpir;
- ioc4reg_t stcir;
- ioc4reg_t srpir;
- ioc4reg_t srcir;
- ioc4reg_t srtr;
- ioc4reg_t shadow;
-} ioc4_sregs_t;
-
-/* IOC4 UART register map */
-typedef volatile struct ioc4_uartregs {
- union {
- char rbr; /* read only, DLAB == 0 */
- char thr; /* write only, DLAB == 0 */
- char dll; /* DLAB == 1 */
- } u1;
- union {
- char ier; /* DLAB == 0 */
- char dlm; /* DLAB == 1 */
- } u2;
- union {
- char iir; /* read only */
- char fcr; /* write only */
- } u3;
- char i4u_lcr;
- char i4u_mcr;
- char i4u_lsr;
- char i4u_msr;
- char i4u_scr;
-} ioc4_uart_t;
-
-#define i4u_rbr u1.rbr
-#define i4u_thr u1.thr
-#define i4u_dll u1.dll
-#define i4u_ier u2.ier
-#define i4u_dlm u2.dlm
-#define i4u_iir u3.iir
-#define i4u_fcr u3.fcr
-
-/* PCI config space register map */
-typedef volatile struct ioc4_configregs {
- ioc4reg_t pci_id;
- ioc4reg_t pci_scr;
- ioc4reg_t pci_rev;
- ioc4reg_t pci_lat;
- ioc4reg_t pci_bar0;
- ioc4reg_t pci_bar1;
- ioc4reg_t pci_bar2_not_implemented;
- ioc4reg_t pci_cis_ptr_not_implemented;
- ioc4reg_t pci_sidv;
- ioc4reg_t pci_rom_bar_not_implemented;
- ioc4reg_t pci_cap;
- ioc4reg_t pci_rsv;
- ioc4reg_t pci_latgntint;
-
- char pci_fill1[0x58 - 0x3c - 4];
-
- ioc4reg_t pci_pcix;
- ioc4reg_t pci_pcixstatus;
-} ioc4_cfg_t;
-
-/* PCI memory space register map addressed using pci_bar0 */
-typedef volatile struct ioc4_memregs {
-
- /* Miscellaneous IOC4 registers */
- ioc4reg_t pci_err_addr_l;
- ioc4reg_t pci_err_addr_h;
- ioc4reg_t sio_ir;
- ioc4reg_t other_ir;
-
- /* These registers are read-only for general kernel code. To
- * modify them use the functions in ioc4.c.
- */
- ioc4reg_t sio_ies_ro;
- ioc4reg_t other_ies_ro;
- ioc4reg_t sio_iec_ro;
- ioc4reg_t other_iec_ro;
- ioc4reg_t sio_cr;
- ioc4reg_t misc_fill1;
- ioc4reg_t int_out;
- ioc4reg_t misc_fill2;
- ioc4reg_t gpcr_s;
- ioc4reg_t gpcr_c;
- ioc4reg_t gpdr;
- ioc4reg_t misc_fill3;
- ioc4reg_t gppr_0;
- ioc4reg_t gppr_1;
- ioc4reg_t gppr_2;
- ioc4reg_t gppr_3;
- ioc4reg_t gppr_4;
- ioc4reg_t gppr_5;
- ioc4reg_t gppr_6;
- ioc4reg_t gppr_7;
-
- char misc_fill4[0x100 - 0x5C - 4];
-
- /* ATA/ATAP registers */
- ioc4reg_t ata_0;
- ioc4reg_t ata_1;
- ioc4reg_t ata_2;
- ioc4reg_t ata_3;
- ioc4reg_t ata_4;
- ioc4reg_t ata_5;
- ioc4reg_t ata_6;
- ioc4reg_t ata_7;
- ioc4reg_t ata_aux;
-
- char ata_fill1[0x140 - 0x120 - 4];
-
- ioc4reg_t ata_timing;
- ioc4reg_t ata_dma_ptr_l;
- ioc4reg_t ata_dma_ptr_h;
- ioc4reg_t ata_dma_addr_l;
- ioc4reg_t ata_dma_addr_h;
- ioc4reg_t ata_bc_dev;
- ioc4reg_t ata_bc_mem;
- ioc4reg_t ata_dma_ctrl;
-
- char ata_fill2[0x200 - 0x15C - 4];
-
- /* Keyboard and mouse registers */
- ioc4reg_t km_csr;
- ioc4reg_t k_rd;
- ioc4reg_t m_rd;
- ioc4reg_t k_wd;
- ioc4reg_t m_wd;
-
- char km_fill1[0x300 - 0x210 - 4];
-
- /* Serial port registers used for DMA serial I/O */
- ioc4reg_t sbbr01_l;
- ioc4reg_t sbbr01_h;
- ioc4reg_t sbbr23_l;
- ioc4reg_t sbbr23_h;
-
- ioc4_sregs_t port_0;
- ioc4_sregs_t port_1;
- ioc4_sregs_t port_2;
- ioc4_sregs_t port_3;
-
- ioc4_uart_t uart_0;
- ioc4_uart_t uart_1;
- ioc4_uart_t uart_2;
- ioc4_uart_t uart_3;
-} ioc4_mem_t;
-
-#endif /* 0 */
-
/*
* Bytebus device space
*/
#define IOC4_BYTEBUS_DEV2 0xC0000L /* Addressed using pci_bar0 */
#define IOC4_BYTEBUS_DEV3 0xE0000L /* Addressed using pci_bar0 */
-#if 0
-/* UART clock speed */
-#define IOC4_SER_XIN_CLK 66000000
-
-typedef enum ioc4_subdevs_e {
- ioc4_subdev_generic,
- ioc4_subdev_kbms,
- ioc4_subdev_tty0,
- ioc4_subdev_tty1,
- ioc4_subdev_tty2,
- ioc4_subdev_tty3,
- ioc4_subdev_rt,
- ioc4_nsubdevs
-} ioc4_subdev_t;
-
-/* Subdevice disable bits,
- * from the standard INFO_LBL_SUBDEVS
- */
-#define IOC4_SDB_TTY0 (1 << ioc4_subdev_tty0)
-#define IOC4_SDB_TTY1 (1 << ioc4_subdev_tty1)
-#define IOC4_SDB_TTY2 (1 << ioc4_subdev_tty2)
-#define IOC4_SDB_TTY3 (1 << ioc4_subdev_tty3)
-#define IOC4_SDB_KBMS (1 << ioc4_subdev_kbms)
-#define IOC4_SDB_RT (1 << ioc4_subdev_rt)
-#define IOC4_SDB_GENERIC (1 << ioc4_subdev_generic)
-
-#define IOC4_ALL_SUBDEVS ((1 << ioc4_nsubdevs) - 1)
-
-#define IOC4_SDB_SERIAL (IOC4_SDB_TTY0 | IOC4_SDB_TTY1 | IOC4_SDB_TTY2 | IOC4_SDB_TTY3)
-
-#define IOC4_STD_SUBDEVS IOC4_ALL_SUBDEVS
-
-#define IOC4_INTA_SUBDEVS (IOC4_SDB_SERIAL | IOC4_SDB_KBMS | IOC4_SDB_RT | IOC4_SDB_GENERIC)
-
-extern int ioc4_subdev_enabled(vertex_hdl_t, ioc4_subdev_t);
-extern void ioc4_subdev_enables(vertex_hdl_t, ulong_t);
-extern void ioc4_subdev_enable(vertex_hdl_t, ioc4_subdev_t);
-extern void ioc4_subdev_disable(vertex_hdl_t, ioc4_subdev_t);
-
-/* Macros to read and write the SIO_IEC and SIO_IES registers (see the
- * comments in ioc4.c for details on why this is necessary
- */
-#define IOC4_W_IES 0
-#define IOC4_W_IEC 1
-extern void ioc4_write_ireg(void *, ioc4reg_t, int, ioc4_intr_type_t);
-
-#define IOC4_WRITE_IES(ioc4, val, type) ioc4_write_ireg(ioc4, val, IOC4_W_IES, type)
-#define IOC4_WRITE_IEC(ioc4, val, type) ioc4_write_ireg(ioc4, val, IOC4_W_IEC, type)
-
-typedef void
-ioc4_intr_func_f (intr_arg_t, ioc4reg_t);
-
-typedef void
-ioc4_intr_connect_f (vertex_hdl_t conn_vhdl,
- ioc4_intr_type_t,
- ioc4reg_t,
- ioc4_intr_func_f *,
- intr_arg_t info,
- vertex_hdl_t owner_vhdl,
- vertex_hdl_t intr_dev_vhdl,
- int (*)(intr_arg_t));
-
-typedef void
-ioc4_intr_disconnect_f (vertex_hdl_t conn_vhdl,
- ioc4_intr_type_t,
- ioc4reg_t,
- ioc4_intr_func_f *,
- intr_arg_t info,
- vertex_hdl_t owner_vhdl);
-
-ioc4_intr_disconnect_f ioc4_intr_disconnect;
-ioc4_intr_connect_f ioc4_intr_connect;
-
-extern int ioc4_is_console(vertex_hdl_t conn_vhdl);
-
-extern void ioc4_mlreset(ioc4_cfg_t *, ioc4_mem_t *);
-
-extern intr_func_f ioc4_intr;
-
-extern ioc4_mem_t *ioc4_mem_ptr(void *ioc4_fastinfo);
-
-typedef ioc4_intr_func_f *ioc4_intr_func_t;
-
-#endif /* 0 */
-#endif /* _ASM_IA64_SN_IOC4_H */
+#endif /* _ASM_IA64_SN_IOC4_H */
#ifndef _ASM_IA64_SN_IOCONFIG_BUS_H
#define _ASM_IA64_SN_IOCONFIG_BUS_H
-#define IOCONFIG_PCIBUS "/boot/efi/ioconfig_pcibus"
-#define POUND_CHAR '#'
+#define IOCONFIG_PCIBUS "/boot/efi/ioconfig_pcibus"
+#define POUND_CHAR '#'
#define MAX_LINE_LEN 128
#define MAXPATHLEN 128
struct ioconfig_parm {
unsigned long ioconfig_activated;
- unsigned long number;
- void *buffer;
+ unsigned long number;
+ void *buffer;
};
-struct ascii_moduleid{
- unsigned char io_moduleid[8]; /* pci path name */
+struct ascii_moduleid {
+ unsigned char io_moduleid[8]; /* pci path name */
};
#endif /* _ASM_IA64_SN_IOCONFIG_BUS_H */
#include <linux/types.h>
#include <asm/sn/sgi.h>
-#if __KERNEL__
+#ifdef __KERNEL__
/*
* Basic types required for io error handling interfaces.
ERROR_CLASS_BAD_RESP_PKT
};
-typedef uint64_t error_class_t;
-
-
-/*
- * Error context which the error action can use.
- */
-typedef void *error_context_t;
-#define ERROR_CONTEXT_IGNORE ((error_context_t)-1ll)
-
-
-/*
- * Error action type.
- */
-typedef error_return_code_t (*error_action_f)( error_context_t);
-#define ERROR_ACTION_IGNORE ((error_action_f)-1ll)
-
-/* Typical set of error actions */
-typedef struct error_action_set_s {
- error_action_f eas_panic;
- error_action_f eas_shutdown;
- error_action_f eas_abort;
- error_action_f eas_retry;
- error_action_f eas_failover;
- error_action_f eas_log_n_ignore;
- error_action_f eas_reset;
-} error_action_set_t;
-
-
-/* Set of priorites for in case mutliple error actions/states
- * are trying to be prescribed for a device.
- * NOTE : The ordering below encapsulates the priorities. Highest value
- * corresponds to highest priority.
- */
-enum error_priority_e {
- ERROR_PRIORITY_IGNORE,
- ERROR_PRIORITY_NONE,
- ERROR_PRIORITY_NORMAL,
- ERROR_PRIORITY_LOG,
- ERROR_PRIORITY_FAILOVER,
- ERROR_PRIORITY_RETRY,
- ERROR_PRIORITY_ABORT,
- ERROR_PRIORITY_SHUTDOWN,
- ERROR_PRIORITY_RESTART,
- ERROR_PRIORITY_PANIC
-};
-
-typedef uint64_t error_priority_t;
-
-/* Error action interfaces */
-
-extern error_return_code_t error_action_set(vertex_hdl_t,
- error_action_f,
- error_context_t,
- error_priority_t);
-extern error_return_code_t error_action_perform(vertex_hdl_t);
-
-
-#define INFO_LBL_ERROR_SKIP_ENV "error_skip_env"
-
-#define v_error_skip_env_get(v, l) \
-hwgraph_info_get_LBL(v, INFO_LBL_ERROR_SKIP_ENV, (arbitrary_info_t *)&l)
-
-#define v_error_skip_env_set(v, l, r) \
-(r ? \
- hwgraph_info_replace_LBL(v, INFO_LBL_ERROR_SKIP_ENV, (arbitrary_info_t)l,0) :\
- hwgraph_info_add_LBL(v, INFO_LBL_ERROR_SKIP_ENV, (arbitrary_info_t)l))
-
-#define v_error_skip_env_clear(v) \
-hwgraph_info_remove_LBL(v, INFO_LBL_ERROR_SKIP_ENV, 0)
-
-typedef uint64_t counter_t;
-
-extern counter_t error_retry_count_get(vertex_hdl_t);
-extern error_return_code_t error_retry_count_set(vertex_hdl_t,counter_t);
-extern counter_t error_retry_count_increment(vertex_hdl_t);
-extern counter_t error_retry_count_decrement(vertex_hdl_t);
-
-/* Except for the PIO Read error typically the other errors are handled in
- * the context of an asynchronous error interrupt.
- */
-#define IS_ERROR_INTR_CONTEXT(_ec) ((_ec & IOECODE_DMA) || \
- (_ec == IOECODE_PIO_WRITE))
-
#endif /* __KERNEL__ */
#endif /* _ASM_IA64_SN_IOERROR_HANDLING_H */
#ifndef _ASM_IA64_SN_IOGRAPH_H
#define _ASM_IA64_SN_IOGRAPH_H
+#include <asm/sn/xtalk/xbow.h>	/* For MAX_PORT_NUM */
+
/*
* During initialization, platform-dependent kernel code establishes some
* basic elements of the hardware graph. This file contains edge and
#define INFO_LBL_XSWITCH_VOL "_xswitch_volunteer"
#define INFO_LBL_XFUNCS "_xtalk_ops" /* ops vector for gio providers */
#define INFO_LBL_XWIDGET "_xwidget"
-/* Device/Driver Admin directive labels */
-#define ADMIN_LBL_INTR_TARGET "INTR_TARGET" /* Target cpu for device interrupts*/
-#define ADMIN_LBL_INTR_SWLEVEL "INTR_SWLEVEL" /* Priority level of the ithread */
-
-#define ADMIN_LBL_DMATRANS_NODE "PCIBUS_DMATRANS_NODE" /* Node used for
- * 32-bit Direct
- * Mapping I/O
- */
-#define ADMIN_LBL_DISABLED "DISABLE" /* Device has been disabled */
-#define ADMIN_LBL_DETACH "DETACH" /* Device has been detached */
-#define ADMIN_LBL_THREAD_PRI "thread_priority"
- /* Driver adminstrator
- * hint parameter for
- * thread priority
- */
-#define ADMIN_LBL_THREAD_CLASS "thread_class"
- /* Driver adminstrator
- * hint parameter for
- * thread priority
- * default class
- */
-/* Info labels that begin with '_' cannot be overwritten by an attr_set call */
-#define INFO_LBL_RESERVED(name) ((name)[0] == '_')
-#if defined(__KERNEL__)
+#ifdef __KERNEL__
void init_all_devices(void);
#endif /* __KERNEL__ */
-#include <asm/sn/sgi.h>
-#include <asm/sn/xtalk/xbow.h> /* For get MAX_PORT_NUM */
-
int io_brick_map_widget(int, int);
-int io_path_map_widget(vertex_hdl_t);
/*
* Map a brick's widget number to a meaningful int
int ibm_map_wid[MAX_PORT_NUM]; /* wid to int map */
};
-
#endif /* _ASM_IA64_SN_IOGRAPH_H */
* Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
*/
-#include <linux/config.h>
-#include <asm/smp.h>
#include <asm/sn/addrs.h>
-#include <asm/sn/sn_cpuid.h>
#include <asm/sn/pda.h>
#include <asm/sn/sn2/shub.h>
#define LED_ALWAYS_SET 0x00
/*
- * Basic macros for flashing the LEDS on an SGI, SN1.
+ * Basic macros for flashing the LEDs on an SGI SN.
*/
static __inline__ void
#ifndef _ASM_IA64_SN_MODULE_H
#define _ASM_IA64_SN_MODULE_H
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-
-#include <asm/semaphore.h>
#include <asm/sn/klconfig.h>
#include <asm/sn/ksys/elsc.h>
#define MAX_PCI_XWIDGET 256
#define MAX_ATE_MAPS 1024
+#define SN_DEVICE_SYSDATA(dev) \
+ ((struct sn_device_sysdata *) \
+ (((struct pci_controller *) ((dev)->sysdata))->platform_data))
+
#define IS_PCI32G(dev) ((dev)->dma_mask >= 0xffffffff)
#define IS_PCI32L(dev) ((dev)->dma_mask < 0xffffffff)
#define PCIDEV_VERTEX(pci_dev) \
- (((struct sn_device_sysdata *)((pci_dev)->sysdata))->vhdl)
-
-#define PCIBUS_VERTEX(pci_bus) \
- (((struct sn_widget_sysdata *)((pci_bus)->sysdata))->vhdl)
+ ((SN_DEVICE_SYSDATA(pci_dev))->vhdl)
struct sn_widget_sysdata {
vertex_hdl_t vhdl;
};
struct sn_device_sysdata {
- vertex_hdl_t vhdl;
+ vertex_hdl_t vhdl;
pciio_provider_t *pci_provider;
+ pciio_intr_t intr_handle;
+ struct sn_flush_device_list *dma_flush_list;
+ pciio_piomap_t pio_map[PCI_ROM_RESOURCE];
};
struct ioports_to_tlbs_s {
#include <linux/config.h>
#include <asm/sn/types.h>
-#include <asm/uaccess.h> /* for copy_??_user */
#include <asm/sn/hwgfs.h>
typedef hwgfs_handle_t vertex_hdl_t;
#include <linux/wait.h>
#include <asm/sn/nodepda.h>
#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
#include <asm/sn/xtalk/xwidget.h>
#include <asm/sn/xtalk/xtalk_private.h>
check_pgt_cache();
}
+static inline unsigned int
+tlb_is_full_mm(struct mmu_gather *tlb)
+{
+ return tlb->fullmm;
+}
+
/*
* Logically, this routine frees PAGE. On MP machines, the actual freeing of the page
* must be delayed until after the TLB has been flushed (see comments at the beginning of
#define EXEC_PAGESIZE 8192
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#include <asm/ptrace.h>
#include <asm/user.h>
+/*
+ * 68k ELF relocation types
+ */
+#define R_68K_NONE 0
+#define R_68K_32 1
+#define R_68K_16 2
+#define R_68K_8 3
+#define R_68K_PC32 4
+#define R_68K_PC16 5
+#define R_68K_PC8 6
+#define R_68K_GOT32 7
+#define R_68K_GOT16 8
+#define R_68K_GOT8 9
+#define R_68K_GOT32O 10
+#define R_68K_GOT16O 11
+#define R_68K_GOT8O 12
+#define R_68K_PLT32 13
+#define R_68K_PLT16 14
+#define R_68K_PLT8 15
+#define R_68K_PLT32O 16
+#define R_68K_PLT16O 17
+#define R_68K_PLT8O 18
+#define R_68K_COPY 19
+#define R_68K_GLOB_DAT 20
+#define R_68K_JMP_SLOT 21
+#define R_68K_RELATIVE 22
+
typedef unsigned long elf_greg_t;
#define ELF_NGREG (sizeof(struct user_regs_struct) / sizeof(elf_greg_t))
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#if CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT
# define in_atomic() (preempt_count() != kernel_locked())
# define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
#else
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define VMALLOC_START KSEG2
-#if CONFIG_HIGHMEM
+#ifdef CONFIG_HIGHMEM
# define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
#else
# define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
#ifndef __ASM_TOPOLOGY_H
#define __ASM_TOPOLOGY_H
-#if CONFIG_SGI_IP27
+#ifdef CONFIG_SGI_IP27
#include <asm/mmzone.h>
#define Elf_Rela Elf32_Rela
#endif
-#define module_map(x) vmalloc(x)
-#define module_unmap(x) vfree(x)
-#define module_arch_init(x) (0)
-#define arch_init_modules(x) do { } while (0)
-
struct mod_arch_specific
{
unsigned long got_offset, got_count, got_max;
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
};
extern struct machdep_calls ppc_md;
-extern char cmd_line[512];
+#define COMMAND_LINE_SIZE 512
+extern char cmd_line[COMMAND_LINE_SIZE];
extern void setup_pci_ptrs(void);
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#endif
#else
-#define debugger(regs) 0
-#define debugger_bpt(regs) 0
-#define debugger_sstep(regs) 0
-#define debugger_iabr_match(regs) 0
-#define debugger_dabr_match(regs) 0
-#define debugger_fault_handler(regs) 0
+static inline int debugger(struct pt_regs *regs) { return 0; }
+static inline int debugger_bpt(struct pt_regs *regs) { return 0; }
+static inline int debugger_sstep(struct pt_regs *regs) { return 0; }
+static inline int debugger_iabr_match(struct pt_regs *regs) { return 0; }
+static inline int debugger_dabr_match(struct pt_regs *regs) { return 0; }
+static inline int debugger_fault_handler(struct pt_regs *regs) { return 0; }
#endif
extern void show_regs(struct pt_regs * regs);
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define nmi_enter() (irq_enter())
#define nmi_exit() (preempt_count() -= HARDIRQ_OFFSET)
-#if CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT
# define in_atomic() ((preempt_count() & ~PREEMPT_ACTIVE) != kernel_locked())
# define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
#else
#include <linux/config.h>
-#if CONFIG_DEBUG_HIGHMEM
+#ifdef CONFIG_DEBUG_HIGHMEM
# define D(n) __KM_FENCE_##n ,
#else
# define D(n)
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 8192 /* Thanks for sun4's we carry baggage... */
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 8192 /* Thanks for sun4's we carry baggage... */
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
#define EXEC_PAGESIZE 4096
-#ifndef NGROUPS
-#define NGROUPS 32
-#endif
-
#ifndef NOGROUP
#define NOGROUP (-1)
#endif
}
/* for sysctl: */
-extern unsigned aio_max_nr, aio_max_size, aio_max_pinned;
+extern atomic_t aio_nr;
+extern unsigned aio_max_nr;
#endif /* __LINUX__AIO_H */
void bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, int bits);
int bitmap_weight(const unsigned long *bitmap, int bits);
-int bitmap_snprintf(char *buf, unsigned int buflen,
+int bitmap_scnprintf(char *buf, unsigned int buflen,
const unsigned long *maskp, int bits);
int bitmap_parse(const char __user *ubuf, unsigned int ubuflen,
unsigned long *maskp, int bits);
#define __attribute_pure__ __attribute__((pure))
#define __attribute_const__ __attribute__((__const__))
+
+#if __GNUC_MINOR__ >= 1
+#define noinline __attribute__((noinline))
+#endif
# define __attribute_const__ /* unimplemented */
#endif
+#ifndef noinline
+#define noinline
+#endif
+
/* Optimization barrier */
#ifndef barrier
# define barrier() __memory_barrier()
* @div: divisor
* @mult: multiplier
*
- * Needed for loops_per_jiffy and similar calculations. We do it
- * this way to avoid math overflow on 32-bit machines. This will
- * become architecture dependent once high-resolution-timer is
- * merged (or any other thing that introduces sc_math.h).
*
* new = old * mult / div
*/
static inline unsigned long cpufreq_scale(unsigned long old, u_int div, u_int mult)
{
- unsigned long val, carry;
+#if BITS_PER_LONG == 32
- mult /= 100;
- div /= 100;
- val = (old / div) * mult;
- carry = old % div;
- carry = carry * mult / div;
+ u64 result = ((u64) old) * ((u64) mult);
+ do_div(result, div);
+ return (unsigned long) result;
- return carry + val;
+#elif BITS_PER_LONG == 64
+
+ unsigned long result = old * ((u64) mult);
+ result /= div;
+ return result;
+
+#endif
};
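The cpufreq_scale rewrite above drops the old divide-by-100 approximation in favor of exact arithmetic: on 32-bit machines the product is widened to 64 bits before dividing, so `old * mult` cannot overflow an unsigned long. A stand-alone user-space sketch of the same idea (`scale_freq` is a hypothetical helper, not the kernel function):

```c
#include <stdint.h>

/* Widen to 64 bits before multiplying so that old * mult cannot
 * overflow 32-bit arithmetic, then divide once.  This is the exact
 * computation the patched cpufreq_scale() performs via do_div(). */
static unsigned long scale_freq(unsigned long old, unsigned int div,
				unsigned int mult)
{
	uint64_t result = (uint64_t)old * mult;

	return (unsigned long)(result / div);
}
```

With frequencies in kHz, `scale_freq(3000000, 100000, 200000)` scales 3 GHz by 2x; the intermediate product (6e11) would have overflowed a 32-bit multiply, which is precisely what the old carry-splitting code tried to approximate.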
/*********************************************************************
#define for_each_online_cpu(cpu) for (cpu = 0; cpu < 1; cpu++)
#endif
-#define cpumask_snprintf(buf, buflen, map) \
- bitmap_snprintf(buf, buflen, cpus_addr(map), NR_CPUS)
+#define cpumask_scnprintf(buf, buflen, map) \
+ bitmap_scnprintf(buf, buflen, cpus_addr(map), NR_CPUS)
#define cpumask_parse(buf, buflen, map) \
bitmap_parse(buf, buflen, cpus_addr(map), NR_CPUS)
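The snprintf-to-scnprintf renames above track a return-value distinction: C99 `snprintf` returns the length the output *would* have had with unlimited space, while an scnprintf-style function returns the number of characters actually written into the buffer, which is what callers advancing a write pointer usually want. A user-space sketch of the wrapper (`my_scnprintf` is illustrative, not the kernel's implementation):

```c
#include <stdio.h>
#include <stdarg.h>

/* Returns the number of characters actually stored in buf (excluding
 * the trailing NUL), clamped on truncation -- unlike vsnprintf, which
 * reports the untruncated length. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	return (i >= (int)size) ? (int)size - 1 : i;
}
```

Code like `p += my_scnprintf(p, end - p, ...)` stays within the buffer on truncation, whereas the same idiom with plain snprintf can push `p` past `end`.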
void *data);
struct super_block *get_sb_pseudo(struct file_system_type *, char *,
struct super_operations *ops, unsigned long);
+void unnamed_dev_init(void);
/* Alas, no aliases. Too much hassle with bringing module.h everywhere */
#define fops_get(fops) \
#define __getname() kmem_cache_alloc(names_cachep, SLAB_KERNEL)
#define putname(name) kmem_cache_free(names_cachep, (void *)(name))
-enum {BDEV_FILE, BDEV_SWAP, BDEV_FS, BDEV_RAW};
extern int register_blkdev(unsigned int, const char *);
extern int unregister_blkdev(unsigned int, const char *);
extern struct block_device *bdget(dev_t);
extern void bd_forget(struct inode *inode);
extern void bdput(struct block_device *);
extern int blkdev_open(struct inode *, struct file *);
-extern struct block_device *open_by_devnum(dev_t, unsigned, int);
+extern struct block_device *open_by_devnum(dev_t, unsigned);
extern struct file_operations def_blk_fops;
extern struct address_space_operations def_blk_aops;
extern struct file_operations def_chr_fops;
extern struct file_operations def_fifo_fops;
extern int ioctl_by_bdev(struct block_device *, unsigned, unsigned long);
extern int blkdev_ioctl(struct inode *, struct file *, unsigned, unsigned long);
-extern int blkdev_get(struct block_device *, mode_t, unsigned, int);
-extern int blkdev_put(struct block_device *, int);
+extern int blkdev_get(struct block_device *, mode_t, unsigned);
+extern int blkdev_put(struct block_device *);
extern int bd_claim(struct block_device *, void *);
extern void bd_release(struct block_device *);
extern void blk_run_queues(void);
extern const char *__bdevname(dev_t, char *buffer);
extern const char *bdevname(struct block_device *bdev, char *buffer);
extern struct block_device *lookup_bdev(const char *);
-extern struct block_device *open_bdev_excl(const char *, int, int, void *);
-extern void close_bdev_excl(struct block_device *, int);
+extern struct block_device *open_bdev_excl(const char *, int, void *);
+extern void close_bdev_excl(struct block_device *);
extern void init_special_inode(struct inode *, umode_t, dev_t);
int policy, partno;
};
-#define GENHD_FL_REMOVABLE 1
-#define GENHD_FL_DRIVERFS 2
-#define GENHD_FL_CD 8
-#define GENHD_FL_UP 16
+#define GENHD_FL_REMOVABLE 1
+#define GENHD_FL_DRIVERFS 2
+#define GENHD_FL_CD 8
+#define GENHD_FL_UP 16
+#define GENHD_FL_SUPPRESS_PARTITION_INFO 32
struct disk_stats {
unsigned read_sectors, write_sectors;
typedef struct pvc_device_struct {
- struct hdlc_device_struct *master;
+ struct net_device *master;
struct net_device *main;
struct net_device *ether; /* bridged Ethernet interface */
struct pvc_device_struct *next; /* Sorted in ascending DLCI order */
typedef struct hdlc_device_struct {
/* To be initialized by hardware driver */
- struct net_device netdev; /* master net device - must be first */
struct net_device_stats stats;
/* used by HDLC layer to take control over HDLC device from hw driver*/
- int (*attach)(struct hdlc_device_struct *hdlc,
+ int (*attach)(struct net_device *dev,
unsigned short encoding, unsigned short parity);
/* hardware driver must handle this instead of dev->hard_start_xmit */
/* Things below are for HDLC layer internal use only */
struct {
- int (*open)(struct hdlc_device_struct *hdlc);
- void (*close)(struct hdlc_device_struct *hdlc);
+ int (*open)(struct net_device *dev);
+ void (*close)(struct net_device *dev);
/* if open & DCD */
- void (*start)(struct hdlc_device_struct *hdlc);
+ void (*start)(struct net_device *dev);
/* if open & !DCD */
- void (*stop)(struct hdlc_device_struct *hdlc);
+ void (*stop)(struct net_device *dev);
void (*detach)(struct hdlc_device_struct *hdlc);
int (*netif_rx)(struct sk_buff *skb);
int new_mtu);
}ppp;
}state;
+ void *priv;
}hdlc_device;
-int hdlc_raw_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
-int hdlc_raw_eth_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
-int hdlc_cisco_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
-int hdlc_ppp_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
-int hdlc_fr_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
-int hdlc_x25_ioctl(hdlc_device *hdlc, struct ifreq *ifr);
+int hdlc_raw_ioctl(struct net_device *dev, struct ifreq *ifr);
+int hdlc_raw_eth_ioctl(struct net_device *dev, struct ifreq *ifr);
+int hdlc_cisco_ioctl(struct net_device *dev, struct ifreq *ifr);
+int hdlc_ppp_ioctl(struct net_device *dev, struct ifreq *ifr);
+int hdlc_fr_ioctl(struct net_device *dev, struct ifreq *ifr);
+int hdlc_x25_ioctl(struct net_device *dev, struct ifreq *ifr);
/* Exported from hdlc.o */
int hdlc_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd);
/* Must be used by hardware driver on module startup/exit */
-int register_hdlc_device(hdlc_device *hdlc);
-void unregister_hdlc_device(hdlc_device *hdlc);
-
-
-static __inline__ struct net_device* hdlc_to_dev(hdlc_device *hdlc)
-{
- return &hdlc->netdev;
-}
+int register_hdlc_device(struct net_device *dev);
+void unregister_hdlc_device(struct net_device *dev);
+struct net_device *alloc_hdlcdev(void *priv);
static __inline__ hdlc_device* dev_to_hdlc(struct net_device *dev)
{
- return (hdlc_device*)dev;
+ return netdev_priv(dev);
}
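The dev_to_hdlc change above is part of converting the HDLC layer from embedding a `struct net_device` inside `hdlc_device` to the netdev_priv() layout, where private state lives in memory allocated immediately after the device structure. A minimal user-space model of that layout (all names here are stand-ins for the kernel's real alloc_netdev/netdev_priv):

```c
#include <stdlib.h>
#include <stddef.h>

struct fake_net_device {
	char name[16];
};

/* Private area sits directly after the device struct, so recovering
 * it is pure pointer arithmetic -- the essence of netdev_priv(). */
static void *fake_netdev_priv(struct fake_net_device *dev)
{
	return (char *)dev + sizeof(struct fake_net_device);
}

/* One allocation covers the device plus priv_size bytes of private
 * state, mirroring alloc_netdev()'s sizeof_priv argument. */
static struct fake_net_device *fake_alloc_netdev(size_t priv_size)
{
	return calloc(1, sizeof(struct fake_net_device) + priv_size);
}
```

This is why the patched `dev_to_hdlc()` is a `netdev_priv(dev)` lookup rather than a cast of the device pointer itself.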
}
-static __inline__ const char *hdlc_to_name(hdlc_device *hdlc)
-{
- return hdlc_to_dev(hdlc)->name;
-}
-
-
static __inline__ void debug_frame(const struct sk_buff *skb)
{
int i;
/* Must be called by hardware driver when HDLC device is being opened */
-int hdlc_open(hdlc_device *hdlc);
+int hdlc_open(struct net_device *dev);
/* Must be called by hardware driver when HDLC device is being closed */
-void hdlc_close(hdlc_device *hdlc);
+void hdlc_close(struct net_device *dev);
/* Called by hardware driver when DCD line level changes */
-void hdlc_set_carrier(int on, hdlc_device *hdlc);
+void hdlc_set_carrier(int on, struct net_device *dev);
/* May be used by hardware driver to gain control over HDLC device */
static __inline__ void hdlc_proto_detach(hdlc_device *hdlc)
}
+static __inline__ struct net_device_stats *hdlc_stats(struct net_device *dev)
+{
+ return &dev_to_hdlc(dev)->stats;
+}
+
+
static __inline__ unsigned short hdlc_type_trans(struct sk_buff *skb,
struct net_device *dev)
{
u8 state; /* retry state */
u8 waiting_for_dma; /* dma currently in progress */
u8 unmask; /* okay to unmask other irqs */
- u8 slow; /* slow data port */
u8 bswap; /* byte swap data */
u8 dsc_overlap; /* DSC overlap */
u8 nice1; /* give potential excess bandwidth */
unsigned forced_geom : 1; /* 1 if hdx=c,h,s was given at boot */
unsigned no_unmask : 1; /* disallow setting unmask bit */
unsigned no_io_32bit : 1; /* disallow enabling 32bit I/O */
- unsigned nobios : 1; /* do not probe bios for drive */
unsigned atapi_overlap : 1; /* ATAPI overlap (not supported) */
unsigned nice0 : 1; /* give obvious excess bandwidth */
unsigned nice2 : 1; /* give a share in our own bandwidth */
unsigned doorlocking : 1; /* for removable only: door lock/unlock works */
unsigned autotune : 2; /* 0=default, 1=autotune, 2=noautotune */
unsigned remap_0_to_1 : 1; /* 0=noremap, 1=remap 0->1 (for EZDrive) */
- unsigned ata_flash : 1; /* 1=present, 0=default */
unsigned blocked : 1; /* 1=powermanagment told us not to do anything, so sleep nicely */
unsigned vdma : 1; /* 1=doing PIO over DMA 0=doing normal DMA */
unsigned addressing; /* : 3;
} ide_proc_entry_t;
#ifdef CONFIG_PROC_FS
+extern struct proc_dir_entry *proc_ide_root;
+
extern void proc_ide_create(void);
extern void proc_ide_destroy(void);
extern void destroy_proc_ide_device(ide_hwif_t *, ide_drive_t *);
read_proc_t proc_ide_read_capacity;
read_proc_t proc_ide_read_geometry;
+#ifdef CONFIG_BLK_DEV_IDEPCI
+void ide_pci_create_host_proc(const char *, get_info_t *);
+#endif
+
/*
* Standard exit stuff:
*/
/*
* Subdrivers support.
*/
-#define IDE_SUBDRIVER_VERSION 1
-
typedef struct ide_driver_s {
struct module *owner;
const char *name;
unsigned busy : 1;
unsigned supports_dsc_overlap : 1;
int (*cleanup)(ide_drive_t *);
- int (*shutdown)(ide_drive_t *);
- int (*flushcache)(ide_drive_t *);
ide_startstop_t (*do_request)(ide_drive_t *, struct request *, sector_t);
int (*end_request)(ide_drive_t *, int, int);
u8 (*sense)(ide_drive_t *, const char *, u8);
*/
extern ide_startstop_t do_rw_taskfile(ide_drive_t *, ide_task_t *);
-/* (ide_drive_t *drive, u8 stat, u8 err) */
-extern void ide_end_taskfile(ide_drive_t *, u8, u8);
-
/*
* Special Flagged Register Validation Caller
*/
int ide_register_driver(ide_driver_t *driver);
void ide_unregister_driver(ide_driver_t *driver);
-int ide_register_subdriver (ide_drive_t *drive, ide_driver_t *driver, int version);
+int ide_register_subdriver(ide_drive_t *, ide_driver_t *);
int ide_unregister_subdriver (ide_drive_t *drive);
int ide_replace_subdriver(ide_drive_t *drive, const char *driver);
-#ifdef CONFIG_PROC_FS
-typedef struct ide_pci_host_proc_s {
- char *name;
- u8 set;
- get_info_t *get_info;
- struct proc_dir_entry *parent;
- struct ide_pci_host_proc_s *next;
-} ide_pci_host_proc_t;
-
-void ide_pci_register_host_proc(ide_pci_host_proc_t *);
-#endif /* CONFIG_PROC_FS */
-
#define ON_BOARD 1
#define NEVER_BOARD 0
*/
void *idr_find(struct idr *idp, int id);
-int idr_pre_get(struct idr *idp);
+int idr_pre_get(struct idr *idp, unsigned gfp_mask);
int idr_get_new(struct idr *idp, void *ptr);
void idr_remove(struct idr *idp, int id);
void idr_init(struct idr *idp);
* 2003/05/01 - Amir Noam <amir.noam at intel dot com>
* - Added ABI version control to restore compatibility between
* new/old ifenslave and new/old bonding.
+ *
+ * 2003/12/01 - Shmulik Hen <shmulik.hen at intel dot com>
+ * - Code cleanup and style changes
*/
#ifndef _LINUX_IF_BONDING_H
typedef struct ifslave
{
__s32 slave_id; /* Used as an IN param to the BOND_SLAVE_INFO_QUERY ioctl */
- __s8 slave_name[IFNAMSIZ];
+ char slave_name[IFNAMSIZ];
__s8 link;
__s8 state;
__u32 link_failure_count;
+++ /dev/null
-/* From: if_pppvar.h,v 1.2 1995/06/12 11:36:51 paulus Exp */
-/*
- * if_pppvar.h - private structures and declarations for PPP.
- *
- * Copyright (c) 1994 The Australian National University.
- * All rights reserved.
- *
- * Permission to use, copy, modify, and distribute this software and its
- * documentation is hereby granted, provided that the above copyright
- * notice appears in all copies. This software is provided without any
- * warranty, express or implied. The Australian National University
- * makes no representations about the suitability of this software for
- * any purpose.
- *
- * IN NO EVENT SHALL THE AUSTRALIAN NATIONAL UNIVERSITY BE LIABLE TO ANY
- * PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
- * ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF
- * THE AUSTRALIAN NATIONAL UNIVERSITY HAVE BEEN ADVISED OF THE POSSIBILITY
- * OF SUCH DAMAGE.
- *
- * THE AUSTRALIAN NATIONAL UNIVERSITY SPECIFICALLY DISCLAIMS ANY WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
- * ON AN "AS IS" BASIS, AND THE AUSTRALIAN NATIONAL UNIVERSITY HAS NO
- * OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS,
- * OR MODIFICATIONS.
- *
- * Copyright (c) 1989 Carnegie Mellon University.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms are permitted
- * provided that the above copyright notice and this paragraph are
- * duplicated in all such forms and that any documentation,
- * advertising materials, and other materials related to such
- * distribution and use acknowledge that the software was developed
- * by Carnegie Mellon University. The name of the
- * University may not be used to endorse or promote products derived
- * from this software without specific prior written permission.
- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-/*
- * ==FILEVERSION 990806==
- *
- * NOTE TO MAINTAINERS:
- * If you modify this file at all, please set the above date.
- * if_pppvar.h is shipped with a PPP distribution as well as with the kernel;
- * if everyone increases the FILEVERSION number above, then scripts
- * can do the right thing when deciding whether to install a new if_pppvar.h
- * file. Don't change the format of that line otherwise, so the
- * installation script can recognize it.
- */
-
-/*
- * Supported network protocols. These values are used for
- * indexing sc_npmode.
- */
-
-#define NP_IP 0 /* Internet Protocol */
-#define NP_IPX 1 /* IPX protocol */
-#define NP_AT 2 /* Appletalk protocol */
-#define NP_IPV6 3 /* Internet Protocol */
-#define NUM_NP 4 /* Number of NPs. */
-
-#define OBUFSIZE 256 /* # chars of output buffering */
-
-/*
- * Structure describing each ppp unit.
- */
-
-struct ppp {
- int magic; /* magic value for structure */
- struct ppp *next; /* unit with next index */
- unsigned long inuse; /* are we allocated? */
- int line; /* network interface unit # */
- __u32 flags; /* miscellaneous control flags */
- int mtu; /* maximum xmit frame size */
- int mru; /* maximum receive frame size */
- struct slcompress *slcomp; /* for TCP header compression */
- struct sk_buff_head xmt_q; /* frames to send from pppd */
- struct sk_buff_head rcv_q; /* frames for pppd to read */
- unsigned long xmit_busy; /* bit 0 set when xmitter busy */
-
- /* Information specific to using ppp on async serial lines. */
- struct tty_struct *tty; /* ptr to TTY structure */
- struct tty_struct *backup_tty; /* TTY to use if tty gets closed */
- __u8 escape; /* 0x20 if prev char was PPP_ESC */
- __u8 toss; /* toss this frame */
- volatile __u8 tty_pushing; /* internal state flag */
- volatile __u8 woke_up; /* internal state flag */
- __u32 xmit_async_map[8]; /* 1 bit means that given control
- character is quoted on output*/
- __u32 recv_async_map; /* 1 bit means that given control
- character is ignored on input*/
- __u32 bytes_sent; /* Bytes sent on frame */
- __u32 bytes_rcvd; /* Bytes recvd on frame */
-
- /* Async transmission information */
- struct sk_buff *tpkt; /* frame currently being sent */
- int tpkt_pos; /* how much of it we've done */
- __u16 tfcs; /* FCS so far for it */
- unsigned char *optr; /* where we're up to in sending */
- unsigned char *olim; /* points past last valid char */
-
- /* Async reception information */
- struct sk_buff *rpkt; /* frame currently being rcvd */
- __u16 rfcs; /* FCS so far of rpkt */
-
- /* Queues for select() functionality */
- wait_queue_head_t read_wait; /* queue for reading processes */
-
- /* info for detecting idle channels */
- unsigned long last_xmit; /* time of last transmission */
- unsigned long last_recv; /* time last packet received */
-
- /* Statistic information */
- struct pppstat stats; /* statistic information */
-
- /* PPP compression protocol information */
- struct compressor *sc_xcomp; /* transmit compressor */
- void *sc_xc_state; /* transmit compressor state */
- struct compressor *sc_rcomp; /* receive decompressor */
- void *sc_rc_state; /* receive decompressor state */
-
- enum NPmode sc_npmode[NUM_NP]; /* what to do with each NP */
- int sc_xfer; /* PID of reserved PPP table */
- char name[16]; /* space for unit name */
- struct net_device dev; /* net device structure */
- struct net_device_stats estats; /* more detailed stats */
-
- /* tty output buffer */
- unsigned char obuf[OBUFSIZE]; /* buffer for characters to send */
-};
-
-#define PPP_MAGIC 0x5002
-#define PPP_VERSION "2.3.7"
int mc_forwarding;
int tag;
int arp_filter;
+ int arp_announce;
+ int arp_ignore;
int medium_id;
int no_xfrm;
int no_policy;
(ipv4_devconf.accept_redirects || (in_dev)->cnf.accept_redirects)))
#define IN_DEV_ARPFILTER(in_dev) (ipv4_devconf.arp_filter || (in_dev)->cnf.arp_filter)
+#define IN_DEV_ARP_ANNOUNCE(in_dev) (max(ipv4_devconf.arp_announce, (in_dev)->cnf.arp_announce))
+#define IN_DEV_ARP_IGNORE(in_dev) (max(ipv4_devconf.arp_ignore, (in_dev)->cnf.arp_ignore))
struct in_ifaddr
{
extern struct in_device *inetdev_init(struct net_device *dev);
extern struct in_device *inetdev_by_index(int);
extern u32 inet_select_addr(const struct net_device *dev, u32 dst, int scope);
+extern u32 inet_confirm_addr(const struct net_device *dev, u32 dst, u32 local, int scope);
extern struct in_ifaddr *inet_ifa_byprefix(struct in_device *in_dev, u32 prefix, u32 mask);
extern void inet_forward_change(void);
.siglock = SPIN_LOCK_UNLOCKED, \
}
+extern struct group_info init_groups;
+
/*
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
.real_timer = { \
.function = it_real_fn \
}, \
+ .group_info = &init_groups, \
.cap_effective = CAP_INIT_EFF_SET, \
.cap_inheritable = CAP_INIT_INH_SET, \
.cap_permitted = CAP_FULL_SET, \
__s32 rtr_solicits;
__s32 rtr_solicit_interval;
__s32 rtr_solicit_delay;
+ __s32 force_mld_version;
#ifdef CONFIG_IPV6_PRIVACY
__s32 use_tempaddr;
__s32 temp_valid_lft;
DEVCONF_REGEN_MAX_RETRY,
DEVCONF_MAX_DESYNC_FACTOR,
DEVCONF_MAX_ADDRESSES,
+ DEVCONF_FORCE_MLD_VERSION,
DEVCONF_MAX
};
#define RTF_CACHE 0x01000000 /* cache entry */
#define RTF_FLOW 0x02000000 /* flow significant route */
#define RTF_POLICY 0x04000000 /* policy route */
-#define RTF_NDISC 0x08000000 /* ndisc route */
#define RTF_LOCAL 0x80000000
void *argp;
unsigned int rxmarkmsk;
struct tty_struct *tty;
-#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0))
- struct wait_queue *open_wait;
- struct wait_queue *close_wait;
- struct wait_queue *raw_wait;
-#else
wait_queue_head_t open_wait;
wait_queue_head_t close_wait;
wait_queue_head_t raw_wait;
-#endif
struct work_struct tqhangup;
asysigs_t asig;
unsigned long addr;
extern int snprintf(char * buf, size_t size, const char * fmt, ...)
__attribute__ ((format (printf, 3, 4)));
extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args);
+extern int scnprintf(char * buf, size_t size, const char * fmt, ...)
+ __attribute__ ((format (printf, 3, 4)));
+extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
extern int sscanf(const char *, const char *, ...)
__attribute__ ((format (scanf,2,3)));
--- /dev/null
+#ifndef _LINUX_KTHREAD_H
+#define _LINUX_KTHREAD_H
+/* Simple interface for creating and stopping kernel threads without mess. */
+#include <linux/err.h>
+#include <linux/sched.h>
+
+/**
+ * kthread_create: create a kthread.
+ * @threadfn: the function to run until signal_pending(current).
+ * @data: data ptr for @threadfn.
+ * @namefmt: printf-style name for the thread.
+ *
+ * Description: This helper function creates and names a kernel
+ * thread. The thread will be stopped: use wake_up_process() to start
+ * it. See also kthread_run(), kthread_create_on_cpu().
+ *
+ * When woken, the thread will run @threadfn() with @data as its
+ * argument. @threadfn can either call do_exit() directly if it is a
+ * standalone thread for which no one will call kthread_stop(), or
+ * return when 'kthread_should_stop()' is true (which means
+ * kthread_stop() has been called). The return value should be zero
+ * or a negative error number: it will be passed to kthread_stop().
+ *
+ * Returns a task_struct or ERR_PTR(-ENOMEM).
+ */
+struct task_struct *kthread_create(int (*threadfn)(void *data),
+ void *data,
+ const char namefmt[], ...);
+
+/**
+ * kthread_run: create and wake a thread.
+ * @threadfn: the function to run until signal_pending(current).
+ * @data: data ptr for @threadfn.
+ * @namefmt: printf-style name for the thread.
+ *
+ * Description: Convenient wrapper for kthread_create() followed by
+ * wake_up_process(). Returns the kthread, or ERR_PTR(-ENOMEM). */
+#define kthread_run(threadfn, data, namefmt, ...) \
+({ \
+ struct task_struct *__k \
+ = kthread_create(threadfn, data, namefmt, ## __VA_ARGS__); \
+ if (!IS_ERR(__k)) \
+ wake_up_process(__k); \
+ __k; \
+})
+
+/**
+ * kthread_bind: bind a just-created kthread to a cpu.
+ * @k: thread created by kthread_create().
+ * @cpu: cpu (might not be online, must be possible) for @k to run on.
+ *
+ * Description: This function is equivalent to set_cpus_allowed(),
+ * except that @cpu doesn't need to be online, and the thread must be
+ * stopped (i.e. just returned from kthread_create()).
+ */
+void kthread_bind(struct task_struct *k, unsigned int cpu);
+
+/**
+ * kthread_stop: stop a thread created by kthread_create().
+ * @k: thread created by kthread_create().
+ *
+ * Sets kthread_should_stop() for @k to return true, wakes it, and
+ * waits for it to exit. Your threadfn() must not call do_exit()
+ * itself if you use this function! This can also be called after
+ * kthread_create() instead of calling wake_up_process(): the thread
+ * will exit without calling threadfn().
+ *
+ * Returns the result of threadfn(), or -EINTR if wake_up_process()
+ * was never called. */
+int kthread_stop(struct task_struct *k);
+
+/**
+ * kthread_should_stop: should this kthread return now?
+ *
+ * When someone calls kthread_stop() on your kthread, it will be woken
+ * and this will return true.  You should then return, and your return
+ * value will be passed through to kthread_stop().
+ */
+int kthread_should_stop(void);
+
+#endif /* _LINUX_KTHREAD_H */
#define LAPB_DCE 0x04
struct lapb_register_struct {
- void (*connect_confirmation)(void *token, int reason);
- void (*connect_indication)(void *token, int reason);
- void (*disconnect_confirmation)(void *token, int reason);
- void (*disconnect_indication)(void *token, int reason);
- int (*data_indication)(void *token, struct sk_buff *skb);
- void (*data_transmit)(void *token, struct sk_buff *skb);
+ void (*connect_confirmation)(struct net_device *dev, int reason);
+ void (*connect_indication)(struct net_device *dev, int reason);
+ void (*disconnect_confirmation)(struct net_device *dev, int reason);
+ void (*disconnect_indication)(struct net_device *dev, int reason);
+ int (*data_indication)(struct net_device *dev, struct sk_buff *skb);
+ void (*data_transmit)(struct net_device *dev, struct sk_buff *skb);
};
struct lapb_parms_struct {
unsigned int mode;
};
-extern int lapb_register(void *token, struct lapb_register_struct *callbacks);
-extern int lapb_unregister(void *token);
-extern int lapb_getparms(void *token, struct lapb_parms_struct *parms);
-extern int lapb_setparms(void *token, struct lapb_parms_struct *parms);
-extern int lapb_connect_request(void *token);
-extern int lapb_disconnect_request(void *token);
-extern int lapb_data_request(void *token, struct sk_buff *skb);
-extern int lapb_data_received(void *token, struct sk_buff *skb);
+extern int lapb_register(struct net_device *dev, struct lapb_register_struct *callbacks);
+extern int lapb_unregister(struct net_device *dev);
+extern int lapb_getparms(struct net_device *dev, struct lapb_parms_struct *parms);
+extern int lapb_setparms(struct net_device *dev, struct lapb_parms_struct *parms);
+extern int lapb_connect_request(struct net_device *dev);
+extern int lapb_disconnect_request(struct net_device *dev);
+extern int lapb_data_request(struct net_device *dev, struct sk_buff *skb);
+extern int lapb_data_received(struct net_device *dev, struct sk_buff *skb);
#endif
#define NR_OPEN 1024
-#define NGROUPS_MAX 32 /* supplemental group IDs are available */
+#define NGROUPS_MAX 65536 /* supplemental group IDs are available */
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
#define CHILD_MAX 999 /* no limit :-) */
#define OPEN_MAX 256 /* # open files a process may have */
loff_t lo_sizelimit;
int lo_flags;
int (*transfer)(struct loop_device *, int cmd,
- char *raw_buf, char *loop_buf, int size,
- sector_t real_block);
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t real_block);
char lo_file_name[LO_NAME_SIZE];
char lo_crypt_name[LO_NAME_SIZE];
char lo_encrypt_key[LO_KEY_SIZE];
/*
* Loop flags
*/
-#define LO_FLAGS_DO_BMAP 1
-#define LO_FLAGS_READ_ONLY 2
+#define LO_FLAGS_READ_ONLY 1
#include <asm/posix_types.h> /* for __kernel_old_dev_t */
#include <asm/types.h> /* for __u64 */
/* Support for loadable transfer modules */
struct loop_func_table {
int number; /* filter type */
- int (*transfer)(struct loop_device *lo, int cmd, char *raw_buf,
- char *loop_buf, int size, sector_t real_block);
+ int (*transfer)(struct loop_device *lo, int cmd,
+ struct page *raw_page, unsigned raw_off,
+ struct page *loop_page, unsigned loop_off,
+ int size, sector_t real_block);
int (*init)(struct loop_device *, const struct loop_info64 *);
/* release is called from loop_unregister_transfer or clr_fd */
int (*release)(struct loop_device *);
#define NODE_DATA(nid) (&contig_page_data)
#define NODE_MEM_MAP(nid) mem_map
#define MAX_NODES_SHIFT 1
+#define pfn_to_nid(pfn) (0)
#else /* CONFIG_DISCONTIGMEM */
+++ /dev/null
-/* Symbol versioning nastiness. */
-
-#define __SYMBOL_VERSION(x) __ver_ ## x
-#define __VERSIONED_SYMBOL2(x,v) x ## _R ## v
-#define __VERSIONED_SYMBOL1(x,v) __VERSIONED_SYMBOL2(x,v)
-#define __VERSIONED_SYMBOL(x) __VERSIONED_SYMBOL1(x,__SYMBOL_VERSION(x))
-
-#ifndef _set_ver
-#define _set_ver(x) __VERSIONED_SYMBOL(x)
-#endif
#define _LINUX_MSG_H
#include <linux/ipc.h>
+#include <linux/list.h>
/* ipcs ctl commands */
#define MSG_STAT 11
#define NETLINK_TCPDIAG 4 /* TCP socket monitoring */
#define NETLINK_NFLOG 5 /* netfilter/iptables ULOG */
#define NETLINK_XFRM 6 /* ipsec */
+#define NETLINK_SELINUX 7 /* SELinux event notifications */
#define NETLINK_ARPD 8
#define NETLINK_ROUTE6 11 /* af_inet6 route comm channel */
#define NETLINK_IP6_FW 13
* Set the current process's fsuid/fsgid etc to those of the NFS
* client user
*/
-void nfsd_setuser(struct svc_rqst *, struct svc_export *);
+int nfsd_setuser(struct svc_rqst *, struct svc_export *);
#endif /* __KERNEL__ */
#endif /* LINUX_NFSD_AUTH_H */
#define PCI_DEVICE_ID_ENSONIQ_ES1370 0x5000
#define PCI_DEVICE_ID_ENSONIQ_ES1371 0x1371
+#define PCI_VENDOR_ID_TRANSMETA 0x1279
+#define PCI_DEVICE_ID_EFFICEON 0x0060
+
#define PCI_VENDOR_ID_ROCKWELL 0x127A
#define PCI_VENDOR_ID_ITE 0x1283
+++ /dev/null
-/*
- * Back compatibility for a while.
- */
-#include <linux/if_ppp.h>
{
void *private;
mdk_personality_t *pers;
- int __minor;
+ dev_t unit;
+ int md_minor;
struct list_head disks;
int sb_dirty;
int ro;
struct semaphore reconfig_sem;
atomic_t active;
+ int changed; /* true if we might need to reread partition info */
int degraded; /* whether md should consider
* adding a spare
*/
};
-/*
- * Currently we index md_array directly, based on the minor
- * number. This will have to change to dynamic allocation
- * once we start supporting partitioning of md devices.
- */
-static inline int mdidx (mddev_t * mddev)
-{
- return mddev->__minor;
-}
static inline char * mdname (mddev_t * mddev)
{
return mddev->gendisk ? mddev->gendisk->disk_name : "mdX";
atomic_t remaining; /* 'have we finished' count,
* used from IRQ handlers
*/
- int cmd;
sector_t sector;
+ int sectors;
unsigned long state;
mddev_t *mddev;
/*
*/
struct bio *master_bio;
/*
- * if the IO is in READ direction, then this bio is used:
+ * if the IO is in READ direction, then this is where we read
*/
- struct bio *read_bio;
int read_disk;
- r1bio_t *next_r1; /* next for retry or in free list */
struct list_head retry_list;
/*
* if the IO is in WRITE direction, then multiple bios are used.
* We choose the number when they are allocated.
*/
- struct bio *write_bios[0];
+ struct bio *bios[0];
};
/* bits for r1bio.state */
-#define R1BIO_Uptodate 1
-
+#define R1BIO_Uptodate 0
+#define R1BIO_IsSync 1
#endif
extern void show_state(void);
extern void show_regs(struct pt_regs *);
+extern void show_trace_task(task_t *tsk);
/*
* TASK is a pointer to the task whose backtrace we want to see (or NULL for current
struct io_context; /* See blkdev.h */
void exit_io_context(void);
+#define NGROUPS_SMALL 32
+#define NGROUPS_PER_BLOCK ((int)(EXEC_PAGESIZE / sizeof(gid_t)))
+struct group_info {
+ int ngroups;
+ atomic_t usage;
+ gid_t small_block[NGROUPS_SMALL];
+ int nblocks;
+ gid_t *blocks[0];
+};
+
+#define get_group_info(group_info) do { \
+ atomic_inc(&(group_info)->usage); \
+} while (0)
+
+#define put_group_info(group_info) do { \
+ if (atomic_dec_and_test(&(group_info)->usage)) \
+ groups_free(group_info); \
+} while (0)
+
+struct group_info *groups_alloc(int gidsetsize);
+void groups_free(struct group_info *group_info);
+int set_current_groups(struct group_info *group_info);
+/* access the groups "array" with this macro */
+#define GROUP_AT(gi, i) \
+ ((gi)->blocks[(i)/NGROUPS_PER_BLOCK][(i)%NGROUPS_PER_BLOCK])
+
+
struct task_struct {
volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
struct thread_info *thread_info;
/* process credentials */
uid_t uid,euid,suid,fsuid;
gid_t gid,egid,sgid,fsgid;
- int ngroups;
- gid_t groups[NGROUPS];
+ struct group_info *group_info;
kernel_cap_t cap_effective, cap_inheritable, cap_permitted;
int keep_capabilities:1;
struct user_struct *user;
extern int do_execve(char *, char __user * __user *, char __user * __user *, struct pt_regs *);
extern long do_fork(unsigned long, unsigned long, struct pt_regs *, unsigned long, int __user *, int __user *);
extern struct task_struct * copy_process(unsigned long, unsigned long, struct pt_regs *, unsigned long, int __user *, int __user *);
+extern asmlinkage long sys_sched_setscheduler(pid_t pid, int policy,
+ struct sched_param __user *parm);
#ifdef CONFIG_SMP
extern void wait_task_inactive(task_t * p);
* Return 0 if permission is granted.
* @task_setgroups:
* Check permission before setting the supplementary group set of the
- * current process to @grouplist.
- * @gidsetsize contains the number of elements in @grouplist.
- * @grouplist contains the array of gids.
+ * current process.
+ * @group_info contains the new group information.
* Return 0 if permission is granted.
* @task_setnice:
* Check permission before setting the nice value of @p to @nice.
int (*task_setpgid) (struct task_struct * p, pid_t pgid);
int (*task_getpgid) (struct task_struct * p);
int (*task_getsid) (struct task_struct * p);
- int (*task_setgroups) (int gidsetsize, gid_t * grouplist);
+ int (*task_setgroups) (struct group_info *group_info);
int (*task_setnice) (struct task_struct * p, int nice);
int (*task_setrlimit) (unsigned int resource, struct rlimit * new_rlim);
int (*task_setscheduler) (struct task_struct * p, int policy,
return security_ops->task_getsid (p);
}
-static inline int security_task_setgroups (int gidsetsize, gid_t *grouplist)
+static inline int security_task_setgroups (struct group_info *group_info)
{
- return security_ops->task_setgroups (gidsetsize, grouplist);
+ return security_ops->task_setgroups (group_info);
}
static inline int security_task_setnice (struct task_struct *p, int nice)
return 0;
}
-static inline int security_task_setgroups (int gidsetsize, gid_t *grouplist)
+static inline int security_task_setgroups (struct group_info *group_info)
{
return 0;
}
--- /dev/null
+/*
+ * Netlink event notifications for SELinux.
+ *
+ * Author: James Morris <jmorris@redhat.com>
+ *
+ * Copyright (C) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2,
+ * as published by the Free Software Foundation.
+ */
+#ifndef _LINUX_SELINUX_NETLINK_H
+#define _LINUX_SELINUX_NETLINK_H
+
+/* Message types. */
+#define SELNL_MSG_BASE 0x10
+enum {
+ SELNL_MSG_SETENFORCE = SELNL_MSG_BASE,
+ SELNL_MSG_POLICYLOAD,
+ SELNL_MSG_MAX
+};
+
+/* Multicast groups */
+#define SELNL_GRP_NONE 0x00000000
+#define SELNL_GRP_AVC 0x00000001 /* AVC notifications */
+#define SELNL_GRP_ALL 0xffffffff
+
+/* Message structures */
+struct selnl_msg_setenforce {
+ int32_t val;
+};
+
+struct selnl_msg_policyload {
+ u_int32_t seqno;
+};
+
+#endif /* _LINUX_SELINUX_NETLINK_H */
* @cb: Control buffer. Free for use by every layer. Put private vars here
* @len: Length of actual data
* @data_len: Data length
+ * @mac_len: Length of link layer header
* @csum: Checksum
* @__unused: Dead field, may be reused
* @cloned: Head may be cloned (check refcnt to be sure)
struct icmphdr *icmph;
struct igmphdr *igmph;
struct iphdr *ipiph;
+ struct ipv6hdr *ipv6h;
unsigned char *raw;
} h;
unsigned int len,
data_len,
+ mac_len,
csum;
unsigned char local_df,
cloned,
unsigned long hwid;
void *uartp;
struct tty_struct *tty;
-#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0))
- struct wait_queue *open_wait;
- struct wait_queue *close_wait;
-#else
wait_queue_head_t open_wait;
wait_queue_head_t close_wait;
-#endif
struct work_struct tqueue;
comstats_t stats;
stlrq_t tx;
struct auth_cred {
uid_t uid;
gid_t gid;
- int ngroups;
- gid_t *groups;
+ struct group_info *group_info;
};
/*
* If "set" == 0 :
* If an entry is found, it is returned
* If no entry is found, a new non-VALID entry is created.
- * If "set" == 1 :
+ * If "set" == 1 and INPLACE == 0 :
* If no entry is found a new one is inserted with data from "template"
* If a non-CACHE_VALID entry is found, it is updated from template using UPDATE
* If a CACHE_VALID entry is found, a new entry is swapped in with data
* from "template"
- * If set == 2, we UPDATE, but don't swap. i.e. update in place
+ * If set == 1, and INPLACE == 1 :
+ * As above, except that if a CACHE_VALID entry is found, we UPDATE in place
+ * instead of swapping in a new entry.
*
* If the passed handle has the CACHE_NEGATIVE flag set, then UPDATE is not
 * run but instead CACHE_NEGATIVE is set in any new item.
RTN *tmp, *new=NULL; \
struct cache_head **hp, **head; \
SETUP; \
- retry: \
head = &(DETAIL)->hash_table[HASHFN]; \
+ retry: \
if (set||new) write_lock(&(DETAIL)->hash_lock); \
else read_lock(&(DETAIL)->hash_lock); \
for(hp=head; *hp != NULL; hp = &tmp->MEMBER.next) { \
if (set && !INPLACE && test_bit(CACHE_VALID, &tmp->MEMBER.flags) && !new) \
break; \
\
+ if (new) \
+ {INIT;} \
cache_get(&tmp->MEMBER); \
if (set) { \
if (!INPLACE && test_bit(CACHE_VALID, &tmp->MEMBER.flags))\
} \
/* Didn't find anything */ \
if (new) { \
+ INIT; \
new->MEMBER.next = *head; \
*head = &new->MEMBER; \
(DETAIL)->entries ++; \
if (new) { \
cache_init(&new->MEMBER); \
cache_get(&new->MEMBER); \
- INIT; \
- tmp = new; \
goto retry; \
} \
return NULL; \
#ifdef CONFIG_PROC_FS
struct proc_dir_entry * rpc_proc_register(struct rpc_stat *);
void rpc_proc_unregister(const char *);
-int rpc_proc_read(char *, char **, off_t, int,
- int *, void *);
void rpc_proc_zero(struct rpc_program *);
-struct proc_dir_entry * svc_proc_register(struct svc_stat *);
+struct proc_dir_entry * svc_proc_register(struct svc_stat *,
+ struct file_operations *);
void svc_proc_unregister(const char *);
-int svc_proc_read(char *, char **, off_t, int,
- int *, void *);
-void svc_proc_zero(struct svc_program *);
+
+void svc_seq_show(struct seq_file *,
+ const struct svc_stat *);
extern struct proc_dir_entry *proc_net_rpc;
static inline struct proc_dir_entry *rpc_proc_register(struct rpc_stat *s) { return NULL; }
static inline void rpc_proc_unregister(const char *p) {}
-static inline int rpc_proc_read(char *a, char **b, off_t c, int d, int *e, void *f) { return 0; }
static inline void rpc_proc_zero(struct rpc_program *p) {}
-static inline struct proc_dir_entry *svc_proc_register(struct svc_stat *s) { return NULL; }
+static inline struct proc_dir_entry *svc_proc_register(struct svc_stat *s,
+ struct file_operations *f) { return NULL; }
static inline void svc_proc_unregister(const char *p) {}
-static inline int svc_proc_read(char *a, char **b, off_t c, int d, int *e, void *f) { return 0; }
-static inline void svc_proc_zero(struct svc_program *p) {}
+
+static inline void svc_seq_show(struct seq_file *seq,
+ const struct svc_stat *st) {}
#define proc_net_rpc NULL
#include <linux/sunrpc/cache.h>
#include <linux/hash.h>
+#define SVC_CRED_NGROUPS 32
struct svc_cred {
uid_t cr_uid;
gid_t cr_gid;
- gid_t cr_groups[NGROUPS];
+ gid_t cr_groups[SVC_CRED_NGROUPS];
};
struct svc_rqst; /* forward decl */
NET_IPV4_CONF_NOXFRM=15,
NET_IPV4_CONF_NOPOLICY=16,
NET_IPV4_CONF_FORCE_IGMP_VERSION=17,
+ NET_IPV4_CONF_ARP_ANNOUNCE=18,
+ NET_IPV4_CONF_ARP_IGNORE=19,
};
/* /proc/sys/net/ipv4/netfilter */
NET_IPV6_TEMP_PREFERED_LFT=13,
NET_IPV6_REGEN_MAX_RETRY=14,
NET_IPV6_MAX_DESYNC_FACTOR=15,
- NET_IPV6_MAX_ADDRESSES=16
+ NET_IPV6_MAX_ADDRESSES=16,
+ NET_IPV6_FORCE_MLD_VERSION=17
};
/* /proc/sys/net/ipv6/icmp */
FS_LEASE_TIME=15, /* int: maximum time to wait for a lease break */
FS_DQSTATS=16, /* disc quota usage statistics */
FS_XFS=17, /* struct: control xfs parameters */
+ FS_AIO_NR=18, /* current system-wide number of aio requests */
+ FS_AIO_MAX_NR=19, /* system-wide maximum number of aio requests */
};
/* /proc/sys/fs/quota/ */
extern int FASTCALL(schedule_delayed_work(struct work_struct *work, unsigned long delay));
extern void flush_scheduled_work(void);
extern int current_is_keventd(void);
+extern int keventd_up(void);
extern void init_workqueues(void);
#define TUNER_HITACHI_NTSC 40
#define TUNER_PHILIPS_PAL_MK 41
#define TUNER_PHILIPS_ATSC 42
-#define TUNER_PHILIPS_FM1236_MK3 43
+#define TUNER_PHILIPS_FM1236_MK3 43
+#define TUNER_PHILIPS_4IN1 44 /* ATI TV Wonder Pro - Conexant */
+#define TUNER_MICROTUNE_4049FM5 45
#define NOTUNER 0
#define PAL 1 /* PAL_BG */
extern int ipv6_chk_mcast_addr(struct net_device *dev, struct in6_addr *group,
struct in6_addr *src_addr);
+extern int ipv6_is_mld(struct sk_buff *skb, int nexthdr);
extern void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len);
extern int dn_nsp_backlog_rcv(struct sock *sk, struct sk_buff *skb);
extern struct sk_buff *dn_alloc_skb(struct sock *sk, int size, int pri);
-extern struct sk_buff *dn_alloc_send_skb(struct sock *sk, int *size, int noblock, int *err);
+extern struct sk_buff *dn_alloc_send_skb(struct sock *sk, size_t *size, int noblock, int *err);
#define NSP_REASON_OK 0 /* No error */
#define NSP_REASON_NR 1 /* No resources */
#include <asm/types.h> /* For __uXX types */
-#define IP_VS_VERSION_CODE 0x010108
+#define IP_VS_VERSION_CODE 0x010200
#define NVERSION(version) \
(version >> 16) & 0xFF, \
(version >> 8) & 0xFF, \
*/
struct lapb_cb {
struct list_head node;
- void *token;
+ struct net_device *dev;
/* Link status fields */
unsigned int mode;
include/linux/compile.h: FORCE
@echo ' CHK $@'
- @sh $(srctree)/scripts/mkcompile_h $@ "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CC) $(CFLAGS)"
-
+ @$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CC) $(CFLAGS)"
static char * argv_init[MAX_INIT_ARGS+2] = { "init", NULL, };
char * envp_init[MAX_INIT_ENVS+2] = { "HOME=/", "TERM=linux", NULL, };
+static const char *panic_later, *panic_param;
__setup("profile=", profile_setup);
return 0;
}
+ if (panic_later)
+ return 0;
+
if (val) {
/* Environment option */
unsigned int i;
for (i = 0; envp_init[i]; i++) {
- if (i == MAX_INIT_ENVS)
- panic("Too many boot env vars at `%s'", param);
+ if (i == MAX_INIT_ENVS) {
+ panic_later = "Too many boot env vars at `%s'";
+ panic_param = param;
+ }
}
envp_init[i] = param;
} else {
/* Command line option */
unsigned int i;
for (i = 0; argv_init[i]; i++) {
- if (i == MAX_INIT_ARGS)
- panic("Too many boot init vars at `%s'",param);
+ if (i == MAX_INIT_ARGS) {
+ panic_later = "Too many boot init vars at `%s'";
+ panic_param = param;
+ }
}
argv_init[i] = param;
}
* between the root thread and the init thread may cause start_kernel to
* be reaped by free_initmem before the root thread has proceeded to
* cpu_idle.
+ *
+ * gcc-3.4 accidentally inlines this function, so use noinline.
*/
-static void rest_init(void)
+static void noinline rest_init(void)
{
kernel_thread(init, NULL, CLONE_FS | CLONE_SIGHAND);
unlock_kernel();
* this. But we do want output early, in case something goes wrong.
*/
console_init();
+ if (panic_later)
+ panic(panic_later, panic_param);
profile_init();
local_irq_enable();
#ifdef CONFIG_BLK_DEV_INITRD
fork_init(num_physpages);
proc_caches_init();
buffer_init();
+ unnamed_dev_init();
security_scaffolding_startup();
vfs_caches_init(num_physpages);
radix_tree_init();
exit.o itimer.o time.o softirq.o resource.o \
sysctl.o capability.o ptrace.o timer.o user.o \
signal.o sys.o kmod.o workqueue.o pid.o \
- rcupdate.o intermodule.o extable.o params.o posix-timers.o
+ rcupdate.o intermodule.o extable.o params.o posix-timers.o \
+ kthread.o
obj-$(CONFIG_FUTEX) += futex.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
security_task_free(tsk);
free_uid(tsk->user);
+ put_group_info(tsk->group_info);
free_task(tsk);
}
atomic_inc(&p->user->__count);
atomic_inc(&p->user->processes);
+ get_group_info(p->group_info);
/*
* If multiple threads are within copy_process(), then this check
bad_fork_cleanup_put_domain:
module_put(p->thread_info->exec_domain->module);
bad_fork_cleanup_count:
+ put_group_info(p->group_info);
atomic_dec(&p->user->processes);
free_uid(p->user);
bad_fork_free:
{
struct subprocess_info *sub_info = data;
int retval;
+ cpumask_t mask = CPU_MASK_ALL;
/* Unblock all signals. */
flush_signals(current);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
+ /* We can run anywhere, unlike our parent keventd(). */
+ set_cpus_allowed(current, mask);
+
retval = -EPERM;
if (current->fs->root)
retval = execve(sub_info->path, sub_info->argv,sub_info->envp);
--- /dev/null
+/* Kernel thread helper functions.
+ * Copyright (C) 2004 IBM Corporation, Rusty Russell.
+ *
+ * Creation is done via keventd, so that we get a clean environment
+ * even if we're invoked from userspace (think modprobe, hotplug cpu,
+ * etc.).
+ */
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/completion.h>
+#include <linux/err.h>
+#include <linux/unistd.h>
+#include <asm/semaphore.h>
+
+struct kthread_create_info
+{
+ /* Information passed to kthread() from keventd. */
+ int (*threadfn)(void *data);
+ void *data;
+ struct completion started;
+
+ /* Result passed back to kthread_create() from keventd. */
+ struct task_struct *result;
+ struct completion done;
+};
+
+struct kthread_stop_info
+{
+ struct task_struct *k;
+ int err;
+ struct completion done;
+};
+
+/* Thread stopping is done by setting this var: lock serializes
+ * multiple kthread_stop calls. */
+static DECLARE_MUTEX(kthread_stop_lock);
+static struct kthread_stop_info kthread_stop_info;
+
+int kthread_should_stop(void)
+{
+ return (kthread_stop_info.k == current);
+}
+
+static int kthread(void *_create)
+{
+ struct kthread_create_info *create = _create;
+ int (*threadfn)(void *data);
+ void *data;
+ sigset_t blocked;
+ int ret = -EINTR;
+ cpumask_t mask = CPU_MASK_ALL;
+
+ /* Copy data: it's on keventd's stack */
+ threadfn = create->threadfn;
+ data = create->data;
+
+ /* Block and flush all signals (in case we're not from keventd). */
+ sigfillset(&blocked);
+ sigprocmask(SIG_BLOCK, &blocked, NULL);
+ flush_signals(current);
+
+ /* By default we can run anywhere, unlike keventd. */
+ set_cpus_allowed(current, mask);
+
+ /* OK, tell user we're spawned, wait for stop or wakeup */
+ __set_current_state(TASK_INTERRUPTIBLE);
+ complete(&create->started);
+ schedule();
+
+ if (!kthread_should_stop())
+ ret = threadfn(data);
+
+ /* It might have exited on its own, w/o kthread_stop. Check. */
+ if (kthread_should_stop()) {
+ kthread_stop_info.err = ret;
+ complete(&kthread_stop_info.done);
+ }
+ return 0;
+}
+
+/* We are keventd: create a thread. */
+static void keventd_create_kthread(void *_create)
+{
+ struct kthread_create_info *create = _create;
+ int pid;
+
+ /* We want our own signal handler (we take no signals by default). */
+ pid = kernel_thread(kthread, create, CLONE_FS | CLONE_FILES | SIGCHLD);
+ if (pid < 0) {
+ create->result = ERR_PTR(pid);
+ } else {
+ wait_for_completion(&create->started);
+ create->result = find_task_by_pid(pid);
+ wait_task_inactive(create->result);
+ }
+ complete(&create->done);
+}
+
+struct task_struct *kthread_create(int (*threadfn)(void *data),
+ void *data,
+ const char namefmt[],
+ ...)
+{
+ struct kthread_create_info create;
+ DECLARE_WORK(work, keventd_create_kthread, &create);
+
+ create.threadfn = threadfn;
+ create.data = data;
+ init_completion(&create.started);
+ init_completion(&create.done);
+
+ /* If we're being called to start the first workqueue, we
+ * can't use keventd. */
+ if (!keventd_up())
+ work.func(work.data);
+ else {
+ schedule_work(&work);
+ wait_for_completion(&create.done);
+ }
+ if (!IS_ERR(create.result)) {
+ va_list args;
+ va_start(args, namefmt);
+ vsnprintf(create.result->comm, sizeof(create.result->comm),
+ namefmt, args);
+ va_end(args);
+ }
+
+ return create.result;
+}
+
+void kthread_bind(struct task_struct *k, unsigned int cpu)
+{
+ BUG_ON(k->state != TASK_INTERRUPTIBLE);
+ k->thread_info->cpu = cpu;
+ k->cpus_allowed = cpumask_of_cpu(cpu);
+}
+
+int kthread_stop(struct task_struct *k)
+{
+ int ret;
+
+ down(&kthread_stop_lock);
+
+ /* It could exit after stop_info.k set, but before wake_up_process. */
+ get_task_struct(k);
+
+ /* Must init completion *before* thread sees kthread_stop_info.k */
+ init_completion(&kthread_stop_info.done);
+ wmb();
+
+ /* Now set kthread_should_stop() to true, and wake it up. */
+ kthread_stop_info.k = k;
+ wake_up_process(k);
+ put_task_struct(k);
+
+ /* Once it dies, reset stop ptr, gather result and we're done. */
+ wait_for_completion(&kthread_stop_info.done);
+ kthread_stop_info.k = NULL;
+ ret = kthread_stop_info.err;
+ up(&kthread_stop_lock);
+
+ return ret;
+}
#include <linux/err.h>
#include <linux/vermagic.h>
#include <linux/notifier.h>
+#include <linux/kthread.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
#include <asm/pgalloc.h>
}
}
+#ifdef CONFIG_MODULE_FORCE_UNLOAD
+static inline int try_force(unsigned int flags)
+{
+ int ret = (flags & O_TRUNC);
+ if (ret)
+ tainted |= TAINT_FORCED_MODULE;
+ return ret;
+}
+#else
+static inline int try_force(unsigned int flags)
+{
+ return 0;
+}
+#endif /* CONFIG_MODULE_FORCE_UNLOAD */
+
+static int try_stop_module_local(struct module *mod, int flags, int *forced)
+{
+ local_irq_disable();
+
+ /* If it's not unused, quit unless we are told to block. */
+ if ((flags & O_NONBLOCK) && module_refcount(mod) != 0) {
+ if (!(*forced = try_force(flags))) {
+ local_irq_enable();
+ return -EWOULDBLOCK;
+ }
+ }
+
+ /* Mark it as dying. */
+ mod->waiter = current;
+ mod->state = MODULE_STATE_GOING;
+ local_irq_enable();
+ return 0;
+}
+
#ifdef CONFIG_SMP
/* Thread to stop each CPU in user context. */
enum stopref_state {
int irqs_disabled = 0;
int prepared = 0;
- sprintf(current->comm, "kmodule%lu\n", (unsigned long)cpu);
-
- /* Highest priority we can manage, and move to right CPU. */
-#if 0 /* FIXME */
- struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
- setscheduler(current->pid, SCHED_FIFO, &param);
-#endif
set_cpus_allowed(current, cpumask_of_cpu((int)(long)cpu));
/* Ack: we are alive */
}
}
-/* Stop the machine. Disables irqs. */
-static int stop_refcounts(void)
+struct stopref
{
- unsigned int i, cpu;
- cpumask_t old_allowed;
+ struct module *mod;
+ int flags;
+ int *forced;
+ struct completion started;
+};
+
+static int spawn_stopref(void *data)
+{
+ struct stopref *sref = data;
+ struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
+ unsigned int i, cpu = smp_processor_id();
int ret = 0;
- /* One thread per cpu. We'll do our own. */
- cpu = smp_processor_id();
+ complete(&sref->started);
- /* FIXME: racy with set_cpus_allowed. */
- old_allowed = current->cpus_allowed;
+ /* One high-prio thread per cpu. We'll do one (any one). */
set_cpus_allowed(current, cpumask_of_cpu(cpu));
+ sys_sched_setscheduler(current->pid, SCHED_FIFO, &param);
atomic_set(&stopref_thread_ack, 0);
stopref_num_threads = 0;
stopref_state = STOPREF_WAIT;
- /* No CPUs can come up or down during this. */
- lock_cpu_hotplug();
-
- for (i = 0; i < NR_CPUS; i++) {
- if (i == cpu || !cpu_online(i))
+ for_each_online_cpu(i) {
+ if (i == cpu)
continue;
ret = kernel_thread(stopref, (void *)(long)i, CLONE_KERNEL);
if (ret < 0)
/* If some failed, kill them all. */
if (ret < 0) {
stopref_set_state(STOPREF_EXIT, 1);
- unlock_cpu_hotplug();
- return ret;
+ goto out;
}
/* Don't schedule us away at this point, please. */
preempt_disable();
- /* Now they are all scheduled, make them hold the CPUs, ready. */
+ /* Now they are all started, make them hold the CPUs, ready. */
stopref_set_state(STOPREF_PREPARE, 0);
/* Make them disable irqs. */
stopref_set_state(STOPREF_DISABLE_IRQ, 0);
- local_irq_disable();
- return 0;
-}
+ /* Atomically disable module if possible */
+ ret = try_stop_module_local(sref->mod, sref->flags, sref->forced);
-/* Restart the machine. Re-enables irqs. */
-static void restart_refcounts(void)
-{
stopref_set_state(STOPREF_EXIT, 0);
- local_irq_enable();
preempt_enable();
- unlock_cpu_hotplug();
+
+out:
+ /* Wait for kthread_stop */
+ while (!kthread_should_stop()) {
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ }
+ return ret;
}
-#else /* ...!SMP */
-static inline int stop_refcounts(void)
+
+static int try_stop_module(struct module *mod, int flags, int *forced)
{
- local_irq_disable();
- return 0;
+ struct task_struct *p;
+ struct stopref sref = { mod, flags, forced };
+ int ret;
+
+ init_completion(&sref.started);
+
+ /* No CPUs can come up or down during this. */
+ lock_cpu_hotplug();
+ p = kthread_run(spawn_stopref, &sref, "krmmod");
+ if (IS_ERR(p))
+ ret = PTR_ERR(p);
+ else {
+ wait_for_completion(&sref.started);
+ ret = kthread_stop(p);
+ }
+ unlock_cpu_hotplug();
+ return ret;
}
-static inline void restart_refcounts(void)
+#else /* ...!SMP */
+static inline int try_stop_module(struct module *mod, int flags, int *forced)
{
- local_irq_enable();
+ return try_stop_module_local(mod, flags, forced);
}
#endif
/* This exists whether we can unload or not */
static void free_module(struct module *mod);
-#ifdef CONFIG_MODULE_FORCE_UNLOAD
-static inline int try_force(unsigned int flags)
-{
- int ret = (flags & O_TRUNC);
- if (ret)
- tainted |= TAINT_FORCED_MODULE;
- return ret;
-}
-#else
-static inline int try_force(unsigned int flags)
-{
- return 0;
-}
-#endif /* CONFIG_MODULE_FORCE_UNLOAD */
-
/* Stub function for modules which don't have an exitfn */
void cleanup_module(void)
{
goto out;
}
}
- /* Stop the machine so refcounts can't move: irqs disabled. */
- DEBUGP("Stopping refcounts...\n");
- ret = stop_refcounts();
- if (ret != 0)
- goto out;
- /* If it's not unused, quit unless we are told to block. */
- if ((flags & O_NONBLOCK) && module_refcount(mod) != 0) {
- forced = try_force(flags);
- if (!forced) {
- ret = -EWOULDBLOCK;
- restart_refcounts();
- goto out;
- }
- }
-
- /* Mark it as dying. */
- mod->waiter = current;
- mod->state = MODULE_STATE_GOING;
- restart_refcounts();
+ /* Stop the machine so refcounts can't move and disable module. */
+ ret = try_stop_module(mod, flags, &forced);
/* Never wait if forced. */
if (!forced && module_refcount(mod) != 0)
spin_lock_init(&new_timer->it_lock);
do {
- if (unlikely(!idr_pre_get(&posix_timers_id))) {
+ if (unlikely(!idr_pre_get(&posix_timers_id, GFP_KERNEL))) {
error = -EAGAIN;
new_timer->it_id = (timer_t)-1;
goto out;
resume_device = name_to_dev_t(resume_file);
pr_debug("pmdisk: Resume From Partition: %s\n", resume_file);
- resume_bdev = open_by_devnum(resume_device, FMODE_READ, BDEV_RAW);
+ resume_bdev = open_by_devnum(resume_device, FMODE_READ);
if (!IS_ERR(resume_bdev)) {
set_blocksize(resume_bdev, PAGE_SIZE);
error = read_suspend_image();
- blkdev_put(resume_bdev, BDEV_RAW);
+ blkdev_put(resume_bdev);
} else
error = PTR_ERR(resume_bdev);
struct block_device *bdev;
printk("Resuming from device %s\n",
__bdevname(resume_device, b));
- bdev = open_by_devnum(resume_device, FMODE_READ, BDEV_RAW);
+ bdev = open_by_devnum(resume_device, FMODE_READ);
if (IS_ERR(bdev)) {
error = PTR_ERR(bdev);
} else {
set_blocksize(bdev, PAGE_SIZE);
error = __read_suspend_image(bdev, cur, noresume);
- blkdev_put(bdev, BDEV_RAW);
+ blkdev_put(bdev);
}
} else error = -ENOMEM;
/* Emit the output into the temporary buffer */
va_start(args, fmt);
- printed_len = vsnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+ printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
va_end(args);
/*
p = &parent->child;
end = start + n - 1;
+ write_lock(&resource_lock);
+
for (;;) {
struct resource *res = *p;
if (res->start != start || res->end != end)
break;
*p = res->sibling;
+ write_unlock(&resource_lock);
kfree(res);
return;
}
p = &res->sibling;
}
+
+ write_unlock(&resource_lock);
+
printk(KERN_WARNING "Trying to free nonexistent resource <%08lx-%08lx>\n", start, end);
}
#include <linux/rcupdate.h>
#include <linux/cpu.h>
#include <linux/percpu.h>
+#include <linux/kthread.h>
#ifdef CONFIG_NUMA
#define cpu_to_node_mask(cpu) node_to_cpumask(cpu_to_node(cpu))
static void show_task(task_t * p)
{
- unsigned long free = 0;
task_t *relative;
- int state;
- static const char * stat_nam[] = { "R", "S", "D", "T", "Z", "W" };
+ unsigned state;
+ unsigned long free = 0;
+ static const char *stat_nam[] = { "R", "S", "D", "T", "Z", "W" };
printk("%-13.13s ", p->comm);
state = p->state ? __ffs(p->state) + 1 : 0;
- if (((unsigned) state) < sizeof(stat_nam)/sizeof(char *))
+ if (state < ARRAY_SIZE(stat_nam))
printk(stat_nam[state]);
else
- printk(" ");
+ printk("?");
#if (BITS_PER_LONG == 32)
- if (p == current)
- printk(" current ");
+ if (state == TASK_RUNNING)
+ printk(" running ");
else
printk(" %08lX ", thread_saved_pc(p));
#else
- if (p == current)
- printk(" current task ");
+ if (state == TASK_RUNNING)
+ printk(" running task ");
else
printk(" %016lx ", thread_saved_pc(p));
#endif
+#ifdef CONFIG_DEBUG_STACK_USAGE
{
unsigned long * n = (unsigned long *) (p->thread_info+1);
while (!*n)
n++;
free = (unsigned long) n - (unsigned long)(p->thread_info+1);
}
+#endif
printk("%5lu %5d %6d ", free, p->pid, p->parent->pid);
if ((relative = eldest_child(p)))
printk("%5d ", relative->pid);
else
printk(" (NOTLB)\n");
- show_stack(p, NULL);
+ if (state != TASK_RUNNING)
+ show_stack(p, NULL);
}
void show_state(void)
#if (BITS_PER_LONG == 32)
printk("\n"
- " free sibling\n");
- printk(" task PC stack pid father child younger older\n");
+ " sibling\n");
+ printk(" task PC pid father child younger older\n");
#else
printk("\n"
- " free sibling\n");
- printk(" task PC stack pid father child younger older\n");
+ " sibling\n");
+ printk(" task PC pid father child younger older\n");
#endif
read_lock(&tasklist_lock);
do_each_thread(g, p) {
local_irq_restore(flags);
}
-typedef struct {
- int cpu;
- struct completion startup_done;
- task_t *task;
-} migration_startup_t;
-
/*
* migration_thread - this is a highprio system thread that performs
* thread migration by bumping thread off CPU then 'pushing' onto
{
/* Marking "param" __user is ok, since we do a set_fs(KERNEL_DS); */
struct sched_param __user param = { .sched_priority = MAX_RT_PRIO-1 };
- migration_startup_t *startup = data;
- int cpu = startup->cpu;
runqueue_t *rq;
+ int cpu = (long)data;
int ret;
- startup->task = current;
- complete(&startup->startup_done);
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule();
-
BUG_ON(smp_processor_id() != cpu);
-
- daemonize("migration/%d", cpu);
- set_fs(KERNEL_DS);
-
ret = setscheduler(0, SCHED_FIFO, &param);
rq = this_rq();
- rq->migration_thread = current;
+ BUG_ON(rq->migration_thread != current);
- for (;;) {
+ while (!kthread_should_stop()) {
struct list_head *head;
migration_req_t *req;
any_online_cpu(req->task->cpus_allowed));
complete(&req->done);
}
+ return 0;
}
/*
static int migration_call(struct notifier_block *nfb, unsigned long action,
void *hcpu)
{
- long cpu = (long)hcpu;
- migration_startup_t startup;
+ int cpu = (long)hcpu;
+ struct task_struct *p;
switch (action) {
+ case CPU_UP_PREPARE:
+ p = kthread_create(migration_thread, hcpu, "migration/%d",cpu);
+ if (IS_ERR(p))
+ return NOTIFY_BAD;
+ kthread_bind(p, cpu);
+ cpu_rq(cpu)->migration_thread = p;
+ break;
case CPU_ONLINE:
-
- printk("Starting migration thread for cpu %li\n", cpu);
-
- startup.cpu = cpu;
- startup.task = NULL;
- init_completion(&startup.startup_done);
-
- kernel_thread(migration_thread, &startup, CLONE_KERNEL);
- wait_for_completion(&startup.startup_done);
- wait_task_inactive(startup.task);
-
- startup.task->thread_info->cpu = cpu;
- startup.task->cpus_allowed = cpumask_of_cpu(cpu);
-
- wake_up_process(startup.task);
-
- while (!cpu_rq(cpu)->migration_thread)
- yield();
-
+ /* Strictly unnecessary, as first user will wake it. */
+ wake_up_process(cpu_rq(cpu)->migration_thread);
break;
}
return NOTIFY_OK;
}
-static struct notifier_block migration_notifier
- = { .notifier_call = &migration_call };
+/*
+ * We want this after the other threads, so they can use set_cpus_allowed
+ * from their CPU_OFFLINE callback
+ */
+static struct notifier_block __devinitdata migration_notifier = {
+ .notifier_call = migration_call,
+ .priority = -10,
+};
-__init int migration_init(void)
+int __init migration_init(void)
{
+ void *cpu = (void *)(long)smp_processor_id();
/* Start one for boot CPU. */
- migration_call(&migration_notifier, CPU_ONLINE,
- (void *)(long)smp_processor_id());
+ migration_call(&migration_notifier, CPU_UP_PREPARE, cpu);
+ migration_call(&migration_notifier, CPU_ONLINE, cpu);
register_cpu_notifier(&migration_notifier);
return 0;
}
-
#endif
/*
spinlock_t kernel_flag __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED;
EXPORT_SYMBOL(kernel_flag);
-static void kstat_init_cpu(int cpu)
-{
- /* Add any initialisation to kstat here */
- /* Useful when cpu offlining logic is added.. */
-}
-
-static int __devinit kstat_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
-{
- int cpu = (unsigned long)hcpu;
- switch(action) {
- case CPU_UP_PREPARE:
- kstat_init_cpu(cpu);
- break;
- default:
- break;
- }
- return NOTIFY_OK;
-}
-
-static struct notifier_block __devinitdata kstat_nb = {
- .notifier_call = kstat_cpu_notify,
- .next = NULL,
-};
-
-__init static void init_kstat(void)
-{
- kstat_cpu_notify(&kstat_nb, (unsigned long)CPU_UP_PREPARE,
- (void *)(long)smp_processor_id());
- register_cpu_notifier(&kstat_nb);
-}
-
void __init sched_init(void)
{
runqueue_t *rq;
int i, j, k;
- /* Init the kstat counters */
- init_kstat();
for (i = 0; i < NR_CPUS; i++) {
prio_array_t *array;
#include <linux/notifier.h>
#include <linux/percpu.h>
#include <linux/cpu.h>
+#include <linux/kthread.h>
#ifdef CONFIG_KDB
#include <linux/kdb.h>
#endif
EXPORT_SYMBOL(tasklet_kill);
-static void tasklet_init_cpu(int cpu)
-{
- per_cpu(tasklet_vec, cpu).list = NULL;
- per_cpu(tasklet_hi_vec, cpu).list = NULL;
-}
-
-static int tasklet_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
-{
- long cpu = (long)hcpu;
- switch(action) {
- case CPU_UP_PREPARE:
- tasklet_init_cpu(cpu);
- break;
- default:
- break;
- }
- return 0;
-}
-
-static struct notifier_block tasklet_nb = {
- .notifier_call = tasklet_cpu_notify,
- .next = NULL,
-};
-
void __init softirq_init(void)
{
open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL);
- tasklet_cpu_notify(&tasklet_nb, (unsigned long)CPU_UP_PREPARE,
- (void *)(long)smp_processor_id());
- register_cpu_notifier(&tasklet_nb);
}
static int ksoftirqd(void * __bind_cpu)
{
int cpu = (int) (long) __bind_cpu;
- daemonize("ksoftirqd/%d", cpu);
set_user_nice(current, 19);
current->flags |= PF_IOTHREAD;
- /* Migrate to the right CPU */
- set_cpus_allowed(current, cpumask_of_cpu(cpu));
BUG_ON(smp_processor_id() != cpu);
- __set_current_state(TASK_INTERRUPTIBLE);
- mb();
+ set_current_state(TASK_INTERRUPTIBLE);
- __get_cpu_var(ksoftirqd) = current;
-
- for (;;) {
+ while (!kthread_should_stop()) {
if (!local_softirq_pending())
schedule();
__set_current_state(TASK_INTERRUPTIBLE);
}
+ return 0;
}
static int __devinit cpu_callback(struct notifier_block *nfb,
void *hcpu)
{
int hotcpu = (unsigned long)hcpu;
+ struct task_struct *p;
- if (action == CPU_ONLINE) {
- if (kernel_thread(ksoftirqd, hcpu, CLONE_KERNEL) < 0) {
+ switch (action) {
+ case CPU_UP_PREPARE:
+ BUG_ON(per_cpu(tasklet_vec, hotcpu).list);
+ BUG_ON(per_cpu(tasklet_hi_vec, hotcpu).list);
+ p = kthread_create(ksoftirqd, hcpu, "ksoftirqd/%d", hotcpu);
+ if (IS_ERR(p)) {
printk("ksoftirqd for %i failed\n", hotcpu);
return NOTIFY_BAD;
}
-
- while (!per_cpu(ksoftirqd, hotcpu))
- yield();
+ kthread_bind(p, hotcpu);
+ per_cpu(ksoftirqd, hotcpu) = p;
+ break;
+ case CPU_ONLINE:
+ wake_up_process(per_cpu(ksoftirqd, hotcpu));
+ break;
}
return NOTIFY_OK;
}
__init int spawn_ksoftirqd(void)
{
- cpu_callback(&cpu_nfb, CPU_ONLINE, (void *)(long)smp_processor_id());
+ void *cpu = (void *)(long)smp_processor_id();
+ cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
+ cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
register_cpu_notifier(&cpu_nfb);
return 0;
}
/*
* Supplementary group IDs
*/
-asmlinkage long sys_getgroups(int gidsetsize, gid_t __user *grouplist)
+
+/* init to 2 - one for init_task, one to ensure it is never freed */
+struct group_info init_groups = { .usage = ATOMIC_INIT(2) };
+
+struct group_info *groups_alloc(int gidsetsize)
{
+ struct group_info *group_info;
+ int nblocks;
int i;
-
+
+ nblocks = (gidsetsize/NGROUPS_PER_BLOCK) +
+ (gidsetsize%NGROUPS_PER_BLOCK?1:0);
+ group_info = kmalloc(sizeof(*group_info) + nblocks*sizeof(gid_t *),
+ GFP_USER);
+ if (!group_info)
+ return NULL;
+ group_info->ngroups = gidsetsize;
+ group_info->nblocks = nblocks;
+ atomic_set(&group_info->usage, 1);
+
+ if (gidsetsize <= NGROUPS_SMALL) {
+ group_info->blocks[0] = group_info->small_block;
+ } else {
+ for (i = 0; i < nblocks; i++) {
+ gid_t *b;
+ b = (void *)__get_free_page(GFP_USER);
+ if (!b)
+ goto out_undo_partial_alloc;
+ group_info->blocks[i] = b;
+ }
+ }
+ return group_info;
+
+out_undo_partial_alloc:
+ while (--i >= 0) {
+ free_page((unsigned long)group_info->blocks[i]);
+ }
+ kfree(group_info);
+ return NULL;
+}
+
+EXPORT_SYMBOL(groups_alloc);
+
+void groups_free(struct group_info *group_info)
+{
+ if (group_info->blocks[0] != group_info->small_block) {
+ int i;
+ for (i = 0; i < group_info->nblocks; i++)
+ free_page((unsigned long)group_info->blocks[i]);
+ }
+ kfree(group_info);
+}
+
+EXPORT_SYMBOL(groups_free);
+
+/* export the group_info to a user-space array */
+static int groups_to_user(gid_t __user *grouplist,
+ struct group_info *group_info)
+{
+ int i;
+ int count = group_info->ngroups;
+
+ for (i = 0; i < group_info->nblocks; i++) {
+ int cp_count = min(NGROUPS_PER_BLOCK, count);
+ int off = i * NGROUPS_PER_BLOCK;
+ int len = cp_count * sizeof(*grouplist);
+
+ if (copy_to_user(grouplist+off, group_info->blocks[i], len))
+ return -EFAULT;
+
+ count -= cp_count;
+ }
+ return 0;
+}
+
+/* fill a group_info from a user-space array - it must be allocated already */
+static int groups_from_user(struct group_info *group_info,
+ gid_t __user *grouplist)
+ {
+ int i;
+ int count = group_info->ngroups;
+
+ for (i = 0; i < group_info->nblocks; i++) {
+ int cp_count = min(NGROUPS_PER_BLOCK, count);
+ int off = i * NGROUPS_PER_BLOCK;
+ int len = cp_count * sizeof(*grouplist);
+
+ if (copy_from_user(group_info->blocks[i], grouplist+off, len))
+ return -EFAULT;
+
+ count -= cp_count;
+ }
+ return 0;
+}
+
+/* a simple shell-metzner sort */
+static void groups_sort(struct group_info *group_info)
+{
+ int base, max, stride;
+ int gidsetsize = group_info->ngroups;
+
+ for (stride = 1; stride < gidsetsize; stride = 3 * stride + 1)
+ ; /* nothing */
+ stride /= 3;
+
+ while (stride) {
+ max = gidsetsize - stride;
+ for (base = 0; base < max; base++) {
+ int left = base;
+ int right = left + stride;
+ gid_t tmp = GROUP_AT(group_info, right);
+
+ while (left >= 0 && GROUP_AT(group_info, left) > tmp) {
+ GROUP_AT(group_info, right) =
+ GROUP_AT(group_info, left);
+ right = left;
+ left -= stride;
+ }
+ GROUP_AT(group_info, right) = tmp;
+ }
+ stride /= 3;
+ }
+}
+
+/* a simple bsearch */
+static int groups_search(struct group_info *group_info, gid_t grp)
+{
+ int left, right;
+
+ if (!group_info)
+ return 0;
+
+ left = 0;
+ right = group_info->ngroups;
+ while (left < right) {
+ int mid = (left+right)/2;
+ int cmp = grp - GROUP_AT(group_info, mid);
+ if (cmp > 0)
+ left = mid + 1;
+ else if (cmp < 0)
+ right = mid;
+ else
+ return 1;
+ }
+ return 0;
+}
+
+/* validate and set current->group_info */
+int set_current_groups(struct group_info *group_info)
+{
+ int retval;
+ struct group_info *old_info;
+
+ retval = security_task_setgroups(group_info);
+ if (retval)
+ return retval;
+
+ groups_sort(group_info);
+ get_group_info(group_info);
+ old_info = current->group_info;
+ current->group_info = group_info;
+ put_group_info(old_info);
+
+ return 0;
+}
+
+EXPORT_SYMBOL(set_current_groups);
+
+asmlinkage long sys_getgroups(int gidsetsize, gid_t __user *grouplist)
+{
+ int i = 0;
+
/*
* SMP: Nobody else can change our grouplist. Thus we are
* safe.
if (gidsetsize < 0)
return -EINVAL;
- i = current->ngroups;
+
+ get_group_info(current->group_info);
+ i = current->group_info->ngroups;
if (gidsetsize) {
- if (i > gidsetsize)
- return -EINVAL;
- if (copy_to_user(grouplist, current->groups, sizeof(gid_t)*i))
- return -EFAULT;
+ if (i > gidsetsize) {
+ i = -EINVAL;
+ goto out;
+ }
+ if (groups_to_user(grouplist, current->group_info)) {
+ i = -EFAULT;
+ goto out;
+ }
}
+out:
+ put_group_info(current->group_info);
return i;
}
/*
- * SMP: Our groups are not shared. We can copy to/from them safely
+ * SMP: Our groups are copy-on-write. We can set them safely
* without another task interfering.
*/
asmlinkage long sys_setgroups(int gidsetsize, gid_t __user *grouplist)
{
- gid_t groups[NGROUPS];
+ struct group_info *group_info;
int retval;
if (!capable(CAP_SETGID))
return -EPERM;
- if ((unsigned) gidsetsize > NGROUPS)
+ if ((unsigned)gidsetsize > NGROUPS_MAX)
return -EINVAL;
- if (copy_from_user(groups, grouplist, gidsetsize * sizeof(gid_t)))
- return -EFAULT;
- retval = security_task_setgroups(gidsetsize, groups);
- if (retval)
- return retval;
- memcpy(current->groups, groups, gidsetsize * sizeof(gid_t));
- current->ngroups = gidsetsize;
- return 0;
-}
-static int supplemental_group_member(gid_t grp)
-{
- int i = current->ngroups;
-
- if (i) {
- gid_t *groups = current->groups;
- do {
- if (*groups == grp)
- return 1;
- groups++;
- i--;
- } while (i);
+ group_info = groups_alloc(gidsetsize);
+ if (!group_info)
+ return -ENOMEM;
+ retval = groups_from_user(group_info, grouplist);
+ if (retval) {
+ put_group_info(group_info);
+ return retval;
}
- return 0;
+
+ retval = set_current_groups(group_info);
+ put_group_info(group_info);
+
+ return retval;
}
/*
int in_group_p(gid_t grp)
{
int retval = 1;
- if (grp != current->fsgid)
- retval = supplemental_group_member(grp);
+ if (grp != current->fsgid) {
+ get_group_info(current->group_info);
+ retval = groups_search(current->group_info, grp);
+ put_group_info(current->group_info);
+ }
return retval;
}
int in_egroup_p(gid_t grp)
{
int retval = 1;
- if (grp != current->egid)
- retval = supplemental_group_member(grp);
+ if (grp != current->egid) {
+ get_group_info(current->group_info);
+ retval = groups_search(current->group_info, grp);
+ put_group_info(current->group_info);
+ }
return retval;
}
.mode = 0644,
.proc_handler = &proc_dointvec,
},
+ {
+ .ctl_name = FS_AIO_NR,
+ .procname = "aio-nr",
+ .data = &aio_nr,
+ .maxlen = sizeof(aio_nr),
+ .mode = 0444,
+ .proc_handler = &proc_dointvec,
+ },
+ {
+ .ctl_name = FS_AIO_MAX_NR,
+ .procname = "aio-max-nr",
+ .data = &aio_max_nr,
+ .maxlen = sizeof(aio_max_nr),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
{ .ctl_name = 0 }
};
return sys_setfsgid((gid_t)gid);
}
+static int groups16_to_user(old_gid_t __user *grouplist,
+ struct group_info *group_info)
+{
+ int i;
+ old_gid_t group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ group = (old_gid_t)GROUP_AT(group_info, i);
+ if (put_user(group, grouplist+i))
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int groups16_from_user(struct group_info *group_info,
+ old_gid_t __user *grouplist)
+{
+ int i;
+ old_gid_t group;
+
+ for (i = 0; i < group_info->ngroups; i++) {
+ if (get_user(group, grouplist+i))
+ return -EFAULT;
+ GROUP_AT(group_info, i) = (gid_t)group;
+ }
+
+ return 0;
+}
+
asmlinkage long sys_getgroups16(int gidsetsize, old_gid_t __user *grouplist)
{
- old_gid_t groups[NGROUPS];
- int i,j;
+ int i = 0;
if (gidsetsize < 0)
return -EINVAL;
- i = current->ngroups;
+
+ get_group_info(current->group_info);
+ i = current->group_info->ngroups;
if (gidsetsize) {
- if (i > gidsetsize)
- return -EINVAL;
- for(j=0;j<i;j++)
- groups[j] = current->groups[j];
- if (copy_to_user(grouplist, groups, sizeof(old_gid_t)*i))
- return -EFAULT;
+ if (i > gidsetsize) {
+ i = -EINVAL;
+ goto out;
+ }
+ if (groups16_to_user(grouplist, current->group_info)) {
+ i = -EFAULT;
+ goto out;
+ }
}
+out:
+ put_group_info(current->group_info);
return i;
}
asmlinkage long sys_setgroups16(int gidsetsize, old_gid_t __user *grouplist)
{
- old_gid_t groups[NGROUPS];
- gid_t new_groups[NGROUPS];
- int i;
+ struct group_info *group_info;
+ int retval;
if (!capable(CAP_SETGID))
return -EPERM;
- if ((unsigned) gidsetsize > NGROUPS)
+ if ((unsigned)gidsetsize > NGROUPS_MAX)
return -EINVAL;
- if (copy_from_user(groups, grouplist, gidsetsize * sizeof(old_gid_t)))
- return -EFAULT;
- for (i = 0 ; i < gidsetsize ; i++)
- new_groups[i] = (gid_t)groups[i];
- i = security_task_setgroups(gidsetsize, new_groups);
- if (i)
- return i;
- memcpy(current->groups, new_groups, gidsetsize * sizeof(gid_t));
- current->ngroups = gidsetsize;
- return 0;
+
+ group_info = groups_alloc(gidsetsize);
+ if (!group_info)
+ return -ENOMEM;
+ retval = groups16_from_user(group_info, grouplist);
+ if (retval) {
+ put_group_info(group_info);
+ return retval;
+ }
+
+ retval = set_current_groups(group_info);
+ put_group_info(group_info);
+
+ return retval;
}
asmlinkage long sys_getuid16(void)
#include <linux/completion.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
+#include <linux/kthread.h>
/*
* The per-CPU workqueue.
struct workqueue_struct *wq;
task_t *thread;
- struct completion exit;
} ____cacheline_aligned;
struct cpu_workqueue_struct cpu_wq[NR_CPUS];
};
+/* Preempt must be disabled. */
+static void __queue_work(struct cpu_workqueue_struct *cwq,
+ struct work_struct *work)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&cwq->lock, flags);
+ work->wq_data = cwq;
+ list_add_tail(&work->entry, &cwq->worklist);
+ cwq->insert_sequence++;
+ wake_up(&cwq->more_work);
+ spin_unlock_irqrestore(&cwq->lock, flags);
+}
+
/*
* Queue work on a workqueue. Return non-zero if it was successfully
* added.
*/
int queue_work(struct workqueue_struct *wq, struct work_struct *work)
{
- unsigned long flags;
int ret = 0, cpu = get_cpu();
- struct cpu_workqueue_struct *cwq = wq->cpu_wq + cpu;
if (!test_and_set_bit(0, &work->pending)) {
BUG_ON(!list_empty(&work->entry));
- work->wq_data = cwq;
-
- spin_lock_irqsave(&cwq->lock, flags);
- list_add_tail(&work->entry, &cwq->worklist);
- cwq->insert_sequence++;
- wake_up(&cwq->more_work);
- spin_unlock_irqrestore(&cwq->lock, flags);
+ __queue_work(wq->cpu_wq + cpu, work);
ret = 1;
}
put_cpu();
static void delayed_work_timer_fn(unsigned long __data)
{
struct work_struct *work = (struct work_struct *)__data;
- struct cpu_workqueue_struct *cwq = work->wq_data;
- unsigned long flags;
+ struct workqueue_struct *wq = work->wq_data;
- /*
- * Do the wakeup within the spinlock, so that flushing
- * can be done in a guaranteed way.
- */
- spin_lock_irqsave(&cwq->lock, flags);
- list_add_tail(&work->entry, &cwq->worklist);
- cwq->insert_sequence++;
- wake_up(&cwq->more_work);
- spin_unlock_irqrestore(&cwq->lock, flags);
+ __queue_work(wq->cpu_wq + smp_processor_id(), work);
}
int queue_delayed_work(struct workqueue_struct *wq,
struct work_struct *work, unsigned long delay)
{
- int ret = 0, cpu = get_cpu();
+ int ret = 0;
struct timer_list *timer = &work->timer;
- struct cpu_workqueue_struct *cwq = wq->cpu_wq + cpu;
if (!test_and_set_bit(0, &work->pending)) {
BUG_ON(timer_pending(timer));
BUG_ON(!list_empty(&work->entry));
- work->wq_data = cwq;
+ /* This stores wq for the moment, for the timer_fn */
+ work->wq_data = wq;
timer->expires = jiffies + delay;
timer->data = (unsigned long)work;
timer->function = delayed_work_timer_fn;
add_timer(timer);
ret = 1;
}
- put_cpu();
return ret;
}
spin_unlock_irqrestore(&cwq->lock, flags);
}
-typedef struct startup_s {
- struct cpu_workqueue_struct *cwq;
- struct completion done;
- const char *name;
-} startup_t;
-
-static int worker_thread(void *__startup)
+static int worker_thread(void *__cwq)
{
- startup_t *startup = __startup;
- struct cpu_workqueue_struct *cwq = startup->cwq;
+ struct cpu_workqueue_struct *cwq = __cwq;
int cpu = cwq - cwq->wq->cpu_wq;
DECLARE_WAITQUEUE(wait, current);
struct k_sigaction sa;
+ sigset_t blocked;
- daemonize("%s/%d", startup->name, cpu);
current->flags |= PF_IOTHREAD;
- cwq->thread = current;
set_user_nice(current, -10);
- set_cpus_allowed(current, cpumask_of_cpu(cpu));
+ BUG_ON(smp_processor_id() != cpu);
- complete(&startup->done);
+ /* Block and flush all signals */
+ sigfillset(&blocked);
+ sigprocmask(SIG_BLOCK, &blocked, NULL);
+ flush_signals(current);
/* SIG_IGN makes children autoreap: see do_notify_parent(). */
sa.sa.sa_handler = SIG_IGN;
siginitset(&sa.sa.sa_mask, sigmask(SIGCHLD));
do_sigaction(SIGCHLD, &sa, (struct k_sigaction *)0);
- for (;;) {
+ while (!kthread_should_stop()) {
set_task_state(current, TASK_INTERRUPTIBLE);
add_wait_queue(&cwq->more_work, &wait);
- if (!cwq->thread)
- break;
if (list_empty(&cwq->worklist))
schedule();
else
if (!list_empty(&cwq->worklist))
run_workqueue(cwq);
}
- remove_wait_queue(&cwq->more_work, &wait);
- complete(&cwq->exit);
-
return 0;
}
const char *name,
int cpu)
{
- startup_t startup;
struct cpu_workqueue_struct *cwq = wq->cpu_wq + cpu;
- int ret;
+ struct task_struct *p;
spin_lock_init(&cwq->lock);
cwq->wq = wq;
INIT_LIST_HEAD(&cwq->worklist);
init_waitqueue_head(&cwq->more_work);
init_waitqueue_head(&cwq->work_done);
- init_completion(&cwq->exit);
-
- init_completion(&startup.done);
- startup.cwq = cwq;
- startup.name = name;
- ret = kernel_thread(worker_thread, &startup, CLONE_FS | CLONE_FILES);
- if (ret >= 0) {
- wait_for_completion(&startup.done);
- BUG_ON(!cwq->thread);
- }
- return ret;
+
+ p = kthread_create(worker_thread, cwq, "%s/%d", name, cpu);
+ if (IS_ERR(p))
+ return PTR_ERR(p);
+ cwq->thread = p;
+ kthread_bind(p, cpu);
+ return 0;
}
struct workqueue_struct *create_workqueue(const char *name)
continue;
if (create_workqueue_thread(wq, name, cpu) < 0)
destroy = 1;
+ else
+ wake_up_process(wq->cpu_wq[cpu].thread);
}
/*
* Was there any error during startup? If yes then clean up:
struct cpu_workqueue_struct *cwq;
cwq = wq->cpu_wq + cpu;
- if (cwq->thread) {
- /* Tell thread to exit and wait for it. */
- cwq->thread = NULL;
- wake_up(&cwq->more_work);
-
- wait_for_completion(&cwq->exit);
- }
+ if (cwq->thread)
+ kthread_stop(cwq->thread);
}
void destroy_workqueue(struct workqueue_struct *wq)
flush_workqueue(keventd_wq);
}
+int keventd_up(void)
+{
+ return keventd_wq != NULL;
+}
+
int current_is_keventd(void)
{
struct cpu_workqueue_struct *cwq;
#define unhex(c) (isdigit(c) ? (c - '0') : (toupper(c) - 'A' + 10))
/**
- * bitmap_snprintf - convert bitmap to an ASCII hex string.
+ * bitmap_scnprintf - convert bitmap to an ASCII hex string.
* @buf: byte buffer into which string is placed
* @buflen: reserved size of @buf, in bytes
* @maskp: pointer to bitmap to convert
* Exactly @nmaskbits bits are displayed. Hex digits are grouped into
* comma-separated sets of eight digits per set.
*/
-int bitmap_snprintf(char *buf, unsigned int buflen,
+int bitmap_scnprintf(char *buf, unsigned int buflen,
const unsigned long *maskp, int nmaskbits)
{
int i, word, bit, len = 0;
word = i / BITS_PER_LONG;
bit = i % BITS_PER_LONG;
val = (maskp[word] >> bit) & chunkmask;
- len += snprintf(buf+len, buflen-len, "%s%0*lx", sep,
+ len += scnprintf(buf+len, buflen-len, "%s%0*lx", sep,
(chunksz+3)/4, val);
chunksz = CHUNKSZ;
sep = ",";
}
return len;
}
-EXPORT_SYMBOL(bitmap_snprintf);
+EXPORT_SYMBOL(bitmap_scnprintf);
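The output format bitmap_scnprintf() produces can be illustrated with a minimal userspace sketch (mask_to_hex is a hypothetical name, and it handles only a 64-bit mask rather than an arbitrary bitmap):

```c
#include <stdio.h>

/* Emit the mask as comma-separated groups of eight hex digits, most
 * significant group first. Assumes buflen can hold the 17 output
 * characters plus the terminator. */
static int mask_to_hex(char *buf, size_t buflen, unsigned long long mask)
{
	int len = 0;

	len += snprintf(buf + len, buflen - len, "%08llx,",
			(unsigned long long)(mask >> 32));
	len += snprintf(buf + len, buflen - len, "%08llx",
			mask & 0xffffffffULL);
	return len;
}
```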
/**
* bitmap_parse - convert an ASCII hex string into a bitmap.
-/*
+/*
* Oct 15, 2000 Matt Domsch <Matt_Domsch@dell.com>
* Nicer crc32 functions/docs submitted by linux@horizon.com. Thanks!
+ * Code was from the public domain, copyright abandoned. Code was
+ * subsequently included in the kernel, thus was re-licensed under the
+ * GNU GPL v2.
*
* Oct 12, 2000 Matt Domsch <Matt_Domsch@dell.com>
* Same crc32 function was used in 5 other places in the kernel.
* drivers/net/smc9194.c uses seed ~0, doesn't xor with ~0.
* fs/jffs2 uses seed 0, doesn't xor with ~0.
* fs/partitions/efi.c uses seed ~0, xor's with ~0.
- *
+ *
+ * This source code is licensed under the GNU General Public License,
+ * Version 2. See the file COPYING for more details.
*/
#include <linux/crc32.h>
#define attribute(x)
#endif
-/*
- * This code is in the public domain; copyright abandoned.
- * Liability for non-performance of this code is limited to the amount
- * you paid for it. Since it is distributed for free, your refund will
- * be very very small. If it breaks, you get to keep both pieces.
- */
MODULE_AUTHOR("Matt Domsch <Matt_Domsch@dell.com>");
MODULE_DESCRIPTION("Ethernet CRC32 calculations");
-MODULE_LICENSE("GPL and additional rights");
+MODULE_LICENSE("GPL");
#if CRC_LE_BITS == 1
/*
* the same way on decoding, it doesn't make a difference.
*/
-#if UNITTEST
+#ifdef UNITTEST
#include <stdlib.h>
#include <stdio.h>
* to the rest of the functions. The structure is defined in the
* header.
- * int idr_pre_get(struct idr *idp)
+ * int idr_pre_get(struct idr *idp, unsigned gfp_mask)
* This function should be called prior to locking and calling the
* following function. It pre allocates enough memory to satisfy the
- * worst possible allocation. It can sleep, so must not be called
- * with any spinlocks held. If the system is REALLY out of memory
- * this function returns 0, other wise 1.
+ * worst possible allocation. Unless gfp_mask is GFP_ATOMIC, it can
+ * sleep, so must not be called with any spinlocks held. If the system is
+ * REALLY out of memory this function returns 0, otherwise 1.
* int idr_get_new(struct idr *idp, void *ptr);
spin_unlock(&idp->lock);
}
-int idr_pre_get(struct idr *idp)
+int idr_pre_get(struct idr *idp, unsigned gfp_mask)
{
while (idp->id_free_cnt < idp->layers + 1) {
struct idr_layer *new;
- new = kmem_cache_alloc(idr_layer_cache, GFP_KERNEL);
+ new = kmem_cache_alloc(idr_layer_cache, gfp_mask);
if(new == NULL)
return (0);
free_layer(idp, new);
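The idr_pre_get() contract documented above follows a common kernel pattern: top up a free-list before taking a lock, so the allocation step under the lock cannot fail for lack of memory. A userspace sketch (all names hypothetical; the caller-chosen allocation mode is what the new gfp_mask argument parameterizes):

```c
#include <stdlib.h>

#define RESERVE 4

struct pool {
	void *free[RESERVE];
	int free_cnt;
};

/* Refill the reserve. Returns 0 only when the system is really out of
 * memory, otherwise 1; callers invoke this before locking. */
static int pool_pre_get(struct pool *p)
{
	while (p->free_cnt < RESERVE) {
		void *new = malloc(64);

		if (!new)
			return 0;
		p->free[p->free_cnt++] = new;
	}
	return 1;
}
```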
/*
* Fri Jul 13 2001 Crutcher Dunnavant <crutcher+kernel@datastacks.com>
* - changed to provide snprintf and vsnprintf functions
+ * So Feb 1 16:51:32 CET 2004 Juergen Quade <quade@hsnr.de>
+ * - scnprintf and vscnprintf
*/
#include <stdarg.h>
}
/**
-* vsnprintf - Format a string and place it in a buffer
-* @buf: The buffer to place the result into
-* @size: The size of the buffer, including the trailing null space
-* @fmt: The format string to use
-* @args: Arguments for the format string
-*
-* The return value is the number of characters which would be
-* generated for the given input, excluding the trailing null,
-* as per ISO C99. If the return is greater than or equal to
-* @size, the resulting string is truncated.
-*
-* Call this function if you are already dealing with a va_list.
-* You probably want snprintf instead.
+ * vsnprintf - Format a string and place it in a buffer
+ * @buf: The buffer to place the result into
+ * @size: The size of the buffer, including the trailing null space
+ * @fmt: The format string to use
+ * @args: Arguments for the format string
+ *
+ * The return value is the number of characters which would
+ * be generated for the given input, excluding the trailing
+ * '\0', as per ISO C99. If you want to have the exact
+ * number of characters written into @buf as return value
+ * (not including the trailing '\0'), use vscnprintf. If the
+ * return is greater than or equal to @size, the resulting
+ * string is truncated.
+ *
+ * Call this function if you are already dealing with a va_list.
+ * You probably want snprintf instead.
*/
int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
{
EXPORT_SYMBOL(vsnprintf);
/**
+ * vscnprintf - Format a string and place it in a buffer
+ * @buf: The buffer to place the result into
+ * @size: The size of the buffer, including the trailing null space
+ * @fmt: The format string to use
+ * @args: Arguments for the format string
+ *
+ * The return value is the number of characters which have been written into
+ * @buf, not including the trailing '\0'. If @size is <= 0 the function
+ * returns 0.
+ *
+ * Call this function if you are already dealing with a va_list.
+ * You probably want scnprintf instead.
+ */
+int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
+{
+ int i;
+
+ i = vsnprintf(buf, size, fmt, args);
+ return (i >= size) ? (size - 1) : i;
+}
+
+EXPORT_SYMBOL(vscnprintf);
+
+/**
* snprintf - Format a string and place it in a buffer
* @buf: The buffer to place the result into
* @size: The size of the buffer, including the trailing null space
EXPORT_SYMBOL(snprintf);
/**
+ * scnprintf - Format a string and place it in a buffer
+ * @buf: The buffer to place the result into
+ * @size: The size of the buffer, including the trailing null space
+ * @fmt: The format string to use
+ * @...: Arguments for the format string
+ *
+ * The return value is the number of characters written into @buf not including
+ * the trailing '\0'. If @size is <= 0 the function returns 0. Unlike
+ * snprintf, the return value never exceeds @size - 1, even when the
+ * resulting string is truncated.
+ */
+
+int scnprintf(char * buf, size_t size, const char *fmt, ...)
+{
+ va_list args;
+ int i;
+
+ va_start(args, fmt);
+ i = vsnprintf(buf, size, fmt, args);
+ va_end(args);
+ return (i >= size) ? (size - 1) : i;
+}
+EXPORT_SYMBOL(scnprintf);
+
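The difference between the snprintf and scnprintf return conventions introduced here can be shown with a small userspace analogue (my_scnprintf is a hypothetical name; assumes size > 0):

```c
#include <stdarg.h>
#include <stdio.h>

/* Unlike snprintf(), which reports how many characters *would* have been
 * generated, the return value here is clamped to what actually landed in
 * buf, excluding the trailing '\0'. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	return (i >= (int)size) ? (int)size - 1 : i;
}
```

This is why callers that accumulate output with `len += snprintf(...)` can overrun once truncation starts, while the scnprintf variant stays bounded.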
+/**
* vsprintf - Format a string and place it in a buffer
* @buf: The buffer to place the result into
* @fmt: The format string to use
* @args: Arguments for the format string
*
+ * The function returns the number of characters written
+ * into @buf. Use vsnprintf or vscnprintf in order to avoid
+ * buffer overflows.
+ *
* Call this function if you are already dealing with a va_list.
* You probably want sprintf instead.
*/
* @buf: The buffer to place the result into
* @fmt: The format string to use
* @...: Arguments for the format string
+ *
+ * The function returns the number of characters written
+ * into @buf. Use snprintf or scnprintf in order to avoid
+ * buffer overflows.
*/
int sprintf(char * buf, const char *fmt, ...)
{
if (end > bdata->node_low_pfn)
BUG();
for (i = sidx; i < eidx; i++)
- if (test_and_set_bit(i, bdata->node_bootmem_map))
+ if (test_and_set_bit(i, bdata->node_bootmem_map)) {
+#ifdef CONFIG_DEBUG_BOOTMEM
printk("hm, page %08lx reserved twice.\n", i*PAGE_SIZE);
+#endif
+ }
}
static void __init free_bootmem_core(bootmem_data_t *bdata, unsigned long addr, unsigned long size)
if ((long)zap_bytes > 0)
continue;
if (need_resched()) {
+ int fullmm = tlb_is_full_mm(*tlbp);
tlb_finish_mmu(*tlbp, tlb_start, start);
cond_resched_lock(&mm->page_table_lock);
- *tlbp = tlb_gather_mmu(mm, 0);
+ *tlbp = tlb_gather_mmu(mm, fullmm);
tlb_start_valid = 0;
}
zap_bytes = ZAP_BLOCK_SIZE;
spin_lock(&mm->page_table_lock);
do {
struct page *map;
- while (!(map = follow_page(mm, start, write))) {
+ int lookup_write = write;
+ while (!(map = follow_page(mm, start, lookup_write))) {
spin_unlock(&mm->page_table_lock);
switch (handle_mm_fault(mm,vma,start,write)) {
case VM_FAULT_MINOR:
default:
BUG();
}
+ /*
+ * Now that we have performed a write fault
+ * and surely no longer have a shared page we
+ * shouldn't write, we shouldn't ignore an
+ * unwritable page in the page table if
+ * we are forcing write access.
+ */
+ lookup_write = write && !force;
spin_lock(&mm->page_table_lock);
}
if (pages) {
EXPORT_SYMBOL(remap_page_range);
/*
+ * Do pte_mkwrite, but only if the vma says VM_WRITE. We do this when
+ * servicing faults for write access. In the normal case, do always want
+ * pte_mkwrite. But get_user_pages can cause write faults for mappings
+ * that do not have writing enabled, when used by access_process_vm.
+ */
+static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+ if (likely(vma->vm_flags & VM_WRITE))
+ pte = pte_mkwrite(pte);
+ return pte;
+}
+
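The rule maybe_mkwrite() encodes can be sketched with toy flag values (the bit encodings here are illustrative, not the kernel's real pte or vm_flags layouts):

```c
/* The write bit goes into the pte only when the owning vma actually has
 * VM_WRITE; get_user_pages() write faults on read-only mappings must not
 * leave a writable pte behind. */
#define TOY_VM_WRITE	0x2u
#define TOY_PTE_WRITE	0x1u

static unsigned int toy_maybe_mkwrite(unsigned int pte, unsigned int vm_flags)
{
	if (vm_flags & TOY_VM_WRITE)
		pte |= TOY_PTE_WRITE;
	return pte;
}
```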
+/*
* We hold the mm semaphore for reading and vma->vm_mm->page_table_lock
*/
static inline void break_cow(struct vm_area_struct * vma, struct page * new_page, unsigned long address,
pte_t entry;
flush_cache_page(vma, address);
- entry = pte_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)));
+ entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)),
+ vma);
ptep_establish(vma, address, page_table, entry);
update_mmu_cache(vma, address, entry);
}
unlock_page(old_page);
if (reuse) {
flush_cache_page(vma, address);
- entry = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte)));
+ entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)),
+ vma);
ptep_establish(vma, address, page_table, entry);
update_mmu_cache(vma, address, entry);
pte_unmap(page_table);
mark_page_accessed(page);
pte_chain = pte_chain_alloc(GFP_KERNEL);
if (!pte_chain) {
- ret = -ENOMEM;
+ ret = VM_FAULT_OOM;
goto out;
}
lock_page(page);
mm->rss++;
pte = mk_pte(page, vma->vm_page_prot);
if (write_access && can_share_swap_page(page))
- pte = pte_mkdirty(pte_mkwrite(pte));
+ pte = maybe_mkwrite(pte_mkdirty(pte), vma);
unlock_page(page);
flush_icache_page(vma, page);
goto out;
}
mm->rss++;
- entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
+ entry = maybe_mkwrite(pte_mkdirty(mk_pte(page,
+ vma->vm_page_prot)),
+ vma);
lru_cache_add_active(page);
mark_page_accessed(page);
}
flush_icache_page(vma, new_page);
entry = mk_pte(new_page, vma->vm_page_prot);
if (write_access)
- entry = pte_mkwrite(pte_mkdirty(entry));
+ entry = maybe_mkwrite(pte_mkdirty(entry), vma);
set_pte(page_table, entry);
pte_chain = page_add_rmap(new_page, page_table, pte_chain);
pte_unmap(page_table);
printk("\n");
for (cpu = 0; cpu < NR_CPUS; ++cpu) {
- struct per_cpu_pageset *pageset = zone->pageset + cpu;
+ struct per_cpu_pageset *pageset;
+
+ if (!cpu_possible(cpu))
+ continue;
+
+ pageset = zone->pageset + cpu;
+
for (temperature = 0; temperature < 2; temperature++)
printk("cpu %d %s: low %d, high %d, batch %d\n",
cpu,
#endif /* CONFIG_PROC_FS */
-static void __devinit init_page_alloc_cpu(int cpu)
-{
- struct page_state *ps = &per_cpu(page_states, cpu);
- memset(ps, 0, sizeof(*ps));
-}
-
-static int __devinit page_alloc_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
-{
- int cpu = (unsigned long)hcpu;
- switch(action) {
- case CPU_UP_PREPARE:
- init_page_alloc_cpu(cpu);
- break;
- default:
- break;
- }
- return NOTIFY_OK;
-}
-
-static struct notifier_block __devinitdata page_alloc_nb = {
- .notifier_call = page_alloc_cpu_notify,
-};
void __init page_alloc_init(void)
{
- init_page_alloc_cpu(smp_processor_id());
- register_cpu_notifier(&page_alloc_nb);
}
/*
pte_addr_t pte_paddr = ptep_to_paddr(ptep);
struct pte_chain *cur_pte_chain;
- if (!pfn_valid(page_to_pfn(page)) || PageReserved(page))
+ if (PageReserved(page))
return pte_chain;
pte_chain_lock(page);
#define RED_ACTIVE 0x170FC2A5UL /* when obj is active */
/* ...and for poisoning */
-#define POISON_BEFORE 0x5a /* for use-uninitialised poisoning */
-#define POISON_AFTER 0x6b /* for use-after-free poisoning */
+#define POISON_INUSE 0x5a /* for use-uninitialised poisoning */
+#define POISON_FREE 0x6b /* for use-after-free poisoning */
#define POISON_END 0xa5 /* end-byte of poisoning */
/* memory layout of objects:
*(unsigned char *)(addr+size-1) = POISON_END;
}
-static void *scan_poisoned_obj(unsigned char* addr, unsigned int size)
+static void dump_line(char *data, int offset, int limit)
{
- unsigned char *end;
-
- end = addr + size - 1;
+ int i;
+ printk(KERN_ERR "%03x:", offset);
+ for (i=0;i<limit;i++) {
+ printk(" %02x", (unsigned char)data[offset+i]);
+ }
+ printk("\n");
+}
+#endif
+
+static void print_objinfo(kmem_cache_t *cachep, void *objp, int lines)
+{
+#if DEBUG
+ int i, size;
+ char *realobj;
- for (; addr < end; addr++) {
- if (*addr != POISON_BEFORE && *addr != POISON_AFTER)
- return addr;
+ if (cachep->flags & SLAB_RED_ZONE) {
+ printk(KERN_ERR "Redzone: 0x%lx/0x%lx.\n",
+ *dbg_redzone1(cachep, objp),
+ *dbg_redzone2(cachep, objp));
}
- if (*addr != POISON_END)
- return addr;
- return NULL;
+
+ if (cachep->flags & SLAB_STORE_USER) {
+ printk(KERN_ERR "Last user: [<%p>]", *dbg_userword(cachep, objp));
+ print_symbol("(%s)", (unsigned long)*dbg_userword(cachep, objp));
+ printk("\n");
+ }
+ realobj = (char*)objp+obj_dbghead(cachep);
+ size = cachep->objsize;
+ for (i=0; i<size && lines;i+=16, lines--) {
+ int limit;
+ limit = 16;
+ if (i+limit > size)
+ limit = size-i;
+ dump_line(realobj, i, limit);
+ }
+#endif
}
+#if DEBUG
+
static void check_poison_obj(kmem_cache_t *cachep, void *objp)
{
- void *end;
- void *realobj;
- int size = obj_reallen(cachep);
-
- realobj = objp+obj_dbghead(cachep);
-
- end = scan_poisoned_obj(realobj, size);
- if (end) {
- int s;
- printk(KERN_ERR "Slab corruption: start=%p, expend=%p, "
- "problemat=%p\n", realobj, realobj+size-1, end);
- if (cachep->flags & SLAB_STORE_USER) {
- printk(KERN_ERR "Last user: [<%p>]", *dbg_userword(cachep, objp));
- print_symbol("(%s)", (unsigned long)*dbg_userword(cachep, objp));
- printk("\n");
+ char *realobj;
+ int size, i;
+ int lines = 0;
+
+ realobj = (char*)objp+obj_dbghead(cachep);
+ size = obj_reallen(cachep);
+
+ for (i=0;i<size;i++) {
+ char exp = POISON_FREE;
+ if (i == size-1)
+ exp = POISON_END;
+ if (realobj[i] != exp) {
+ int limit;
+ /* Mismatch ! */
+ /* Print header */
+ if (lines == 0) {
+ printk(KERN_ERR "Slab corruption: start=%p, len=%d\n",
+ realobj, size);
+ print_objinfo(cachep, objp, 0);
+ }
+ /* Hexdump the affected line */
+ i = (i/16)*16;
+ limit = 16;
+ if (i+limit > size)
+ limit = size-i;
+ dump_line(realobj, i, limit);
+ i += 16;
+ lines++;
+ /* Limit to 5 lines */
+ if (lines > 5)
+ break;
}
- printk(KERN_ERR "Data: ");
- for (s = 0; s < size; s++) {
- if (((char*)realobj)[s] == POISON_BEFORE)
- printk(".");
- else if (((char*)realobj)[s] == POISON_AFTER)
- printk("*");
- else
- printk("%02X ", ((unsigned char*)realobj)[s]);
+ }
+ if (lines != 0) {
+ /* Print some data about the neighboring objects, if they
+ * exist:
+ */
+ struct slab *slabp = GET_PAGE_SLAB(virt_to_page(objp));
+ int objnr;
+
+ objnr = (objp-slabp->s_mem)/cachep->objsize;
+ if (objnr) {
+ objp = slabp->s_mem+(objnr-1)*cachep->objsize;
+ realobj = (char*)objp+obj_dbghead(cachep);
+ printk(KERN_ERR "Prev obj: start=%p, len=%d\n",
+ realobj, size);
+ print_objinfo(cachep, objp, 2);
}
- printk("\n");
- printk(KERN_ERR "Next: ");
- for (; s < size + 32; s++) {
- if (((char*)realobj)[s] == POISON_BEFORE)
- printk(".");
- else if (((char*)realobj)[s] == POISON_AFTER)
- printk("*");
- else
- printk("%02X ", ((unsigned char*)realobj)[s]);
+ if (objnr+1 < cachep->num) {
+ objp = slabp->s_mem+(objnr+1)*cachep->objsize;
+ realobj = (char*)objp+obj_dbghead(cachep);
+ printk(KERN_ERR "Next obj: start=%p, len=%d\n",
+ realobj, size);
+ print_objinfo(cachep, objp, 2);
}
- printk("\n");
- slab_error(cachep, "object was modified after freeing");
}
}
#endif
unsigned long flags, void (*ctor)(void*, kmem_cache_t *, unsigned long),
void (*dtor)(void*, kmem_cache_t *, unsigned long))
{
- const char *func_nm = KERN_ERR "kmem_create: ";
size_t left_over, align, slab_size;
kmem_cache_t *cachep = NULL;
(size < BYTES_PER_WORD) ||
(size > (1<<MAX_OBJ_ORDER)*PAGE_SIZE) ||
(dtor && !ctor) ||
- (offset < 0 || offset > size))
+ (offset < 0 || offset > size)) {
+ printk(KERN_ERR "%s: Early error in slab %s\n",
+ __FUNCTION__, name);
BUG();
+ }
#if DEBUG
WARN_ON(strchr(name, ' ')); /* It confuses parsers */
if ((flags & SLAB_DEBUG_INITIAL) && !ctor) {
/* No constructor, but inital state check requested */
- printk("%sNo con, but init state check requested - %s\n", func_nm, name);
+ printk(KERN_ERR "%s: No con, but init state check "
+ "requested - %s\n", __FUNCTION__, name);
flags &= ~SLAB_DEBUG_INITIAL;
}
if (size & (BYTES_PER_WORD-1)) {
size += (BYTES_PER_WORD-1);
size &= ~(BYTES_PER_WORD-1);
- printk("%sForcing size word alignment - %s\n", func_nm, name);
}
#if DEBUG
#if DEBUG
/* need to poison the objs? */
if (cachep->flags & SLAB_POISON)
- poison_obj(cachep, objp, POISON_BEFORE);
+ poison_obj(cachep, objp, POISON_FREE);
if (cachep->flags & SLAB_STORE_USER)
*dbg_userword(cachep, objp) = NULL;
if (cachep->flags & SLAB_POISON) {
#ifdef CONFIG_DEBUG_PAGEALLOC
if ((cachep->objsize % PAGE_SIZE) == 0 && OFF_SLAB(cachep)) {
- store_stackinfo(cachep, objp, POISON_AFTER);
+ store_stackinfo(cachep, objp, (unsigned long)caller);
kernel_map_pages(virt_to_page(objp), cachep->objsize/PAGE_SIZE, 0);
} else {
- poison_obj(cachep, objp, POISON_AFTER);
+ poison_obj(cachep, objp, POISON_FREE);
}
#else
- poison_obj(cachep, objp, POISON_AFTER);
+ poison_obj(cachep, objp, POISON_FREE);
#endif
}
#endif
#else
check_poison_obj(cachep, objp);
#endif
- poison_obj(cachep, objp, POISON_BEFORE);
+ poison_obj(cachep, objp, POISON_INUSE);
}
if (cachep->flags & SLAB_STORE_USER)
*dbg_userword(cachep, objp) = caller;
kernel_map_pages(virt_to_page(objp),
c->objsize/PAGE_SIZE, 1);
- if (c->flags & SLAB_RED_ZONE)
- printk("redzone: 0x%lx/0x%lx.\n",
- *dbg_redzone1(c, objp),
- *dbg_redzone2(c, objp));
-
- if (c->flags & SLAB_STORE_USER)
- printk("Last user: %p.\n",
- *dbg_userword(c, objp));
+ print_objinfo(c, objp, 2);
}
spin_unlock_irqrestore(&c->spinlock, flags);
goto out_put_dev;
out_free_newdev:
- kfree(new_dev);
+ free_netdev(new_dev);
out_unlock:
rtnl_unlock();
to work, choose Y.
To compile this driver as a module, choose M here: the module will
- be called af_packet. If you use modprobe or kmod, you may also
- want to add "alias net-pf-17 af_packet" to /etc/modules.conf.
+ be called af_packet.
If unsure, say Y.
want to say Y here.
To compile this driver as a module, choose M here: the module will be
- called unix. If you try building this as a module and you have
- said Y to "Kernel module loader support" above, be sure to add
- 'alias net-pf-1 unix' to your /etc/modules.conf file. Note that
- several important services won't work correctly if you say M here
- and then neglect to load the module.
+ called unix. Note that several important services won't work
+ correctly if you say M here and then neglect to load the module.
Say Y unless you know what you are doing.
in the kernel source.
To compile this protocol support as a module, choose M here: the
- module will be called ipv6. If you try building this as a module
- and you have said Y to "Kernel module loader support" above,
- be sure to add 'alias net-pf-10 ipv6' to your /etc/modules.conf file.
+ module will be called ipv6.
It is safe to say N here for now.
ddp->deh_dport = usat->sat_port;
ddp->deh_sport = at->src_port;
- SOCK_DEBUG(sk, "SK %p: Copy user data (%d bytes).\n", sk, len);
+ SOCK_DEBUG(sk, "SK %p: Copy user data (%Zd bytes).\n", sk, len);
err = memcpy_fromiovec(skb_put(skb, len), msg->msg_iov, len);
if (err) {
kfree_skb(skb);
/* else queued/sent above in the aarp queue */
}
- SOCK_DEBUG(sk, "SK %p: Done write (%d).\n", sk, len);
+ SOCK_DEBUG(sk, "SK %p: Done write (%Zd).\n", sk, len);
return len;
}
!clip_vcc || clip_vcc->encap ? "LLC" : "NULL",
(jiffies-(clip_vcc ? clip_vcc->last_use : entry->neigh->used))/HZ);
- off = snprintf(buf, sizeof(buf) - 1, "%d.%d.%d.%d", NIPQUAD(entry->ip));
+ off = scnprintf(buf, sizeof(buf) - 1, "%d.%d.%d.%d", NIPQUAD(entry->ip));
while (off < 16)
buf[off++] = ' ';
buf[off] = '\0';
return -ENOMEM;
snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i);
if (register_netdev(dev_lec[i])) {
- kfree(dev_lec[i]);
+ free_netdev(dev_lec[i]);
return -EINVAL;
}
__bnep_unlink_session(s);
up_write(&bnep_session_sem);
- kfree(dev);
+ free_netdev(dev);
return 0;
}
failed:
up_write(&bnep_session_sem);
- kfree(dev);
+ free_netdev(dev);
return err;
}
lock_sock(sk);
while (len) {
- size_t size = min(len, d->mtu);
+ size_t size = min_t(size_t, len, d->mtu);
skb = sock_alloc_send_skb(sk, size + RFCOMM_SKB_RESERVE,
msg->msg_flags & MSG_DONTWAIT, &err);
#endif
skb->h.raw = skb->nh.raw = skb->data;
+ skb->mac_len = skb->nh.raw - skb->mac.raw;
pt_prev = NULL;
rcu_read_lock();
{
struct ethtool_ringparam ringparam;
- if (!dev->ethtool_ops->get_ringparam)
+ if (!dev->ethtool_ops->set_ringparam)
return -EOPNOTSUPP;
if (copy_from_user(&ringparam, useraddr, sizeof(ringparam)))
struct sock_filter *ftest;
int pc;
- if ((unsigned int)flen >= (~0U / sizeof(struct sock_filter)))
+ if (((unsigned int)flen >= (~0U / sizeof(struct sock_filter))) || flen == 0)
return -EINVAL;
/* check the filter code now */
if (!tbl->kmem_cachep)
tbl->kmem_cachep = kmem_cache_create(tbl->id,
- (tbl->entry_size +
- 15) & ~15,
+ tbl->entry_size,
0, SLAB_HWCACHE_ALIGN,
NULL, NULL);
tbl->lock = RW_LOCK_UNLOCKED;
* Fix refcount off by one if first packet fails, potential null deref,
* memleak 030710- KJP
*
+ * Fixed unaligned access on IA-64: Grant Grundler <grundler@parisc-linux.org>
+ *
* See Documentation/networking/pktgen.txt for how to use this.
*/
#define cycles() ((u32)get_cycles())
-#define VERSION "pktgen version 1.31"
+#define VERSION "pktgen version 1.32"
static char version[] __initdata =
"pktgen.c: v1.3: Packet Generator for packet performance testing.\n";
struct pktgen_hdr {
__u32 pgh_magic;
__u32 seq_num;
- struct timeval timestamp;
+ __u32 tv_sec;
+ __u32 tv_usec;
};
static int cpu_speed;
/* Stamp the time, and sequence number, convert them to network byte order */
if (pgh) {
+ struct timeval timestamp;
+
pgh->pgh_magic = htonl(PKTGEN_MAGIC);
- do_gettimeofday(&(pgh->timestamp));
- pgh->timestamp.tv_usec = htonl(pgh->timestamp.tv_usec);
- pgh->timestamp.tv_sec = htonl(pgh->timestamp.tv_sec);
- pgh->seq_num = htonl(info->seq_num);
+ pgh->seq_num = htonl(info->seq_num);
+
+ do_gettimeofday(&timestamp);
+ pgh->tv_sec = htonl(timestamp.tv_sec);
+ pgh->tv_usec = htonl(timestamp.tv_usec);
}
return skb;
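The header change above can be sketched in userspace (toy names; the point is that two explicit 32-bit network-order words, instead of an embedded struct timeval, make the on-wire layout independent of the host's timeval size and alignment, which is what broke on IA-64):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/time.h>

struct toy_pktgen_hdr {
	uint32_t pgh_magic;
	uint32_t seq_num;
	uint32_t tv_sec;
	uint32_t tv_usec;
};

/* Stamp sequence number and timestamp in network byte order. */
static void toy_stamp(struct toy_pktgen_hdr *pgh, const struct timeval *tv,
		      uint32_t seq)
{
	pgh->seq_num = htonl(seq);
	pgh->tv_sec  = htonl((uint32_t)tv->tv_sec);
	pgh->tv_usec = htonl((uint32_t)tv->tv_usec);
}
```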
* whole size thats been asked for (plus 11 bytes of header). If this
* fails, then we try for any size over 16 bytes for SOCK_STREAMS.
*/
-struct sk_buff *dn_alloc_send_skb(struct sock *sk, int *size, int noblock, int *err)
+struct sk_buff *dn_alloc_send_skb(struct sock *sk, size_t *size, int noblock, int *err)
{
int space;
int len;
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
static void arp_solicit(struct neighbour *neigh, struct sk_buff *skb)
{
- u32 saddr;
+ u32 saddr = 0;
u8 *dst_ha = NULL;
struct net_device *dev = neigh->dev;
u32 target = *(u32*)neigh->primary_key;
int probes = atomic_read(&neigh->probes);
+ struct in_device *in_dev = in_dev_get(dev);
+
+ if (!in_dev)
+ return;
- if (skb && inet_addr_type(skb->nh.iph->saddr) == RTN_LOCAL)
+ switch (IN_DEV_ARP_ANNOUNCE(in_dev)) {
+ default:
+ case 0: /* By default announce any local IP */
+ if (skb && inet_addr_type(skb->nh.iph->saddr) == RTN_LOCAL)
+ saddr = skb->nh.iph->saddr;
+ break;
+ case 1: /* Restrict announcements of saddr in same subnet */
+ if (!skb)
+ break;
saddr = skb->nh.iph->saddr;
- else
+ if (inet_addr_type(saddr) == RTN_LOCAL) {
+ /* saddr should be known to target */
+ if (inet_addr_onlink(in_dev, target, saddr))
+ break;
+ }
+ saddr = 0;
+ break;
+ case 2: /* Avoid secondary IPs, get a primary/preferred one */
+ break;
+ }
+
+ if (in_dev)
+ in_dev_put(in_dev);
+ if (!saddr)
saddr = inet_select_addr(dev, target, RT_SCOPE_LINK);
if ((probes -= neigh->parms->ucast_probes) < 0) {
read_unlock_bh(&neigh->lock);
}
+static int arp_ignore(struct in_device *in_dev, struct net_device *dev,
+ u32 sip, u32 tip)
+{
+ int scope;
+
+ switch (IN_DEV_ARP_IGNORE(in_dev)) {
+ case 0: /* Reply, the tip is already validated */
+ return 0;
+ case 1: /* Reply only if tip is configured on the incoming interface */
+ sip = 0;
+ scope = RT_SCOPE_HOST;
+ break;
+ case 2: /*
+ * Reply only if tip is configured on the incoming interface
+ * and is in same subnet as sip
+ */
+ scope = RT_SCOPE_HOST;
+ break;
+ case 3: /* Do not reply for scope host addresses */
+ sip = 0;
+ scope = RT_SCOPE_LINK;
+ dev = NULL;
+ break;
+ case 4: /* Reserved */
+ case 5:
+ case 6:
+ case 7:
+ return 0;
+ case 8: /* Do not reply */
+ return 1;
+ default:
+ return 0;
+ }
+ return !inet_confirm_addr(dev, sip, tip, scope);
+}
+
static int arp_filter(__u32 sip, __u32 tip, struct net_device *dev)
{
struct flowi fl = { .nl_u = { .ip4_u = { .daddr = sip,
/* Special case: IPv4 duplicate address detection packet (RFC2131) */
if (sip == 0) {
if (arp->ar_op == htons(ARPOP_REQUEST) &&
- inet_addr_type(tip) == RTN_LOCAL)
+ inet_addr_type(tip) == RTN_LOCAL &&
+ !arp_ignore(in_dev,dev,sip,tip))
arp_send(ARPOP_REPLY,ETH_P_ARP,tip,dev,tip,sha,dev->dev_addr,dev->dev_addr);
goto out;
}
n = neigh_event_ns(&arp_tbl, sha, &sip, dev);
if (n) {
int dont_send = 0;
- if (IN_DEV_ARPFILTER(in_dev))
+
+ if (!dont_send)
+ dont_send |= arp_ignore(in_dev,dev,sip,tip);
+ if (!dont_send && IN_DEV_ARPFILTER(in_dev))
dont_send |= arp_filter(sip,tip,dev);
if (!dont_send)
arp_send(ARPOP_REPLY,ETH_P_ARP,sip,dev,tip,sha,dev->dev_addr,sha);
goto out;
}
+static u32 confirm_addr_indev(struct in_device *in_dev, u32 dst,
+ u32 local, int scope)
+{
+ int same = 0;
+ u32 addr = 0;
+
+ for_ifa(in_dev) {
+ if (!addr &&
+ (local == ifa->ifa_local || !local) &&
+ ifa->ifa_scope <= scope) {
+ addr = ifa->ifa_local;
+ if (same)
+ break;
+ }
+ if (!same) {
+ same = (!local || inet_ifa_match(local, ifa)) &&
+ (!dst || inet_ifa_match(dst, ifa));
+ if (same && addr) {
+ if (local || !dst)
+ break;
+ /* Is the selected addr in the dst subnet? */
+ if (inet_ifa_match(addr, ifa))
+ break;
+ /* No, then can we use new local src? */
+ if (ifa->ifa_scope <= scope) {
+ addr = ifa->ifa_local;
+ break;
+ }
+ /* search for large dst subnet for addr */
+ same = 0;
+ }
+ }
+ } endfor_ifa(in_dev);
+
+ return same? addr : 0;
+}
+
+/*
+ * Confirm that local IP address exists using wildcards:
+ * - dev: only on this interface, 0=any interface
+ * - dst: only in the same subnet as dst, 0=any dst
+ * - local: address, 0=autoselect the local address
+ * - scope: maximum allowed scope value for the local address
+ */
+u32 inet_confirm_addr(const struct net_device *dev, u32 dst, u32 local, int scope)
+{
+ u32 addr = 0;
+ struct in_device *in_dev;
+
+ if (dev) {
+ read_lock(&inetdev_lock);
+ if ((in_dev = __in_dev_get(dev))) {
+ read_lock(&in_dev->lock);
+ addr = confirm_addr_indev(in_dev, dst, local, scope);
+ read_unlock(&in_dev->lock);
+ }
+ read_unlock(&inetdev_lock);
+
+ return addr;
+ }
+
+ read_lock(&dev_base_lock);
+ read_lock(&inetdev_lock);
+ for (dev = dev_base; dev; dev = dev->next) {
+ if ((in_dev = __in_dev_get(dev))) {
+ read_lock(&in_dev->lock);
+ addr = confirm_addr_indev(in_dev, dst, local, scope);
+ read_unlock(&in_dev->lock);
+ if (addr)
+ break;
+ }
+ }
+ read_unlock(&inetdev_lock);
+ read_unlock(&dev_base_lock);
+
+ return addr;
+}
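The `dst`/`local` wildcards in inet_confirm_addr() above ultimately rest on a subnet test: the kernel's inet_ifa_match() asks whether an address falls inside an interface address's subnet. A minimal user-space sketch of that test (function name hypothetical, host byte order and an explicit netmask assumed):

```c
#include <assert.h>
#include <stdint.h>

/* User-space analog of inet_ifa_match(): two addresses match when they
 * agree in every bit covered by the netmask, i.e. they share a subnet. */
static int ifa_match(uint32_t addr, uint32_t ifa_local, uint32_t ifa_mask)
{
	return !((addr ^ ifa_local) & ifa_mask);
}
```

With a /24 mask, 192.168.1.5 confirms against an interface holding 192.168.1.1, while 192.168.2.5 does not.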
+
/*
* Device notifier
*/
static struct devinet_sysctl_table {
struct ctl_table_header *sysctl_header;
- ctl_table devinet_vars[18];
+ ctl_table devinet_vars[20];
ctl_table devinet_dev[2];
ctl_table devinet_conf_dir[2];
ctl_table devinet_proto_dir[2];
.proc_handler = &proc_dointvec,
},
{
+ .ctl_name = NET_IPV4_CONF_ARP_ANNOUNCE,
+ .procname = "arp_announce",
+ .data = &ipv4_devconf.arp_announce,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+ {
+ .ctl_name = NET_IPV4_CONF_ARP_IGNORE,
+ .procname = "arp_ignore",
+ .data = &ipv4_devconf.arp_ignore,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+ {
.ctl_name = NET_IPV4_CONF_NOXFRM,
.procname = "disable_xfrm",
.data = &ipv4_devconf.no_xfrm,
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
nt->parms = *parms;
if (register_netdevice(dev) < 0) {
- kfree(dev);
+ free_netdev(dev);
goto failed;
}
return err;
fail:
inet_del_protocol(&ipgre_protocol, IPPROTO_GRE);
- kfree(ipgre_fb_tunnel_dev);
+ free_netdev(ipgre_fb_tunnel_dev);
goto out;
}
nt->parms = *parms;
if (register_netdevice(dev) < 0) {
- kfree(dev);
+ free_netdev(dev);
goto failed;
}
return err;
fail:
xfrm4_tunnel_deregister(&ipip_handler);
- kfree(ipip_fb_tunnel_dev);
+ free_netdev(ipip_fb_tunnel_dev);
goto out;
}
return NULL;
if (register_netdevice(dev)) {
- kfree(dev);
+ free_netdev(dev);
return NULL;
}
dev->iflink = 0;
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/in.h>
#include <linux/ip.h>
-#include <linux/init.h>
#include <net/protocol.h>
-#include <net/tcp.h>
-#include <net/udp.h>
#include <asm/system.h>
#include <linux/stat.h>
#include <linux/proc_fs.h>
*
*/
-#include <linux/config.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/compiler.h>
#include <linux/vmalloc.h>
#include <linux/proc_fs.h> /* for proc_net_* */
#include <linux/seq_file.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/compiler.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/icmp.h>
*
*/
-#include <linux/config.h>
-#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
-#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <asm/system.h>
-#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/in.h>
#include <linux/ip.h>
-#include <linux/init.h>
#include <net/protocol.h>
#include <net/tcp.h>
* me to write this module.
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-/* for systcl */
+/* for sysctl */
#include <linux/fs.h>
#include <linux/sysctl.h>
* entries that haven't been touched for a day.
*/
#define COUNT_FOR_FULL_EXPIRATION 30
-int sysctl_ip_vs_lblc_expiration = 24*60*60*HZ;
+static int sysctl_ip_vs_lblc_expiration = 24*60*60*HZ;
/*
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-/* for systcl */
+/* for sysctl */
#include <linux/fs.h>
#include <linux/sysctl.h>
/* for proc_net_create/proc_net_remove */
* entries that haven't been touched for a day.
*/
#define COUNT_FOR_FULL_EXPIRATION 30
-int sysctl_ip_vs_lblcr_expiration = 24*60*60*HZ;
+static int sysctl_ip_vs_lblcr_expiration = 24*60*60*HZ;
/*
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/in.h>
#include <linux/ip.h>
-#include <linux/init.h>
#include <net/protocol.h>
#include <net/tcp.h>
#include <net/udp.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/compiler.h>
-#include <linux/vmalloc.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/compiler.h>
-#include <linux/vmalloc.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/compiler.h>
-#include <linux/vmalloc.h>
#include <linux/icmp.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
*
*/
-#include <linux/config.h>
-#include <linux/compiler.h>
+#include <linux/kernel.h>
#include <linux/ip.h>
#include <linux/tcp.h> /* for tcphdr */
#include <net/ip.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <asm/string.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
* messages filtering.
*/
-#define __KERNEL_SYSCALLS__ /* for waitpid */
-
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/net.h>
-#include <linux/sched.h>
-#include <linux/wait.h>
-#include <linux/unistd.h>
#include <linux/completion.h>
#include <linux/skbuff.h>
}
-static int errno;
-
static DECLARE_WAIT_QUEUE_HEAD(sync_wait);
static pid_t sync_master_pid = 0;
static pid_t sync_backup_pid = 0;
if (ip_vs_sync_state & IP_VS_STATE_MASTER && !sync_master_pid) {
state = IP_VS_STATE_MASTER;
- name = "ipvs syncmaster";
+ name = "ipvs_syncmaster";
} else if (ip_vs_sync_state & IP_VS_STATE_BACKUP && !sync_backup_pid) {
state = IP_VS_STATE_BACKUP;
- name = "ipvs syncbackup";
+ name = "ipvs_syncbackup";
} else {
IP_VS_BUG();
ip_vs_use_count_dec();
static int fork_sync_thread(void *startup)
{
+ pid_t pid;
+
/* fork the sync thread here, then the parent process of the
sync thread is the init process after this thread exits. */
- if (kernel_thread(sync_thread, startup, 0) < 0)
- IP_VS_BUG();
+ repeat:
+ if ((pid = kernel_thread(sync_thread, startup, 0)) < 0) {
+ IP_VS_ERR("could not create sync_thread due to %d... "
+ "retrying.\n", pid);
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(HZ);
+ goto repeat;
+ }
+
return 0;
}
{
DECLARE_COMPLETION(startup);
pid_t pid;
- int waitpid_result;
if ((state == IP_VS_STATE_MASTER && sync_master_pid) ||
(state == IP_VS_STATE_BACKUP && sync_backup_pid))
ip_vs_backup_syncid = syncid;
}
- if ((pid = kernel_thread(fork_sync_thread, &startup, 0)) < 0)
- IP_VS_BUG();
-
- if ((waitpid_result = waitpid(pid, NULL, __WCLONE)) != pid) {
- IP_VS_ERR("%s: waitpid(%d,...) failed, errno %d\n",
- __FUNCTION__, pid, -waitpid_result);
+ repeat:
+ if ((pid = kernel_thread(fork_sync_thread, &startup, 0)) < 0) {
+ IP_VS_ERR("could not create fork_sync_thread due to %d... "
+ "retrying.\n", pid);
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(HZ);
+ goto repeat;
}
wait_for_completion(&startup);
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/errno.h>
#include <net/ip_vs.h>
*
*/
-#include <linux/config.h>
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/compiler.h>
#include <linux/ip.h>
#include <linux/tcp.h> /* for tcphdr */
#include <net/tcp.h> /* for csum_tcpudp_magic */
#endif /* CONFIG_PROC_FS */
#endif /* CONFIG_NET_CLS_ROUTE */
+static __initdata unsigned long rhash_entries;
+static int __init set_rhash_entries(char *str)
+{
+ if (!str)
+ return 0;
+ rhash_entries = simple_strtoul(str, &str, 0);
+ return 1;
+}
+__setup("rhash_entries=", set_rhash_entries);
+
int __init ip_rt_init(void)
{
int i, order, goal, rc = 0;
panic("IP: failed to allocate ip_dst_cache\n");
goal = num_physpages >> (26 - PAGE_SHIFT);
-
+ if (!rhash_entries)
+ goal = min(10, goal);
+ else
+ goal = (rhash_entries * sizeof(struct rt_hash_bucket)) >> PAGE_SHIFT;
for (order = 0; (1UL << order) < goal; order++)
/* NOTHING */;
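The empty-body `for` loop above rounds the page goal up to a power-of-two allocation "order". A stand-alone sketch of the same arithmetic (function name hypothetical, not kernel code):

```c
#include <assert.h>

/* Find the smallest order such that 2^order pages cover `goal` pages,
 * using the same empty-body loop shape as ip_rt_init() above. */
static int pages_to_order(unsigned long goal)
{
	int order;

	for (order = 0; (1UL << order) < goal; order++)
		/* NOTHING */;
	return order;
}
```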
extern void __skb_cb_too_small_for_tcp(int, int);
extern void tcpdiag_init(void);
+static __initdata unsigned long thash_entries;
+static int __init set_thash_entries(char *str)
+{
+ if (!str)
+ return 0;
+ thash_entries = simple_strtoul(str, &str, 0);
+ return 1;
+}
+__setup("thash_entries=", set_thash_entries);
+
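The `rhash_entries=`/`thash_entries=` __setup handlers above share one pattern: parse an unsigned long from the boot-parameter string, treating 0 (or a missing string) as "auto-size from available memory". A user-space sketch of that parse (function name hypothetical; strtoul with base 0 mirrors simple_strtoul's base auto-detection, so a `0x` prefix reads as hex):

```c
#include <assert.h>
#include <stdlib.h>

/* Parse a hash-table size boot parameter; 0 means "auto-size". */
static unsigned long parse_hash_entries(const char *str)
{
	if (!str)
		return 0;
	/* base 0: decimal by default, 0x... hex, 0... octal */
	return strtoul(str, NULL, 0);
}
```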
void __init tcp_init(void)
{
struct sk_buff *skb = NULL;
else
goal = num_physpages >> (23 - PAGE_SHIFT);
+ if (!thash_entries)
+ goal = min(10UL, goal);
+ else
+ goal = (thash_entries * sizeof(struct tcp_ehash_bucket)) >> PAGE_SHIFT;
for (order = 0; (1UL << order) < goal; order++)
;
do {
*
*/
+#include <linux/string.h>
#include <net/inet_ecn.h>
#include <net/ip.h>
#include <net/xfrm.h>
return xfrm4_rcv_encap(skb, 0);
}
-static inline void ipip_ecn_decapsulate(struct iphdr *outer_iph, struct sk_buff *skb)
+static inline void ipip_ecn_decapsulate(struct sk_buff *skb)
{
- struct iphdr *inner_iph = skb->nh.iph;
+ struct iphdr *outer_iph = skb->nh.iph;
+ struct iphdr *inner_iph = skb->h.ipiph;
if (INET_ECN_is_ce(outer_iph->tos) &&
INET_ECN_is_not_ce(inner_iph->tos))
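The ECN rule applied by ipip_ecn_decapsulate() above is: if the outer header arrived with Congestion Experienced (CE, both ECN bits of the TOS byte set) and the inner header does not carry it, propagate CE inward so the congestion mark survives decapsulation. A simplified stand-alone model of that rule (the real helpers live in include/net/inet_ecn.h and also check that the inner packet is ECN-capable):

```c
#include <assert.h>
#include <stdint.h>

#define ECN_MASK 0x3
#define ECN_CE   0x3

/* Return the inner TOS byte after applying the decapsulation rule. */
static uint8_t ecn_decapsulate(uint8_t outer_tos, uint8_t inner_tos)
{
	if ((outer_tos & ECN_MASK) == ECN_CE &&
	    (inner_tos & ECN_MASK) != ECN_CE)
		inner_tos |= ECN_CE;	/* mark inner packet CE */
	return inner_tos;
}
```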
if (x->props.mode) {
if (iph->protocol != IPPROTO_IPIP)
goto drop;
- skb->nh.raw = skb->data;
+ if (!pskb_may_pull(skb, sizeof(struct iphdr)))
+ goto drop;
+ if (skb_cloned(skb) &&
+ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ goto drop;
if (!(x->props.flags & XFRM_STATE_NOECN))
- ipip_ecn_decapsulate(iph, skb);
- iph = skb->nh.iph;
+ ipip_ecn_decapsulate(skb);
+ skb->mac.raw = memmove(skb->data - skb->mac_len,
+ skb->mac.raw, skb->mac_len);
+ skb->nh.raw = skb->data;
memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
decaps = 1;
break;
.accept_ra = 1,
.accept_redirects = 1,
.autoconf = 1,
+ .force_mld_version = 0,
.dad_transmits = 1,
.rtr_solicits = MAX_RTR_SOLICITATIONS,
.rtr_solicit_interval = RTR_SOLICITATION_INTERVAL,
if ((addr->s6_addr32[0] | addr->s6_addr32[1]) == 0) {
if (addr->s6_addr32[2] == 0) {
- if (addr->in6_u.u6_addr32[3] == 0)
+ if (addr->s6_addr32[3] == 0)
return IPV6_ADDR_ANY;
if (addr->s6_addr32[3] == htonl(0x00000001))
eui[0] ^= 2;
return 0;
case ARPHRD_ARCNET:
- /* XXX: inherit EUI-64 fro mother interface -- yoshfuji */
+ /* XXX: inherit EUI-64 from other interface -- yoshfuji */
if (dev->addr_len != ARCNET_ALEN)
return -1;
memset(eui, 0, 7);
array[DEVCONF_RTR_SOLICITS] = cnf->rtr_solicits;
array[DEVCONF_RTR_SOLICIT_INTERVAL] = cnf->rtr_solicit_interval;
array[DEVCONF_RTR_SOLICIT_DELAY] = cnf->rtr_solicit_delay;
+ array[DEVCONF_FORCE_MLD_VERSION] = cnf->force_mld_version;
#ifdef CONFIG_IPV6_PRIVACY
array[DEVCONF_USE_TEMPADDR] = cnf->use_tempaddr;
array[DEVCONF_TEMP_VALID_LFT] = cnf->temp_valid_lft;
static struct addrconf_sysctl_table
{
struct ctl_table_header *sysctl_header;
- ctl_table addrconf_vars[17];
+ ctl_table addrconf_vars[18];
ctl_table addrconf_dev[2];
ctl_table addrconf_conf_dir[2];
ctl_table addrconf_proto_dir[2];
.proc_handler = &proc_dointvec_jiffies,
.strategy = &sysctl_jiffies,
},
+ {
+ .ctl_name = NET_IPV6_FORCE_MLD_VERSION,
+ .procname = "force_mld_version",
+ .data = &ipv6_devconf.force_mld_version,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
#ifdef CONFIG_IPV6_PRIVACY
{
.ctl_name = NET_IPV6_USE_TEMPADDR,
#include <net/transp_v6.h>
#include <net/ip6_route.h>
#include <net/addrconf.h>
-#if CONFIG_IPV6_TUNNEL
+#ifdef CONFIG_IPV6_TUNNEL
#include <net/ip6_tunnel.h>
#endif
for (curr=procs; curr->type >= 0; curr++) {
if (curr->type == skb->nh.raw[off]) {
/* type specific length/alignment
- checks will be perfomed in the
+ checks will be performed in the
func(). */
if (curr->func(skb, off) == 0)
return 0;
/*
* A routing update causes an increase of the serial number on the
- * afected subtree. This allows for cached routes to be asynchronously
+ * affected subtree. This allows for cached routes to be asynchronously
* tested when modifications are made to the destination cache as a
* result of redirects, path MTU changes, etc.
*/
smp_read_barrier_depends();
if (ipprot->flags & INET6_PROTO_FINAL) {
+ struct ipv6hdr *hdr;
+
if (!cksum_sub && skb->ip_summed == CHECKSUM_HW) {
skb->csum = csum_sub(skb->csum,
csum_partial(skb->nh.raw, skb->h.raw-skb->nh.raw, 0));
cksum_sub++;
}
+ hdr = skb->nh.ipv6h;
+ if (ipv6_addr_is_multicast(&hdr->daddr) &&
+ !ipv6_chk_mcast_addr(skb->dev, &hdr->daddr,
+ &hdr->saddr) &&
+ !ipv6_is_mld(skb, nexthdr))
+ goto discard;
}
if (!(ipprot->flags & INET6_PROTO_NOPOLICY) &&
!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
int ip6_mc_input(struct sk_buff *skb)
{
- struct ipv6hdr *hdr;
- int deliver = 0;
- int discard = 1;
+ struct ipv6hdr *hdr;
+ int deliver;
IP6_INC_STATS_BH(Ip6InMcastPkts);
hdr = skb->nh.ipv6h;
- if (ipv6_chk_mcast_addr(skb->dev, &hdr->daddr, &hdr->saddr))
- deliver = 1;
+ deliver = likely(!(skb->dev->flags & (IFF_PROMISC|IFF_ALLMULTI))) ||
+ ipv6_chk_mcast_addr(skb->dev, &hdr->daddr, NULL);
/*
* IPv6 multicast router mode isn't currently supported.
if (deliver) {
skb2 = skb_clone(skb, GFP_ATOMIC);
+ dst_output(skb2);
} else {
- discard = 0;
- skb2 = skb;
+ dst_output(skb);
+ return 0;
}
-
- dst_output(skb2);
}
}
#endif
- if (deliver) {
- discard = 0;
+ if (likely(deliver)) {
ip6_input(skb);
+ return 0;
}
-
- if (discard)
- kfree_skb(skb);
+ /* discard */
+ kfree_skb(skb);
return 0;
}
t->parms = *p;
if ((err = register_netdevice(dev)) < 0) {
- kfree(dev);
+ free_netdev(dev);
return err;
}
dev_hold(dev);
__u16 len;
/* If the packet doesn't contain the original IPv6 header we are
- in trouble since we might need the source address for furter
+ in trouble since we might need the source address for further
processing of the error. */
read_lock(&ip6ip6_lock);
ip6ip6_fb_tnl_dev->init = ip6ip6_fb_tnl_dev_init;
if ((err = register_netdev(ip6ip6_fb_tnl_dev))) {
- kfree(ip6ip6_fb_tnl_dev);
+ free_netdev(ip6ip6_fb_tnl_dev);
goto fail;
}
return 0;
#define IGMP6_UNSOLICITED_IVAL (10*HZ)
#define MLD_QRV_DEFAULT 2
-#define MLD_V1_SEEN(idev) ((idev)->mc_v1_seen && \
- time_before(jiffies, (idev)->mc_v1_seen))
+#define MLD_V1_SEEN(idev) (ipv6_devconf.force_mld_version == 1 || \
+ (idev)->cnf.force_mld_version == 1 || \
+ ((idev)->mc_v1_seen && \
+ time_before(jiffies, (idev)->mc_v1_seen)))
#define MLDV2_MASK(value, nb) ((nb)>=32 ? (value) : ((1<<(nb))-1) & (value))
#define MLDV2_EXP(thresh, nbmant, nbexp, value) \
}
/*
+ * identify MLD packets for MLD filter exceptions
+ */
+int ipv6_is_mld(struct sk_buff *skb, int nexthdr)
+{
+ struct icmp6hdr *pic;
+
+ if (nexthdr != IPPROTO_ICMPV6)
+ return 0;
+
+ if (!pskb_may_pull(skb, sizeof(struct icmp6hdr)))
+ return 0;
+
+ pic = (struct icmp6hdr *)skb->h.raw;
+
+ switch (pic->icmp6_type) {
+ case ICMPV6_MGM_QUERY:
+ case ICMPV6_MGM_REPORT:
+ case ICMPV6_MGM_REDUCTION:
+ case ICMPV6_MLD2_REPORT:
+ return 1;
+ default:
+ break;
+ }
+ return 0;
+}
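Since MLD rides on ICMPv6, the multicast filter exception in ipv6_is_mld() above reduces to whitelisting four ICMPv6 message types. A user-space mirror of that classification (the numeric values match the kernel's ICMPV6_MGM_* / ICMPV6_MLD2_REPORT constants, per RFC 2710 and RFC 3810):

```c
#include <assert.h>

/* Classify an ICMPv6 type as an MLD message. */
static int is_mld_type(int icmp6_type)
{
	switch (icmp6_type) {
	case 130:	/* ICMPV6_MGM_QUERY */
	case 131:	/* ICMPV6_MGM_REPORT */
	case 132:	/* ICMPV6_MGM_REDUCTION */
	case 143:	/* ICMPV6_MLD2_REPORT */
		return 1;
	default:
		return 0;
	}
}
```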
+
+/*
* check if the interface/address pair is valid
*/
int ipv6_chk_mcast_addr(struct net_device *dev, struct in6_addr *group,
break;
}
if (mc) {
- if (!ipv6_addr_any(src_addr)) {
+ if (src_addr && !ipv6_addr_any(src_addr)) {
struct ip6_sf_list *psf;
spin_lock_bh(&mc->mca_lock);
/* Set to 3 to get tracing... */
#define ND_DEBUG 1
-#define ND_PRINTK(x...) printk(KERN_DEBUG x)
+#define ND_PRINTK(fmt, args...) do { if (net_ratelimit()) { printk(fmt, ## args); } } while(0)
#define ND_NOPRINTK(x...) do { ; } while(0)
#define ND_PRINTK0 ND_PRINTK
#define ND_PRINTK1 ND_NOPRINTK
#define ND_PRINTK2 ND_NOPRINTK
+#define ND_PRINTK3 ND_NOPRINTK
#if ND_DEBUG >= 1
#undef ND_PRINTK1
#define ND_PRINTK1 ND_PRINTK
#undef ND_PRINTK2
#define ND_PRINTK2 ND_PRINTK
#endif
+#if ND_DEBUG >= 3
+#undef ND_PRINTK3
+#define ND_PRINTK3 ND_PRINTK
+#endif
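The ND_PRINTK level scheme above compiles each verbosity level to either the real rate-limited printk or a no-op, with ND_PRINTK0 always live. Modelled as a run-time check purely for illustration (the kernel does this selection at preprocessing time):

```c
#include <assert.h>

#define ND_DEBUG 1

/* Level 0 always prints; levels 1..3 print only up to ND_DEBUG. */
static int nd_level_enabled(int level)
{
	return level == 0 || ND_DEBUG >= level;
}
```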
#include <linux/module.h>
#include <linux/config.h>
case ND_OPT_MTU:
case ND_OPT_REDIRECT_HDR:
if (ndopts->nd_opt_array[nd_opt->nd_opt_type]) {
- ND_PRINTK2("ndisc_parse_options(): duplicated ND6 option found: type=%d\n",
- nd_opt->nd_opt_type);
+ ND_PRINTK2(KERN_WARNING
+ "%s(): duplicated ND6 option found: type=%d\n",
+ __FUNCTION__,
+ nd_opt->nd_opt_type);
} else {
ndopts->nd_opt_array[nd_opt->nd_opt_type] = nd_opt;
}
* Unknown options must be silently ignored,
* to accommodate future extension to the protocol.
*/
- ND_PRINTK2(KERN_WARNING
- "ndisc_parse_options(): ignored unsupported option; type=%d, len=%d\n",
+ ND_PRINTK2(KERN_NOTICE
+ "%s(): ignored unsupported option; type=%d, len=%d\n",
+ __FUNCTION__,
nd_opt->nd_opt_type, nd_opt->nd_opt_len);
}
opt_len -= l;
fl->fl_icmp_code = 0;
}
-static void inline ndisc_update(struct neighbour *neigh,
- u8 *lladdr, u32 flags)
-{
- int notify;
- write_lock_bh(&neigh->lock);
- notify = __neigh_update(neigh, lladdr, NUD_STALE, flags);
-#ifdef CONFIG_ARPD
- if (notify > 0 && neigh->parms->app_probes) {
- write_unlock_bh(&neigh->lock);
- neigh_app_notify(neigh);
- } else
-#endif
- write_unlock_bh(&neigh->lock);
-}
-
static void ndisc_send_na(struct net_device *dev, struct neighbour *neigh,
struct in6_addr *daddr, struct in6_addr *solicited_addr,
int router, int solicited, int override, int inc_opt)
inc_opt = 0;
}
- skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev) + dst->header_len + 64,
+ skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev),
1, &err);
if (skb == NULL) {
- ND_PRINTK1("send_na: alloc skb failed\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 NA: %s() failed to allocate an skb.\n",
+ __FUNCTION__);
dst_release(dst);
return;
}
if (send_llinfo)
len += NDISC_OPT_SPACE(dev->addr_len);
- skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev) + dst->header_len + 64,
+ skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev),
1, &err);
if (skb == NULL) {
- ND_PRINTK1("send_ns: alloc skb failed\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 NS: %s() failed to allocate an skb.\n",
+ __FUNCTION__);
dst_release(dst);
return;
}
if (dev->addr_len)
len += NDISC_OPT_SPACE(dev->addr_len);
- skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev) + dst->header_len + 64,
+ skb = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev),
1, &err);
if (skb == NULL) {
- ND_PRINTK1("send_ns: alloc skb failed\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 RS: %s() failed to allocate an skb.\n",
+ __FUNCTION__);
dst_release(dst);
return;
}
saddr = &skb->nh.ipv6h->saddr;
if ((probes -= neigh->parms->ucast_probes) < 0) {
- if (!(neigh->nud_state&NUD_VALID))
- ND_PRINTK1("trying to ucast probe in NUD_INVALID\n");
+ if (!(neigh->nud_state & NUD_VALID)) {
+ ND_PRINTK1(KERN_DEBUG
+ "%s(): trying to ucast probe in NUD_INVALID: "
+ "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ __FUNCTION__,
+ NIP6(*target));
+ }
ndisc_send_ns(dev, neigh, target, target, saddr);
} else if ((probes -= neigh->parms->app_probes) < 0) {
#ifdef CONFIG_ARPD
int inc;
if (ipv6_addr_is_multicast(&msg->target)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NS: target address is multicast\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NS: multicast target address\n");
return;
}
daddr->s6_addr32[1] == htonl(0x00000000) &&
daddr->s6_addr32[2] == htonl(0x00000001) &&
daddr->s6_addr [12] == 0xff )) {
- if (net_ratelimit())
- printk(KERN_DEBUG "ICMP6 NS: bad DAD packet (wrong destination)\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NS: bad DAD packet (wrong destination)\n");
return;
}
if (!ndisc_parse_options(msg->opt, ndoptlen, &ndopts)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NS: invalid ND option, ignored.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NS: invalid ND options\n");
return;
}
lladdr = (u8*)(ndopts.nd_opts_src_lladdr + 1);
lladdrlen = ndopts.nd_opts_src_lladdr->nd_opt_len << 3;
if (lladdrlen != NDISC_OPT_SPACE(dev->addr_len)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NS: bad lladdr length.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NS: invalid link-layer address length\n");
return;
}
* in the message.
*/
if (dad) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP6 NS: bad DAD packet (link-layer address option)\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NS: bad DAD packet (link-layer address option)\n");
return;
}
}
* sender should delay its response
* by a random time between 0 and
* MAX_ANYCAST_DELAY_TIME seconds.
+ * (RFC2461) -- yoshfuji
*/
struct sk_buff *n = skb_clone(skb, GFP_ATOMIC);
if (n)
struct in6_addr maddr;
ipv6_addr_all_nodes(&maddr);
-#ifdef CONFIG_IPV6_NDISC_NEW
- ndisc_send_na(dev, NULL, &maddr, &msg->target,
- idev->cnf.forwarding, 0, ifp && inc, inc);
-#else
ndisc_send_na(dev, NULL, &maddr, &msg->target,
- idev->cnf.forwarding, 0, ifp != NULL, inc);
-#endif
+ idev->cnf.forwarding, 0, (ifp != NULL), 1);
goto out;
}
* update / create cache entry
* for the source address
*/
-#ifdef CONFIG_IPV6_NDISC_NEW
- neigh = __neigh_lookup(&nd_tbl, saddr, skb->dev, !inc || lladdr || !skb->dev->addr_len);
- if (neigh) {
- ndisc_update(neigh, lladdr, NEIGH_UPDATE_F_IP6NS);
- ndisc_send_na(dev, neigh, saddr, &msg->target,
- idev->cnf.forwarding, 1, (ifp && inc) , inc);
- neigh_release(neigh);
- }
-#else
neigh = neigh_event_ns(&nd_tbl, lladdr, saddr, dev);
if (neigh || !dev->hard_header) {
ndisc_send_na(dev, neigh, saddr, &msg->target,
- idev->cnf.forwarding, 1, ifp != NULL, 1);
+ idev->cnf.forwarding,
+ 1, (ifp != NULL && inc), inc);
if (neigh)
neigh_release(neigh);
}
-#endif
out:
if (ifp)
struct neighbour *neigh;
if (skb->len < sizeof(struct nd_msg)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NA: packet too short\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NA: packet too short\n");
return;
}
if (ipv6_addr_is_multicast(&msg->target)) {
- if (net_ratelimit())
- printk(KERN_WARNING "NDISC NA: target address is multicast\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NA: target address is multicast.\n");
return;
}
if (ipv6_addr_is_multicast(daddr) &&
msg->icmph.icmp6_solicited) {
- ND_PRINTK0("NDISC: solicited NA is multicasted\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NA: solicited NA is multicasted.\n");
return;
}
if (!ndisc_parse_options(msg->opt, ndoptlen, &ndopts)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NS: invalid ND option, ignored.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NA: invalid ND options\n");
return;
}
if (ndopts.nd_opts_tgt_lladdr) {
lladdr = (u8*)(ndopts.nd_opts_tgt_lladdr + 1);
lladdrlen = ndopts.nd_opts_tgt_lladdr->nd_opt_len << 3;
if (lladdrlen != NDISC_OPT_SPACE(dev->addr_len)) {
- if (net_ratelimit())
- printk(KERN_WARNING "NDISC NA: invalid lladdr length.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NA: invalid link-layer address length\n");
return;
}
}
about it. It could be misconfiguration, or
a smart proxy agent tries to help us :-)
*/
- ND_PRINTK0("%s: someone advertises our address!\n",
+ ND_PRINTK1(KERN_WARNING
+ "ICMPv6 NA: someone advertises our address on %s!\n",
ifp->idev->dev->name);
in6_ifa_put(ifp);
return;
neigh = neigh_lookup(&nd_tbl, &msg->target, dev);
if (neigh) {
-#ifdef CONFIG_IPV6_NDISC_NEW
- int notify = 0;
- int was_router = 0;
-
- write_lock_bh(&neigh->lock);
- if (!(neigh->nud_state & ~NUD_FAILED))
- goto ignore;
-
- was_router = neigh->flags & NTF_ROUTER;
-
- notify = __neigh_update(neigh, lladdr,
- msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE,
- (NEIGH_UPDATE_F_IP6NA|
- (msg->icmph.icmp6_override ? NEIGH_UPDATE_F_OVERRIDE : 0) |
- (msg->icmph.icmp6_router ? NEIGH_UPDATE_F_ISROUTER : 0)));
-
- if (was_router && !(neigh->flags & NTF_ROUTER)) {
- /*
- * Change: router to host
- */
- struct rt6_info *rt;
- rt = rt6_get_dflt_router(saddr, dev);
- if (rt)
- ip6_del_rt(rt, NULL, NULL);
- }
-#else
if (neigh->flags & NTF_ROUTER) {
if (msg->icmph.icmp6_router == 0) {
/*
neigh_update(neigh, lladdr,
msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE,
msg->icmph.icmp6_override, 1);
-#endif
-#ifdef CONFIG_IPV6_NDISC_NEW
-ignore:
-#ifdef CONFIG_ARPD
- if (notify > 0 && neigh->parms->app_probes) {
- write_unlock_bh(&neigh->lock);
- neigh_app_notify(neigh);
- } else
-#endif
- write_unlock_bh(&neigh->lock);
-#endif
neigh_release(neigh);
}
}
-static void ndisc_recv_rs(struct sk_buff *skb)
-{
- struct rs_msg *rs_msg = (struct rs_msg *) skb->h.raw;
- unsigned long ndoptlen = skb->len - sizeof(*rs_msg);
- struct neighbour *neigh;
- struct inet6_dev *idev;
- struct in6_addr *saddr = &skb->nh.ipv6h->saddr;
- struct ndisc_options ndopts;
- u8 *lladdr = NULL;
- int lladdrlen = 0;
-
- if (skb->len < sizeof(*rs_msg))
- return;
-
- idev = in6_dev_get(skb->dev);
- if (!idev) {
- if (net_ratelimit())
- ND_PRINTK1("ICMP6 RS: can't find in6 device\n");
- return;
- }
-
- /* Don't accept RS if we're not in router mode */
- if (!idev->cnf.forwarding || idev->cnf.accept_ra)
- goto out;
-
- /*
- * Don't update NCE if src = ::;
- * this implies that the source node has no ip address assigned yet.
- */
- if (ipv6_addr_any(saddr))
- goto out;
-
- /* Parse ND options */
- if (!ndisc_parse_options(rs_msg->opt, ndoptlen, &ndopts)) {
- if (net_ratelimit())
- ND_PRINTK2("ICMP6 NS: invalid ND option, ignored\n");
- goto out;
- }
-
- if (ndopts.nd_opts_src_lladdr) {
- lladdr = (u8 *)(ndopts.nd_opts_src_lladdr + 1);
- lladdrlen = ndopts.nd_opts_src_lladdr->nd_opt_len << 3;
- if (lladdrlen != NDISC_OPT_SPACE(skb->dev->addr_len))
- goto out;
- }
-
- neigh = __neigh_lookup(&nd_tbl, saddr, skb->dev, 1);
- if (neigh) {
-#ifdef CONFIG_IPV6_NDISC_NEW
- ndisc_update(neigh, lladdr, NEIGH_UPDATE_F_IP6RS);
-#else
- neigh_update(neigh, lladdr, NUD_STALE, 1, 1);
-#endif
- neigh_release(neigh);
- }
-out:
- in6_dev_put(idev);
-}
-
static void ndisc_router_discovery(struct sk_buff *skb)
{
struct ra_msg *ra_msg = (struct ra_msg *) skb->h.raw;
optlen = (skb->tail - skb->h.raw) - sizeof(struct ra_msg);
if (!(ipv6_addr_type(&skb->nh.ipv6h->saddr) & IPV6_ADDR_LINKLOCAL)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP RA: source address is not linklocal\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: source address is not link-local.\n");
return;
}
if (optlen < 0) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP RA: packet too short\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: packet too short\n");
return;
}
in6_dev = in6_dev_get(skb->dev);
if (in6_dev == NULL) {
- ND_PRINTK1("RA: can't find in6 device\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 RA: can't find inet6 device for %s.\n",
+ skb->dev->name);
return;
}
if (in6_dev->cnf.forwarding || !in6_dev->cnf.accept_ra) {
if (!ndisc_parse_options(opt, optlen, &ndopts)) {
in6_dev_put(in6_dev);
- if (net_ratelimit())
- ND_PRINTK2(KERN_WARNING
- "ICMP6 RA: invalid ND option, ignored.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: invalid ND options\n");
return;
}
}
if (rt == NULL && lifetime) {
- ND_PRINTK2("ndisc_rdisc: adding default router\n");
+ ND_PRINTK3(KERN_DEBUG
+ "ICMPv6 RA: adding default router.\n");
rt = rt6_add_dflt_router(&skb->nh.ipv6h->saddr, skb->dev);
if (rt == NULL) {
- ND_PRINTK1("route_add failed\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 RA: %s() failed to add default route.\n",
+ __FUNCTION__);
in6_dev_put(in6_dev);
return;
}
neigh = rt->rt6i_nexthop;
if (neigh == NULL) {
- ND_PRINTK1("nd: add default router: null neighbour\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 RA: %s() got default router without neighbour.\n",
+ __FUNCTION__);
dst_release(&rt->u.dst);
in6_dev_put(in6_dev);
return;
*/
if (in6_dev->nd_parms) {
- __u32 rtime = ntohl(ra_msg->retrans_timer);
+ unsigned long rtime = ntohl(ra_msg->retrans_timer);
if (rtime && rtime/1000 < MAX_SCHEDULE_TIMEOUT/HZ) {
rtime = (rtime*HZ)/1000;
lladdr = (u8*)((ndopts.nd_opts_src_lladdr)+1);
lladdrlen = ndopts.nd_opts_src_lladdr->nd_opt_len << 3;
if (lladdrlen != NDISC_OPT_SPACE(skb->dev->addr_len)) {
- if (net_ratelimit())
- ND_PRINTK2(KERN_WARNING
- "ICMP6 RA: Invalid lladdr length.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: invalid link-layer address length\n");
goto out;
}
}
-#ifdef CONFIG_IPV6_NDISC_NEW
- ndisc_update(neigh, lladdr, NEIGH_UPDATE_F_IP6RA);
-#else
neigh_update(neigh, lladdr, NUD_STALE, 1, 1);
-#endif
}
if (ndopts.nd_opts_pi) {
mtu = ntohl(mtu);
if (mtu < IPV6_MIN_MTU || mtu > skb->dev->mtu) {
- if (net_ratelimit()) {
- ND_PRINTK0("NDISC: router announcement with mtu = %d\n",
- mtu);
- }
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: invalid mtu: %d\n",
+ mtu);
} else if (in6_dev->cnf.mtu6 != mtu) {
in6_dev->cnf.mtu6 = mtu;
}
if (ndopts.nd_opts_tgt_lladdr || ndopts.nd_opts_rh) {
- if (net_ratelimit())
- ND_PRINTK0(KERN_WARNING
- "ICMP6 RA: got invalid option with RA");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 RA: invalid RA options\n");
}
out:
if (rt)
int lladdrlen;
if (!(ipv6_addr_type(&skb->nh.ipv6h->saddr) & IPV6_ADDR_LINKLOCAL)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP redirect: source address is not linklocal\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: source address is not link-local.\n");
return;
}
optlen -= sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr);
if (optlen < 0) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP redirect: packet too small\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: packet too short\n");
return;
}
dest = target + 1;
if (ipv6_addr_is_multicast(dest)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP redirect for multicast addr\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: destination address is multicast.\n");
return;
}
if (ipv6_addr_cmp(dest, target) == 0) {
on_link = 1;
} else if (!(ipv6_addr_type(target) & IPV6_ADDR_LINKLOCAL)) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP redirect: target address is not linklocal\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: target address is not link-local.\n");
return;
}
return;
}
- /* XXX: RFC2461 8.1:
+ /* RFC2461 8.1:
* The IP source address of the Redirect MUST be the same as the current
* first-hop router for the specified ICMP Destination Address.
*/
if (!ndisc_parse_options((u8*)(dest + 1), optlen, &ndopts)) {
- if (net_ratelimit())
- ND_PRINTK2(KERN_WARNING
- "ICMP6 Redirect: invalid ND options, rejected.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: invalid ND options\n");
in6_dev_put(in6_dev);
return;
}
lladdr = (u8*)(ndopts.nd_opts_tgt_lladdr + 1);
lladdrlen = ndopts.nd_opts_tgt_lladdr->nd_opt_len << 3;
if (lladdrlen != NDISC_OPT_SPACE(skb->dev->addr_len)) {
- if (net_ratelimit())
- ND_PRINTK2(KERN_WARNING
- "ICMP6 Redirect: invalid lladdr length.\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: invalid link-layer address length\n");
in6_dev_put(in6_dev);
return;
}
neigh = __neigh_lookup(&nd_tbl, target, skb->dev, 1);
if (neigh) {
-#ifdef CONFIG_IPV6_NDISC_NEW
- rt6_redirect(dest, &skb->nh.ipv6h->saddr, neigh, on_link);
-#else
- if (neigh->nud_state&NUD_VALID) {
- if (!rt6_redirect(dest, &skb->nh.ipv6h->saddr, neigh, NULL, on_link))
- neigh_update(neigh, lladdr, NUD_STALE, 1, 1);
- } else {
- write_lock_bh(&neigh->lock);
+ neigh_update(neigh, lladdr, NUD_STALE, 1, 1);
+ if (neigh->nud_state&NUD_VALID)
+ rt6_redirect(dest, &skb->nh.ipv6h->saddr, neigh, on_link);
+ else
__neigh_event_send(neigh, NULL);
- write_unlock_bh(&neigh->lock);
- }
-#endif
neigh_release(neigh);
}
in6_dev_put(in6_dev);
dev = skb->dev;
if (ipv6_get_lladdr(dev, &saddr_buf)) {
- ND_PRINTK1("redirect: no link_local addr for dev\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: no link-local address on %s\n",
+ dev->name);
return;
}
rt = (struct rt6_info *) dst;
if (rt->rt6i_flags & RTF_GATEWAY) {
- ND_PRINTK1("ndisc_send_redirect: not a neighbour\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 Redirect: destination is not a neighbour.\n");
dst_release(dst);
return;
}
rd_len &= ~0x7;
len += rd_len;
- buff = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev) + dst->header_len + 64,
+ buff = sock_alloc_send_skb(sk, MAX_HEADER + len + LL_RESERVED_SPACE(dev),
1, &err);
if (buff == NULL) {
- ND_PRINTK1("ndisc_send_redirect: alloc_skb failed\n");
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 Redirect: %s() failed to allocate an skb.\n",
+ __FUNCTION__);
dst_release(dst);
return;
}
__skb_push(skb, skb->data-skb->h.raw);
if (skb->nh.ipv6h->hop_limit != 255) {
- if (net_ratelimit())
- printk(KERN_WARNING
- "ICMP NDISC: fake message with non-255 Hop Limit received: %d\n",
- skb->nh.ipv6h->hop_limit);
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NDISC: invalid hop-limit: %d\n",
+ skb->nh.ipv6h->hop_limit);
return 0;
}
if (msg->icmph.icmp6_code != 0) {
- if (net_ratelimit())
- printk(KERN_WARNING "ICMP NDISC: code is not zero\n");
+ ND_PRINTK2(KERN_WARNING
+ "ICMPv6 NDISC: invalid ICMPv6 code: %d\n",
+ msg->icmph.icmp6_code);
return 0;
}
ndisc_recv_na(skb);
break;
- case NDISC_ROUTER_SOLICITATION:
- ndisc_recv_rs(skb);
- break;
-
case NDISC_ROUTER_ADVERTISEMENT:
ndisc_router_discovery(skb);
break;
err = sock_create(PF_INET6, SOCK_RAW, IPPROTO_ICMPV6, &ndisc_socket);
if (err < 0) {
- printk(KERN_ERR
- "Failed to initialize the NDISC control socket (err %d).\n",
- err);
+ ND_PRINTK0(KERN_ERR
+ "ICMPv6 NDISC: Failed to initialize the control socket (err %d).\n",
+ err);
ndisc_socket = NULL; /* For safety. */
return err;
}
eui64[0] |= 0x02;
i=0;
- while ((skb->nh.ipv6h->saddr.in6_u.u6_addr8[8+i] ==
+ while ((skb->nh.ipv6h->saddr.s6_addr[8+i] ==
eui64[i]) && (i<8)) i++;
if ( i == 8 )
}
/* ipv4 addr of the socket is invalid. Only the
- * unpecified and mapped address have a v4 equivalent.
+ * unspecified and mapped address have a v4 equivalent.
*/
v4addr = LOOPBACK4_IPV6;
if (!(addr_type & IPV6_ADDR_MULTICAST)) {
* This is next to useless...
* if we demultiplex in network layer we don't need the extra call
* just to queue the skb...
- * maybe we could have the network decide uppon a hint if it
+ * maybe we could have the network decide upon a hint if it
* should call raw_rcv for demultiplexing
*/
int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
if (ipv6_addr_any(daddr)) {
/*
- * unspecfied destination address
+ * unspecified destination address
* treated as error... is this correct ?
*/
fl6_sock_release(flowlabel);
match = sprt;
mpri = m;
if (m >= 12) {
- /* we choose the lastest default router if it
+ /* we choose the last default router if it
* is in (probably) reachable state.
* If route changed, we should do pmtu
* discovery. --yoshfuji
{
struct rt6_info *rt = ip6_dst_alloc();
- BUG_ON(ort->rt6i_flags & RTF_NDISC);
-
if (rt) {
rt->u.dst.input = ort->u.dst.input;
rt->u.dst.output = ort->u.dst.output;
nt->parms = *parms;
if (register_netdevice(dev) < 0) {
- kfree(dev);
+ free_netdev(dev);
goto failed;
}
return err;
fail:
inet_del_protocol(&sit_protocol, IPPROTO_IPV6);
- kfree(ipip6_fb_tunnel_dev);
+ free_netdev(ipip6_fb_tunnel_dev);
goto out;
}
* IPv6 support
*/
+#include <linux/string.h>
#include <net/inet_ecn.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <net/xfrm.h>
-static inline void ipip6_ecn_decapsulate(struct ipv6hdr *iph,
- struct sk_buff *skb)
+static inline void ipip6_ecn_decapsulate(struct sk_buff *skb)
{
- if (INET_ECN_is_ce(ip6_get_dsfield(iph)) &&
- INET_ECN_is_not_ce(ip6_get_dsfield(skb->nh.ipv6h)))
- IP6_ECN_set_ce(skb->nh.ipv6h);
+ struct ipv6hdr *outer_iph = skb->nh.ipv6h;
+ struct ipv6hdr *inner_iph = skb->h.ipv6h;
+
+ if (INET_ECN_is_ce(ip6_get_dsfield(outer_iph)) &&
+ INET_ECN_is_not_ce(ip6_get_dsfield(inner_iph)))
+ IP6_ECN_set_ce(inner_iph);
}
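The hunk above applies the usual CE-propagation rule on decapsulation: if the outer IPv6 header carries Congestion Experienced but the inner one does not, the inner header is marked. A minimal userspace sketch of that rule, using the RFC 3168 ECN codepoints — the helper names are stand-ins for the kernel's INET_ECN_* macros, not the real API:

```c
#include <assert.h>

/* RFC 3168 ECN codepoints (low two bits of the traffic class) */
#define ECN_NOT_ECT 0x00
#define ECN_ECT_1   0x01
#define ECN_ECT_0   0x02
#define ECN_CE      0x03

/* Stand-in for INET_ECN_is_ce(): both ECN bits set */
static int ecn_is_ce(unsigned char dsfield)
{
	return (dsfield & ECN_CE) == ECN_CE;
}

/* Sketch of the decapsulation rule in the hunk above: returns the
 * inner ECN field after the outer header has been stripped. */
static unsigned char ecn_decap_sketch(unsigned char outer, unsigned char inner)
{
	if (ecn_is_ce(outer) && !ecn_is_ce(inner))
		inner |= ECN_CE;
	return inner;
}
```

The sketch mirrors the hunk literally (is_ce on the outer, not-is_ce on the inner); the kernel's IP6_ECN_set_ce may apply additional ECT checks that are omitted here.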
int xfrm6_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
if (x->props.mode) { /* XXX */
if (nexthdr != IPPROTO_IPV6)
goto drop;
- skb->nh.raw = skb->data;
+ if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
+ goto drop;
+ if (skb_cloned(skb) &&
+ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ goto drop;
if (!(x->props.flags & XFRM_STATE_NOECN))
- ipip6_ecn_decapsulate(iph, skb);
- iph = skb->nh.ipv6h;
+ ipip6_ecn_decapsulate(skb);
+ skb->mac.raw = memmove(skb->data - skb->mac_len,
+ skb->mac.raw, skb->mac_len);
+ skb->nh.raw = skb->data;
decaps = 1;
break;
}
__xfrm6_find_bundle(struct flowi *fl, struct rtable *rt, struct xfrm_policy *policy)
{
struct dst_entry *dst;
- u32 ndisc_bit = 0;
-
- if (fl->proto == IPPROTO_ICMPV6 &&
- (fl->fl_icmp_type == NDISC_NEIGHBOUR_ADVERTISEMENT ||
- fl->fl_icmp_type == NDISC_NEIGHBOUR_SOLICITATION ||
- fl->fl_icmp_type == NDISC_ROUTER_SOLICITATION))
- ndisc_bit = RTF_NDISC;
/* Still not clear if we should set fl->fl6_{src,dst}... */
read_lock_bh(&policy->lock);
struct xfrm_dst *xdst = (struct xfrm_dst*)dst;
struct in6_addr fl_dst_prefix, fl_src_prefix;
- if ((xdst->u.rt6.rt6i_flags & RTF_NDISC) != ndisc_bit)
- continue;
-
ipv6_addr_prefix(&fl_dst_prefix,
&fl->fl6_dst,
xdst->u.rt6.rt6i_dst.plen);
dst_prev->output = dst_prev->xfrm->type->output;
/* Sheit... I remember I did this right. Apparently,
* it was magically lost, so this code needs audit */
- x->u.rt6.rt6i_flags = rt0->rt6i_flags&(RTCF_BROADCAST|RTCF_MULTICAST|RTCF_LOCAL|RTF_NDISC);
+ x->u.rt6.rt6i_flags = rt0->rt6i_flags&(RTCF_BROADCAST|RTCF_MULTICAST|RTCF_LOCAL);
x->u.rt6.rt6i_metric = rt0->rt6i_metric;
x->u.rt6.rt6i_node = rt0->rt6i_node;
x->u.rt6.rt6i_gateway = rt0->rt6i_gateway;
IRDA_DEBUG(2, "%s(), register_netdev() failed!\n",
__FUNCTION__ );
self = NULL;
- kfree(dev);
+ free_netdev(dev);
} else {
rtnl_lock();
list_add_rcu(&self->dev_list, &irlans);
dev->hard_start_xmit = irlan_eth_xmit;
dev->get_stats = irlan_eth_get_stats;
dev->set_multicast_list = irlan_eth_set_multicast_list;
- dev->destructor = (void (*)(struct net_device *)) kfree;
+ dev->destructor = free_netdev;
SET_MODULE_OWNER(dev);
lapb_hold(lapb);
}
-/*
- * Convert the integer token used by the device driver into a pointer
- * to a LAPB control structure.
- */
-static struct lapb_cb *__lapb_tokentostruct(void *token)
+static struct lapb_cb *__lapb_devtostruct(struct net_device *dev)
{
struct list_head *entry;
struct lapb_cb *lapb, *use = NULL;
list_for_each(entry, &lapb_list) {
lapb = list_entry(entry, struct lapb_cb, node);
- if (lapb->token == token) {
+ if (lapb->dev == dev) {
use = lapb;
break;
}
return use;
}
-static struct lapb_cb *lapb_tokentostruct(void *token)
+static struct lapb_cb *lapb_devtostruct(struct net_device *dev)
{
struct lapb_cb *rc;
read_lock_bh(&lapb_list_lock);
- rc = __lapb_tokentostruct(token);
+ rc = __lapb_devtostruct(dev);
read_unlock_bh(&lapb_list_lock);
return rc;
return lapb;
}
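The conversion above replaces LAPB's opaque `void *token` with the `struct net_device *` itself, so the lookup walks the control-block list comparing device pointers. A minimal userspace sketch of that lookup — `struct lapb_cb_sketch` is a stand-in for the kernel's `struct lapb_cb`, and the plain pointer walk stands in for `list_for_each`:

```c
#include <stddef.h>

/* Stand-in for struct lapb_cb: only the fields the lookup needs. */
struct lapb_cb_sketch {
	void *dev;			/* opaque net_device pointer */
	struct lapb_cb_sketch *next;
};

/* Sketch of __lapb_devtostruct(): return the control block bound to
 * this device, or NULL (the caller then reports LAPB_BADTOKEN). */
static struct lapb_cb_sketch *devtostruct_sketch(struct lapb_cb_sketch *head,
						 void *dev)
{
	struct lapb_cb_sketch *lapb;

	for (lapb = head; lapb; lapb = lapb->next)
		if (lapb->dev == dev)
			return lapb;
	return NULL;
}
```

The real kernel version additionally takes `lapb_list_lock` and a reference on the returned block; the sketch shows only the keying change.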
-int lapb_register(void *token, struct lapb_register_struct *callbacks)
+int lapb_register(struct net_device *dev, struct lapb_register_struct *callbacks)
{
struct lapb_cb *lapb;
int rc = LAPB_BADTOKEN;
write_lock_bh(&lapb_list_lock);
- lapb = __lapb_tokentostruct(token);
+ lapb = __lapb_devtostruct(dev);
if (lapb) {
lapb_put(lapb);
goto out;
if (!lapb)
goto out;
- lapb->token = token;
+ lapb->dev = dev;
lapb->callbacks = *callbacks;
__lapb_insert_cb(lapb);
return rc;
}
-int lapb_unregister(void *token)
+int lapb_unregister(struct net_device *dev)
{
struct lapb_cb *lapb;
int rc = LAPB_BADTOKEN;
write_unlock_bh(&lapb_list_lock);
- lapb = __lapb_tokentostruct(token);
+ lapb = __lapb_devtostruct(dev);
if (!lapb)
goto out;
return rc;
}
-int lapb_getparms(void *token, struct lapb_parms_struct *parms)
+int lapb_getparms(struct net_device *dev, struct lapb_parms_struct *parms)
{
int rc = LAPB_BADTOKEN;
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
if (!lapb)
goto out;
return rc;
}
-int lapb_setparms(void *token, struct lapb_parms_struct *parms)
+int lapb_setparms(struct net_device *dev, struct lapb_parms_struct *parms)
{
int rc = LAPB_BADTOKEN;
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
if (!lapb)
goto out;
return rc;
}
-int lapb_connect_request(void *token)
+int lapb_connect_request(struct net_device *dev)
{
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
int rc = LAPB_BADTOKEN;
if (!lapb)
lapb_establish_data_link(lapb);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S0 -> S1\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S0 -> S1\n", lapb->dev);
#endif
lapb->state = LAPB_STATE_1;
return rc;
}
-int lapb_disconnect_request(void *token)
+int lapb_disconnect_request(struct net_device *dev)
{
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
int rc = LAPB_BADTOKEN;
if (!lapb)
case LAPB_STATE_1:
#if LAPB_DEBUG > 1
- printk(KERN_DEBUG "lapb: (%p) S1 TX DISC(1)\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S1 TX DISC(1)\n", lapb->dev);
#endif
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S1 -> S0\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S1 -> S0\n", lapb->dev);
#endif
lapb_send_control(lapb, LAPB_DISC, LAPB_POLLON, LAPB_COMMAND);
lapb->state = LAPB_STATE_0;
lapb->state = LAPB_STATE_2;
#if LAPB_DEBUG > 1
- printk(KERN_DEBUG "lapb: (%p) S3 DISC(1)\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S3 DISC(1)\n", lapb->dev);
#endif
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S3 -> S2\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S3 -> S2\n", lapb->dev);
#endif
rc = LAPB_OK;
return rc;
}
-int lapb_data_request(void *token, struct sk_buff *skb)
+int lapb_data_request(struct net_device *dev, struct sk_buff *skb)
{
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
int rc = LAPB_BADTOKEN;
if (!lapb)
return rc;
}
-int lapb_data_received(void *token, struct sk_buff *skb)
+int lapb_data_received(struct net_device *dev, struct sk_buff *skb)
{
- struct lapb_cb *lapb = lapb_tokentostruct(token);
+ struct lapb_cb *lapb = lapb_devtostruct(dev);
int rc = LAPB_BADTOKEN;
if (lapb) {
void lapb_connect_confirmation(struct lapb_cb *lapb, int reason)
{
if (lapb->callbacks.connect_confirmation)
- lapb->callbacks.connect_confirmation(lapb->token, reason);
+ lapb->callbacks.connect_confirmation(lapb->dev, reason);
}
void lapb_connect_indication(struct lapb_cb *lapb, int reason)
{
if (lapb->callbacks.connect_indication)
- lapb->callbacks.connect_indication(lapb->token, reason);
+ lapb->callbacks.connect_indication(lapb->dev, reason);
}
void lapb_disconnect_confirmation(struct lapb_cb *lapb, int reason)
{
if (lapb->callbacks.disconnect_confirmation)
- lapb->callbacks.disconnect_confirmation(lapb->token, reason);
+ lapb->callbacks.disconnect_confirmation(lapb->dev, reason);
}
void lapb_disconnect_indication(struct lapb_cb *lapb, int reason)
{
if (lapb->callbacks.disconnect_indication)
- lapb->callbacks.disconnect_indication(lapb->token, reason);
+ lapb->callbacks.disconnect_indication(lapb->dev, reason);
}
int lapb_data_indication(struct lapb_cb *lapb, struct sk_buff *skb)
{
if (lapb->callbacks.data_indication)
- return lapb->callbacks.data_indication(lapb->token, skb);
+ return lapb->callbacks.data_indication(lapb->dev, skb);
kfree_skb(skb);
return NET_RX_CN_HIGH; /* For now; must be != NET_RX_DROP */
int used = 0;
if (lapb->callbacks.data_transmit) {
- lapb->callbacks.data_transmit(lapb->token, skb);
+ lapb->callbacks.data_transmit(lapb->dev, skb);
used = 1;
}
case LAPB_SABM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 RX SABM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S0 -> S3\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_SABME:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 RX SABME(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S0 -> S3\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
case LAPB_DISC:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S0 RX DISC(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
printk(KERN_DEBUG "lapb: (%p) S0 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_SABM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 RX SABM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_SABME:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 RX SABME(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
case LAPB_DISC:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 RX DISC(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
printk(KERN_DEBUG "lapb: (%p) S1 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
case LAPB_UA:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 RX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (frame->pf) {
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S1 -> S3\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_stop_t1timer(lapb);
lapb_stop_t2timer(lapb);
case LAPB_DM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S1 RX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (frame->pf) {
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S1 -> S0\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_clear_queues(lapb);
lapb->state = LAPB_STATE_0;
case LAPB_SABME:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S2 RX {SABM,SABME}(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
printk(KERN_DEBUG "lapb: (%p) S2 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
case LAPB_DISC:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S2 RX DISC(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
printk(KERN_DEBUG "lapb: (%p) S2 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_UA:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S2 RX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (frame->pf) {
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S2 -> S0\n",
- lapb->token);
+ lapb->dev);
#endif
lapb->state = LAPB_STATE_0;
lapb_start_t1timer(lapb);
case LAPB_DM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S2 RX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (frame->pf) {
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S2 -> S0\n",
- lapb->token);
+ lapb->dev);
#endif
lapb->state = LAPB_STATE_0;
lapb_start_t1timer(lapb);
case LAPB_RR:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S2 RX {I,REJ,RNR,RR}"
- "(%d)\n", lapb->token, frame->pf);
+ "(%d)\n", lapb->dev, frame->pf);
printk(KERN_DEBUG "lapb: (%p) S2 RX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (frame->pf)
lapb_send_control(lapb, LAPB_DM, frame->pf,
case LAPB_SABM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX SABM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_SABME:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX SABME(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
case LAPB_DISC:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX DISC(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S0\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_clear_queues(lapb);
lapb_send_control(lapb, LAPB_UA, frame->pf,
case LAPB_DM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S0\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_clear_queues(lapb);
lapb->state = LAPB_STATE_0;
case LAPB_RNR:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX RNR(%d) R%d\n",
- lapb->token, frame->pf, frame->nr);
+ lapb->dev, frame->pf, frame->nr);
#endif
lapb->condition |= LAPB_PEER_RX_BUSY_CONDITION;
lapb_check_need_response(lapb, frame->cr, frame->pf);
lapb_transmit_frmr(lapb);
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_start_t1timer(lapb);
lapb_stop_t2timer(lapb);
case LAPB_RR:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX RR(%d) R%d\n",
- lapb->token, frame->pf, frame->nr);
+ lapb->dev, frame->pf, frame->nr);
#endif
lapb->condition &= ~LAPB_PEER_RX_BUSY_CONDITION;
lapb_check_need_response(lapb, frame->cr, frame->pf);
lapb_transmit_frmr(lapb);
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_start_t1timer(lapb);
lapb_stop_t2timer(lapb);
case LAPB_REJ:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX REJ(%d) R%d\n",
- lapb->token, frame->pf, frame->nr);
+ lapb->dev, frame->pf, frame->nr);
#endif
lapb->condition &= ~LAPB_PEER_RX_BUSY_CONDITION;
lapb_check_need_response(lapb, frame->cr, frame->pf);
lapb_transmit_frmr(lapb);
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_start_t1timer(lapb);
lapb_stop_t2timer(lapb);
case LAPB_I:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX I(%d) S%d R%d\n",
- lapb->token, frame->pf, frame->ns, frame->nr);
+ lapb->dev, frame->pf, frame->ns, frame->nr);
#endif
if (!lapb_validate_nr(lapb, frame->nr)) {
lapb->frmr_data = *frame;
lapb_transmit_frmr(lapb);
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_start_t1timer(lapb);
lapb_stop_t2timer(lapb);
#if LAPB_DEBUG > 1
printk(KERN_DEBUG
"lapb: (%p) S3 TX REJ(%d) R%d\n",
- lapb->token, frame->pf, lapb->vr);
+ lapb->dev, frame->pf, lapb->vr);
#endif
lapb->condition |= LAPB_REJECT_CONDITION;
lapb_send_control(lapb, LAPB_REJ,
case LAPB_FRMR:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX FRMR(%d) %02X "
- "%02X %02X %02X %02X\n", lapb->token, frame->pf,
+ "%02X %02X %02X %02X\n", lapb->dev, frame->pf,
skb->data[0], skb->data[1], skb->data[2],
skb->data[3], skb->data[4]);
#endif
lapb_establish_data_link(lapb);
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S3 -> S1\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_requeue_frames(lapb);
lapb->state = LAPB_STATE_1;
case LAPB_ILLEGAL:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S3 RX ILLEGAL(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb->frmr_data = *frame;
lapb->frmr_type = LAPB_FRMR_W;
lapb_transmit_frmr(lapb);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S3 -> S4\n", lapb->dev);
#endif
lapb_start_t1timer(lapb);
lapb_stop_t2timer(lapb);
case LAPB_SABM:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 RX SABM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S4 -> S3\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
case LAPB_SABME:
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 RX SABME(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 TX UA(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
#if LAPB_DEBUG > 0
printk(KERN_DEBUG "lapb: (%p) S4 -> S3\n",
- lapb->token);
+ lapb->dev);
#endif
lapb_send_control(lapb, LAPB_UA, frame->pf,
LAPB_RESPONSE);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S4 TX DM(%d)\n",
- lapb->token, frame->pf);
+ lapb->dev, frame->pf);
#endif
lapb_send_control(lapb, LAPB_DM, frame->pf,
LAPB_RESPONSE);
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX I(%d) S%d R%d\n",
- lapb->token, lapb->state, poll_bit, lapb->vs, lapb->vr);
+ lapb->dev, lapb->state, poll_bit, lapb->vs, lapb->vr);
#endif
lapb_transmit_buffer(lapb, skb, LAPB_COMMAND);
#if LAPB_DEBUG > 2
printk(KERN_DEBUG "lapb: (%p) S%d TX %02X %02X %02X\n",
- lapb->token, lapb->state,
+ lapb->dev, lapb->state,
skb->data[0], skb->data[1], skb->data[2]);
#endif
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX SABME(1)\n",
- lapb->token, lapb->state);
+ lapb->dev, lapb->state);
#endif
lapb_send_control(lapb, LAPB_SABME, LAPB_POLLON, LAPB_COMMAND);
} else {
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX SABM(1)\n",
- lapb->token, lapb->state);
+ lapb->dev, lapb->state);
#endif
lapb_send_control(lapb, LAPB_SABM, LAPB_POLLON, LAPB_COMMAND);
}
{
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX RR(1) R%d\n",
- lapb->token, lapb->state, lapb->vr);
+ lapb->dev, lapb->state, lapb->vr);
#endif
lapb_send_control(lapb, LAPB_RR, LAPB_POLLON, LAPB_RESPONSE);
{
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX RR(0) R%d\n",
- lapb->token, lapb->state, lapb->vr);
+ lapb->dev, lapb->state, lapb->vr);
#endif
lapb_send_control(lapb, LAPB_RR, LAPB_POLLOFF, LAPB_RESPONSE);
#if LAPB_DEBUG > 2
printk(KERN_DEBUG "lapb: (%p) S%d RX %02X %02X %02X\n",
- lapb->token, lapb->state,
+ lapb->dev, lapb->state,
skb->data[0], skb->data[1], skb->data[2]);
#endif
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX FRMR %02X %02X %02X %02X %02X\n",
- lapb->token, lapb->state,
+ lapb->dev, lapb->state,
skb->data[1], skb->data[2], skb->data[3],
skb->data[4], skb->data[5]);
#endif
#if LAPB_DEBUG > 1
printk(KERN_DEBUG "lapb: (%p) S%d TX FRMR %02X %02X %02X\n",
- lapb->token, lapb->state, skb->data[1],
+ lapb->dev, lapb->state, skb->data[1],
skb->data[2], skb->data[3]);
#endif
}
lapb->state = LAPB_STATE_0;
lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S1 -> S0\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S1 -> S0\n", lapb->dev);
#endif
return;
} else {
lapb->n2count++;
if (lapb->mode & LAPB_EXTENDED) {
#if LAPB_DEBUG > 1
- printk(KERN_DEBUG "lapb: (%p) S1 TX SABME(1)\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S1 TX SABME(1)\n", lapb->dev);
#endif
lapb_send_control(lapb, LAPB_SABME, LAPB_POLLON, LAPB_COMMAND);
} else {
#if LAPB_DEBUG > 1
- printk(KERN_DEBUG "lapb: (%p) S1 TX SABM(1)\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S1 TX SABM(1)\n", lapb->dev);
#endif
lapb_send_control(lapb, LAPB_SABM, LAPB_POLLON, LAPB_COMMAND);
}
lapb->state = LAPB_STATE_0;
lapb_disconnect_confirmation(lapb, LAPB_TIMEDOUT);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S2 -> S0\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S2 -> S0\n", lapb->dev);
#endif
return;
} else {
lapb->n2count++;
#if LAPB_DEBUG > 1
- printk(KERN_DEBUG "lapb: (%p) S2 TX DISC(1)\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S2 TX DISC(1)\n", lapb->dev);
#endif
lapb_send_control(lapb, LAPB_DISC, LAPB_POLLON, LAPB_COMMAND);
}
lapb_stop_t2timer(lapb);
lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S3 -> S0\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S3 -> S0\n", lapb->dev);
#endif
return;
} else {
lapb->state = LAPB_STATE_0;
lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
#if LAPB_DEBUG > 0
- printk(KERN_DEBUG "lapb: (%p) S4 -> S0\n", lapb->token);
+ printk(KERN_DEBUG "lapb: (%p) S4 -> S0\n", lapb->dev);
#endif
return;
} else {
struct fifo_sched_data *q = (void*)sch->data;
if (opt == NULL) {
+ unsigned int limit = sch->dev->tx_queue_len ? : 1;
+
if (sch->ops == &bfifo_qdisc_ops)
- q->limit = sch->dev->tx_queue_len*sch->dev->mtu;
+ q->limit = limit*sch->dev->mtu;
else
- q->limit = sch->dev->tx_queue_len;
+ q->limit = limit;
} else {
struct tc_fifo_qopt *ctl = RTA_DATA(opt);
if (opt->rta_len < RTA_LENGTH(sizeof(*ctl)))
unsigned long qave=0;
int i=0;
- if (!t->initd && skb_queue_len(&sch->q) < sch->dev->tx_queue_len) {
+ if (!t->initd && skb_queue_len(&sch->q) < (sch->dev->tx_queue_len ? : 1)) {
D2PRINTK("NO GRED Queues setup yet! Enqueued anyway\n");
goto do_enqueue;
}
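Both qdisc hunks above guard against a device whose `tx_queue_len` is 0 (e.g. a software device), which would otherwise yield a queue limit of zero; the GNU `x ? : 1` form clamps it to at least 1. A small sketch of the resulting limit computation — the function name and `bytes_mode` flag are illustrative, not kernel API:

```c
/* Sketch of the fifo limit fallback: bfifo limits by bytes
 * (packets * MTU), pfifo by packet count; either way the base
 * tx_queue_len is clamped to a minimum of 1. */
static unsigned int fifo_limit_sketch(unsigned int tx_queue_len,
				      unsigned int mtu, int bytes_mode)
{
	unsigned int limit = tx_queue_len ? tx_queue_len : 1;

	return bytes_mode ? limit * mtu : limit;
}
```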
/* Is the slot empty? */
if (q->qs[a].qlen == 0) {
+ q->ht[q->hash[a]] = SFQ_DEPTH;
a = q->next[a];
if (a == old_a) {
q->tail = SFQ_DEPTH;
establishment. It is advised to use either HMAC-MD5 or HMAC-SHA1.
config SCTP_HMAC_SHA1
- bool "HMAC-SHA1" if CRYPTO_HMAC=y && CRYPTO_SHA1=y || CRYPTO_SHA1=m
+ bool "HMAC-SHA1"
+ select CRYPTO
+ select CRYPTO_HMAC
+ select CRYPTO_SHA1
help
Enable the use of HMAC-SHA1 during association establishment. It
is advised to use either HMAC-MD5 or HMAC-SHA1.
config SCTP_HMAC_MD5
- bool "HMAC-MD5" if CRYPTO_HMAC=y && CRYPTO_MD5=y || CRYPTO_MD5=m
+ bool "HMAC-MD5"
+ select CRYPTO
+ select CRYPTO_HMAC
+ select CRYPTO_MD5
help
Enable the use of HMAC-MD5 during association establishment. It is
advised to use either HMAC-MD5 or HMAC-SHA1.
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
+#define MAX_KMALLOC_SIZE 131072
/* Storage size needed for map includes 2 headers and then the
* specific needs of in or out streams.
struct sctp_ssnmap *sctp_ssnmap_new(__u16 in, __u16 out, int gfp)
{
struct sctp_ssnmap *retval;
- int order;
-
- order = get_order(sctp_ssnmap_size(in,out));
- retval = (struct sctp_ssnmap *)__get_free_pages(gfp, order);
-
+ int size;
+
+ size = sctp_ssnmap_size(in, out);
+ if (size <= MAX_KMALLOC_SIZE)
+ retval = kmalloc(size, gfp);
+ else
+ retval = (struct sctp_ssnmap *)
+ __get_free_pages(gfp, get_order(size));
if (!retval)
goto fail;
return retval;
fail_map:
- free_pages((unsigned long)retval, order);
+ if (size <= MAX_KMALLOC_SIZE)
+ kfree(retval);
+ else
+ free_pages((unsigned long)retval, get_order(size));
fail:
return NULL;
}
void sctp_ssnmap_free(struct sctp_ssnmap *map)
{
if (map && map->malloced) {
- free_pages((unsigned long)map,
- get_order(sctp_ssnmap_size(map->in.len,
- map->out.len)));
+ int size;
+
+ size = sctp_ssnmap_size(map->in.len, map->out.len);
+ if (size <= MAX_KMALLOC_SIZE)
+ kfree(map);
+ else
+ free_pages((unsigned long)map, get_order(size));
SCTP_DBG_OBJCNT_DEC(ssnmap);
}
}
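The ssnmap hunks above pick the allocator by size: maps at or below `MAX_KMALLOC_SIZE` (131072 bytes) come from kmalloc, larger ones from `__get_free_pages` with an order computed by `get_order()`. A userspace sketch of that order computation, assuming a 4096-byte page — `get_order_sketch` is a stand-in for the kernel helper:

```c
#include <stddef.h>

#define MAX_KMALLOC_SIZE 131072
#define PAGE_SIZE_SKETCH 4096

/* Sketch of get_order(): smallest n such that
 * (PAGE_SIZE << n) >= size, i.e. the power-of-two page count
 * needed to cover the allocation. */
static int get_order_sketch(size_t size)
{
	int order = 0;
	size_t span = PAGE_SIZE_SKETCH;

	while (span < size) {
		span <<= 1;
		order++;
	}
	return order;
}
```

Note the free path must make the same size comparison as the allocation path, which is why both `sctp_ssnmap_free()` and the `fail_map:` label recompute `size` before choosing `kfree()` or `free_pages()`.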
struct rpc_cred *
rpcauth_lookupcred(struct rpc_auth *auth, int taskflags)
{
- struct auth_cred acred = {
- .uid = current->fsuid,
- .gid = current->fsgid,
- .ngroups = current->ngroups,
- .groups = current->groups,
- };
+ struct auth_cred acred;
+ struct rpc_cred *ret;
+
+ get_group_info(current->group_info);
+ acred.uid = current->fsuid;
+ acred.gid = current->fsgid;
+ acred.group_info = current->group_info;
+
dprintk("RPC: looking up %s cred\n",
auth->au_ops->au_name);
- return rpcauth_lookup_credcache(auth, &acred, taskflags);
+ ret = rpcauth_lookup_credcache(auth, &acred, taskflags);
+ put_group_info(current->group_info);
+ return ret;
}
struct rpc_cred *
rpcauth_bindcred(struct rpc_task *task)
{
struct rpc_auth *auth = task->tk_auth;
- struct auth_cred acred = {
- .uid = current->fsuid,
- .gid = current->fsgid,
- .ngroups = current->ngroups,
- .groups = current->groups,
- };
+ struct auth_cred acred;
+ struct rpc_cred *ret;
+
+ get_group_info(current->group_info);
+ acred.uid = current->fsuid;
+ acred.gid = current->fsgid;
+ acred.group_info = current->group_info;
dprintk("RPC: %4d looking up %s cred\n",
task->tk_pid, task->tk_auth->au_ops->au_name);
task->tk_msg.rpc_cred = rpcauth_lookup_credcache(auth, &acred, task->tk_flags);
if (task->tk_msg.rpc_cred == 0)
task->tk_status = -ENOMEM;
- return task->tk_msg.rpc_cred;
+ ret = task->tk_msg.rpc_cred;
+ put_group_info(current->group_info);
+ return ret;
}
void
cred->uc_gid = cred->uc_pgid = 0;
cred->uc_gids[0] = NOGROUP;
} else {
- int groups = acred->ngroups;
+ int groups = acred->group_info->ngroups;
if (groups > NFS_NGROUPS)
groups = NFS_NGROUPS;
cred->uc_puid = current->uid;
cred->uc_pgid = current->gid;
for (i = 0; i < groups; i++)
- cred->uc_gids[i] = (gid_t) acred->groups[i];
+ cred->uc_gids[i] = GROUP_AT(acred->group_info, i);
if (i < NFS_NGROUPS)
cred->uc_gids[i] = NOGROUP;
}
|| cred->uc_pgid != current->gid)
return 0;
- groups = acred->ngroups;
+ groups = acred->group_info->ngroups;
if (groups > NFS_NGROUPS)
groups = NFS_NGROUPS;
for (i = 0; i < groups ; i++)
- if (cred->uc_gids[i] != (gid_t) acred->groups[i])
+ if (cred->uc_gids[i] != GROUP_AT(acred->group_info, i))
return 0;
return 1;
}
if (current_detail && current_index < current_detail->hash_size) {
struct cache_head *ch, **cp;
+ struct cache_detail *d;
write_lock(¤t_detail->hash_lock);
rv = 1;
}
write_unlock(¤t_detail->hash_lock);
- if (ch)
- current_detail->cache_put(ch, current_detail);
- else
+ d = current_detail;
+ if (!ch)
current_index ++;
- }
- spin_unlock(&cache_list_lock);
+ spin_unlock(&cache_list_lock);
+ if (ch)
+ d->cache_put(ch, d);
+ } else
+ spin_unlock(&cache_list_lock);
return rv;
}
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/svcsock.h>
/*
* Get RPC client stats
*/
-int
-rpc_proc_read(char *buffer, char **start, off_t offset, int count,
- int *eof, void *data)
-{
- struct rpc_stat *statp = (struct rpc_stat *) data;
- struct rpc_program *prog = statp->program;
- struct rpc_version *vers;
- int len, i, j;
+static int rpc_proc_show(struct seq_file *seq, void *v)
+{
+ const struct rpc_stat *statp = seq->private;
+ const struct rpc_program *prog = statp->program;
+ int i, j;
- len = sprintf(buffer,
+ seq_printf(seq,
"net %d %d %d %d\n",
statp->netcnt,
statp->netudpcnt,
statp->nettcpcnt,
statp->nettcpconn);
- len += sprintf(buffer + len,
+ seq_printf(seq,
"rpc %d %d %d\n",
statp->rpccnt,
statp->rpcretrans,
statp->rpcauthrefresh);
for (i = 0; i < prog->nrvers; i++) {
- if (!(vers = prog->version[i]))
+ const struct rpc_version *vers = prog->version[i];
+ if (!vers)
continue;
- len += sprintf(buffer + len, "proc%d %d",
+ seq_printf(seq, "proc%d %d",
vers->number, vers->nrprocs);
for (j = 0; j < vers->nrprocs; j++)
- len += sprintf(buffer + len, " %d",
+ seq_printf(seq, " %d",
vers->procs[j].p_count);
- buffer[len++] = '\n';
+ seq_putc(seq, '\n');
}
+ return 0;
+}
- if (offset >= len) {
- *start = buffer;
- *eof = 1;
- return 0;
- }
- *start = buffer + offset;
- if ((len -= offset) > count)
- return count;
- *eof = 1;
- return len;
+static int rpc_proc_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, rpc_proc_show, PDE(inode)->data);
}
+static struct file_operations rpc_proc_fops = {
+ .owner = THIS_MODULE,
+ .open = rpc_proc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
/*
* Get RPC server stats
*/
-int
-svc_proc_read(char *buffer, char **start, off_t offset, int count,
- int *eof, void *data)
-{
- struct svc_stat *statp = (struct svc_stat *) data;
- struct svc_program *prog = statp->program;
- struct svc_procedure *proc;
- struct svc_version *vers;
- int len, i, j;
+void svc_seq_show(struct seq_file *seq, const struct svc_stat *statp)
+{
+ const struct svc_program *prog = statp->program;
+ const struct svc_procedure *proc;
+ const struct svc_version *vers;
+ int i, j;
- len = sprintf(buffer,
+ seq_printf(seq,
"net %d %d %d %d\n",
statp->netcnt,
statp->netudpcnt,
statp->nettcpcnt,
statp->nettcpconn);
- len += sprintf(buffer + len,
+ seq_printf(seq,
"rpc %d %d %d %d %d\n",
statp->rpccnt,
statp->rpcbadfmt+statp->rpcbadauth+statp->rpcbadclnt,
for (i = 0; i < prog->pg_nvers; i++) {
if (!(vers = prog->pg_vers[i]) || !(proc = vers->vs_proc))
continue;
- len += sprintf(buffer + len, "proc%d %d", i, vers->vs_nproc);
+ seq_printf(seq, "proc%d %d", i, vers->vs_nproc);
for (j = 0; j < vers->vs_nproc; j++, proc++)
- len += sprintf(buffer + len, " %d", proc->pc_count);
- buffer[len++] = '\n';
+ seq_printf(seq, " %d", proc->pc_count);
+ seq_putc(seq, '\n');
}
-
- if (offset >= len) {
- *start = buffer;
- *eof = 1;
- return 0;
- }
- *start = buffer + offset;
- if ((len -= offset) > count)
- return count;
- *eof = 1;
- return len;
}
/*
* Register/unregister RPC proc files
*/
static inline struct proc_dir_entry *
-do_register(const char *name, void *data, int issvc)
+do_register(const char *name, void *data, struct file_operations *fops)
{
+ struct proc_dir_entry *ent;
+
rpc_proc_init();
dprintk("RPC: registering /proc/net/rpc/%s\n", name);
- return create_proc_read_entry(name, 0, proc_net_rpc,
- issvc? svc_proc_read : rpc_proc_read,
- data);
+
+ ent = create_proc_entry(name, 0, proc_net_rpc);
+ if (ent) {
+ ent->proc_fops = fops;
+ ent->data = data;
+ }
+ return ent;
}
struct proc_dir_entry *
rpc_proc_register(struct rpc_stat *statp)
{
- return do_register(statp->program->name, statp, 0);
+ return do_register(statp->program->name, statp, &rpc_proc_fops);
}
void
}
struct proc_dir_entry *
-svc_proc_register(struct svc_stat *statp)
+svc_proc_register(struct svc_stat *statp, struct file_operations *fops)
{
- return do_register(statp->program->pg_name, statp, 1);
+ return do_register(statp->program->pg_name, statp, fops);
}
void
dprintk("RPC: registering /proc/net/rpc\n");
if (!proc_net_rpc) {
struct proc_dir_entry *ent;
- ent = proc_mkdir("net/rpc", 0);
+ ent = proc_mkdir("rpc", proc_net);
if (ent) {
ent->owner = THIS_MODULE;
proc_net_rpc = ent;
#ifdef CONFIG_PROC_FS
EXPORT_SYMBOL(rpc_proc_register);
EXPORT_SYMBOL(rpc_proc_unregister);
-EXPORT_SYMBOL(rpc_proc_read);
EXPORT_SYMBOL(svc_proc_register);
EXPORT_SYMBOL(svc_proc_unregister);
-EXPORT_SYMBOL(svc_proc_read);
+EXPORT_SYMBOL(svc_seq_show);
#endif
/* caching... */
&auth_domain_cache,
auth_domain_hash(item),
auth_domain_match(tmp, item),
- kfree(new); if(!set) return NULL;
+ kfree(new); if(!set) {
+ if (new)
+ write_unlock(&auth_domain_cache.hash_lock);
+ else
+ read_unlock(&auth_domain_cache.hash_lock);
+ return NULL;
+ }
new=item; atomic_inc(&new->h.refcnt),
/* no update */,
0 /* no inplace updates */
}
static inline void ip_map_init(struct ip_map *new, struct ip_map *item)
{
- new->m_class = strdup(item->m_class);
+ new->m_class = item->m_class;
+ item->m_class = NULL;
new->m_addr.s_addr = item->m_addr.s_addr;
}
static inline void ip_map_update(struct ip_map *new, struct ip_map *item)
} else
dom = NULL;
- ipm.m_class = class;
+ ipm.m_class = strdup(class);
+ if (ipm.m_class == NULL)
+ return -ENOMEM;
ipm.m_addr.s_addr =
htonl((((((b1<<8)|b2)<<8)|b3)<<8)|b4);
ipm.h.flags = 0;
ip_map_put(&ipmp->h, &ip_map_cache);
if (dom)
auth_domain_put(dom);
+ if (ipm.m_class) kfree(ipm.m_class);
if (!ipmp)
return -ENOMEM;
cache_flush();
if (dom->flavour != RPC_AUTH_UNIX)
return -EINVAL;
udom = container_of(dom, struct unix_domain, h);
- ip.m_class = "nfsd";
+ ip.m_class = strdup("nfsd");
+ if (!ip.m_class)
+ return -ENOMEM;
ip.m_addr = addr;
ip.m_client = udom;
ip.m_add_change = udom->addr_changes+1;
ip.h.expiry_time = NEVER;
ipmp = ip_map_lookup(&ip, 1);
+ if (ip.m_class) kfree(ip.m_class);
if (ipmp) {
ip_map_put(&ipmp->h, &ip_map_cache);
return 0;
if (slen > 16 || (len -= (slen + 2)*4) < 0)
goto badcred;
for (i = 0; i < slen; i++)
- if (i < NGROUPS)
+ if (i < SVC_CRED_NGROUPS)
cred->cr_groups[i] = ntohl(svc_getu32(argv));
else
svc_getu32(argv);
- if (i < NGROUPS)
+ if (i < SVC_CRED_NGROUPS)
cred->cr_groups[i] = NOGROUP;
if (svc_getu32(argv) != RPC_AUTH_NULL || svc_getu32(argv) != 0) {
--- /dev/null
+#!/bin/sh
+#
+# gcc-version gcc-command
+#
+# Prints the gcc version of `gcc-command' in a canonical 4-digit form
+# such as `0295' for gcc-2.95, `0303' for gcc-3.3, etc.
+#
+
+compiler="$*"
+
+MAJOR=$(echo __GNUC__ | $compiler -E -xc - | tail -n 1)
+MINOR=$(echo __GNUC_MINOR__ | $compiler -E -xc - | tail -n 1)
+printf "%02d%02d\\n" $MAJOR $MINOR
+
endif
host-progs := lxdialog
-always := $(host-progs)
+always := ncurses $(host-progs)
lxdialog-objs := checklist.o menubox.o textbox.o yesno.o inputbox.o \
util.o lxdialog.o msgbox.o
-first_rule: ncurses
-
-.PHONY: ncurses
-ncurses:
+.PHONY: $(obj)/ncurses
+$(obj)/ncurses:
@echo "main() {}" > lxtemp.c
@if $(HOSTCC) lxtemp.c $(HOST_LOADLIBES); then \
rm -f lxtemp.c a.out; \
echo -e "\007" ;\
echo ">> Unable to find the Ncurses libraries." ;\
echo ">>" ;\
- echo ">> You must have Ncurses installed in order" ;\
+ echo ">> You must install ncurses-devel in order" ;\
echo ">> to use 'make menuconfig'" ;\
echo ;\
exit 1 ;\
echo "BuildRoot: /var/tmp/%{name}-%{PACKAGE_VERSION}-root"
echo "Provides: $PROVIDES"
echo "%define __spec_install_post /usr/lib/rpm/brp-compress || :"
+echo "%define debug_package %{nil}"
echo ""
echo "%description"
echo "The Linux Kernel, the operating system core itself"
return 0;
}
-static int dummy_task_setgroups (int gidsetsize, gid_t * grouplist)
+static int dummy_task_setgroups (struct group_info *group_info)
{
return 0;
}
obj-$(CONFIG_SECURITY_SELINUX) := selinux.o ss/
-selinux-y := avc.o hooks.o selinuxfs.o
+selinux-y := avc.o hooks.o selinuxfs.o netlink.o
selinux-$(CONFIG_SECURITY_NETWORK) += netif.o
return task_has_perm(current, p, PROCESS__GETSESSION);
}
-static int selinux_task_setgroups(int gidsetsize, gid_t *grouplist)
+static int selinux_task_setgroups(struct group_info *group_info)
{
/* See the comment for setuid above. */
return 0;
--- /dev/null
+/*
+ * Netlink event notifications for SELinux.
+ *
+ * Author: James Morris <jmorris@redhat.com>
+ *
+ * Copyright (C) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2,
+ * as published by the Free Software Foundation.
+ */
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+#include <linux/selinux_netlink.h>
+
+static struct sock *selnl;
+
+static int selnl_msglen(int msgtype)
+{
+ int ret = 0;
+
+ switch (msgtype) {
+ case SELNL_MSG_SETENFORCE:
+ ret = sizeof(struct selnl_msg_setenforce);
+ break;
+
+ case SELNL_MSG_POLICYLOAD:
+ ret = sizeof(struct selnl_msg_policyload);
+ break;
+
+ default:
+ BUG();
+ }
+ return ret;
+}
+
+static void selnl_add_payload(struct nlmsghdr *nlh, int len, int msgtype, void *data)
+{
+ switch (msgtype) {
+ case SELNL_MSG_SETENFORCE: {
+ struct selnl_msg_setenforce *msg = NLMSG_DATA(nlh);
+
+ memset(msg, 0, len);
+ msg->val = *((int *)data);
+ break;
+ }
+
+ case SELNL_MSG_POLICYLOAD: {
+ struct selnl_msg_policyload *msg = NLMSG_DATA(nlh);
+
+ memset(msg, 0, len);
+ msg->seqno = *((u32 *)data);
+ break;
+ }
+
+ default:
+ BUG();
+ }
+}
+
+static void selnl_notify(int msgtype, void *data)
+{
+ int len;
+ unsigned char *tmp;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+
+ len = selnl_msglen(msgtype);
+
+ skb = alloc_skb(NLMSG_SPACE(len), GFP_USER);
+ if (!skb)
+ goto oom;
+
+ tmp = skb->tail;
+ nlh = NLMSG_PUT(skb, 0, 0, msgtype, len);
+ selnl_add_payload(nlh, len, msgtype, data);
+ nlh->nlmsg_len = skb->tail - tmp;
+ netlink_broadcast(selnl, skb, 0, SELNL_GRP_AVC, GFP_USER);
+out:
+ return;
+
+nlmsg_failure:
+ kfree_skb(skb);
+oom:
+ printk(KERN_ERR "SELinux: OOM in %s\n", __FUNCTION__);
+ goto out;
+}
+
+void selnl_notify_setenforce(int val)
+{
+ selnl_notify(SELNL_MSG_SETENFORCE, &val);
+}
+
+void selnl_notify_policyload(u32 seqno)
+{
+ selnl_notify(SELNL_MSG_POLICYLOAD, &seqno);
+}
+
+static int __init selnl_init(void)
+{
+ selnl = netlink_kernel_create(NETLINK_SELINUX, NULL);
+ if (selnl == NULL)
+ panic("SELinux: Cannot create netlink socket.");
+ netlink_set_nonroot(NETLINK_SELINUX, NL_NONROOT_RECV);
+ return 0;
+}
+
+__initcall(selnl_init);
#include "security.h"
#include "objsec.h"
+extern void selnl_notify_setenforce(int val);
+
/* Check whether a task is allowed to use a security operation. */
int task_has_security(struct task_struct *tsk,
u32 perms)
return -ENOMEM;
memset(page, 0, PAGE_SIZE);
- length = snprintf(page, PAGE_SIZE, "%d", selinux_enforcing);
+ length = scnprintf(page, PAGE_SIZE, "%d", selinux_enforcing);
if (length < 0) {
free_page((unsigned long)page);
return length;
selinux_enforcing = new_value;
if (selinux_enforcing)
avc_ss_reset(0);
+ selnl_notify_setenforce(selinux_enforcing);
}
length = count;
out:
return -ENOMEM;
memset(page, 0, PAGE_SIZE);
- length = snprintf(page, PAGE_SIZE, "%u", POLICYDB_VERSION);
+ length = scnprintf(page, PAGE_SIZE, "%u", POLICYDB_VERSION);
if (length < 0) {
free_page((unsigned long)page);
return length;
if (length < 0)
goto out2;
- length = snprintf(buf, PAYLOAD_SIZE, "%x %x %x %x %u",
+ length = scnprintf(buf, PAYLOAD_SIZE, "%x %x %x %x %u",
avd.allowed, avd.decided,
avd.auditallow, avd.auditdeny,
avd.seqno);
#include "services.h"
#include "mls.h"
+extern void selnl_notify_policyload(u32 seqno);
+
static rwlock_t policy_rwlock = RW_LOCK_UNLOCKED;
#define POLICY_RDLOCK read_lock(&policy_rwlock)
#define POLICY_WRLOCK write_lock_irq(&policy_rwlock)
sidtab_destroy(&oldsidtab);
avc_ss_reset(seqno);
+ selnl_notify_policyload(seqno);
return 0;
if (buffer->stop || buffer->error)
return 0;
va_start(args, fmt);
- res = vsnprintf(sbuffer, sizeof(sbuffer), fmt, args);
+ res = vscnprintf(sbuffer, sizeof(sbuffer), fmt, args);
va_end(args);
if (buffer->size + res >= buffer->len) {
buffer->stop = 1;
The number of ports to be created can be specified via the module
parameter "ports". For example, to create four ports, add the
- following option in /etc/modules.conf:
+ following option in /etc/modprobe.conf:
option snd-seq-dummy ports=4
********************************************************************/
#include <linux/version.h>
-#if LINUX_VERSION_CODE < 0x020101
-# define LINUX20
-#endif
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/mm.h>
-#ifdef LINUX20
-# include <linux/major.h>
-# include <linux/fs.h>
-# include <linux/sound.h>
-# include <asm/segment.h>
-# include "sound_config.h"
-#else
-# include <linux/init.h>
-# include <asm/io.h>
-# include <asm/uaccess.h>
-# include <linux/spinlock.h>
-#endif
+#include <linux/init.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <linux/spinlock.h>
#include <asm/irq.h>
#include "msnd.h"
#include <linux/module.h>
#include <linux/version.h>
-#if LINUX_VERSION_CODE > 131328
-#define LINUX21X
-#endif
-
#ifdef __KERNEL__
#include <linux/utsname.h>
#include <linux/string.h>
const char name[] = "TRAILER!!!";
sprintf(s, "%s%08X%08X%08lX%08lX%08X%08lX"
- "%08X%08X%08X%08X%08X%08ZX%08X",
+ "%08X%08X%08X%08X%08X%08X%08X",
"070701", /* magic */
0, /* ino */
0, /* mode */
0, /* minor */
0, /* rmajor */
0, /* rminor */
- strlen(name) + 1, /* namesize */
+ (unsigned)strlen(name) + 1, /* namesize */
0); /* chksum */
push_hdr(s);
push_rest(name);
time_t mtime = time(NULL);
sprintf(s,"%s%08X%08X%08lX%08lX%08X%08lX"
- "%08X%08X%08X%08X%08X%08ZX%08X",
+ "%08X%08X%08X%08X%08X%08X%08X",
"070701", /* magic */
ino++, /* ino */
S_IFDIR | mode, /* mode */
1, /* minor */
0, /* rmajor */
0, /* rminor */
- strlen(name) + 1, /* namesize */
+ (unsigned)strlen(name) + 1,/* namesize */
0); /* chksum */
push_hdr(s);
push_rest(name);
mode |= S_IFCHR;
sprintf(s,"%s%08X%08X%08lX%08lX%08X%08lX"
- "%08X%08X%08X%08X%08X%08ZX%08X",
+ "%08X%08X%08X%08X%08X%08X%08X",
"070701", /* magic */
ino++, /* ino */
mode, /* mode */
1, /* minor */
maj, /* rmajor */
min, /* rminor */
- strlen(name) + 1, /* namesize */
+ (unsigned)strlen(name) + 1,/* namesize */
0); /* chksum */
push_hdr(s);
push_rest(name);
}
sprintf(s,"%s%08X%08X%08lX%08lX%08X%08lX"
- "%08X%08X%08X%08X%08X%08ZX%08X",
+ "%08X%08X%08X%08X%08X%08X%08X",
"070701", /* magic */
ino++, /* ino */
mode, /* mode */
1, /* minor */
0, /* rmajor */
0, /* rminor */
- strlen(location) + 1, /* namesize */
+ (unsigned)strlen(location) + 1,/* namesize */
0); /* chksum */
push_hdr(s);
push_string(location);