Range: 0 - 8192
Default: 64
+ hpet64 [X86-64,HPET] enable 64-bit mode of the HPET timer (bnc#456700)
+
com20020= [HW,NET] ARCnet - COM20020 chipset
Format:
<io>[,<irq>[,<nodeID>[,<backplane>[,<ckp>[,<timeout>]]]]]
The filter can be disabled or changed to another
driver later using sysfs.
+ drm_kms_helper.edid_firmware=[<connector>:]<file>
+ Broken monitors, graphics adapters and KVMs may
+ send no or incorrect EDID data sets. This parameter
+ allows specifying an EDID data set in the
+ /lib/firmware directory that is used instead.
+ A generic built-in EDID data set is used if one of
+ edid/1024x768.bin, edid/1280x1024.bin,
+ edid/1680x1050.bin, or edid/1920x1080.bin is given
+ and no file with that name exists. Details and
+ instructions on how to build your own EDID data are
+ available in Documentation/EDID/HOWTO.txt. An EDID
+ data set is used only for a particular connector
+ if its name and a colon are prepended to the EDID
+ name.
+
dscc4.setup= [NET]
earlycon= [KNL] Output early console device and options.
gpt [EFI] Forces disk with valid GPT signature but
invalid Protective MBR to be treated as GPT.
- guestdev= [PCI,ACPI,XEN]
- Format: {<device path>|<sbdf>}][,{<device path>|<sbdf>}[,...]]
- Format of device path: <hid>[:<uid>]-<dev>.<func>[-<dev>.<func>[,...]][+iomul]
- Format of sbdf: [<segment>:]<bus>:<dev>.<func>[+iomul]
- Specifies PCI device for guest domain.
- If PCI-PCI bridge is specified, all PCI devices
- behind PCI-PCI bridge are reserved.
- +iomul means that this PCI function will share
- IO ports with other +iomul functions under same
- switch. NOTE: if +iomul is specfied, all the functions
- of the device will share IO ports.
-
- guestiomuldev= [PCI,ACPI,XEN]
- Format: [sbd][,<sbd>][,...]
- Format of sbdf: [<segment>:]<bus>:<dev>
- Note: function shouldn't be specified.
- Specifies PCI device for IO port multiplexing driver.
-
hashdist= [KNL,NUMA] Large hashes allocated during boot
are distributed across NUMA nodes. Defaults on
for 64-bit NUMA, off otherwise.
controller
i8042.nopnp [HW] Don't use ACPIPnP / PnPBIOS to discover KBD/AUX
controllers
- i8042.notimeout [HW] Ignore timeout condition signalled by conroller
+ i8042.notimeout [HW] Ignore timeout condition signalled by controller
i8042.reset [HW] Reset the controller during init and cleanup
i8042.unlock [HW] Unlock (ignore) the keylock
no_x2apic_optout
BIOS x2APIC opt-out request will be ignored
- inttest= [IA-64]
-
iomem= Disable strict checking of access to MMIO memory
strict regions from userspace.
relaxed
of returning the full 64-bit number.
The default is to return 64-bit inode numbers.
+ nfs.max_session_slots=
+ [NFSv4.1] Sets the maximum number of session slots
+ the client will attempt to negotiate with the server.
+ This limits the number of simultaneous RPC requests
+ that the client can send to the NFSv4.1 server.
+ Note that there is little point in setting this
+ value higher than the max_tcp_slot_table_limit.
+
nfs.nfs4_disable_idmapping=
[NFSv4] When set to the default of '1', this option
ensures that both the RPC level authentication
back to using the idmapper.
To turn off this behaviour, set the value to '0'.
+ nfs.send_implementation_id=
+ [NFSv4.1] Send client implementation identification
+ information in exchange_id requests.
+ If zero, no implementation identification information
+ will be sent.
+ The default is to send the implementation identification
+ information.
+
+ nfsd.nfs4_disable_idmapping=
+ [NFSv4] When set to the default of '1', the NFSv4
+ server will return only numeric uids and gids to
+ clients using auth_sys, and will accept numeric uids
+ and gids from such clients. This is intended to ease
+ migration from NFSv2/v3.
+
+ objlayoutdriver.osd_login_prog=
+ [NFS] [OBJLAYOUT] Sets the pathname of the program
+ used to automatically discover and log in to new
+ osd-targets. See Documentation/filesystems/pnfs.txt
+ for further details.
+
nmi_debug= [KNL,AVR32,SH] Specify one or more actions to take
when a NMI is triggered.
Format: [state][,regs][,debounce][,die]
shutdown the other cpus. Instead use the REBOOT_VECTOR
irq.
+ nomodule Disable module load
+
nopat [X86] Disable PAT (page attribute table extension of
pagetables) support.
the default.
off: Turn ECRC off
on: Turn ECRC on.
- realloc reallocate PCI resources if allocations done by BIOS
- are erroneous.
-
- pci_reserve= [PCI]
- Format: [<sbdf>[+IO<size>][+MEM<size>]][,<sbdf>...]
- Format of sbdf: [<segment>:]<bus>:<dev>.<func>
- Specifies the least reserved io size or memory size
- which is assigned to PCI bridge even when no child
- pci device exists. This is useful with PCI hotplug.
+ realloc= Enable/disable reallocating PCI bridge resources
+ if allocations done by BIOS are too small to
+ accommodate resources required by all child
+ devices.
+ off: Turn realloc off
+ on: Turn realloc on
+ realloc same as realloc=on
+ noari do not use PCIe ARI.
pcie_aspm= [PCIE] Forcibly enable or disable PCIe Active State Power
Management.
force Enable ASPM even on devices that claim not to support it.
WARNING: Forcing ASPM on may cause system lockups.
+ pcie_hp= [PCIE] PCI Express Hotplug driver options:
+ nomsi Do not use MSI for PCI Express Native Hotplug (this
+ makes all PCIe ports use INTx for hotplug services).
+
pcie_ports= [PCIE] PCIe ports handling:
auto Ask the BIOS whether or not to use native PCIe services
associated with PCIe ports (PME, hot-plug, AER). Use
Run specified binary instead of /init from the ramdisk,
used for early userspace startup. See initrd.
- reassign_resources [PCI,ACPI,XEN]
- Use guestdev= parameter to reassign device's
- resources, or specify =all here.
-
reboot= [BUGS=X86-32,BUGS=ARM,BUGS=IA-64] Rebooting mode
Format: <reboot_mode>[,<reboot_mode2>[,...]]
See arch/*/kernel/reboot.c or arch/*/kernel/process.c
For more information see Documentation/vm/slub.txt.
slub_min_order= [MM, SLUB]
- Determines the mininum page order for slabs. Must be
+ Determines the minimum page order for slabs. Must be
lower than slub_max_order.
For more information see Documentation/vm/slub.txt.
threadirqs [KNL]
Force threading of all interrupt handlers except those
- marked explicitely IRQF_NO_THREAD.
+ marked explicitly IRQF_NO_THREAD.
topology= [S390]
Format: {off | on}
to facilitate early boot debugging.
See also Documentation/trace/events.txt
+ transparent_hugepage=
+ [KNL]
+ Format: [always|madvise|never]
+ Can be used to control the default behavior of the system
+ with respect to transparent hugepages.
+ See Documentation/vm/transhuge.txt for more details.
+
tsc= Disable clocksource stability checks for TSC.
Format: <string>
[x86] reliable: mark tsc clocksource as reliable, this
unknown_nmi_panic
[X86] Cause panic on unknown NMI.
+ unsupported Allow loading of unsupported kernel modules:
+ 0 = only allow supported modules,
+ 1 = warn when loading unsupported modules,
+ 2 = don't warn.
+
+ CONFIG_ENTERPRISE_SUPPORT must be enabled for this
+ to have any effect.
+
usbcore.authorized_default=
[USB] Default USB device authorization:
(default -1 = authorized except for wireless USB,
VERSION = 3
- PATCHLEVEL = 3
+ PATCHLEVEL = 4
SUBLEVEL = 0
- EXTRAVERSION =
+ EXTRAVERSION = -rc1
NAME = Saber-toothed Squirrel
# *DOCUMENTATION*
KBUILD_CHECKSRC = 0
endif
+# Call message checker as part of the C compilation
+#
+# Use 'make D=1' to enable checking
+# Use 'make D=2' to create the message catalog
+
+ifdef D
+ ifeq ("$(origin D)", "command line")
+ KBUILD_KMSG_CHECK = $(D)
+ endif
+endif
+ifndef KBUILD_KMSG_CHECK
+ KBUILD_KMSG_CHECK = 0
+endif
+
# Use make M=dir to specify directory of external module to build
# Old syntax make ... SUBDIRS=$PWD is still supported
# Setting the environment variable KBUILD_EXTMOD takes precedence
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
-Wbitwise -Wno-return-void $(CF)
+KMSG_CHECK = $(srctree)/scripts/kmsg-doc
CFLAGS_MODULE =
AFLAGS_MODULE =
LDFLAGS_MODULE =
KBUILD_CFLAGS_MODULE := -DMODULE
KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds
+# Warn about unsupported modules in kernels built inside Autobuild
+ifneq ($(wildcard /.buildenv),)
+CFLAGS += -DUNSUPPORTED_MODULES=2
+endif
+
# Read KERNELRELEASE from include/config/kernel.release (if it exists)
KERNELRELEASE = $(shell cat include/config/kernel.release 2> /dev/null)
KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
export KBUILD_ARFLAGS
+export KBUILD_KMSG_CHECK KMSG_CHECK
# When compiling out-of-tree modules, put MODVERDIR in the module
# tree rather than in the kernel tree. The kernel tree might
endif
endif
+ifdef CONFIG_UNWIND_INFO
+KBUILD_CFLAGS += -fasynchronous-unwind-tables
+LDFLAGS_vmlinux += --eh-frame-hdr
+endif
+
ifdef CONFIG_DEBUG_INFO
KBUILD_CFLAGS += -g
KBUILD_AFLAGS += -gdwarf-2
# ---------------------------------------------------------------------------
# Firmware install
-INSTALL_FW_PATH=$(INSTALL_MOD_PATH)/lib/firmware
+INSTALL_FW_PATH=$(INSTALL_MOD_PATH)/lib/firmware/$(KERNELRELEASE)
export INSTALL_FW_PATH
PHONY += firmware_install
#
clean: rm-dirs := $(CLEAN_DIRS)
clean: rm-files := $(CLEAN_FILES)
- clean-dirs := $(addprefix _clean_, . $(vmlinux-alldirs) Documentation)
+ clean-dirs := $(addprefix _clean_, . $(vmlinux-alldirs) Documentation samples)
PHONY += $(clean-dirs) clean archclean
$(clean-dirs):
#define MODULES_END (END_MEM)
#define MODULES_VADDR (PHYS_OFFSET)
+ #define XIP_VIRT_ADDR(physaddr) (physaddr)
+
#endif /* !CONFIG_MMU */
/*
#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page)))
#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
+#ifndef CONFIG_ARM_PATCH_PHYS_VIRT
+#ifndef PHYS_OFFSET
+#ifdef PLAT_PHYS_OFFSET
+#define PHYS_OFFSET PLAT_PHYS_OFFSET
+#else
+#define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
+#endif
+#endif
+#endif
+
#ifndef __ASSEMBLY__
/*
#endif
#endif
-#ifndef PHYS_OFFSET
-#ifdef PLAT_PHYS_OFFSET
-#define PHYS_OFFSET PLAT_PHYS_OFFSET
-#else
-#define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
-#endif
-#endif
-
/*
* PFNs are used to describe any physical page; this means
* PFN 0 == physical address 0.
config IA64_XEN_GUEST
bool "Xen guest"
select SWIOTLB
- depends on PARAVIRT_XEN
+ depends on XEN
help
Build a kernel that runs on Xen guest domain. At this moment only
16KB page size is supported.
config SGI_SN
def_bool y if (IA64_SGI_SN2 || IA64_GENERIC)
+ select HAVE_UNSTABLE_SCHED_CLOCK
config IA64_ESI
bool "ESI (Extensible SAL Interface) support"
#include <asm/irq.h>
#include <asm/io.h>
#include <asm/smp.h>
- #include <asm/system.h>
#include <asm/mmu.h>
#include <asm/pgtable.h>
#include <asm/pci.h>
#include <linux/linux_logo.h>
/*
- * Properties whose value is longer than this get excluded from our
- * copy of the device tree. This value does need to be big enough to
- * ensure that we don't lose things like the interrupt-map property
- * on a PCI-PCI bridge.
- */
- #define MAX_PROPERTY_LENGTH (1UL * 1024 * 1024)
-
- /*
* Eventually bump that one up
*/
#define DEVTREE_CHUNK_SIZE 0x100000
static unsigned long __initdata prom_initrd_start, prom_initrd_end;
+static int __initdata prom_no_display;
#ifdef CONFIG_PPC64
static int __initdata prom_iommu_force_on;
static int __initdata prom_iommu_off;
if (RELOC(of_platform) == PLATFORM_POWERMAC)
asm("trap\n");
- /* ToDo: should put up an SRC here on p/iSeries */
+ /* ToDo: should put up an SRC here on pSeries */
call_prom("exit", 0, 0);
for (;;) /* should never get here */
#endif /* CONFIG_CMDLINE */
prom_printf("command line: %s\n", RELOC(prom_cmd_line));
+ opt = strstr(RELOC(prom_cmd_line), RELOC("prom="));
+ if (opt) {
+ opt += 5;
+ while (*opt && *opt == ' ')
+ opt++;
+ if (!strncmp(opt, RELOC("nodisplay"), 9))
+ RELOC(prom_no_display) = 1;
+ }
#ifdef CONFIG_PPC64
opt = strstr(RELOC(prom_cmd_line), RELOC("iommu="));
if (opt) {
/* sanity checks */
if (l == PROM_ERROR)
continue;
- if (l > MAX_PROPERTY_LENGTH) {
- prom_printf("WARNING: ignoring large property ");
- /* It seems OF doesn't null-terminate the path :-( */
- prom_printf("[%s] ", path);
- prom_printf("%s length 0x%x\n", RELOC(pname), l);
- continue;
- }
/* push property head */
dt_push_token(OF_DT_PROP, mem_start, mem_end);
/*
* Initialize display devices
*/
+ if (RELOC(prom_no_display) == 0)
prom_check_displays();
#ifdef CONFIG_PPC64
#include <linux/security.h>
#include <linux/signal.h>
#include <linux/compat.h>
+#include <linux/elf.h>
#include <asm/uaccess.h>
#include <asm/page.h>
#include <asm/pgtable.h>
- #include <asm/system.h>
+ #include <asm/switch_to.h>
+#include "ppc32.h"
+
/*
* does not yet catch signals sent when the child dies.
* in exit.c or in signal.c.
#define FPRINDEX(i) TS_FPRWIDTH * FPRNUMBER(i) * 2 + FPRHALF(i)
#define FPRINDEX_3264(i) (TS_FPRWIDTH * ((i) - PT_FPR0))
+static int compat_ptrace_getsiginfo(struct task_struct *child, compat_siginfo_t __user *data)
+{
+ siginfo_t lastinfo;
+ int error = -ESRCH;
+
+ read_lock(&tasklist_lock);
+ if (likely(child->sighand != NULL)) {
+ error = -EINVAL;
+ spin_lock_irq(&child->sighand->siglock);
+ if (likely(child->last_siginfo != NULL)) {
+ lastinfo = *child->last_siginfo;
+ error = 0;
+ }
+ spin_unlock_irq(&child->sighand->siglock);
+ }
+ read_unlock(&tasklist_lock);
+ if (!error)
+ return copy_siginfo_to_user32(data, &lastinfo);
+ return error;
+}
+
long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
compat_ulong_t caddr, compat_ulong_t cdata)
{
0, PT_REGS_COUNT * sizeof(compat_long_t),
compat_ptr(data));
+ case PTRACE_GETSIGINFO:
+ return compat_ptrace_getsiginfo(child, compat_ptr(data));
+
case PTRACE_GETFPREGS:
case PTRACE_SETFPREGS:
case PTRACE_GETVRREGS:
if (!property)
goto out_put;
if (!strcmp(property, "failsafe") || !strcmp(property, "serial"))
- add_preferred_console("ttyS", 0, NULL);
+ add_preferred_console("ttyS", 0, "115200");
out_put:
of_node_put(node);
}
if (len > 1)
isu_size = iranges[3];
- chrp_mpic = mpic_alloc(np, opaddr, 0, isu_size, 0, " MPIC ");
+ chrp_mpic = mpic_alloc(np, opaddr, MPIC_NO_RESET,
+ isu_size, 0, " MPIC ");
if (chrp_mpic == NULL) {
printk(KERN_ERR "Failed to allocate MPIC structure\n");
goto bail;
BUG_ON(openpic_addr == 0);
/* Setup the openpic driver */
- mpic = mpic_alloc(pSeries_mpic_node, openpic_addr, 0,
- 16, 250, /* isu size, irq count */
- " MPIC ");
+ mpic = mpic_alloc(pSeries_mpic_node, openpic_addr,
+ MPIC_NO_RESET, 16, 0, " MPIC ");
BUG_ON(mpic == NULL);
/* Add ISUs */
switch (action) {
case PSERIES_RECONFIG_ADD:
pci = np->parent->data;
- if (pci)
+ if (pci) {
update_dn_pci_info(np, pci->phb);
+
+ /* Create EEH device for the OF node */
+ eeh_dev_init(np, pci->phb);
+ }
break;
default:
err = NOTIFY_DONE;
fwnmi_init();
+ /* By default, only probe PCI (can be overridden by rtas_pci) */
+ pci_add_flags(PCI_PROBE_ONLY);
+
/* Find and initialize PCI host bridges */
init_pci_config_tokens();
+ eeh_pseries_init();
find_and_init_phbs();
pSeries_reconfig_notifier_register(&pci_dn_reconfig_nb);
eeh_init();
static int __init pSeries_init_panel(void)
{
/* Manually leave the kernel version on the panel. */
- ppc_md.progress("Linux ppc64\n", 0);
+ ppc_md.progress("SUSE Linux\n", 0);
ppc_md.progress(init_utsname()->version, 0);
return 0;
#include <asm/irq_regs.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
- #include <asm/firmware.h>
#include <asm/setjmp.h>
#include <asm/reg.h>
+ #include <asm/debug.h>
#ifdef CONFIG_PPC64
#include <asm/hvcall.h>
static int do_step(struct pt_regs *);
static void bpt_cmds(void);
static void cacheflush(void);
+static void xmon_show_dmesg(void);
static int cpu_cmd(void);
static void csum(void);
static void bootcmds(void);
#endif
"\
C checksum\n\
+ D show dmesg (printk) buffer\n\
d dump bytes\n\
di dump instructions\n\
df dump float values\n\
case 'd':
dump();
break;
+ case 'D':
+ xmon_show_dmesg();
+ break;
case 'l':
symbol_lookup();
break;
printf(" current = 0x%lx\n", current);
#ifdef CONFIG_PPC64
- printf(" paca = 0x%lx\n", get_paca());
+ printf(" paca = 0x%lx\t softe: %d\t irq_happened: 0x%02x\n",
+ local_paca, local_paca->soft_enabled, local_paca->irq_happened);
#endif
if (current) {
printf(" pid = %ld, comm = %s\n",
mfspr(SPRN_DEC), mfspr(SPRN_SPRG2));
printf("sp = "REG" sprg3= "REG"\n", sp, mfspr(SPRN_SPRG3));
printf("toc = "REG" dar = "REG"\n", toc, mfspr(SPRN_DAR));
- #ifdef CONFIG_PPC_ISERIES
- if (firmware_has_feature(FW_FEATURE_ISERIES)) {
- struct paca_struct *ptrPaca;
- struct lppaca *ptrLpPaca;
-
- /* Dump out relevant Paca data areas. */
- printf("Paca: \n");
- ptrPaca = get_paca();
-
- printf(" Local Processor Control Area (LpPaca): \n");
- ptrLpPaca = ptrPaca->lppaca_ptr;
- printf(" Saved Srr0=%.16lx Saved Srr1=%.16lx \n",
- ptrLpPaca->saved_srr0, ptrLpPaca->saved_srr1);
- printf(" Saved Gpr3=%.16lx Saved Gpr4=%.16lx \n",
- ptrLpPaca->saved_gpr3, ptrLpPaca->saved_gpr4);
- printf(" Saved Gpr5=%.16lx \n",
- ptrLpPaca->gpr5_dword.saved_gpr5);
- }
- #endif
return;
}
printf("%s", after);
}
+extern void kdb_syslog_data(char *syslog_data[]);
+#define SYSLOG_WRAP(p) if (p < syslog_data[0]) p = syslog_data[1]-1; \
+ else if (p >= syslog_data[1]) p = syslog_data[0];
+
+static void xmon_show_dmesg(void)
+{
+ char *syslog_data[4], *start, *end, c;
+ int logsize;
+
+ /* syslog_data[0,1] physical start, end+1.
+ * syslog_data[2,3] logical start, end+1.
+ */
+ kdb_syslog_data(syslog_data);
+ if (syslog_data[2] == syslog_data[3])
+ return;
+ logsize = syslog_data[1] - syslog_data[0];
+ start = syslog_data[0] + (syslog_data[2] - syslog_data[0]) % logsize;
+ end = syslog_data[0] + (syslog_data[3] - syslog_data[0]) % logsize;
+
+ /* Do a line at a time (max 200 chars) to reduce overhead */
+ c = '\0';
+ while(1) {
+ char *p;
+ int chars = 0;
+ if (!*start) {
+ while (!*start) {
+ ++start;
+ SYSLOG_WRAP(start);
+ if (start == end)
+ break;
+ }
+ if (start == end)
+ break;
+ }
+ p = start;
+ while (*start && chars < 200) {
+ c = *start;
+ ++chars;
+ ++start;
+ SYSLOG_WRAP(start);
+ if (start == end || c == '\n')
+ break;
+ }
+ if (chars)
+ printf("%.*s", chars, p);
+ if (start == end)
+ break;
+ }
+ if (c != '\n')
+ printf("\n");
+}
+
#ifdef CONFIG_PPC_BOOK3S_64
static void dump_slb(void)
{
static void dump_stab(void)
{
int i;
- unsigned long *tmp = (unsigned long *)get_paca()->stab_addr;
+ unsigned long *tmp = (unsigned long *)local_paca->stab_addr;
printf("Segment table contents of cpu %x\n", smp_processor_id());
static void xmon_init(int enable)
{
- #ifdef CONFIG_PPC_ISERIES
- if (firmware_has_feature(FW_FEATURE_ISERIES))
- return;
- #endif
if (enable) {
__debugger = xmon;
__debugger_ipi = xmon_ipi;
static int __init setup_xmon_sysrq(void)
{
- #ifdef CONFIG_PPC_ISERIES
- if (firmware_has_feature(FW_FEATURE_ISERIES))
- return 0;
- #endif
register_sysrq_key('x', &sysrq_xmon_op);
return 0;
}
config S390
def_bool y
select USE_GENERIC_SMP_HELPERS if SMP
+ select GENERIC_CPU_DEVICES if !SMP
select HAVE_SYSCALL_WRAPPERS
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
prompt "Kernel support for 31 bit emulation"
depends on 64BIT
select COMPAT_BINFMT_ELF
+ select ARCH_WANT_OLD_COMPAT_IPC
help
Select this option if you want to enable your system kernel to
handle system-calls from ELF binaries for 31 bit ESA. This option
virtio transport. If KVM is detected, the virtio console will be
the default console.
+config KMSG_IDS
+ bool "Kernel message numbers"
+ default y
+ help
+ Select this option if you want to include a message number in the
+ prefix for kernel messages issued by the s390 architecture and
+ driver code. See "Documentation/s390/kmsg.txt" for more details.
+
config SECCOMP
def_bool y
prompt "Enable seccomp to safely compute untrusted bytecode"
config X86_32
def_bool !64BIT
- select CLKSRC_I8253 if !XEN
+ select CLKSRC_I8253
config X86_64
def_bool 64BIT
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_IDE
select HAVE_OPROFILE
- select HAVE_PCSPKR_PLATFORM if !XEN_UNPRIVILEGED_GUEST
+ select HAVE_PCSPKR_PLATFORM
select HAVE_PERF_EVENTS
select HAVE_IRQ_WORK
select HAVE_IOREMAP_PROT
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_FTRACE_NMI_ENTER if DYNAMIC_FTRACE
select HAVE_SYSCALL_TRACEPOINTS
- select HAVE_KVM if !XEN
- select HAVE_ARCH_KGDB if !XEN
+ select HAVE_KVM
+ select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
select HAVE_GENERIC_DMA_COHERENT if X86_32
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_DMA_API_DEBUG
select HAVE_KERNEL_GZIP
- select HAVE_KERNEL_BZIP2 if !XEN
- select HAVE_KERNEL_LZMA if !XEN
- select HAVE_KERNEL_XZ if !XEN
- select HAVE_KERNEL_LZO if !XEN
+ select HAVE_KERNEL_BZIP2
+ select HAVE_KERNEL_LZMA
+ select HAVE_KERNEL_XZ
+ select HAVE_KERNEL_LZO
select HAVE_HW_BREAKPOINT
select HAVE_MIXED_BREAKPOINTS_REGS
select PERF_EVENTS
- select HAVE_PERF_EVENTS_NMI if !XEN
+ select HAVE_PERF_EVENTS_NMI
select ANON_INODES
select HAVE_ALIGNED_STRUCT_PAGE if SLUB && !M386
select HAVE_CMPXCHG_LOCAL if !M386
select HAVE_ARCH_JUMP_LABEL
select HAVE_TEXT_POKE_SMP
select HAVE_GENERIC_HARDIRQS
- select HAVE_SPARSE_IRQ
select SPARSE_IRQ
select GENERIC_FIND_FIRST_BIT
select GENERIC_IRQ_PROBE
select IRQ_FORCED_THREADING
select USE_GENERIC_SMP_HELPERS if SMP
select HAVE_BPF_JIT if (X86_64 && NET)
- select CLKEVT_I8253 if !XEN
+ select CLKEVT_I8253
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_IOMAP
+ select DCACHE_WORD_ACCESS if !DEBUG_PAGEALLOC
config INSTRUCTION_DECODER
def_bool (KPROBES || PERF_EVENTS)
config CLOCKSOURCE_WATCHDOG
def_bool y
- depends on !XEN
config GENERIC_CLOCKEVENTS
def_bool y
config ARCH_CLOCKSOURCE_DATA
def_bool y
- depends on X86_64 && !XEN
+ depends on X86_64
config GENERIC_CLOCKEVENTS_BROADCAST
def_bool y
depends on X86_64 || (X86_32 && X86_LOCAL_APIC)
- depends on !XEN
config LOCKDEP_SUPPORT
def_bool y
bool
config NEED_DMA_MAP_STATE
- def_bool (X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB)
+ def_bool (X86_64 || INTEL_IOMMU || DMA_API_DEBUG)
config NEED_SG_DMA_LENGTH
def_bool y
config ARCH_HIBERNATION_POSSIBLE
def_bool y
- depends on !XEN
config ARCH_SUSPEND_POSSIBLE
def_bool y
config X86_HT
def_bool y
- depends on SMP && !XEN
-
- config X86_NO_TSS
- def_bool y
- depends on XEN
-
- config X86_NO_IDT
- def_bool y
- depends on XEN
+ depends on SMP
config X86_32_LAZY_GS
def_bool y
config ARCH_CPU_PROBE_RELEASE
def_bool y
- depends on HOTPLUG_CPU && !XEN
+ depends on HOTPLUG_CPU
source "init/Kconfig"
source "kernel/Kconfig.freezer"
For old smp systems that do not have proper acpi support. Newer systems
(esp with 64bit cpus) with acpi support, MADT and DSDT will override it
- config X86_XEN
- bool "Xen-compatible"
- depends on X86_32
- select XEN
- select X86_PAE
- help
- Choose this option if you plan to run this kernel on top of the
- Xen Hypervisor.
-
config X86_BIGSMP
bool "Support for big SMP systems with more than 8 CPUs"
- depends on X86_32 && SMP && !XEN
+ depends on X86_32 && SMP
---help---
This option is needed for the systems that have more than 8 CPUs
- if X86_32 && !XEN
+ if X86_32
config X86_EXTENDED_PLATFORM
bool "Support for extended (non-PC) x86 platforms"
default y
generic distribution kernel, say Y here - otherwise say N.
endif
- config X86_64_XEN
- bool "Enable Xen compatible kernel"
- depends on X86_64
- select XEN
- help
- This option will compile a kernel compatible with Xen hypervisor
-
- if X86_64 && !XEN
+ if X86_64
config X86_EXTENDED_PLATFORM
bool "Support for extended (non-PC) x86 platforms"
default y
select X86_REBOOTFIXUPS
select OF
select OF_EARLY_FLATTREE
+ select IRQ_DOMAIN
---help---
Select for the Intel CE media processor (CE4100) SOC.
This option compiles in support for the CE4100 SOC for settop
config X86_INTEL_MID
bool
- config X86_MRST
- bool "Moorestown MID platform"
- depends on PCI
- depends on PCI_GOANY
- depends on X86_IO_APIC
- select X86_INTEL_MID
- select SFI
- select DW_APB_TIMER
- select APB_TIMER
- select I2C
- select SPI
- select INTEL_SCU_IPC
- select X86_PLATFORM_DEVICES
- ---help---
- Moorestown is Intel's Low Power Intel Architecture (LPIA) based Moblin
- Internet Device(MID) platform. Moorestown consists of two chips:
- Lincroft (CPU core, graphics, and memory controller) and Langwell IOH.
- Unlike standard x86 PCs, Moorestown does not have many legacy devices
- nor standard legacy replacement devices/features. e.g. Moorestown does
- not contain i8259, i8254, HPET, legacy BIOS, most of the io ports.
-
config X86_MDFLD
bool "Medfield MID platform"
depends on PCI
select SPI
select INTEL_SCU_IPC
select X86_PLATFORM_DEVICES
+ select MFD_INTEL_MSIC
---help---
Medfield is Intel's Low Power Intel Architecture (LPIA) based Moblin
Internet Device(MID) platform.
config X86_32_IRIS
tristate "Eurobraille/Iris poweroff module"
- depends on X86_32 && !XEN
+ depends on X86_32
---help---
The Iris machines from EuroBraille do not have APM or ACPI support
to shut themselves down properly. A special I/O sequence is
config SCHED_OMIT_FRAME_POINTER
def_bool y
prompt "Single-depth WCHAN output"
- depends on X86
+ depends on X86 && !STACK_UNWIND
---help---
Calculate simpler /proc/<PID>/wchan values. If this option
is disabled then wchan values will recurse back to the
menuconfig PARAVIRT_GUEST
bool "Paravirtualized guest support"
- depends on !XEN
---help---
Say Y here to get to see options related to running Linux under
various hypervisors. This option alone does not add any kernel code.
config MEMTEST
bool "Memtest"
- depends on !XEN
---help---
This option adds a kernel parameter 'memtest', which allows memtest
to be set.
config HPET_TIMER
def_bool X86_64
prompt "HPET Timer Support" if X86_32
- depends on !XEN
---help---
Use the IA-PC HPET (High Precision Event Timer) to manage
time in preference to the PIT and RTC, if a HPET is
config DMI
default y
bool "Enable DMI scanning" if EXPERT
- depends on !XEN_UNPRIVILEGED_GUEST
---help---
Enables scanning of DMI to identify machine quirks. Say Y
here unless you have verified that your setup is not
bool "GART IOMMU support" if EXPERT
default y
select SWIOTLB
- depends on X86_64 && PCI && AMD_NB && !X86_64_XEN
+ depends on X86_64 && PCI && AMD_NB
---help---
Support for full DMA access of devices with 32bit memory access only
on systems with more than 3GB. This is usually needed for USB,
config CALGARY_IOMMU
bool "IBM Calgary IOMMU support"
select SWIOTLB
- depends on X86_64 && PCI && !X86_64_XEN && EXPERIMENTAL
+ depends on X86_64 && PCI && EXPERIMENTAL
---help---
Support for hardware IOMMUs in IBM's xSeries x366 and x460
systems. Needed to run systems with more than 3GB of memory
# need this always selected by IOMMU for the VIA workaround
config SWIOTLB
- def_bool y if X86_64 || XEN
- prompt "Software I/O TLB" if XEN_UNPRIVILEGED_GUEST && !XEN_PCIDEV_FRONTEND
+ def_bool y if X86_64
---help---
Support for software bounce buffers used on x86-64 systems
which don't have a hardware IOMMU (e.g. the current generation
config NR_CPUS
int "Maximum number of CPUs" if SMP && !MAXSMP
- range 2 8 if SMP && X86_32 && !X86_BIGSMP && !X86_XEN
+ range 2 8 if SMP && X86_32 && !X86_BIGSMP
range 2 512 if SMP && !MAXSMP
default "1" if !SMP
default "4096" if MAXSMP
default "32" if SMP && (X86_NUMAQ || X86_SUMMIT || X86_BIGSMP || X86_ES7000)
- default "16" if X86_64_XEN
default "8" if SMP
---help---
This allows you to specify the maximum number of CPUs which this
config X86_UP_APIC
bool "Local APIC support on uniprocessors"
- depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !XEN_UNPRIVILEGED_GUEST
+ depends on X86_32 && !SMP && !X86_32_NON_STANDARD
---help---
A local APIC (Advanced Programmable Interrupt Controller) is an
integrated interrupt controller in the CPU. If you have a single-CPU
config X86_LOCAL_APIC
def_bool y
depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC
- depends on !XEN_UNPRIVILEGED_GUEST
config X86_IO_APIC
def_bool y
depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC
- depends on !XEN_UNPRIVILEGED_GUEST
config X86_VISWS_APIC
def_bool y
config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
bool "Reroute for broken boot IRQs"
- depends on X86_IO_APIC && !XEN
+ depends on X86_IO_APIC
---help---
This option enables a workaround that fixes a source of
spurious interrupts. This is recommended when threaded
config X86_MCE
bool "Machine Check / overheating reporting"
- depends on !XEN_UNPRIVILEGED_GUEST
---help---
Machine Check support allows the processor to notify the
kernel if it detects a problem (e.g. overheating, data corruption).
config X86_MCE_INTEL
def_bool y
prompt "Intel MCE features"
- depends on X86_MCE && X86_LOCAL_APIC && !XEN
+ depends on X86_MCE && X86_LOCAL_APIC
---help---
Additional support for intel specific MCE features such as
the thermal monitor.
config X86_MCE_AMD
def_bool y
prompt "AMD MCE features"
- depends on X86_MCE && X86_LOCAL_APIC && !XEN
+ depends on X86_MCE && X86_LOCAL_APIC
---help---
Additional support for AMD specific MCE features such as
the DRAM Error Threshold.
config X86_ANCIENT_MCE
bool "Support for old Pentium 5 / WinChip machine checks"
- depends on X86_32 && X86_MCE && !XEN
+ depends on X86_32 && X86_MCE
---help---
Include support for machine check handling on old Pentium 5 or WinChip
systems. These typically need to be enabled explicitly on the command
If you don't know what a machine check is and you don't do kernel
QA it is safe to say n.
- config X86_XEN_MCE
- def_bool y
- depends on XEN && X86_MCE
-
config X86_THERMAL_VECTOR
def_bool y
depends on X86_MCE_INTEL
config X86_REBOOTFIXUPS
bool "Enable X86 board specific fixups for reboot"
- depends on X86_32 && !XEN
+ depends on X86_32
---help---
This enables chipset and/or board specific fixups to be done
in order to get reboot to work correctly. This is only needed on
config MICROCODE
tristate "/dev/cpu/microcode - microcode support"
- depends on !XEN_UNPRIVILEGED_GUEST
select FW_LOADER
---help---
If you say Y here, you will be able to update the microcode on
config MICROCODE_INTEL
bool "Intel microcode patch loading support"
- depends on MICROCODE && !XEN
+ depends on MICROCODE
default MICROCODE
select FW_LOADER
---help---
config MICROCODE_AMD
bool "AMD microcode patch loading support"
- depends on MICROCODE && !XEN
+ depends on MICROCODE
select FW_LOADER
---help---
If you select this option, microcode patch loading support for AMD
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
- select XEN_DOMCTL if XEN_PRIVILEGED_GUEST
---help---
This device gives privileged processes access to the x86
Model-Specific Registers (MSRs). It is a character device with
choice
prompt "High Memory Support"
- default HIGHMEM64G if X86_NUMAQ || XEN
+ default HIGHMEM64G if X86_NUMAQ
default HIGHMEM4G
depends on X86_32
config HIGHMEM4G
bool "4GB"
- depends on !X86_NUMAQ && !XEN
+ depends on !X86_NUMAQ
---help---
Select this if you have a 32-bit processor and between 1 and 4
gigabytes of physical RAM.
def_bool X86_64 || X86_PAE
config ARCH_DMA_ADDR_T_64BIT
- def_bool X86_64 || XEN || HIGHMEM64G
+ def_bool X86_64 || HIGHMEM64G
config DIRECT_GBPAGES
bool "Enable 1GB pages for kernel pagetables" if EXPERT
default y
- depends on X86_64 && !XEN
+ depends on X86_64
---help---
Allow the kernel linear mapping to use 1GB pages on CPUs that
support it. This can improve the kernel's performance a tiny bit by
# Common NUMA Features
config NUMA
bool "Numa Memory Allocation and Scheduler Support"
- depends on SMP && !XEN
+ depends on SMP
depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || X86_BIGSMP || X86_SUMMIT && ACPI) && EXPERIMENTAL)
default y if (X86_NUMAQ || X86_SUMMIT || X86_BIGSMP)
---help---
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on X86_64 || NUMA || (EXPERIMENTAL && X86_32) || X86_32_NON_STANDARD
- depends on !XEN
select SPARSEMEM_STATIC if X86_32
select SPARSEMEM_VMEMMAP_ENABLE if X86_64
config ARCH_SPARSEMEM_DEFAULT
def_bool y
- depends on X86_64 && !X86_64_XEN
+ depends on X86_64
config ARCH_SELECT_MEMORY_MODEL
def_bool y
config X86_CHECK_BIOS_CORRUPTION
bool "Check for low memory corruption"
- depends on !XEN
---help---
Periodically check for memory corruption in low memory, which
is suspected to be caused by BIOS. Even when enabled in the
config X86_RESERVE_LOW
int "Amount of low memory, in kilobytes, to reserve for the BIOS"
- depends on !XEN
default 64
range 4 640
---help---
config MATH_EMULATION
bool
prompt "Math emulation" if X86_32
- depends on !XEN
---help---
Linux can emulate a math coprocessor (used for floating point
operations) if you don't have one. 486DX and Pentium processors have
config MTRR
def_bool y
prompt "MTRR (Memory Type Range Register) support" if EXPERT
- depends on !XEN_UNPRIVILEGED_GUEST
---help---
On Intel P6 family processors (Pentium Pro, Pentium II and later)
the Memory Type Range Registers (MTRRs) may be used to control
config MTRR_SANITIZER
def_bool y
prompt "MTRR cleanup support"
- depends on MTRR && !XEN
+ depends on MTRR
---help---
Convert MTRR layout from continuous to discrete, so X drivers can
add writeback entries.
config X86_PAT
def_bool y
- prompt "x86 PAT support" if EXPERT || XEN_UNPRIVILEGED_GUEST
- depends on MTRR || (XEN_UNPRIVILEGED_GUEST && XEN_PCIDEV_FRONTEND)
+ prompt "x86 PAT support" if EXPERT
+ depends on MTRR
---help---
Use PAT attributes to setup page level cache control.
config EFI
bool "EFI runtime service support"
- depends on ACPI && !XEN_UNPRIVILEGED_GUEST
+ depends on ACPI
---help---
This enables the kernel to use EFI runtime services that are
available (such as the EFI variable services).
config EFI_STUB
bool "EFI stub support"
- depends on EFI && !XEN
+ depends on EFI
---help---
This kernel feature allows a bzImage to be loaded directly
by EFI firmware without the use of a bootloader.
config KEXEC
bool "kexec system call"
- depends on !XEN_UNPRIVILEGED_GUEST
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
config CRASH_DUMP
bool "kernel crash dumps"
depends on X86_64 || (X86_32 && HIGHMEM)
- depends on !XEN
---help---
Generate crash dump after being started by kexec.
This should be normally only set in special crash dump kernels
code in physical address mode via KEXEC
config PHYSICAL_START
- hex "Physical address where the kernel is loaded" if (EXPERT || CRASH_DUMP || XEN)
- default 0x100000 if XEN
+ hex "Physical address where the kernel is loaded" if (EXPERT || CRASH_DUMP)
default "0x1000000"
---help---
This gives the physical address where the kernel is loaded.
config RELOCATABLE
bool "Build a relocatable kernel"
- depends on !XEN
default y
---help---
This builds a kernel image that retains relocation information
depends on X86_32 && RELOCATABLE
config PHYSICAL_ALIGN
- hex "Alignment value to which kernel should be aligned" if X86_32 && !XEN
- default 0x2000 if XEN
+ hex "Alignment value to which kernel should be aligned" if X86_32
default "0x1000000"
range 0x2000 0x1000000
---help---
config ARCH_ENABLE_MEMORY_HOTPLUG
def_bool y
depends on X86_64 || (X86_32 && HIGHMEM)
- depends on !XEN
config ARCH_ENABLE_MEMORY_HOTREMOVE
def_bool y
source "kernel/power/Kconfig"
- if !XEN_UNPRIVILEGED_GUEST
-
source "drivers/acpi/Kconfig"
source "drivers/sfi/Kconfig"
menuconfig APM
tristate "APM (Advanced Power Management) BIOS support"
- depends on X86_32 && PM_SLEEP && !XEN
+ depends on X86_32 && PM_SLEEP
---help---
APM is a BIOS specification for saving power using several different
techniques. This is mostly useful for battery powered laptops with
source "drivers/idle/Kconfig"
- endif # !XEN_UNPRIVILEGED_GUEST
-
endmenu
bool "PCI support"
default y
select ARCH_SUPPORTS_MSI if (X86_LOCAL_APIC && X86_IO_APIC)
- select ARCH_SUPPORTS_MSI if (XEN_UNPRIVILEGED_GUEST && XEN_PCIDEV_FRONTEND)
---help---
Find out whether you have a PCI motherboard. PCI is the name of a
bus system, i.e. the way the CPU talks to the other stuff inside
config PCI_GOBIOS
bool "BIOS"
config PCI_GOMMCONFIG
bool "MMConfig"
config PCI_GODIRECT
bool "Direct"
config PCI_GOOLPC
bool "OLPC XO-1"
- depends on OLPC && !XEN_UNPRIVILEGED_GUEST
-
- config PCI_GOXEN_FE
- bool "Xen PCI Frontend"
- depends on X86_XEN
- help
- The PCI device frontend driver allows the kernel to import arbitrary
- PCI devices from a PCI backend to support PCI driver domains.
+ depends on OLPC
config PCI_GOANY
bool "Any"
- depends on !XEN_UNPRIVILEGED_GUEST
endchoice
config PCI_BIOS
def_bool y
- depends on X86_32 && PCI && !XEN && (PCI_GOBIOS || PCI_GOANY)
+ depends on X86_32 && PCI && (PCI_GOBIOS || PCI_GOANY)
# x86-64 doesn't support PCI BIOS access from long mode so always go direct.
config PCI_DIRECT
config PCI_XEN
def_bool y
- depends on PCI && PARAVIRT_XEN
+ depends on PCI && XEN
select SWIOTLB_XEN
config PCI_DOMAINS
# x86_64 have no ISA slots, but can have ISA-style DMA.
config ISA_DMA_API
- bool "ISA-style DMA support" if ((X86_64 || XEN) && EXPERT) || XEN_UNPRIVILEGED_GUEST
+ bool "ISA-style DMA support" if (X86_64 && EXPERT)
default y
help
Enables ISA-style DMA support for devices requiring such controllers.
config ISA
bool "ISA support"
- depends on !XEN
---help---
Find out whether you have ISA slots on your motherboard. ISA is the
name of a bus system, i.e. the way the CPU talks to the other stuff
config MCA
bool "MCA support"
- depends on !XEN
---help---
MicroChannel Architecture is found in some IBM PS/2 machines and
laptops. It is a bus system similar to PCI or ISA. See
config OLPC
bool "One Laptop Per Child support"
- depends on !X86_PAE && !XEN
+ depends on !X86_PAE
select GPIOLIB
select OF
select OF_PROMTREE
+ select IRQ_DOMAIN
---help---
Add support for detecting the unique features of the OLPC
XO hardware.
Note: You have to set alix.force=1 for boards with Award BIOS.
+ config NET5501
+ bool "Soekris Engineering net5501 System Support (LEDS, GPIO, etc)"
+ select GPIOLIB
+ ---help---
+ This option enables system support for the Soekris Engineering net5501.
+
+ config GEOS
+ bool "Traverse Technologies GEOS System Support (LEDS, GPIO, etc)"
+ select GPIOLIB
+ depends on DMI
+ ---help---
+ This option enables system support for the Traverse Technologies GEOS.
+
endif # X86_32
config AMD_NB
def_bool y
- depends on CPU_SUP_AMD && PCI && !XEN_UNPRIVILEGED_GUEST
+ depends on CPU_SUP_AMD && PCI
source "drivers/pcmcia/Kconfig"
depends on X86_64
select COMPAT_BINFMT_ELF
---help---
- Include code to run 32-bit programs under a 64-bit kernel. You should
- likely turn this on, unless you're 100% sure that you don't have any
- 32-bit programs left.
+ Include code to run legacy 32-bit programs under a
+ 64-bit kernel. You should likely turn this on, unless you're
+ 100% sure that you don't have any 32-bit programs left.
config IA32_AOUT
tristate "IA32 a.out support"
---help---
Support old a.out binaries in the 32-bit emulation.
+ config X86_X32
+ bool "x32 ABI for 64-bit mode (EXPERIMENTAL)"
+ depends on X86_64 && IA32_EMULATION && EXPERIMENTAL
+ ---help---
+ Include code to run binaries for the x32 native 32-bit ABI
+ for 64-bit processors. An x32 process gets access to the
+ full 64-bit register file and wide data path while leaving
+ pointers at 32 bits for a smaller memory footprint.
+
+ You will need a recent binutils (2.22 or later) with
+ elf32_x86_64 support enabled to compile a kernel with this
+ option set.
+
config COMPAT
def_bool y
- depends on IA32_EMULATION
+ depends on IA32_EMULATION || X86_X32
+ select ARCH_WANT_OLD_COMPAT_IPC
config COMPAT_FOR_U64_ALIGNMENT
def_bool COMPAT
source "drivers/Kconfig"
- if !XEN_UNPRIVILEGED_GUEST
source "drivers/firmware/Kconfig"
- endif
source "fs/Kconfig"
endif
endif
+ ifdef CONFIG_X86_X32
+ x32_ld_ok := $(call try-run,\
+ /bin/echo -e '1: .quad 1b' | \
+ $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" - && \
+ $(OBJCOPY) -O elf32-x86-64 "$$TMP" "$$TMPO" && \
+ $(LD) -m elf32_x86_64 "$$TMPO" -o "$$TMP",y,n)
+ ifeq ($(x32_ld_ok),y)
+ CONFIG_X86_X32_ABI := y
+ KBUILD_AFLAGS += -DCONFIG_X86_X32_ABI
+ KBUILD_CFLAGS += -DCONFIG_X86_X32_ABI
+ else
+ $(warning CONFIG_X86_X32 enabled but no binutils support)
+ endif
+ endif
+ export CONFIG_X86_X32_ABI
+
# Don't unroll struct assignments with kmemcheck enabled
ifeq ($(CONFIG_KMEMCHECK),y)
KBUILD_CFLAGS += $(call cc-option,-fno-builtin-memcpy)
# Workaround for a gcc prerelease that unfortunately was shipped in a SUSE release
KBUILD_CFLAGS += -Wno-sign-compare
#
+ifneq ($(CONFIG_UNWIND_INFO),y)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+endif
# prevent gcc from generating any FP code by mistake
KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
BOOT_TARGETS = bzlilo bzdisk fdimage fdimage144 fdimage288 isoimage
- PHONY += bzImage vmlinuz $(BOOT_TARGETS)
-
- ifdef CONFIG_XEN
- LINUXINCLUDE := -D__XEN_INTERFACE_VERSION__=$(CONFIG_XEN_INTERFACE_VERSION) \
- -I$(srctree)/arch/x86/include/mach-xen $(LINUXINCLUDE)
-
- ifdef CONFIG_X86_64
- LDFLAGS_vmlinux := -e startup_64
- endif
-
- # Default kernel to build
- all: vmlinuz
-
- # KBUILD_IMAGE specifies the target image being built
- KBUILD_IMAGE := $(boot)/vmlinuz
+ PHONY += bzImage $(BOOT_TARGETS)
- vmlinuz: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
- $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
- $(Q)ln -fsn ../../x86/boot/$@ $(objtree)/arch/$(UTS_MACHINE)/boot/$@
- else
# Default kernel to build
all: bzImage
$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
- endif
$(BOOT_TARGETS): vmlinux
$(Q)$(MAKE) $(build)=$(boot) $@
--- /dev/null
+ #ifndef _ASM_X86_SWITCH_TO_H
+ #define _ASM_X86_SWITCH_TO_H
+
+ struct task_struct; /* one of the stranger aspects of C forward declarations */
+ struct task_struct *__switch_to(struct task_struct *prev,
+ struct task_struct *next);
+ struct tss_struct;
+ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ struct tss_struct *tss);
+
+ #ifdef CONFIG_X86_32
+
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ #define __switch_canary \
+ "movl %P[task_canary](%[next]), %%ebx\n\t" \
+ "movl %%ebx, "__percpu_arg([stack_canary])"\n\t"
+ #define __switch_canary_oparam \
+ , [stack_canary] "=m" (stack_canary.canary)
+ #define __switch_canary_iparam \
+ , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
+ #else /* CC_STACKPROTECTOR */
+ #define __switch_canary
+ #define __switch_canary_oparam
+ #define __switch_canary_iparam
+ #endif /* CC_STACKPROTECTOR */
+
+ /*
+ * Saving eflags is important. It switches not only IOPL between tasks,
+ * it also protects other tasks from NT leaking through sysenter etc.
+ */
+ #define switch_to(prev, next, last) \
+ do { \
+ /* \
+ * Context-switching clobbers all registers, so we clobber \
+ * them explicitly, via unused output variables. \
+ * (EAX and EBP are not listed because EBP is saved/restored \
+ * explicitly for wchan access and EAX is the return value of \
+ * __switch_to()) \
+ */ \
+ unsigned long ebx, ecx, edx, esi, edi; \
+ \
+ asm volatile("pushfl\n\t" /* save flags */ \
+ "pushl %%ebp\n\t" /* save EBP */ \
+ "movl %%esp,%[prev_sp]\n\t" /* save ESP */ \
+ "movl %[next_sp],%%esp\n\t" /* restore ESP */ \
+ "movl $1f,%[prev_ip]\n\t" /* save EIP */ \
+ "pushl %[next_ip]\n\t" /* restore EIP */ \
+ __switch_canary \
+ "jmp __switch_to\n" /* regparm call */ \
+ "1:\t" \
+ "popl %%ebp\n\t" /* restore EBP */ \
+ "popfl\n" /* restore flags */ \
+ \
+ /* output parameters */ \
+ : [prev_sp] "=m" (prev->thread.sp), \
+ [prev_ip] "=m" (prev->thread.ip), \
+ "=a" (last), \
+ \
+ /* clobbered output registers: */ \
+ "=b" (ebx), "=c" (ecx), "=d" (edx), \
+ "=S" (esi), "=D" (edi) \
+ \
+ __switch_canary_oparam \
+ \
+ /* input parameters: */ \
+ : [next_sp] "m" (next->thread.sp), \
+ [next_ip] "m" (next->thread.ip), \
+ \
+ /* regparm parameters for __switch_to(): */ \
+ [prev] "a" (prev), \
+ [next] "d" (next) \
+ \
+ __switch_canary_iparam \
+ \
+ : /* reloaded segment registers */ \
+ "memory"); \
+ } while (0)
+
+ #else /* CONFIG_X86_32 */
+
+ /* frame pointer must be last for get_wchan */
+ #define SAVE_CONTEXT "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
+ #define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"
+
+ #define __EXTRA_CLOBBER \
+ , "rcx", "rbx", "rdx", "r8", "r9", "r10", "r11", \
+ "r12", "r13", "r14", "r15"
+
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ #define __switch_canary \
+ "movq %P[task_canary](%%rsi),%%r8\n\t" \
+ "movq %%r8,"__percpu_arg([gs_canary])"\n\t"
+ #define __switch_canary_oparam \
+ , [gs_canary] "=m" (irq_stack_union.stack_canary)
+ #define __switch_canary_iparam \
+ , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
+ #else /* CC_STACKPROTECTOR */
+ #define __switch_canary
+ #define __switch_canary_oparam
+ #define __switch_canary_iparam
+ #endif /* CC_STACKPROTECTOR */
+
+/* The stack unwind code needs this but it pollutes traces otherwise */
+#ifdef CONFIG_UNWIND_INFO
+#define THREAD_RETURN_SYM \
+ ".globl thread_return\n" \
+ "thread_return:\n\t"
+#else
+#define THREAD_RETURN_SYM
+#endif
+
+ /* Save and restore flags to clear and handle leaking NT */
+ #define switch_to(prev, next, last) \
+ asm volatile(SAVE_CONTEXT \
+ "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
+ "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
+ "call __switch_to\n\t" \
+ THREAD_RETURN_SYM \
+ "movq "__percpu_arg([current_task])",%%rsi\n\t" \
+ __switch_canary \
+ "movq %P[thread_info](%%rsi),%%r8\n\t" \
+ "movq %%rax,%%rdi\n\t" \
+ "testl %[_tif_fork],%P[ti_flags](%%r8)\n\t" \
+ "jnz ret_from_fork\n\t" \
+ RESTORE_CONTEXT \
+ : "=a" (last) \
+ __switch_canary_oparam \
+ : [next] "S" (next), [prev] "D" (prev), \
+ [threadrsp] "i" (offsetof(struct task_struct, thread.sp)), \
+ [ti_flags] "i" (offsetof(struct thread_info, flags)), \
+ [_tif_fork] "i" (_TIF_FORK), \
+ [thread_info] "i" (offsetof(struct task_struct, stack)), \
+ [current_task] "m" (current_task) \
+ __switch_canary_iparam \
+ : "memory", "cc" __EXTRA_CLOBBER)
+
+ #endif /* CONFIG_X86_32 */
+
+ #endif /* _ASM_X86_SWITCH_TO_H */
u8 acpi_sci_flags __initdata;
int acpi_sci_override_gsi __initdata;
- #ifndef CONFIG_XEN
int acpi_skip_timer_override __initdata;
int acpi_use_timer_override __initdata;
int acpi_fix_pin2_polarity __initdata;
#ifdef CONFIG_X86_LOCAL_APIC
static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
#endif
- #else
- #define acpi_skip_timer_override 0
- #define acpi_fix_pin2_polarity 0
- #endif
#ifndef __HAVE_ARCH_CMPXCHG
#warning ACPI uses CMPXCHG, i486 and later hardware
return -ENODEV;
}
- #ifndef CONFIG_XEN
if (madt->address) {
acpi_lapic_addr = (u64) madt->address;
default_acpi_madt_oem_check(madt->header.oem_id,
madt->header.oem_table_id);
- #endif
return 0;
}
static void __cpuinit acpi_register_lapic(int id, u8 enabled)
{
- #ifndef CONFIG_XEN
unsigned int ver = 0;
if (id >= (MAX_LOCAL_APIC-1)) {
ver = apic_version[boot_cpu_physical_apicid];
generic_processor_info(id, ver);
- #endif
}
static int __init
* to not preallocating memory for all NR_CPUS
* when we use CPU hotplug.
*/
- if (!cpu_has_x2apic && (apic_id >= 0xff) && enabled)
+ if (!apic->apic_id_valid(apic_id) && enabled)
printk(KERN_WARNING PREFIX "x2apic entry ignored\n");
else
acpi_register_lapic(apic_id, enabled);
acpi_parse_lapic_addr_ovr(struct acpi_subtable_header * header,
const unsigned long end)
{
- #ifndef CONFIG_XEN
struct acpi_madt_local_apic_override *lapic_addr_ovr = NULL;
lapic_addr_ovr = (struct acpi_madt_local_apic_override *)header;
return -EINVAL;
acpi_lapic_addr = lapic_addr_ovr->address;
- #endif
return 0;
}
#ifdef CONFIG_ACPI_HOTPLUG_CPU
#include <acpi/processor.h>
- #ifndef CONFIG_XEN
- static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
+ static void __cpuinit acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
{
#ifdef CONFIG_ACPI_NUMA
int nid;
kfree(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
+ lapic = NULL;
if (!alloc_cpumask_var(&tmp_map, GFP_KERNEL))
goto out;
goto free_tmp_map;
cpumask_copy(tmp_map, cpu_present_mask);
- acpi_register_lapic(physid, lapic->lapic_flags & ACPI_MADT_ENABLED);
+ acpi_register_lapic(physid, ACPI_MADT_ENABLED);
/*
* If mp_register_lapic successfully generates a new logical cpu
out:
return retval;
}
- #else
- #define _acpi_map_lsapic(h, p) (-EINVAL)
- #endif
/* wrapper to silence section mismatch warning */
int __ref acpi_map_lsapic(acpi_handle handle, int *pcpu)
int acpi_unmap_lsapic(int cpu)
{
per_cpu(x86_cpu_to_apicid, cpu) = -1;
set_cpu_present(cpu, false);
num_processors--;
- #endif
return (0);
}
return 0;
}
- #ifndef CONFIG_XEN
/*
* Force ignoring BIOS IRQ0 pin2 override
*/
}
return 0;
}
- #endif
+static int __init force_acpi_rsdt(const struct dmi_system_id *d)
+{
+ if (!acpi_force) {
+ printk(KERN_NOTICE "%s detected: force use of acpi=rsdt\n",
+ d->ident);
+ acpi_rsdt_forced = 1;
+ } else {
+ printk(KERN_NOTICE
+ "Warning: acpi=force overrules DMI blacklist: "
+ "acpi=rsdt\n");
+ }
+ return 0;
+}
+
/*
* If your system is blacklisted here, but you find that acpi=force
* works for you, please contact linux-acpi@vger.kernel.org
DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
},
},
+
+ /*
+ * Boxes that need RSDT as ACPI root table
+ */
+ {
+ .callback = force_acpi_rsdt,
+ .ident = "ThinkPad ", /* R40e, broken C-states */
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "1SET")},
+ },
+ {
+ .callback = force_acpi_rsdt,
+ .ident = "ThinkPad ", /* R50e, slow booting */
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "1WET")},
+ },
+ {
+ .callback = force_acpi_rsdt,
+ .ident = "ThinkPad ", /* T40, T40p, T41, T41p, T42, T42p
+ R50, R50p */
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "1RET")},
+ },
{}
};
- #ifndef CONFIG_XEN
/* second table for DMI checks that should run after early-quirks */
static struct dmi_system_id __initdata acpi_dmi_table_late[] = {
/*
},
{}
};
- #endif
/*
* acpi_boot_table_init() and acpi_boot_init()
int __init acpi_boot_init(void)
{
- #ifndef CONFIG_XEN
/* those are executed after early-quirks are executed */
dmi_check_system(acpi_dmi_table_late);
- #endif
/*
* If acpi_disabled, bail out
}
early_param("acpi", parse_acpi);
+/* Alias for acpi=rsdt for compatibility with openSUSE 11.1 and SLE11 */
+static int __init parse_acpi_root_table(char *opt)
+{
+ if (!strcmp(opt, "rsdt")) {
+ acpi_rsdt_forced = 1;
+ printk(KERN_WARNING "acpi_root_table=rsdt is deprecated. "
+ "Please use acpi=rsdt instead.\n");
+ }
+ return 0;
+}
+early_param("acpi_root_table", parse_acpi_root_table);
+
/* FIXME: Using pci= for an ACPI parameter is a travesty. */
static int __init parse_pci(char *arg)
{
return 0;
}
- #if defined(CONFIG_X86_IO_APIC) && !defined(CONFIG_XEN)
+ #ifdef CONFIG_X86_IO_APIC
static int __init parse_acpi_skip_timer_override(char *arg)
{
acpi_skip_timer_override = 1;
static int dmi_bigsmp; /* can be set by dmi scanners */
-static int hp_ht_bigsmp(const struct dmi_system_id *d)
+static int force_bigsmp_apic(const struct dmi_system_id *d)
{
printk(KERN_NOTICE "%s detected: force use of apic=bigsmp\n", d->ident);
dmi_bigsmp = 1;
static const struct dmi_system_id bigsmp_dmi_table[] = {
- { hp_ht_bigsmp, "HP ProLiant DL760 G2",
+ { force_bigsmp_apic, "HP ProLiant DL760 G2",
{ DMI_MATCH(DMI_BIOS_VENDOR, "HP"),
DMI_MATCH(DMI_BIOS_VERSION, "P44-"),
}
},
- { hp_ht_bigsmp, "HP ProLiant DL740",
+ { force_bigsmp_apic, "HP ProLiant DL740",
{ DMI_MATCH(DMI_BIOS_VENDOR, "HP"),
DMI_MATCH(DMI_BIOS_VERSION, "P47-"),
}
},
+
+ { force_bigsmp_apic, "IBM x260 / x366 / x460",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "-[ZT"),
+ }
+ },
+
+ { force_bigsmp_apic, "IBM x3800 / x3850 / x3950",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "-[ZU"),
+ }
+ },
+
+ { force_bigsmp_apic, "IBM x3800 / x3850 / x3950",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "-[ZS"),
+ }
+ },
+
+ { force_bigsmp_apic, "IBM x3850 M2 / x3950 M2",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "-[A3"),
+ }
+ },
{ } /* NULL entry stops DMI scanning */
};
.name = "bigsmp",
.probe = probe_bigsmp,
.acpi_madt_oem_check = NULL,
+ .apic_id_valid = default_apic_id_valid,
.apic_id_registered = bigsmp_apic_id_registered,
.irq_delivery_mode = dest_Fixed,
.name = "default",
.probe = probe_default,
.acpi_madt_oem_check = NULL,
+ .apic_id_valid = default_apic_id_valid,
.apic_id_registered = default_apic_id_registered,
.irq_delivery_mode = dest_LowestPrio,
if (!(*drv)->mps_oem_check(mpc, oem, productid))
continue;
- if (!cmdline_apic) {
+ if (!cmdline_apic && apic == &apic_default) {
apic = *drv;
printk(KERN_INFO "Switched to APIC driver `%s'.\n",
apic->name);
if (!(*drv)->acpi_madt_oem_check(oem_id, oem_table_id))
continue;
- if (!cmdline_apic) {
+ if (!cmdline_apic && apic == &apic_default) {
apic = *drv;
printk(KERN_INFO "Switched to APIC driver `%s'.\n",
apic->name);
#include <linux/syscore_ops.h>
#include <linux/i8253.h>
- #include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/desc.h>
#include <asm/olpc.h>
unsigned long offset;
unsigned short segment;
} apm_bios_entry;
+#ifdef CONFIG_APM_CPU_IDLE
static int clock_slowed;
static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD;
static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD;
static int set_pm_idle;
+#endif
static int suspends_pending;
static int standbys_pending;
static int ignore_sys_suspend;
return set_power_state(APM_DEVICE_ALL, state);
}
+#ifdef CONFIG_APM_CPU_IDLE
/**
* apm_do_idle - perform power saving
*
local_irq_enable();
}
+#endif
/**
* apm_power_off - ask the BIOS to power off
struct apm_user *as;
dpm_suspend_start(PMSG_SUSPEND);
-
- dpm_suspend_noirq(PMSG_SUSPEND);
+ dpm_suspend_end(PMSG_SUSPEND);
local_irq_disable();
syscore_suspend();
syscore_resume();
local_irq_enable();
- dpm_resume_noirq(PMSG_RESUME);
-
+ dpm_resume_start(PMSG_RESUME);
dpm_resume_end(PMSG_RESUME);
+
queue_event(APM_NORMAL_RESUME, NULL);
spin_lock(&user_list_lock);
for (as = user_list; as != NULL; as = as->next) {
{
int err;
- dpm_suspend_noirq(PMSG_SUSPEND);
+ dpm_suspend_end(PMSG_SUSPEND);
local_irq_disable();
syscore_suspend();
syscore_resume();
local_irq_enable();
- dpm_resume_noirq(PMSG_RESUME);
+ dpm_resume_start(PMSG_RESUME);
}
static apm_event_t get_event(void)
if ((strncmp(str, "bounce-interval=", 16) == 0) ||
(strncmp(str, "bounce_interval=", 16) == 0))
bounce_interval = simple_strtol(str + 16, NULL, 0);
+#ifdef CONFIG_APM_CPU_IDLE
if ((strncmp(str, "idle-threshold=", 15) == 0) ||
(strncmp(str, "idle_threshold=", 15) == 0))
idle_threshold = simple_strtol(str + 15, NULL, 0);
if ((strncmp(str, "idle-period=", 12) == 0) ||
(strncmp(str, "idle_period=", 12) == 0))
idle_period = simple_strtol(str + 12, NULL, 0);
+#endif
invert = (strncmp(str, "no-", 3) == 0) ||
(strncmp(str, "no_", 3) == 0);
if (invert)
if (misc_register(&apm_device))
printk(KERN_WARNING "apm: Could not register misc device.\n");
+#ifdef CONFIG_APM_CPU_IDLE
if (HZ != 100)
idle_period = (idle_period * HZ) / 100;
if (idle_threshold < 100) {
pm_idle = apm_cpu_idle;
set_pm_idle = 1;
}
+#endif
return 0;
}
{
int error;
+#ifdef CONFIG_APM_CPU_IDLE
if (set_pm_idle) {
pm_idle = original_pm_idle;
/*
*/
cpu_idle_wait();
}
+#endif
if (((apm_info.bios.flags & APM_BIOS_DISENGAGED) == 0)
&& (apm_info.connection_version > 0x0100)) {
error = apm_engage_power_management(APM_DEVICE_ALL, 0);
module_param(realmode_power_off, bool, 0444);
MODULE_PARM_DESC(realmode_power_off,
"Switch to real mode before powering off");
+#ifdef CONFIG_APM_CPU_IDLE
module_param(idle_threshold, int, 0444);
MODULE_PARM_DESC(idle_threshold,
"System idle percentage above which to make APM BIOS idle calls");
module_param(idle_period, int, 0444);
MODULE_PARM_DESC(idle_period,
"Period (in sec/100) over which to calculate the idle percentage");
+#endif
module_param(smp, bool, 0444);
MODULE_PARM_DESC(smp,
"Set this to enable APM use on an SMP platform. Use with caution on older systems");
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/bitops.h>
+ #include <linux/device.h>
#include <asm/apic.h>
#include <asm/stacktrace.h>
#include <asm/nmi.h>
- #include <asm/compat.h>
#include <asm/smp.h>
#include <asm/alternative.h>
+ #include <asm/timer.h>
#include "perf_event.h"
return 0;
}
+ /*
+ * check that branch_sample_type is compatible with
+ * settings needed for precise_ip > 1 which implies
+ * using the LBR to capture ALL taken branches at the
+ * priv levels of the measurement
+ */
+ static inline int precise_br_compat(struct perf_event *event)
+ {
+ u64 m = event->attr.branch_sample_type;
+ u64 b = 0;
+
+ /* must capture all branches */
+ if (!(m & PERF_SAMPLE_BRANCH_ANY))
+ return 0;
+
+ m &= PERF_SAMPLE_BRANCH_KERNEL | PERF_SAMPLE_BRANCH_USER;
+
+ if (!event->attr.exclude_user)
+ b |= PERF_SAMPLE_BRANCH_USER;
+
+ if (!event->attr.exclude_kernel)
+ b |= PERF_SAMPLE_BRANCH_KERNEL;
+
+ /*
+ * ignore PERF_SAMPLE_BRANCH_HV, not supported on x86
+ */
+
+ return m == b;
+ }
+
int x86_pmu_hw_config(struct perf_event *event)
{
if (event->attr.precise_ip) {
if (event->attr.precise_ip > precise)
return -EOPNOTSUPP;
+ /*
+ * check that PEBS LBR correction does not conflict with
+ * whatever the user is asking with attr->branch_sample_type
+ */
+ if (event->attr.precise_ip > 1) {
+ u64 *br_type = &event->attr.branch_sample_type;
+
+ if (has_branch_stack(event)) {
+ if (!precise_br_compat(event))
+ return -EOPNOTSUPP;
+
+ /* branch_sample_type is compatible */
+
+ } else {
+ /*
+ * user did not specify branch_sample_type
+ *
+ * For PEBS fixups, we capture all
+ * the branches at the priv level of the
+ * event.
+ */
+ *br_type = PERF_SAMPLE_BRANCH_ANY;
+
+ if (!event->attr.exclude_user)
+ *br_type |= PERF_SAMPLE_BRANCH_USER;
+
+ if (!event->attr.exclude_kernel)
+ *br_type |= PERF_SAMPLE_BRANCH_KERNEL;
+ }
+ }
}
/*
/* mark unused */
event->hw.extra_reg.idx = EXTRA_REG_NONE;
+ event->hw.branch_reg.idx = EXTRA_REG_NONE;
+
return x86_pmu.hw_config(event);
}
/* Prefer fixed purpose counters */
if (x86_pmu.num_counters_fixed) {
idx = X86_PMC_IDX_FIXED;
- for_each_set_bit_cont(idx, c->idxmsk, X86_PMC_IDX_MAX) {
+ for_each_set_bit_from(idx, c->idxmsk, X86_PMC_IDX_MAX) {
if (!__test_and_set_bit(idx, sched->state.used))
goto done;
}
}
/* Grab the first unused counter starting with idx */
idx = sched->state.counter;
- for_each_set_bit_cont(idx, c->idxmsk, X86_PMC_IDX_FIXED) {
+ for_each_set_bit_from(idx, c->idxmsk, X86_PMC_IDX_FIXED) {
if (!__test_and_set_bit(idx, sched->state.used))
goto done;
}
break;
case CPU_STARTING:
+ if (x86_pmu.attr_rdpmc)
+ set_in_cr4(X86_CR4_PCE);
if (x86_pmu.cpu_starting)
x86_pmu.cpu_starting(cpu);
break;
pr_info("no hardware sampling interrupt available.\n");
}
+ static struct attribute_group x86_pmu_format_group = {
+ .name = "format",
+ .attrs = NULL,
+ };
+
static int __init init_hw_perf_events(void)
{
struct x86_pmu_quirk *quirk;
}
}
+ x86_pmu.attr_rdpmc = 1; /* enable userspace RDPMC usage by default */
+ x86_pmu_format_group.attrs = x86_pmu.format_attrs;
+
pr_info("... version: %d\n", x86_pmu.version);
pr_info("... bit width: %d\n", x86_pmu.cntval_bits);
pr_info("... generic registers: %d\n", x86_pmu.num_counters);
return err;
}
+ static int x86_pmu_event_idx(struct perf_event *event)
+ {
+ int idx = event->hw.idx;
+
+ if (!x86_pmu.attr_rdpmc)
+ return 0;
+
+ if (x86_pmu.num_counters_fixed && idx >= X86_PMC_IDX_FIXED) {
+ idx -= X86_PMC_IDX_FIXED;
+ idx |= 1 << 30;
+ }
+
+ return idx + 1;
+ }
+
+ static ssize_t get_attr_rdpmc(struct device *cdev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+ return snprintf(buf, 40, "%d\n", x86_pmu.attr_rdpmc);
+ }
+
+ static void change_rdpmc(void *info)
+ {
+ bool enable = !!(unsigned long)info;
+
+ if (enable)
+ set_in_cr4(X86_CR4_PCE);
+ else
+ clear_in_cr4(X86_CR4_PCE);
+ }
+
+ static ssize_t set_attr_rdpmc(struct device *cdev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+ unsigned long val = simple_strtoul(buf, NULL, 0);
+
+ if (!!val != !!x86_pmu.attr_rdpmc) {
+ x86_pmu.attr_rdpmc = !!val;
+ smp_call_function(change_rdpmc, (void *)val, 1);
+ }
+
+ return count;
+ }
+
+ static DEVICE_ATTR(rdpmc, S_IRUSR | S_IWUSR, get_attr_rdpmc, set_attr_rdpmc);
+
+ static struct attribute *x86_pmu_attrs[] = {
+ &dev_attr_rdpmc.attr,
+ NULL,
+ };
+
+ static struct attribute_group x86_pmu_attr_group = {
+ .attrs = x86_pmu_attrs,
+ };
+
+ static const struct attribute_group *x86_pmu_attr_groups[] = {
+ &x86_pmu_attr_group,
+ &x86_pmu_format_group,
+ NULL,
+ };
+
+ static void x86_pmu_flush_branch_stack(void)
+ {
+ if (x86_pmu.flush_branch_stack)
+ x86_pmu.flush_branch_stack();
+ }
+
static struct pmu pmu = {
- .pmu_enable = x86_pmu_enable,
- .pmu_disable = x86_pmu_disable,
+ .pmu_enable = x86_pmu_enable,
+ .pmu_disable = x86_pmu_disable,
+
+ .attr_groups = x86_pmu_attr_groups,
.event_init = x86_pmu_event_init,
- .add = x86_pmu_add,
- .del = x86_pmu_del,
- .start = x86_pmu_start,
- .stop = x86_pmu_stop,
- .read = x86_pmu_read,
+ .add = x86_pmu_add,
+ .del = x86_pmu_del,
+ .start = x86_pmu_start,
+ .stop = x86_pmu_stop,
+ .read = x86_pmu_read,
.start_txn = x86_pmu_start_txn,
.cancel_txn = x86_pmu_cancel_txn,
.commit_txn = x86_pmu_commit_txn,
+
+ .event_idx = x86_pmu_event_idx,
+ .flush_branch_stack = x86_pmu_flush_branch_stack,
};
+ void arch_perf_update_userpage(struct perf_event_mmap_page *userpg, u64 now)
+ {
+ userpg->cap_usr_time = 0;
+ userpg->cap_usr_rdpmc = x86_pmu.attr_rdpmc;
+ userpg->pmc_width = x86_pmu.cntval_bits;
+
+ if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ return;
+
+ if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
+ return;
+
+ userpg->cap_usr_time = 1;
+ userpg->time_mult = this_cpu_read(cyc2ns);
+ userpg->time_shift = CYC2NS_SCALE_FACTOR;
+ userpg->time_offset = this_cpu_read(cyc2ns_offset) - now;
+ }
+
/*
* callchain support
*/
+static void
+backtrace_warning_symbol(void *data, char *msg, unsigned long symbol)
+{
+ /* Ignore warnings */
+}
+
+static void backtrace_warning(void *data, char *msg)
+{
+ /* Ignore warnings */
+}
+
static int backtrace_stack(void *data, char *name)
{
return 0;
}
static const struct stacktrace_ops backtrace_ops = {
+ .warning = backtrace_warning,
+ .warning_symbol = backtrace_warning_symbol,
.stack = backtrace_stack,
.address = backtrace_address,
.walk_stack = print_context_stack_bp,
}
#ifdef CONFIG_COMPAT
+
+ #include <asm/compat.h>
+
static inline int
perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
{
#include <linux/sysfs.h>
#include <asm/stacktrace.h>
+#include <linux/unwind.h>
int panic_on_unrecovered_nmi;
int panic_on_io_nmi;
unsigned int code_bytes = 64;
int kstack_depth_to_print = 3 * STACKSLOTS_PER_LINE;
+#ifdef CONFIG_STACK_UNWIND
+static int call_trace = 1;
+#else
+#define call_trace (-1)
+#endif
static int die_counter;
void printk_address(unsigned long address, int reliable)
const struct stacktrace_ops *ops,
struct thread_info *tinfo, int *graph)
{
- struct task_struct *task = tinfo->task;
+ struct task_struct *task;
unsigned long ret_addr;
- int index = task->curr_ret_stack;
+ int index;
if (addr != (unsigned long)return_to_handler)
return;
+ task = tinfo->task;
+ index = task->curr_ret_stack;
+
if (!task->ret_stack || index < *graph)
return;
{ }
#endif
+int asmlinkage dump_trace_unwind(struct unwind_frame_info *info,
+ const struct stacktrace_ops *ops, void *data)
+{
+ int n = 0;
+#ifdef CONFIG_STACK_UNWIND
+ unsigned long sp = UNW_SP(info);
+
+ if (arch_unw_user_mode(info))
+ return -1;
+ while (unwind(info) == 0 && UNW_PC(info)) {
+ n++;
+ ops->address(data, UNW_PC(info), 1);
+ if (arch_unw_user_mode(info))
+ break;
+ if ((sp & ~(PAGE_SIZE - 1)) == (UNW_SP(info) & ~(PAGE_SIZE - 1))
+ && sp > UNW_SP(info))
+ break;
+ sp = UNW_SP(info);
+ }
+#endif
+ return n;
+}
+
+int try_stack_unwind(struct task_struct *task, struct pt_regs *regs,
+ unsigned long **stack, unsigned long *bp,
+ const struct stacktrace_ops *ops, void *data)
+{
+#ifdef CONFIG_STACK_UNWIND
+ int unw_ret = 0;
+ struct unwind_frame_info info;
+ if (call_trace < 0)
+ return 0;
+
+ if (regs) {
+ if (unwind_init_frame_info(&info, task, regs) == 0)
+ unw_ret = dump_trace_unwind(&info, ops, data);
+ } else if (task == current)
+ unw_ret = unwind_init_running(&info, dump_trace_unwind, ops, data);
+#ifdef CONFIG_SMP
+ else if (task->on_cpu)
+ /* nothing */;
+#endif
+ else if (unwind_init_blocked(&info, task) == 0)
+ unw_ret = dump_trace_unwind(&info, ops, data);
+ if (unw_ret > 0) {
+ if (call_trace == 1 && !arch_unw_user_mode(&info)) {
+ ops->warning_symbol(data, "DWARF2 unwinder stuck at %s\n",
+ UNW_PC(&info));
+ if (UNW_SP(&info) >= PAGE_OFFSET) {
+ ops->warning(data, "Leftover inexact backtrace:\n");
+ *stack = (void *)UNW_SP(&info);
+ *bp = UNW_FP(&info);
+ return 0;
+ }
+ } else if (call_trace >= 1)
+ return -1;
+ ops->warning(data, "Full inexact backtrace again:\n");
+ } else
+ ops->warning(data, "Inexact backtrace:\n");
+#endif
+ return 0;
+}
+
/*
* x86-64 can have up to three kernel stacks:
* process stack
}
EXPORT_SYMBOL_GPL(print_context_stack_bp);
+
+static void
+print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
+{
+ printk(data);
+ print_symbol(msg, symbol);
+ printk("\n");
+}
+
+static void print_trace_warning(void *data, char *msg)
+{
+ printk("%s%s\n", (char *)data, msg);
+}
+
static int print_trace_stack(void *data, char *name)
{
printk("%s <%s> ", (char *)data, name);
}
static const struct stacktrace_ops print_trace_ops = {
+ .warning = print_trace_warning,
+ .warning_symbol = print_trace_warning_symbol,
.stack = print_trace_stack,
.address = print_trace_address,
.walk_stack = print_context_stack,
#endif
printk("\n");
if (notify_die(DIE_OOPS, str, regs, err,
- current->thread.trap_no, SIGSEGV) == NOTIFY_STOP)
+ current->thread.trap_nr, SIGSEGV) == NOTIFY_STOP)
return 1;
show_registers(regs);
return 1;
}
__setup("code_bytes=", code_bytes_setup);
+
+#ifdef CONFIG_STACK_UNWIND
+static int __init call_trace_setup(char *s)
+{
+ if (!s)
+ return -EINVAL;
+ if (strcmp(s, "old") == 0)
+ call_trace = -1;
+ else if (strcmp(s, "both") == 0)
+ call_trace = 0;
+ else if (strcmp(s, "newfallback") == 0)
+ call_trace = 1;
+ else if (strcmp(s, "new") == 0)
+ call_trace = 2;
+ return 0;
+}
+early_param("call_trace", call_trace_setup);
+#endif
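The accepted `call_trace=` values map onto the internal mode variable as the setup handler above shows; the same mapping can be restated as a standalone sketch (the helper name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <string.h>

/* Re-statement of call_trace_setup()'s mapping:
 * "old" -> -1 (frame-pointer scan only), "both" -> 0,
 * "newfallback" -> 1 (DWARF2 unwinder with fallback, the default),
 * "new" -> 2 (DWARF2 unwinder only). Unknown strings keep the default. */
static int parse_call_trace(const char *s)
{
    if (!s)
        return 1;
    if (strcmp(s, "old") == 0)
        return -1;
    if (strcmp(s, "both") == 0)
        return 0;
    if (strcmp(s, "newfallback") == 0)
        return 1;
    if (strcmp(s, "new") == 0)
        return 2;
    return 1;
}
```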
if (!task)
task = current;
+ bp = stack_frame(task, regs);
+ if (try_stack_unwind(task, regs, &stack, &bp, ops, data))
+ return;
+
if (!stack) {
unsigned long dummy;
int i;
print_modules();
- __show_regs(regs, 0);
+ __show_regs(regs, !user_mode_vm(regs));
printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)\n",
TASK_COMM_LEN, current->comm, task_pid_nr(current),
#include <linux/bug.h>
#include <linux/nmi.h>
+#include <linux/unwind.h>
#include <asm/stacktrace.h>
#define N_EXCEPTION_STACKS_END \
(N_EXCEPTION_STACKS + DEBUG_STKSZ/EXCEPTION_STKSZ - 2)
- #ifndef CONFIG_X86_NO_TSS
static char x86_stack_ids[][8] = {
[ DEBUG_STACK-1 ] = "#DB",
[ NMI_STACK-1 ] = "NMI",
N_EXCEPTION_STACKS_END ] = "#DB[?]"
#endif
};
- #endif
static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
unsigned *usedp, char **idp)
{
- #ifndef CONFIG_X86_NO_TSS
unsigned k;
/*
}
#endif
}
- #endif /* CONFIG_X86_NO_TSS */
return NULL;
}
if (!task)
task = current;
+ bp = stack_frame(task, regs);
+ if (try_stack_unwind(task, regs, &stack, &bp, ops, data)) {
+ put_cpu();
+ return;
+ }
+
if (!stack) {
if (regs)
stack = (unsigned long *)regs->sp;
#endif
.endm
- #ifdef CONFIG_VM86
- #define resume_userspace_sig check_userspace
- #else
- #define resume_userspace_sig resume_userspace
- #endif
-
/*
* User gs save/restore
*
preempt_stop(CLBR_ANY)
ret_from_intr:
GET_THREAD_INFO(%ebp)
- check_userspace:
+ resume_userspace_sig:
+ #ifdef CONFIG_VM86
movl PT_EFLAGS(%esp), %eax # mix EFLAGS and CS
movb PT_CS(%esp), %al
andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+ #else
+ /*
+ * We can be coming here from a syscall done in the kernel space,
+ * e.g. a failed kernel_execve().
+ */
+ movl PT_CS(%esp), %eax
+ andl $SEGMENT_RPL_MASK, %eax
+ #endif
cmpl $USER_RPL, %eax
jb resume_kernel # not returning to v8086 or userspace
CFI_SIGNAL_FRAME
CFI_DEF_CFA esp, 0
CFI_REGISTER esp, ebp
- movl SYSENTER_stack_sp0(%esp),%esp
+ movl TSS_sysenter_sp0(%esp),%esp
sysenter_past_esp:
/*
* Interrupts are disabled here, but we can't trace it until
*/
.popsection
+#ifdef CONFIG_STACK_UNWIND
+ENTRY(arch_unwind_init_running)
+ CFI_STARTPROC
+ movl 4(%esp), %edx
+ movl (%esp), %ecx
+ leal 4(%esp), %eax
+ movl %ebx, PT_EBX(%edx)
+ xorl %ebx, %ebx
+ movl %ebx, PT_ECX(%edx)
+ movl %ebx, PT_EDX(%edx)
+ movl %esi, PT_ESI(%edx)
+ movl %edi, PT_EDI(%edx)
+ movl %ebp, PT_EBP(%edx)
+ movl %ebx, PT_EAX(%edx)
+ movl $__USER_DS, PT_DS(%edx)
+ movl $__USER_DS, PT_ES(%edx)
+ movl $__KERNEL_PERCPU, PT_FS(%edx)
+ movl $__KERNEL_STACK_CANARY, PT_GS(%edx)
+ movl %eax, PT_OLDESP(%edx)
+ movl 16(%esp), %eax
+ movl %ebx, PT_ORIG_EAX(%edx)
+ movl %ecx, PT_EIP(%edx)
+ movl 12(%esp), %ecx
+ movl $__KERNEL_CS, PT_CS(%edx)
+ movl %eax, 12(%esp)
+ movl 8(%esp), %eax
+ movl %ecx, 8(%esp)
+ movl %ebx, PT_EFLAGS(%edx)
+ movl PT_EBX(%edx), %ebx
+ movl $__KERNEL_DS, PT_OLDSS(%edx)
+ jmpl *%eax
+ CFI_ENDPROC
+ENDPROC(arch_unwind_init_running)
+#endif
+
ENTRY(kernel_thread_helper)
pushl $0 # fake return address for unwinder
CFI_STARTPROC
CFI_ENDPROC
ENDPROC(kernel_thread_helper)
- #ifdef CONFIG_PARAVIRT_XEN
+ #ifdef CONFIG_XEN
/* Xen doesn't set %esp to be precisely what the normal sysenter
entrypoint expects, so fix it up before using the normal path. */
ENTRY(xen_sysenter_target)
BUILD_INTERRUPT3(xen_hvm_callback_vector, XEN_HVM_EVTCHN_CALLBACK,
xen_evtchn_do_upcall)
- #endif /* CONFIG_PARAVIRT_XEN */
+ #endif /* CONFIG_XEN */
#ifdef CONFIG_FUNCTION_TRACER
#ifdef CONFIG_DYNAMIC_FTRACE
* that sets up the real kernel stack. Check here, since we can't
* allow the wrong stack to be used.
*
- * "SYSENTER_stack_sp0+12" is because the NMI/debug handler will have
+ * "TSS_sysenter_sp0+12" is because the NMI/debug handler will have
* already pushed 3 words if it hits on the sysenter instruction:
* eflags, cs and eip.
*
cmpw $__KERNEL_CS, 4(%esp)
jne \ok
\label:
- movl SYSENTER_stack_sp0 + \offset(%esp), %esp
+ movl TSS_sysenter_sp0 + \offset(%esp), %esp
CFI_DEF_CFA esp, 0
CFI_UNDEFINED eip
pushfl_cfi
/*
* initial frame state for interrupts (and exceptions without error code)
*/
- .macro EMPTY_FRAME start=1 offset=0
- .if \start
+ .macro EMPTY_FRAME offset=0
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
- CFI_DEF_CFA rsp,8+\offset
- .else
- CFI_DEF_CFA_OFFSET 8+\offset
- .endif
+ CFI_DEF_CFA rsp,\offset
.endm
/*
* initial frame state for interrupts (and exceptions without error code)
*/
.macro INTR_FRAME start=1 offset=0
- EMPTY_FRAME \start, SS+8+\offset-RIP
+ .if \start
+ EMPTY_FRAME SS+8+\offset-RIP
+ .else
+ CFI_DEF_CFA_OFFSET SS+8+\offset-RIP
+ .endif
/*CFI_REL_OFFSET ss, SS+\offset-RIP*/
CFI_REL_OFFSET rsp, RSP+\offset-RIP
/*CFI_REL_OFFSET rflags, EFLAGS+\offset-RIP*/
*/
.macro XCPT_FRAME start=1 offset=0
INTR_FRAME \start, RIP+\offset-ORIG_RAX
- /*CFI_REL_OFFSET orig_rax, ORIG_RAX-ORIG_RAX*/
.endm
/*
* frame that enables calling into C.
*/
.macro PARTIAL_FRAME start=1 offset=0
+ .if \start >= 0
XCPT_FRAME \start, ORIG_RAX+\offset-ARGOFFSET
+ .endif
CFI_REL_OFFSET rdi, RDI+\offset-ARGOFFSET
CFI_REL_OFFSET rsi, RSI+\offset-ARGOFFSET
CFI_REL_OFFSET rdx, RDX+\offset-ARGOFFSET
* frame that enables passing a complete pt_regs to a C function.
*/
.macro DEFAULT_FRAME start=1 offset=0
+ .if \start >= -1
PARTIAL_FRAME \start, R11+\offset-R15
+ .endif
CFI_REL_OFFSET rbx, RBX+\offset
CFI_REL_OFFSET rbp, RBP+\offset
CFI_REL_OFFSET r12, R12+\offset
movq %rsp, %rsi
leaq -RBP(%rsp),%rdi /* arg1 for handler */
- testl $3, CS(%rdi)
+ testl $3, CS-RBP(%rsi)
je 1f
SWAPGS
/*
* moving irq_enter into assembly, which would be too much work)
*/
1: incl PER_CPU_VAR(irq_count)
- jne 2f
- mov PER_CPU_VAR(irq_stack_ptr),%rsp
+ cmovzq PER_CPU_VAR(irq_stack_ptr),%rsp
CFI_DEF_CFA_REGISTER rsi
- 2: /* Store previous stack value */
+ /* Store previous stack value */
pushq %rsi
CFI_ESCAPE 0x0f /* DW_CFA_def_cfa_expression */, 6, \
0x77 /* DW_OP_breg7 */, 0, \
.endm
ENTRY(save_rest)
- PARTIAL_FRAME 1 REST_SKIP+8
+ CFI_STARTPROC
movq 5*8+16(%rsp), %r11 /* save return address */
- movq_cfi rbx, RBX+16
- movq_cfi rbp, RBP+16
- movq_cfi r12, R12+16
- movq_cfi r13, R13+16
- movq_cfi r14, R14+16
- movq_cfi r15, R15+16
+ movq %rbx, RBX+16(%rsp)
+ movq %rbp, RBP+16(%rsp)
+ movq %r12, R12+16(%rsp)
+ movq %r13, R13+16(%rsp)
+ movq %r14, R14+16(%rsp)
+ movq %r15, R15+16(%rsp)
movq %r11, 8(%rsp) /* return address */
FIXUP_TOP_OF_STACK %r11, 16
ret
/* save complete stack frame */
.pushsection .kprobes.text, "ax"
ENTRY(save_paranoid)
- XCPT_FRAME 1 RDI+8
+ XCPT_FRAME offset=ORIG_RAX-R15+8
cld
- movq_cfi rdi, RDI+8
- movq_cfi rsi, RSI+8
+ movq %rdi, RDI+8(%rsp)
+ movq %rsi, RSI+8(%rsp)
movq_cfi rdx, RDX+8
movq_cfi rcx, RCX+8
movq_cfi rax, RAX+8
- movq_cfi r8, R8+8
- movq_cfi r9, R9+8
- movq_cfi r10, R10+8
- movq_cfi r11, R11+8
+ movq %r8, R8+8(%rsp)
+ movq %r9, R9+8(%rsp)
+ movq %r10, R10+8(%rsp)
+ movq %r11, R11+8(%rsp)
movq_cfi rbx, RBX+8
- movq_cfi rbp, RBP+8
- movq_cfi r12, R12+8
- movq_cfi r13, R13+8
- movq_cfi r14, R14+8
- movq_cfi r15, R15+8
+ movq %rbp, RBP+8(%rsp)
+ movq %r12, R12+8(%rsp)
+ movq %r13, R13+8(%rsp)
+ movq %r14, R14+8(%rsp)
+ movq %r15, R15+8(%rsp)
movl $1,%ebx
movl $MSR_GS_BASE,%ecx
rdmsr
testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
jnz tracesys
system_call_fastpath:
+ #if __SYSCALL_MASK == ~0
cmpq $__NR_syscall_max,%rax
+ #else
+ andl $__SYSCALL_MASK,%eax
+ cmpl $__NR_syscall_max,%eax
+ #endif
ja badsys
movq %r10,%rcx
call *sys_call_table(,%rax,8) # XXX: rip relative
*/
LOAD_ARGS ARGOFFSET, 1
RESTORE_REST
+ #if __SYSCALL_MASK == ~0
cmpq $__NR_syscall_max,%rax
+ #else
+ andl $__SYSCALL_MASK,%eax
+ cmpl $__NR_syscall_max,%eax
+ #endif
ja int_ret_from_sys_call /* RAX(%rsp) set to -ENOSYS above */
movq %r10,%rcx /* fixup for C */
call *sys_call_table(,%rax,8)
subq $REST_SKIP, %rsp
CFI_ADJUST_CFA_OFFSET REST_SKIP
call save_rest
- DEFAULT_FRAME 0 8 /* offset 8: return address */
+ DEFAULT_FRAME -2 8 /* offset 8: return address */
leaq 8(%rsp), \arg /* pt_regs pointer */
call \func
jmp ptregscall_common
CFI_ENDPROC
END(stub_rt_sigreturn)
+ #ifdef CONFIG_X86_X32_ABI
+ PTREGSCALL stub_x32_sigaltstack, sys32_sigaltstack, %rdx
+
+ ENTRY(stub_x32_rt_sigreturn)
+ CFI_STARTPROC
+ addq $8, %rsp
+ PARTIAL_FRAME 0
+ SAVE_REST
+ movq %rsp,%rdi
+ FIXUP_TOP_OF_STACK %r11
+ call sys32_x32_rt_sigreturn
+ movq %rax,RAX(%rsp) # fixme, this could be done at the higher layer
+ RESTORE_REST
+ jmp int_ret_from_sys_call
+ CFI_ENDPROC
+ END(stub_x32_rt_sigreturn)
+
+ ENTRY(stub_x32_execve)
+ CFI_STARTPROC
+ addq $8, %rsp
+ PARTIAL_FRAME 0
+ SAVE_REST
+ FIXUP_TOP_OF_STACK %r11
+ movq %rsp, %rcx
+ call sys32_execve
+ RESTORE_TOP_OF_STACK %r11
+ movq %rax,RAX(%rsp)
+ RESTORE_REST
+ jmp int_ret_from_sys_call
+ CFI_ENDPROC
+ END(stub_x32_execve)
+
+ #endif
+
/*
* Build the entry stubs and pointer table with some assembler magic.
* We pack 7 stubs into a single 32-byte chunk, which will fit in a
/* Restore saved previous stack */
popq %rsi
- CFI_DEF_CFA_REGISTER rsi
+ CFI_DEF_CFA rsi,SS+8-RBP /* reg/off reset after def_cfa_expr */
leaq ARGOFFSET-RBP(%rsi), %rsp
CFI_DEF_CFA_REGISTER rsp
CFI_ADJUST_CFA_OFFSET RBP-ARGOFFSET
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
call error_entry
- DEFAULT_FRAME 0
+ DEFAULT_FRAME -1
movq %rsp,%rdi /* pt_regs pointer */
xorl %esi,%esi /* no error code */
call \do_sym
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
call save_paranoid
+ DEFAULT_FRAME -1
TRACE_IRQS_OFF
movq %rsp,%rdi /* pt_regs pointer */
xorl %esi,%esi /* no error code */
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
call save_paranoid
+ DEFAULT_FRAME -1
TRACE_IRQS_OFF
movq %rsp,%rdi /* pt_regs pointer */
xorl %esi,%esi /* no error code */
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
call error_entry
- DEFAULT_FRAME 0
+ DEFAULT_FRAME -1
movq %rsp,%rdi /* pt_regs pointer */
movq ORIG_RAX(%rsp),%rsi /* get error code */
movq $-1,ORIG_RAX(%rsp) /* no syscall to restart */
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
call save_paranoid
- DEFAULT_FRAME 0
+ DEFAULT_FRAME -1
TRACE_IRQS_OFF
movq %rsp,%rdi /* pt_regs pointer */
movq ORIG_RAX(%rsp),%rsi /* get error code */
CFI_ENDPROC
END(call_softirq)
+#ifdef CONFIG_STACK_UNWIND
+ENTRY(arch_unwind_init_running)
+ CFI_STARTPROC
+ movq %r15, R15(%rdi)
+ movq %r14, R14(%rdi)
+ xchgq %rsi, %rdx
+ movq %r13, R13(%rdi)
+ movq %r12, R12(%rdi)
+ xorl %eax, %eax
+ movq %rbp, RBP(%rdi)
+ movq %rbx, RBX(%rdi)
+ movq (%rsp), %r9
+ xchgq %rdx, %rcx
+ movq %rax, R11(%rdi)
+ movq %rax, R10(%rdi)
+ movq %rax, R9(%rdi)
+ movq %rax, R8(%rdi)
+ movq %rax, RAX(%rdi)
+ movq %rax, RCX(%rdi)
+ movq %rax, RDX(%rdi)
+ movq %rax, RSI(%rdi)
+ movq %rax, RDI(%rdi)
+ movq %rax, ORIG_RAX(%rdi)
+ movq %r9, RIP(%rdi)
+ leaq 8(%rsp), %r9
+ movq $__KERNEL_CS, CS(%rdi)
+ movq %rax, EFLAGS(%rdi)
+ movq %r9, RSP(%rdi)
+ movq $__KERNEL_DS, SS(%rdi)
+ jmpq *%rcx
+ CFI_ENDPROC
+END(arch_unwind_init_running)
+#endif
+
- #ifdef CONFIG_PARAVIRT_XEN
+ #ifdef CONFIG_XEN
zeroentry xen_hypervisor_callback xen_do_hypervisor_callback
/*
apicinterrupt XEN_HVM_EVTCHN_CALLBACK \
xen_hvm_callback_vector xen_evtchn_do_upcall
- #endif /* CONFIG_PARAVIRT_XEN */
+ #endif /* CONFIG_XEN */
/*
* Some functions should be protected against kprobes
paranoidzeroentry_ist debug do_debug DEBUG_STACK
paranoidzeroentry_ist int3 do_int3 DEBUG_STACK
paranoiderrorentry stack_segment do_stack_segment
- #ifdef CONFIG_PARAVIRT_XEN
+ #ifdef CONFIG_XEN
zeroentry xen_debug do_debug
zeroentry xen_int3 do_int3
errorentry xen_stack_segment do_stack_segment
* returns in "no swapgs flag" in %ebx.
*/
ENTRY(error_entry)
- XCPT_FRAME
- CFI_ADJUST_CFA_OFFSET 15*8
+ XCPT_FRAME offset=ORIG_RAX-R15+8
/* oldrax contains error code */
cld
- movq_cfi rdi, RDI+8
- movq_cfi rsi, RSI+8
- movq_cfi rdx, RDX+8
- movq_cfi rcx, RCX+8
- movq_cfi rax, RAX+8
- movq_cfi r8, R8+8
- movq_cfi r9, R9+8
- movq_cfi r10, R10+8
- movq_cfi r11, R11+8
+ movq %rdi, RDI+8(%rsp)
+ movq %rsi, RSI+8(%rsp)
+ movq %rdx, RDX+8(%rsp)
+ movq %rcx, RCX+8(%rsp)
+ movq %rax, RAX+8(%rsp)
+ movq %r8, R8+8(%rsp)
+ movq %r9, R9+8(%rsp)
+ movq %r10, R10+8(%rsp)
+ movq %r11, R11+8(%rsp)
movq_cfi rbx, RBX+8
- movq_cfi rbp, RBP+8
- movq_cfi r12, R12+8
- movq_cfi r13, R13+8
- movq_cfi r14, R14+8
- movq_cfi r15, R15+8
+ movq %rbp, RBP+8(%rsp)
+ movq %r12, R12+8(%rsp)
+ movq %r13, R13+8(%rsp)
+ movq %r14, R14+8(%rsp)
+ movq %r15, R15+8(%rsp)
xorl %ebx,%ebx
testl $3,CS+8(%rsp)
je error_kernelspace
* compat mode. Check for these here too.
*/
error_kernelspace:
+ CFI_REL_OFFSET rcx, RCX+8
incl %ebx
leaq irq_return(%rip),%rcx
cmpq %rcx,RIP+8(%rsp)
/* Use %rdx as out temp variable throughout */
pushq_cfi %rdx
+ CFI_REL_OFFSET rdx, 0
/*
* If %cs was not the kernel segment, then the NMI triggered in user
*/
lea 6*8(%rsp), %rdx
test_in_nmi rdx, 4*8(%rsp), nested_nmi, first_nmi
+ CFI_REMEMBER_STATE
nested_nmi:
/*
nested_nmi_out:
popq_cfi %rdx
+ CFI_RESTORE rdx
/* No need to check faults here */
INTERRUPT_RETURN
+ CFI_RESTORE_STATE
first_nmi:
/*
* Because nested NMIs will use the pushed location that we
* | pt_regs |
* +-------------------------+
*
- * The saved RIP is used to fix up the copied RIP that a nested
- * NMI may zero out. The original stack frame and the temp storage
+ * The saved stack frame is used to fix up the copied stack frame
+ * that a nested NMI may change to make the interrupted NMI iret jump
+ * to the repeat_nmi. The original stack frame and the temp storage
* is also used by nested NMIs and can not be trusted on exit.
*/
+ /* Do not pop rdx, nested NMIs will corrupt that part of the stack */
+ movq (%rsp), %rdx
+ CFI_RESTORE rdx
+
/* Set the NMI executing variable on the stack. */
pushq_cfi $1
.rept 5
pushq_cfi 6*8(%rsp)
.endr
+ CFI_DEF_CFA_OFFSET SS+8-RIP
+
+ /* Everything up to here is safe from nested NMIs */
+
+ /*
+ * If there was a nested NMI, the first NMI's iret will return
+ * here. But NMIs are still enabled and we can take another
+ * nested NMI. The nested NMI checks the interrupted RIP to see
+ * if it is between repeat_nmi and end_repeat_nmi, and if so
+ * it will just return, as we are about to repeat an NMI anyway.
+ * This makes it safe to copy to the stack frame that a nested
+ * NMI will update.
+ */
+ repeat_nmi:
+ /*
+ * Update the stack variable to say we are still in NMI (the update
+ * is benign for the non-repeat case, where 1 was pushed just above
+ * to this very stack slot).
+ */
+ movq $1, 5*8(%rsp)
/* Make another copy, this one may be modified by nested NMIs */
.rept 5
pushq_cfi 4*8(%rsp)
.endr
-
- /* Do not pop rdx, nested NMIs will corrupt it */
- movq 11*8(%rsp), %rdx
+ CFI_DEF_CFA_OFFSET SS+8-RIP
+ end_repeat_nmi:
/*
* Everything below this point can be preempted by a nested
- * NMI if the first NMI took an exception. Repeated NMIs
- * caused by an exception and nested NMI will start here, and
- * can still be preempted by another NMI.
+ * NMI if the first NMI took an exception and reset our iret stack
+ * so that we repeat another NMI.
*/
- restart_nmi:
pushq_cfi $-1 /* ORIG_RAX: no syscall to restart */
subq $ORIG_RAX-R15, %rsp
CFI_ADJUST_CFA_OFFSET ORIG_RAX-R15
* exceptions might do.
*/
call save_paranoid
- DEFAULT_FRAME 0
+ DEFAULT_FRAME -1
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
movq %rsp,%rdi
movq $-1,%rsi
CFI_ENDPROC
END(nmi)
- /*
- * If an NMI hit an iret because of an exception or breakpoint,
- * it can lose its NMI context, and a nested NMI may come in.
- * In that case, the nested NMI will change the preempted NMI's
- * stack to jump to here when it does the final iret.
- */
- repeat_nmi:
- INTR_FRAME
- /* Update the stack variable to say we are still in NMI */
- movq $1, 5*8(%rsp)
-
- /* copy the saved stack back to copy stack */
- .rept 5
- pushq_cfi 4*8(%rsp)
- .endr
-
- jmp restart_nmi
- CFI_ENDPROC
- end_repeat_nmi:
-
ENTRY(ignore_sysret)
CFI_STARTPROC
mov $-ENOSYS,%eax
#ifdef CONFIG_X86_32
#define LOAD_OFFSET __PAGE_OFFSET
- #elif !defined(CONFIG_XEN) || CONFIG_XEN_COMPAT > 0x030002
- #define LOAD_OFFSET __START_KERNEL_map
#else
- #define LOAD_OFFSET 0
+ #define LOAD_OFFSET __START_KERNEL_map
#endif
#include <asm-generic/vmlinux.lds.h>
jiffies_64 = jiffies;
#endif
- #if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA) && !defined(CONFIG_XEN)
+ #if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
/*
* On 64-bit, align RODATA to 2MB so that even with CONFIG_DEBUG_RODATA
* we retain large page mappings for boundaries spanning kernel text, rodata
{
#ifdef CONFIG_X86_32
. = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
- #if defined(CONFIG_XEN) && CONFIG_XEN_COMPAT <= 0x030002
- #undef LOAD_OFFSET
- #define LOAD_OFFSET 0
- #endif
phys_startup_32 = startup_32 - LOAD_OFFSET;
#else
. = __START_KERNEL;
/* Sections to be discarded */
DISCARDS
+#ifndef CONFIG_UNWIND_INFO
/DISCARD/ : { *(.eh_frame) }
+#endif
}
0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
/* cpuid 1.ecx */
const u32 kvm_supported_word4_x86_features =
- F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
+ F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64 */ | F(MWAIT) |
0 /* DS-CPL, VMX, SMX, EST */ |
0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
const u32 kvm_supported_word6_x86_features =
F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
- F(3DNOWPREFETCH) | 0 /* OSVW */ | 0 /* IBS */ | F(XOP) |
+ F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
/* cpuid 0xC0000001.edx */
#define MSRPM_OFFSETS 16
static u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
+ /*
+ * Set osvw_len to higher value when updated Revision Guides
+ * are published and we know what the new status bits are
+ */
+ static uint64_t osvw_len = 4, osvw_status;
+
struct vcpu_svm {
struct kvm_vcpu vcpu;
struct vmcb *vmcb;
#else
static bool npt_enabled;
#endif
- static int npt = 1;
+ /* allow nested paging (virtualized MMU) for all guests */
+ static int npt = true;
module_param(npt, int, S_IRUGO);
- static int nested = 1;
+ /* allow nested virtualization in KVM/SVM */
+ static int nested = true;
module_param(nested, int, S_IRUGO);
static void svm_flush_tlb(struct kvm_vcpu *vcpu);
erratum_383_found = true;
}
+ static void svm_init_osvw(struct kvm_vcpu *vcpu)
+ {
+ /*
+ * Guests should see errata 400 and 415 as fixed (assuming that
+ * HLT and IO instructions are intercepted).
+ */
+ vcpu->arch.osvw.length = (osvw_len >= 3) ? (osvw_len) : 3;
+ vcpu->arch.osvw.status = osvw_status & ~(6ULL);
+
+ /*
+ * By increasing VCPU's osvw.length to 3 we are telling the guest that
+ * all osvw.status bits inside that length, including bit 0 (which is
+ * reserved for erratum 298), are valid. However, if host processor's
+ * osvw_len is 0 then osvw_status[0] carries no information. We need to
+ * be conservative here and therefore we tell the guest that erratum 298
+ * is present (because we really don't know).
+ */
+ if (osvw_len == 0 && boot_cpu_data.x86 == 0x10)
+ vcpu->arch.osvw.status |= 1;
+ }
+
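The guest-visible OSVW adjustment above is plain bit arithmetic; a minimal userspace sketch of the same computation (function name is illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors svm_init_osvw(): the length is raised to at least 3, and bits
 * 1 and 2 of the status (errata 400 and 415) are cleared so the guest
 * sees those errata as fixed. If the host reports length 0 on family
 * 0x10, bit 0 (erratum 298) is conservatively set. */
static void guest_osvw(uint64_t host_len, uint64_t host_status, int family,
                       uint64_t *len, uint64_t *status)
{
    *len = host_len >= 3 ? host_len : 3;
    *status = host_status & ~6ULL;  /* clear bits 1 and 2 */
    if (host_len == 0 && family == 0x10)
        *status |= 1;               /* report erratum 298 as present */
}
```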
static int has_svm(void)
{
const char *msg;
__get_cpu_var(current_tsc_ratio) = TSC_RATIO_DEFAULT;
}
+
+ /*
+ * Get OSVW bits.
+ *
+ * Note that it is possible to have a system with mixed processor
+ * revisions and therefore different OSVW bits. If bits are not the same
+ * on different processors then choose the worst case (i.e. if erratum
+ * is present on one processor and not on another then assume that the
+ * erratum is present everywhere).
+ */
+ if (cpu_has(&boot_cpu_data, X86_FEATURE_OSVW)) {
+ uint64_t len, status = 0;
+ int err;
+
+ len = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &err);
+ if (!err)
+ status = native_read_msr_safe(MSR_AMD64_OSVW_STATUS,
+ &err);
+
+ if (err)
+ osvw_status = osvw_len = 0;
+ else {
+ if (len < osvw_len)
+ osvw_len = len;
+ osvw_status |= status;
+ osvw_status &= (1ULL << osvw_len) - 1;
+ }
+ } else
+ osvw_status = osvw_len = 0;
+
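The worst-case merge described in the comment above reduces to taking the minimum length and OR-ing the status bits, masked to the valid length; a small sketch (hypothetical helper, assumes length < 64):

```c
#include <assert.h>
#include <stdint.h>

/* Fold one CPU's OSVW MSR pair into a global worst case, as the
 * hardware-enable path above does per CPU: the shortest valid length
 * wins, status bits are OR-ed (erratum present anywhere counts as
 * present everywhere), then masked down to the valid length. */
static void osvw_merge(uint64_t cpu_len, uint64_t cpu_status,
                       uint64_t *glob_len, uint64_t *glob_status)
{
    if (cpu_len < *glob_len)
        *glob_len = cpu_len;
    *glob_status |= cpu_status;
    *glob_status &= (1ULL << *glob_len) - 1;  /* undefined for len >= 64 */
}
```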
svm_init_erratum_383();
amd_pmu_enable_virt();
return _tsc;
}
- static void svm_set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz)
+ static void svm_set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz, bool scale)
{
struct vcpu_svm *svm = to_svm(vcpu);
u64 ratio;
u64 khz;
- /* TSC scaling supported? */
- if (!boot_cpu_has(X86_FEATURE_TSCRATEMSR))
+ /* Guest TSC same frequency as host TSC? */
+ if (!scale) {
+ svm->tsc_ratio = TSC_RATIO_DEFAULT;
return;
+ }
- /* TSC-Scaling disabled or guest TSC same frequency as host TSC? */
- if (user_tsc_khz == 0) {
- vcpu->arch.virtual_tsc_khz = 0;
- svm->tsc_ratio = TSC_RATIO_DEFAULT;
+ /* TSC scaling supported? */
+ if (!boot_cpu_has(X86_FEATURE_TSCRATEMSR)) {
+ if (user_tsc_khz > tsc_khz) {
+ vcpu->arch.tsc_catchup = 1;
+ vcpu->arch.tsc_always_catchup = 1;
+ } else
+ WARN(1, "user requested TSC rate below hardware speed\n");
return;
}
user_tsc_khz);
return;
}
- vcpu->arch.virtual_tsc_khz = user_tsc_khz;
svm->tsc_ratio = ratio;
}
mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
}
- static void svm_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment)
+ static void svm_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment, bool host)
{
struct vcpu_svm *svm = to_svm(vcpu);
+ WARN_ON(adjustment < 0);
+ if (host)
+ adjustment = svm_scale_tsc(vcpu, adjustment);
+
svm->vmcb->control.tsc_offset += adjustment;
if (is_guest_mode(vcpu))
svm->nested.hsave->control.tsc_offset += adjustment;
if (kvm_vcpu_is_bsp(&svm->vcpu))
svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
+ svm_init_osvw(&svm->vcpu);
+
return &svm->vcpu;
free_page4:
wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
}
+ static void svm_update_cpl(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+ int cpl;
+
+ if (!is_protmode(vcpu))
+ cpl = 0;
+ else if (svm->vmcb->save.rflags & X86_EFLAGS_VM)
+ cpl = 3;
+ else
+ cpl = svm->vmcb->save.cs.selector & 0x3;
+
+ svm->vmcb->save.cpl = cpl;
+ }
+
static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
{
return to_svm(vcpu)->vmcb->save.rflags;
static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
+ unsigned long old_rflags = to_svm(vcpu)->vmcb->save.rflags;
+
to_svm(vcpu)->vmcb->save.rflags = rflags;
+ if ((old_rflags ^ rflags) & X86_EFLAGS_VM)
+ svm_update_cpl(vcpu);
}
static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
}
if (seg == VCPU_SREG_CS)
- svm->vmcb->save.cpl
- = (svm->vmcb->save.cs.attrib
- >> SVM_SELECTOR_DPL_SHIFT) & 3;
+ svm_update_cpl(vcpu);
mark_dirty(svm->vmcb, VMCB_SEG);
}
return 1;
}
+static int monitor_interception(struct vcpu_svm *svm)
+{
+ svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+ skip_emulated_instruction(&svm->vcpu);
+
+ return 1;
+}
+
+static int mwait_interception(struct vcpu_svm *svm)
+{
+ svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+ skip_emulated_instruction(&svm->vcpu);
+
+ return kvm_emulate_halt(&svm->vcpu);
+}
+
static int invalid_op_interception(struct vcpu_svm *svm)
{
kvm_queue_exception(&svm->vcpu, UD_VECTOR);
(int_vec == OF_VECTOR || int_vec == BP_VECTOR)))
skip_emulated_instruction(&svm->vcpu);
- if (kvm_task_switch(&svm->vcpu, tss_selector, reason,
+ if (int_type != SVM_EXITINTINFO_TYPE_SOFT)
+ int_vec = -1;
+
+ if (kvm_task_switch(&svm->vcpu, tss_selector, int_vec, reason,
has_error_code, error_code) == EMULATE_FAIL) {
svm->vcpu.run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
svm->vcpu.run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
[SVM_EXIT_CLGI] = clgi_interception,
[SVM_EXIT_SKINIT] = skinit_interception,
[SVM_EXIT_WBINVD] = emulate_on_interception,
- [SVM_EXIT_MONITOR] = invalid_op_interception,
- [SVM_EXIT_MWAIT] = invalid_op_interception,
+ [SVM_EXIT_MONITOR] = monitor_interception,
+ [SVM_EXIT_MWAIT] = mwait_interception,
[SVM_EXIT_XSETBV] = xsetbv_interception,
[SVM_EXIT_NPF] = pf_interception,
};
#include <asm/mtrr.h>
#include <asm/mce.h>
#include <asm/i387.h>
+ #include <asm/fpu-internal.h> /* Ugh! */
#include <asm/xcr.h>
#include <asm/pvclock.h>
#include <asm/div64.h>
u32 kvm_max_guest_tsc_khz;
EXPORT_SYMBOL_GPL(kvm_max_guest_tsc_khz);
+ /* tsc tolerance in parts per million - default to 1/2 of the NTP threshold */
+ static u32 tsc_tolerance_ppm = 250;
+ module_param(tsc_tolerance_ppm, uint, S_IRUGO | S_IWUSR);
+
#define KVM_NR_SHARED_MSRS 16
struct kvm_shared_msrs_global {
static DEFINE_PER_CPU(unsigned long, cpu_tsc_khz);
unsigned long max_tsc_khz;
- static inline int kvm_tsc_changes_freq(void)
+ static inline u64 nsec_to_cycles(struct kvm_vcpu *vcpu, u64 nsec)
{
- int cpu = get_cpu();
- int ret = !boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
- cpufreq_quick_get(cpu) != 0;
- put_cpu();
- return ret;
+ return pvclock_scale_delta(nsec, vcpu->arch.virtual_tsc_mult,
+ vcpu->arch.virtual_tsc_shift);
}
- u64 vcpu_tsc_khz(struct kvm_vcpu *vcpu)
+ static u32 adjust_tsc_khz(u32 khz, s32 ppm)
{
- if (vcpu->arch.virtual_tsc_khz)
- return vcpu->arch.virtual_tsc_khz;
- else
- return __this_cpu_read(cpu_tsc_khz);
+ u64 v = (u64)khz * (1000000 + ppm);
+ do_div(v, 1000000);
+ return v;
}
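With the default `tsc_tolerance_ppm` of 250, the acceptance window for a 2.4 GHz host works out to ±600 kHz; a standalone restatement of the arithmetic above (userspace form, not the kernel helper itself):

```c
#include <assert.h>
#include <stdint.h>

/* Same computation as adjust_tsc_khz() above: scale a kHz rate by a
 * signed parts-per-million offset. A negative ppm gives the lower
 * bound of the tolerance window, a positive one the upper bound. */
static uint32_t adjust_khz(uint32_t khz, int32_t ppm)
{
    uint64_t v = (uint64_t)khz * (uint64_t)(1000000 + ppm);
    return (uint32_t)(v / 1000000);
}
```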
- static inline u64 nsec_to_cycles(struct kvm_vcpu *vcpu, u64 nsec)
+ static void kvm_set_tsc_khz(struct kvm_vcpu *vcpu, u32 this_tsc_khz)
{
- u64 ret;
+ u32 thresh_lo, thresh_hi;
+ int use_scaling = 0;
- WARN_ON(preemptible());
- if (kvm_tsc_changes_freq())
- printk_once(KERN_WARNING
- "kvm: unreliable cycle conversion on adjustable rate TSC\n");
- ret = nsec * vcpu_tsc_khz(vcpu);
- do_div(ret, USEC_PER_SEC);
- return ret;
- }
-
- static void kvm_init_tsc_catchup(struct kvm_vcpu *vcpu, u32 this_tsc_khz)
- {
/* Compute a scale to convert nanoseconds in TSC cycles */
kvm_get_time_scale(this_tsc_khz, NSEC_PER_SEC / 1000,
- &vcpu->arch.tsc_catchup_shift,
- &vcpu->arch.tsc_catchup_mult);
+ &vcpu->arch.virtual_tsc_shift,
+ &vcpu->arch.virtual_tsc_mult);
+ vcpu->arch.virtual_tsc_khz = this_tsc_khz;
+
+ /*
+ * Compute the variation in TSC rate which is acceptable
+ * within the range of tolerance and decide if the
+ * rate being applied is within that bounds of the hardware
+ * rate. If so, no scaling or compensation need be done.
+ */
+ thresh_lo = adjust_tsc_khz(tsc_khz, -tsc_tolerance_ppm);
+ thresh_hi = adjust_tsc_khz(tsc_khz, tsc_tolerance_ppm);
+ if (this_tsc_khz < thresh_lo || this_tsc_khz > thresh_hi) {
+ pr_debug("kvm: requested TSC rate %u falls outside tolerance [%u,%u]\n", this_tsc_khz, thresh_lo, thresh_hi);
+ use_scaling = 1;
+ }
+ kvm_x86_ops->set_tsc_khz(vcpu, this_tsc_khz, use_scaling);
}
static u64 compute_guest_tsc(struct kvm_vcpu *vcpu, s64 kernel_ns)
{
- u64 tsc = pvclock_scale_delta(kernel_ns-vcpu->arch.last_tsc_nsec,
- vcpu->arch.tsc_catchup_mult,
- vcpu->arch.tsc_catchup_shift);
- tsc += vcpu->arch.last_tsc_write;
+ u64 tsc = pvclock_scale_delta(kernel_ns-vcpu->arch.this_tsc_nsec,
+ vcpu->arch.virtual_tsc_mult,
+ vcpu->arch.virtual_tsc_shift);
+ tsc += vcpu->arch.this_tsc_write;
return tsc;
}
struct kvm *kvm = vcpu->kvm;
u64 offset, ns, elapsed;
unsigned long flags;
- s64 sdiff;
+ s64 usdiff;
raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
offset = kvm_x86_ops->compute_tsc_offset(vcpu, data);
ns = get_kernel_ns();
elapsed = ns - kvm->arch.last_tsc_nsec;
- sdiff = data - kvm->arch.last_tsc_write;
- if (sdiff < 0)
- sdiff = -sdiff;
+
+ /* n.b - signed multiplication and division required */
+ usdiff = data - kvm->arch.last_tsc_write;
+ #ifdef CONFIG_X86_64
+ usdiff = (usdiff * 1000) / vcpu->arch.virtual_tsc_khz;
+ #else
+ /* do_div() only does unsigned */
+ asm("idivl %2; xor %%edx, %%edx"
+ : "=A"(usdiff)
+ : "A"(usdiff * 1000), "rm"(vcpu->arch.virtual_tsc_khz));
+ #endif
+ do_div(elapsed, 1000);
+ usdiff -= elapsed;
+ if (usdiff < 0)
+ usdiff = -usdiff;
/*
- * Special case: close write to TSC within 5 seconds of
- * another CPU is interpreted as an attempt to synchronize
- * The 5 seconds is to accommodate host load / swapping as
- * well as any reset of TSC during the boot process.
- *
- * In that case, for a reliable TSC, we can match TSC offsets,
- * or make a best guest using elapsed value.
- */
- if (sdiff < nsec_to_cycles(vcpu, 5ULL * NSEC_PER_SEC) &&
- elapsed < 5ULL * NSEC_PER_SEC) {
+ * Special case: TSC write with a small delta (1 second) of virtual
+ * cycle time against real time is interpreted as an attempt to
+ * synchronize the CPU.
+ *
+ * For a reliable TSC, we can match TSC offsets, and for an unstable
+ * TSC, we add elapsed time in this computation. We could let the
+ * compensation code attempt to catch up if we fall behind, but
+ * it's better to try to match offsets from the beginning.
+ */
+ if (usdiff < USEC_PER_SEC &&
+ vcpu->arch.virtual_tsc_khz == kvm->arch.last_tsc_khz) {
if (!check_tsc_unstable()) {
- offset = kvm->arch.last_tsc_offset;
+ offset = kvm->arch.cur_tsc_offset;
pr_debug("kvm: matched tsc offset for %llu\n", data);
} else {
u64 delta = nsec_to_cycles(vcpu, elapsed);
- offset += delta;
+ data += delta;
+ offset = kvm_x86_ops->compute_tsc_offset(vcpu, data);
pr_debug("kvm: adjusted tsc offset by %llu\n", delta);
}
- ns = kvm->arch.last_tsc_nsec;
+ } else {
+ /*
+ * We split periods of matched TSC writes into generations.
+ * For each generation, we track the original measured
+ * nanosecond time, offset, and write, so if TSCs are in
+ * sync, we can match exact offset, and if not, we can match
+ * exact software computation in compute_guest_tsc()
+ *
+ * These values are tracked in kvm->arch.cur_xxx variables.
+ */
+ kvm->arch.cur_tsc_generation++;
+ kvm->arch.cur_tsc_nsec = ns;
+ kvm->arch.cur_tsc_write = data;
+ kvm->arch.cur_tsc_offset = offset;
+ pr_debug("kvm: new tsc generation %u, clock %llu\n",
+ kvm->arch.cur_tsc_generation, data);
}
+
+ /*
+ * We also track the most recent recorded KHZ, write and time to
+ * allow the matching interval to be extended at each write.
+ */
kvm->arch.last_tsc_nsec = ns;
kvm->arch.last_tsc_write = data;
- kvm->arch.last_tsc_offset = offset;
- kvm_x86_ops->write_tsc_offset(vcpu, offset);
- raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
+ kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
/* Reset of TSC must disable overshoot protection below */
vcpu->arch.hv_clock.tsc_timestamp = 0;
- vcpu->arch.last_tsc_write = data;
- vcpu->arch.last_tsc_nsec = ns;
+ vcpu->arch.last_guest_tsc = data;
+
+ /* Keep track of which generation this VCPU has synchronized to */
+ vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
+ vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
+ vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
+
+ kvm_x86_ops->write_tsc_offset(vcpu, offset);
+ raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
}
+
EXPORT_SYMBOL_GPL(kvm_write_tsc);
static int kvm_guest_time_update(struct kvm_vcpu *v)
local_irq_save(flags);
tsc_timestamp = kvm_x86_ops->read_l1_tsc(v);
kernel_ns = get_kernel_ns();
- this_tsc_khz = vcpu_tsc_khz(v);
+ this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
if (unlikely(this_tsc_khz == 0)) {
local_irq_restore(flags);
kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
if (vcpu->tsc_catchup) {
u64 tsc = compute_guest_tsc(v, kernel_ns);
if (tsc > tsc_timestamp) {
- kvm_x86_ops->adjust_tsc_offset(v, tsc - tsc_timestamp);
+ adjust_tsc_offset_guest(v, tsc - tsc_timestamp);
tsc_timestamp = tsc;
}
}
* observed by the guest and ensure the new system time is greater.
*/
max_kernel_ns = 0;
- if (vcpu->hv_clock.tsc_timestamp && vcpu->last_guest_tsc) {
+ if (vcpu->hv_clock.tsc_timestamp) {
max_kernel_ns = vcpu->last_guest_tsc -
vcpu->hv_clock.tsc_timestamp;
max_kernel_ns = pvclock_scale_delta(max_kernel_ns,
*/
vcpu->hv_clock.version += 2;
- shared_kaddr = kmap_atomic(vcpu->time_page, KM_USER0);
+ shared_kaddr = kmap_atomic(vcpu->time_page);
memcpy(shared_kaddr + vcpu->time_offset, &vcpu->hv_clock,
sizeof(vcpu->hv_clock));
- kunmap_atomic(shared_kaddr, KM_USER0);
+ kunmap_atomic(shared_kaddr);
mark_page_dirty(v->kvm, vcpu->time >> PAGE_SHIFT);
return 0;
case MSR_K7_HWCR:
data &= ~(u64)0x40; /* ignore flush filter disable */
data &= ~(u64)0x100; /* ignore ignne emulation enable */
+ data &= ~(u64)0x8; /* ignore TLB cache disable */
if (data != 0) {
pr_unimpl(vcpu, "unimplemented HWCR wrmsr: 0x%llx\n",
data);
case MSR_VM_HSAVE_PA:
case MSR_AMD64_PATCH_LOADER:
break;
+ case MSR_NHM_SNB_PKG_CST_CFG_CTL: /* 0xe2 */
case 0x200 ... 0x2ff:
return set_msr_mtrr(vcpu, msr, data);
case MSR_IA32_APICBASE:
*/
pr_unimpl(vcpu, "ignored wrmsr: 0x%x data %llx\n", msr, data);
break;
+ case MSR_AMD64_OSVW_ID_LENGTH:
+ if (!guest_cpuid_has_osvw(vcpu))
+ return 1;
+ vcpu->arch.osvw.length = data;
+ break;
+ case MSR_AMD64_OSVW_STATUS:
+ if (!guest_cpuid_has_osvw(vcpu))
+ return 1;
+ vcpu->arch.osvw.status = data;
+ break;
default:
if (msr && (msr == vcpu->kvm->arch.xen_hvm_config.msr))
return xen_hvm_config(vcpu, data);
case MSR_K8_INT_PENDING_MSG:
case MSR_AMD64_NB_CFG:
case MSR_FAM10H_MMIO_CONF_BASE:
+ case MSR_NHM_SNB_PKG_CST_CFG_CTL: /* 0xe2 */
data = 0;
break;
case MSR_P6_PERFCTR0:
*/
data = 0xbe702111;
break;
+ case MSR_AMD64_OSVW_ID_LENGTH:
+ if (!guest_cpuid_has_osvw(vcpu))
+ return 1;
+ data = vcpu->arch.osvw.length;
+ break;
+ case MSR_AMD64_OSVW_STATUS:
+ if (!guest_cpuid_has_osvw(vcpu))
+ return 1;
+ data = vcpu->arch.osvw.status;
+ break;
default:
if (kvm_pmu_msr(vcpu, msr))
return kvm_pmu_get_msr(vcpu, msr, pdata);
case KVM_CAP_XSAVE:
case KVM_CAP_ASYNC_PF:
case KVM_CAP_GET_TSC_KHZ:
+ case KVM_CAP_PCI_2_3:
r = 1;
break;
case KVM_CAP_COALESCED_MMIO:
}
kvm_x86_ops->vcpu_load(vcpu, cpu);
- if (unlikely(vcpu->cpu != cpu) || check_tsc_unstable()) {
- /* Make sure TSC doesn't go backwards */
- s64 tsc_delta;
- u64 tsc;
- tsc = kvm_x86_ops->read_l1_tsc(vcpu);
- tsc_delta = !vcpu->arch.last_guest_tsc ? 0 :
- tsc - vcpu->arch.last_guest_tsc;
+ /* Apply any externally detected TSC adjustments (due to suspend) */
+ if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
+ adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
+ vcpu->arch.tsc_offset_adjustment = 0;
+ set_bit(KVM_REQ_CLOCK_UPDATE, &vcpu->requests);
+ }
+ if (unlikely(vcpu->cpu != cpu) || check_tsc_unstable()) {
+ s64 tsc_delta = !vcpu->arch.last_host_tsc ? 0 :
+ native_read_tsc() - vcpu->arch.last_host_tsc;
if (tsc_delta < 0)
mark_tsc_unstable("KVM discovered backwards TSC");
if (check_tsc_unstable()) {
- kvm_x86_ops->adjust_tsc_offset(vcpu, -tsc_delta);
+ u64 offset = kvm_x86_ops->compute_tsc_offset(vcpu,
+ vcpu->arch.last_guest_tsc);
+ kvm_x86_ops->write_tsc_offset(vcpu, offset);
vcpu->arch.tsc_catchup = 1;
}
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
{
kvm_x86_ops->vcpu_put(vcpu);
kvm_put_guest_fpu(vcpu);
- vcpu->arch.last_guest_tsc = kvm_x86_ops->read_l1_tsc(vcpu);
+ vcpu->arch.last_host_tsc = native_read_tsc();
}
static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
u32 user_tsc_khz;
r = -EINVAL;
- if (!kvm_has_tsc_control)
- break;
-
user_tsc_khz = (u32)arg;
if (user_tsc_khz >= kvm_max_guest_tsc_khz)
goto out;
- kvm_x86_ops->set_tsc_khz(vcpu, user_tsc_khz);
+ if (user_tsc_khz == 0)
+ user_tsc_khz = tsc_khz;
+
+ kvm_set_tsc_khz(vcpu, user_tsc_khz);
r = 0;
goto out;
}
case KVM_GET_TSC_KHZ: {
- r = -EIO;
- if (check_tsc_unstable())
- goto out;
-
- r = vcpu_tsc_khz(vcpu);
-
+ r = vcpu->arch.virtual_tsc_khz;
goto out;
}
default:
return r;
}
+ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+ {
+ return VM_FAULT_SIGBUS;
+ }
+
static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr)
{
int ret;
unsigned long *dirty_bitmap,
unsigned long nr_dirty_pages)
{
+ spin_lock(&kvm->mmu_lock);
+
/* Not many dirty pages compared to # of shadow pages. */
if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
unsigned long gfn_offset;
for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
unsigned long gfn = memslot->base_gfn + gfn_offset;
- spin_lock(&kvm->mmu_lock);
kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
- spin_unlock(&kvm->mmu_lock);
}
kvm_flush_remote_tlbs(kvm);
- } else {
- spin_lock(&kvm->mmu_lock);
+ } else
kvm_mmu_slot_remove_write_access(kvm, memslot->id);
- spin_unlock(&kvm->mmu_lock);
- }
+
+ spin_unlock(&kvm->mmu_lock);
}
/*
r = -EEXIST;
if (kvm->arch.vpic)
goto create_irqchip_unlock;
+ r = -EINVAL;
+ if (atomic_read(&kvm->online_vcpus))
+ goto create_irqchip_unlock;
r = -ENOMEM;
vpic = kvm_create_pic(kvm);
if (vpic) {
goto emul_write;
}
- kaddr = kmap_atomic(page, KM_USER0);
+ kaddr = kmap_atomic(page);
kaddr += offset_in_page(gpa);
switch (bytes) {
case 1:
default:
BUG();
}
- kunmap_atomic(kaddr, KM_USER0);
+ kunmap_atomic(kaddr);
kvm_release_page_dirty(page);
if (!exchanged)
return res;
}
+ static void emulator_set_rflags(struct x86_emulate_ctxt *ctxt, ulong val)
+ {
+ kvm_set_rflags(emul_to_vcpu(ctxt), val);
+ }
+
static int emulator_get_cpl(struct x86_emulate_ctxt *ctxt)
{
return kvm_x86_ops->get_cpl(emul_to_vcpu(ctxt));
.set_idt = emulator_set_idt,
.get_cr = emulator_get_cr,
.set_cr = emulator_set_cr,
+ .set_rflags = emulator_set_rflags,
.cpl = emulator_get_cpl,
.get_dr = emulator_get_dr,
.set_dr = emulator_set_dr,
profile_hit(KVM_PROFILING, (void *)rip);
}
+ if (unlikely(vcpu->arch.tsc_always_catchup))
+ kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
kvm_lapic_sync_from_vapic(vcpu);
return 0;
}
- int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason,
- bool has_error_code, u32 error_code)
+ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
+ int reason, bool has_error_code, u32 error_code)
{
struct x86_emulate_ctxt *ctxt = &vcpu->arch.emulate_ctxt;
int ret;
init_emulate_ctxt(vcpu);
- ret = emulator_task_switch(ctxt, tss_selector, reason,
+ ret = emulator_task_switch(ctxt, tss_selector, idt_index, reason,
has_error_code, error_code);
if (ret)
struct kvm *kvm;
struct kvm_vcpu *vcpu;
int i;
+ int ret;
+ u64 local_tsc;
+ u64 max_tsc = 0;
+ bool stable, backwards_tsc = false;
kvm_shared_msr_cpu_online();
- list_for_each_entry(kvm, &vm_list, vm_list)
- kvm_for_each_vcpu(i, vcpu, kvm)
- if (vcpu->cpu == smp_processor_id())
- kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
- return kvm_x86_ops->hardware_enable(garbage);
+ ret = kvm_x86_ops->hardware_enable(garbage);
+ if (ret != 0)
+ return ret;
+
+ local_tsc = native_read_tsc();
+ stable = !check_tsc_unstable();
+ list_for_each_entry(kvm, &vm_list, vm_list) {
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ if (!stable && vcpu->cpu == smp_processor_id())
+ set_bit(KVM_REQ_CLOCK_UPDATE, &vcpu->requests);
+ if (stable && vcpu->arch.last_host_tsc > local_tsc) {
+ backwards_tsc = true;
+ if (vcpu->arch.last_host_tsc > max_tsc)
+ max_tsc = vcpu->arch.last_host_tsc;
+ }
+ }
+ }
+
+ /*
+ * Sometimes, even reliable TSCs go backwards. This happens on
+ * platforms that reset TSC during suspend or hibernate actions, but
+ * maintain synchronization. We must compensate. Fortunately, we can
+ * detect that condition here, which happens early in CPU bringup,
+ * before any KVM threads can be running. Unfortunately, we can't
+ * bring the TSCs fully up to date with real time, as we aren't yet far
+ * enough into CPU bringup that we know how much real time has actually
+ * elapsed; our helper function, get_kernel_ns() will be using boot
+ * variables that haven't been updated yet.
+ *
+ * So we simply find the maximum observed TSC above, then record the
+ * adjustment to TSC in each VCPU. When the VCPU later gets loaded,
+ * the adjustment will be applied. Note that we accumulate
+ * adjustments, in case multiple suspend cycles happen before some VCPU
+ * gets a chance to run again. In the event that no KVM threads get a
+ * chance to run, we will miss the entire elapsed period, as we'll have
+ * reset last_host_tsc, so VCPUs will not have the TSC adjusted and may
+ * lose cycle time. This isn't too big a deal, since the loss will be
+ * uniform across all VCPUs (not to mention the scenario is extremely
+ * unlikely). It is possible that a second hibernate recovery happens
+ * much faster than a first, causing the observed TSC here to be
+ * smaller; this would require additional padding adjustment, which is
+ * why we set last_host_tsc to the local tsc observed here.
+ *
+ * N.B. - this code below runs only on platforms with reliable TSC,
+ * as that is the only way backwards_tsc is set above. Also note
+ * that this runs for ALL vcpus, which is not a bug; all VCPUs should
+ * have the same delta_cyc adjustment applied if backwards_tsc
+ * is detected. Note further, this adjustment is only done once,
+ * as we reset last_host_tsc on all VCPUs to stop this from being
+ * called multiple times (one for each physical CPU bringup).
+ *
+ * Platforms with unreliable TSCs don't have to deal with this, they
+ * will be compensated by the logic in vcpu_load, which sets the TSC to
+ * catchup mode. This will catch up all VCPUs to real time, but cannot
+ * guarantee that they stay in perfect synchronization.
+ */
+ if (backwards_tsc) {
+ u64 delta_cyc = max_tsc - local_tsc;
+ list_for_each_entry(kvm, &vm_list, vm_list) {
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ vcpu->arch.tsc_offset_adjustment += delta_cyc;
+ vcpu->arch.last_host_tsc = local_tsc;
+ }
+
+ /*
+ * We have to disable TSC offset matching: if you were
+ * booting a VM while issuing an S4 host suspend, you
+ * may have some problem. Solving this issue is left
+ * as an exercise to the reader.
+ */
+ kvm->arch.last_tsc_nsec = 0;
+ kvm->arch.last_tsc_write = 0;
+ }
+
+ }
+ return 0;
}
void kvm_arch_hardware_disable(void *garbage)
kvm_x86_ops->check_processor_compatibility(rtn);
}
+ bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu)
+ {
+ return irqchip_in_kernel(vcpu->kvm) == (vcpu->arch.apic != NULL);
+ }
+
int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
struct page *page;
}
vcpu->arch.pio_data = page_address(page);
- kvm_init_tsc_catchup(vcpu, max_tsc_khz);
+ kvm_set_tsc_khz(vcpu, max_tsc_khz);
r = kvm_mmu_create(vcpu);
if (r < 0)
free_page((unsigned long)vcpu->arch.pio_data);
}
- int kvm_arch_init_vm(struct kvm *kvm)
+ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{
+ if (type)
+ return -EINVAL;
+
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
INIT_LIST_HEAD(&kvm->arch.assigned_dev_head);
put_page(kvm->arch.ept_identity_pagetable);
}
+ void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+ {
+ int i;
+
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ if (!dont || free->arch.lpage_info[i] != dont->arch.lpage_info[i]) {
+ vfree(free->arch.lpage_info[i]);
+ free->arch.lpage_info[i] = NULL;
+ }
+ }
+ }
+
+ int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+ {
+ int i;
+
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ unsigned long ugfn;
+ int lpages;
+ int level = i + 2;
+
+ lpages = gfn_to_index(slot->base_gfn + npages - 1,
+ slot->base_gfn, level) + 1;
+
+ slot->arch.lpage_info[i] =
+ vzalloc(lpages * sizeof(*slot->arch.lpage_info[i]));
+ if (!slot->arch.lpage_info[i])
+ goto out_free;
+
+ if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->arch.lpage_info[i][0].write_count = 1;
+ if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->arch.lpage_info[i][lpages - 1].write_count = 1;
+ ugfn = slot->userspace_addr >> PAGE_SHIFT;
+ /*
+ * If the gfn and userspace address are not aligned wrt each
+ * other, or if explicitly asked to, disable large page
+ * support for this slot
+ */
+ if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
+ !kvm_largepages_enabled()) {
+ unsigned long j;
+
+ for (j = 0; j < lpages; ++j)
+ slot->arch.lpage_info[i][j].write_count = 1;
+ }
+ }
+
+ return 0;
+
+ out_free:
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ vfree(slot->arch.lpage_info[i]);
+ slot->arch.lpage_info[i] = NULL;
+ }
+ return -ENOMEM;
+ }
+
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
#include <asm/ptrace.h>
#include <asm/stacktrace.h>
+static void backtrace_warning_symbol(void *data, char *msg,
+ unsigned long symbol)
+{
+ /* Ignore warnings */
+}
+
+static void backtrace_warning(void *data, char *msg)
+{
+ /* Ignore warnings */
+}
+
static int backtrace_stack(void *data, char *name)
{
/* Yes, we want all stacks */
}
static struct stacktrace_ops backtrace_ops = {
+ .warning = backtrace_warning,
+ .warning_symbol = backtrace_warning_symbol,
.stack = backtrace_stack,
.address = backtrace_address,
.walk_stack = print_context_stack,
{
struct stack_frame_ia32 *head;
- /* User process is 32-bit */
+ /* User process is IA32 */
if (!current || !test_thread_flag(TIF_IA32))
return 0;
u32 address32;
u32 i;
- /* Update the local FADT table header length */
-
- acpi_gbl_FADT.header.length = sizeof(struct acpi_table_fadt);
-
/*
* Expand the 32-bit FACS and DSDT addresses to 64-bit as necessary.
* Later code will always use the X 64-bit field. Also, check for an
acpi_gbl_FADT.boot_flags = 0;
}
+ /* Update the local FADT table header length */
+
+ acpi_gbl_FADT.header.length = sizeof(struct acpi_table_fadt);
+
/*
* Expand the ACPI 1.0 32-bit addresses to the ACPI 2.0 64-bit "X"
* generic address structures as necessary. Later code will always use
(!address64->address && length)) {
ACPI_WARNING((AE_INFO,
"Optional field %s has zero address or length: "
- "0x%8.8X%8.8X/0x%X",
+ "0x%8.8X%8.8X/0x%X - not using it",
name,
ACPI_FORMAT_UINT64(address64->
address),
length));
+ address64->address = 0;
}
}
}
#include <linux/kmod.h>
#include <linux/reboot.h>
#include <linux/device.h>
+#include <linux/dmi.h>
#include <asm/uaccess.h>
#include <linux/thermal.h>
#include <acpi/acpi_bus.h>
if (!tz)
return -EINVAL;
- /* Get temperature [_TMP] (required) */
- result = acpi_thermal_get_temperature(tz);
+ /* Get trip points [_CRT, _PSV, etc.] (required) */
+ result = acpi_thermal_get_trip_points(tz);
if (result)
return result;
- /* Get trip points [_CRT, _PSV, etc.] (required) */
- result = acpi_thermal_get_trip_points(tz);
+ /* Get temperature [_TMP] (required) */
+ result = acpi_thermal_get_temperature(tz);
if (result)
return result;
tz->kelvin_offset = 2732;
}
+static struct dmi_system_id thermal_psv_dmi_table[] = {
+ {
+ .ident = "IBM ThinkPad T41",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T41"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad T42",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T42"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad T43",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T43"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad T41p",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T41p"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad T42p",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T42p"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad T43p",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad T43p"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad R40",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad R40"),
+ },
+ },
+ {
+ .ident = "IBM ThinkPad R50p",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION,"ThinkPad R50p"),
+ },
+ },
+ {},
+};
+
+static int acpi_thermal_set_polling(struct acpi_thermal *tz, int seconds)
+{
+ if (!tz)
+ return -EINVAL;
+
+ /* Convert value to deci-seconds */
+ tz->polling_frequency = seconds * 10;
+
+ tz->thermal_zone->polling_delay = seconds * 1000;
+
+ if (tz->tz_enabled)
+ thermal_zone_device_update(tz->thermal_zone);
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Polling frequency set to %lu seconds\n",
+ tz->polling_frequency/10));
+
+ return 0;
+}
+
static int acpi_thermal_add(struct acpi_device *device)
{
int result = 0;
if (result)
goto free_memory;
+ if (dmi_check_system(thermal_psv_dmi_table)) {
+ if (tz->trips.passive.flags.valid &&
+ tz->trips.passive.temperature > CELSIUS_TO_KELVIN(85)) {
+ printk(KERN_INFO "Adjust passive trip point from %lu"
+ " to %lu\n",
+ KELVIN_TO_CELSIUS(tz->trips.passive.temperature),
+ KELVIN_TO_CELSIUS(tz->trips.passive.temperature - 150));
+ tz->trips.passive.temperature -= 150;
+ acpi_thermal_set_polling(tz, 5);
+ }
+ }
+
printk(KERN_INFO PREFIX "%s [%s] (%ld C)\n",
acpi_device_name(device), acpi_device_bid(device),
KELVIN_TO_CELSIUS(tz->temperature));
static int piix_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent);
static void piix_remove_one(struct pci_dev *pdev);
+static unsigned int piix_pata_read_id(struct ata_device *adev, struct ata_taskfile *tf, u16 *id);
static int piix_pata_prereset(struct ata_link *link, unsigned long deadline);
static void piix_set_piomode(struct ata_port *ap, struct ata_device *adev);
static void piix_set_dmamode(struct ata_port *ap, struct ata_device *adev);
{ 0x8086, 0x1e08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
/* SATA Controller IDE (Panther Point) */
{ 0x8086, 0x1e09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ /* SATA Controller IDE (Lynx Point) */
+ { 0x8086, 0x8c00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
+ /* SATA Controller IDE (Lynx Point) */
+ { 0x8086, 0x8c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
+ /* SATA Controller IDE (Lynx Point) */
+ { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ /* SATA Controller IDE (Lynx Point) */
+ { 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
{ } /* terminate list */
};
.set_piomode = piix_set_piomode,
.set_dmamode = piix_set_dmamode,
.prereset = piix_pata_prereset,
+ .read_id = piix_pata_read_id,
};
static struct ata_port_operations piix_vmw_ops = {
MODULE_DEVICE_TABLE(pci, piix_pci_tbl);
MODULE_VERSION(DRV_VERSION);
+static int piix_msft_hyperv(void)
+{
+ int hv = 0;
+#if defined(CONFIG_HYPERV_STORAGE) || defined(CONFIG_HYPERV_STORAGE_MODULE)
+ static const struct dmi_system_id hv_dmi_ident[] = {
+ {
+ .ident = "Hyper-V",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
+ DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
+ },
+ },
+ { } /* terminate list */
+ };
+ hv = !!dmi_check_system(hv_dmi_ident);
+#endif
+ return hv;
+}
+
struct ich_laptop {
u16 device;
u16 subvendor;
return ata_sff_prereset(link, deadline);
}
+static unsigned int piix_pata_read_id(struct ata_device *adev, struct ata_taskfile *tf, u16 *id)
+{
+ unsigned int err_mask = ata_do_dev_read_id(adev, tf, id);
+ /*
+ * Ignore ATA disks in a Hyper-V guest.
+ * There is no unplug protocol as there is with the xen_emul_unplug= option,
+ * so emulate the unplug by ignoring disks when the hv_storvsc driver is
+ * enabled. If the disks are not ignored, they will appear twice: once
+ * through piix and once through hv_storvsc.
+ * hv_storvsc cannot handle ATAPI devices because they can only be
+ * accessed through the emulated code path (not through the vm_bus
+ * channel), so the piix driver is still required.
+ if (ata_id_is_ata(id) && piix_msft_hyperv()) {
+ ata_dev_printk(adev, KERN_WARNING, "ATA device ignored in Hyper-V guest\n");
+ id[ATA_ID_CONFIG] |= (1 << 15);
+ }
+ return err_mask;
+}
+
static DEFINE_SPINLOCK(piix_lock);
static void piix_set_timings(struct ata_port *ap, struct ata_device *adev,
If unsure, say N.
- config BRIQ_PANEL
- tristate 'Total Impact briQ front panel driver'
- depends on PPC_CHRP
- ---help---
- The briQ is a small footprint CHRP computer with a frontpanel VFD, a
- tristate led and two switches. It is the size of a CDROM drive.
-
- If you have such one and want anything showing on the VFD then you
- must answer Y here.
-
- To compile this driver as a module, choose M here: the
- module will be called briq_panel.
-
- It's safe to say N here.
-
config BFIN_OTP
tristate "Blackfin On-Chip OTP Memory Support"
depends on BLACKFIN && (BF51x || BF52x || BF54x)
config HPET
bool "HPET - High Precision Event Timer" if (X86 || IA64)
default n
- depends on ACPI && !XEN
+ depends on ACPI
help
If you say Y here, you will have a miscdevice named "/dev/hpet/". Each
open selects one of the timers supported by the HPET. The timers are
This enables panic and oops messages to be logged to a circular
buffer in RAM where it can be read back at some later point.
+config CRASHER
+ tristate "Crasher Module"
+ help
+ Slab cache memory tester. Only use this as a module.
+
config MSM_SMD_PKT
bool "Enable device interface for some SMD packet ports"
default n
obj-$(CONFIG_VIOTAPE) += viotape.o
obj-$(CONFIG_IBM_BSR) += bsr.o
obj-$(CONFIG_SGI_MBCS) += mbcs.o
- obj-$(CONFIG_BRIQ_PANEL) += briq_panel.o
obj-$(CONFIG_BFIN_OTP) += bfin-otp.o
obj-$(CONFIG_PRINTER) += lp.o
obj-$(CONFIG_HANGCHECK_TIMER) += hangcheck-timer.o
obj-$(CONFIG_TCG_TPM) += tpm/
+obj-$(CONFIG_CRASHER) += crasher.o
obj-$(CONFIG_PS3_FLASH) += ps3flash.o
obj-$(CONFIG_RAMOOPS) += ramoops.o
#include <asm/irq.h>
#include <asm/uaccess.h>
- #include <asm/system.h>
/* if you have more than 8 printers, remember to increase LP_NO */
#define LP_NO 8
return -EFAULT;
break;
case LPGETSTATUS:
+ if (mutex_lock_interruptible(&lp_table[minor].port_mutex))
+ return -EINTR;
lp_claim_parport_or_block (&lp_table[minor]);
status = r_str(minor);
lp_release_parport (&lp_table[minor]);
+ mutex_unlock(&lp_table[minor].port_mutex);
if (copy_to_user(argp, &status, sizeof(int)))
return -EFAULT;
{
unsigned int minor;
struct timeval par_timeout;
- struct compat_timeval __user *tc;
int ret;
minor = iminor(file->f_path.dentry->d_inode);
mutex_lock(&lp_mutex);
switch (cmd) {
case LPSETTIMEOUT:
- tc = compat_ptr(arg);
- if (get_user(par_timeout.tv_sec, &tc->tv_sec) ||
- get_user(par_timeout.tv_usec, &tc->tv_usec)) {
+ if (compat_get_timeval(&par_timeout, compat_ptr(arg))) {
ret = -EFAULT;
break;
}
#define MICRO_FREQUENCY_MIN_SAMPLE_RATE (10000)
#define MIN_FREQUENCY_UP_THRESHOLD (11)
#define MAX_FREQUENCY_UP_THRESHOLD (100)
+#define MAX_DEFAULT_SAMPLING_RATE (300 * 1000U)
/*
* The polling frequency of this governor depends on the capability of
show_one(ignore_nice_load, ignore_nice);
show_one(powersave_bias, powersave_bias);
+ /**
+ * update_sampling_rate - update sampling rate effective immediately if needed.
+ * @new_rate: new sampling rate
+ *
+ * If the new rate is smaller than the old, simply updating
+ * dbs_tuners_ins.sampling_rate might not be appropriate. For example,
+ * if the original sampling_rate was 1 second and the requested new sampling
+ * rate is 10 ms because the user needs immediate reaction from ondemand
+ * governor, but not sure if higher frequency will be required or not,
+ * then, the governor may change the sampling rate too late; up to 1 second
+ * later. Thus, if we are reducing the sampling rate, we need to make the
+ * new value effective immediately.
+ */
+ static void update_sampling_rate(unsigned int new_rate)
+ {
+ int cpu;
+
+ dbs_tuners_ins.sampling_rate = new_rate
+ = max(new_rate, min_sampling_rate);
+
+ for_each_online_cpu(cpu) {
+ struct cpufreq_policy *policy;
+ struct cpu_dbs_info_s *dbs_info;
+ unsigned long next_sampling, appointed_at;
+
+ policy = cpufreq_cpu_get(cpu);
+ if (!policy)
+ continue;
+ dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
+ cpufreq_cpu_put(policy);
+
+ mutex_lock(&dbs_info->timer_mutex);
+
+ if (!delayed_work_pending(&dbs_info->work)) {
+ mutex_unlock(&dbs_info->timer_mutex);
+ continue;
+ }
+
+ next_sampling = jiffies + usecs_to_jiffies(new_rate);
+ appointed_at = dbs_info->work.timer.expires;
+
+
+ if (time_before(next_sampling, appointed_at)) {
+
+ mutex_unlock(&dbs_info->timer_mutex);
+ cancel_delayed_work_sync(&dbs_info->work);
+ mutex_lock(&dbs_info->timer_mutex);
+
+ schedule_delayed_work_on(dbs_info->cpu, &dbs_info->work,
+ usecs_to_jiffies(new_rate));
+
+ }
+ mutex_unlock(&dbs_info->timer_mutex);
+ }
+ }
+
static ssize_t store_sampling_rate(struct kobject *a, struct attribute *b,
const char *buf, size_t count)
{
ret = sscanf(buf, "%u", &input);
if (ret != 1)
return -EINVAL;
- dbs_tuners_ins.sampling_rate = max(input, min_sampling_rate);
+ update_sampling_rate(input);
return count;
}
dbs_tuners_ins.sampling_rate =
max(min_sampling_rate,
latency * LATENCY_MULTIPLIER);
+ /*
+ * Cut the default sampling rate to 300ms if it was above,
+ * but still keep it no higher than transition latency * 100
+ */
+ if (dbs_tuners_ins.sampling_rate > MAX_DEFAULT_SAMPLING_RATE) {
+ dbs_tuners_ins.sampling_rate =
+ max(min_sampling_rate, MAX_DEFAULT_SAMPLING_RATE);
+ printk(KERN_INFO "CPUFREQ: ondemand sampling "
+ "rate set to %d ms\n",
+ dbs_tuners_ins.sampling_rate / 1000);
+ }
+ /*
+ * Be conservative with respect to performance.
+ * If an application calculates using two threads
+ * depending on each other, they will be run on several
+ * CPU cores, resulting in 50% load on each.
+ * SLED might still want to prefer an 80% up_threshold
+ * by default, but we cannot differentiate that here.
+ if (num_online_cpus() > 1)
+ dbs_tuners_ins.up_threshold =
+ DEF_FREQUENCY_UP_THRESHOLD / 2;
dbs_tuners_ins.io_is_busy = should_io_be_busy();
}
mutex_unlock(&dbs_mutex);
* Copyright (c) 1999 Andreas Gal
* Copyright (c) 2000-2005 Vojtech Pavlik <vojtech@suse.cz>
* Copyright (c) 2005 Michael Haboustak <mike-@cinci.rr.com> for Concept2, Inc
- * Copyright (c) 2006-2010 Jiri Kosina
+ * Copyright (c) 2006-2012 Jiri Kosina
*/
/*
MODULE_PARM_DESC(debug, "toggle HID debugging messages");
EXPORT_SYMBOL_GPL(hid_debug);
+ static int hid_ignore_special_drivers = 0;
+ module_param_named(ignore_special_drivers, hid_ignore_special_drivers, int, 0600);
+ MODULE_PARM_DESC(ignore_special_drivers, "Ignore any special drivers and handle all devices by the generic driver");
+
/*
* Register a new report for a device.
*/
hdev->claimed |= HID_CLAIMED_INPUT;
if (hdev->quirks & HID_QUIRK_MULTITOUCH) {
/* this device should be handled by hid-multitouch, skip it */
- hdev->quirks &= ~HID_QUIRK_MULTITOUCH;
return -ENODEV;
}
{ HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR) },
{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_TACTICAL_PAD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_PRODIKEYS_PCMIDI) },
{ HID_USB_DEVICE(USB_VENDOR_ID_CVTOUCH, USB_DEVICE_ID_CVTOUCH_SCREEN) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_480D) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_480E) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_720C) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7224) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_725E) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_726B) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_72A1) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7302) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2515) },
{ HID_USB_DEVICE(USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II) },
{ HID_USB_DEVICE(USB_VENDOR_ID_EZKEY, USB_DEVICE_ID_BTC_8193) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_FRUCTEL, USB_DEVICE_ID_GAMETEL_MT_MODE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GAMERON, USB_DEVICE_ID_GAMERON_DUAL_PSX_ADAPTOR) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GAMERON, USB_DEVICE_ID_GAMERON_DUAL_PCS_ADAPTOR) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GENERAL_TOUCH, USB_DEVICE_ID_GENERAL_TOUCH_WIN7_TWOFINGERS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KENSINGTON, USB_DEVICE_ID_KS_SLIMBLADE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KEYTOUCH, USB_DEVICE_ID_KEYTOUCH_IEC) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LCPOWER, USB_DEVICE_ID_LCPOWER_LC1000 ) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LG, USB_DEVICE_ID_LG_MULTITOUCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_DFGT_WHEEL) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G25_WHEEL) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G27_WHEEL) },
+ #if IS_ENABLED(CONFIG_HID_LOGITECH_DJ)
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_UNIFYING_RECEIVER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_UNIFYING_RECEIVER_2) },
+ #endif
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_WII_WHEEL) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_RUMBLEPAD2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_SPACETRAVELLER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_NTRIG, USB_DEVICE_ID_NTRIG_TOUCH_SCREEN_18) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ORTEK, USB_DEVICE_ID_ORTEK_PKB1700) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ORTEK, USB_DEVICE_ID_ORTEK_WKB2000) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PANASONIC, USB_DEVICE_ID_PANABOARD_UBT780) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PANASONIC, USB_DEVICE_ID_PANABOARD_UBT880) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_PCI) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PETALYNX, USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_KOVAPLUS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_PYRA_WIRED) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_PYRA_WIRELESS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_PS1000) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_WIRELESS_KBD_MOUSE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SKYCABLE, USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb653) },
{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) },
{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED2, USB_DEVICE_ID_TOPSEED2_RF_COMBO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TOUCH_INTL, USB_DEVICE_ID_TOUCH_INTL_MULTI_TOUCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_UNITEC, USB_DEVICE_ID_UNITEC_USB_TOUCH_0709) },
{ HID_USB_DEVICE(USB_VENDOR_ID_UNITEC, USB_DEVICE_ID_UNITEC_USB_TOUCH_0A19) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_SMARTJOY_PLUS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_SUPER_JOY_BOX_3) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_DUAL_USB_JOYPAD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD, USB_DEVICE_ID_SUPER_JOY_BOX_3_PRO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD, USB_DEVICE_ID_SUPER_DUAL_BOX_PRO) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_WACOM, USB_DEVICE_ID_WACOM_INTUOS4_BLUETOOTH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SLIM_TABLET_5_8_INCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SLIM_TABLET_12_1_INCH) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_Q_PAD) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_PID_0038) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_10_6_INCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_14_1_INCH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_XAT, USB_DEVICE_ID_XAT_CSR) },
list_add_tail(&dynid->list, &hdrv->dyn_list);
spin_unlock(&hdrv->dyn_lock);
- ret = 0;
- if (get_driver(&hdrv->driver)) {
- ret = driver_attach(&hdrv->driver);
- put_driver(&hdrv->driver);
- }
+ ret = driver_attach(&hdrv->driver);
return ret ? : count;
}
struct hid_driver *hdrv = container_of(drv, struct hid_driver, driver);
struct hid_device *hdev = container_of(dev, struct hid_device, dev);
+ if ((hdev->quirks & HID_QUIRK_MULTITOUCH) &&
+ !strncmp(hdrv->name, "hid-multitouch", 14))
+ return 1;
+
if (!hid_match_device(hdev, hdrv))
return 0;
/* generic wants all that don't have specialized driver */
- if (!strncmp(hdrv->name, "generic-", 8))
+ if (!strncmp(hdrv->name, "generic-", 8) && !hid_ignore_special_drivers)
return !hid_match_id(hdev, hid_have_special_driver);
return 1;
if (!hdev->driver) {
id = hid_match_device(hdev, hdrv);
if (id == NULL) {
- ret = -ENODEV;
- goto unlock;
+ if (!((hdev->quirks & HID_QUIRK_MULTITOUCH) &&
+ !strncmp(hdrv->name, "hid-multitouch", 14))) {
+ ret = -ENODEV;
+ goto unlock;
+ }
}
hdev->driver = hdrv;
{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EM_LT20) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DREAM_CHEEKY, 0x0004) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DREAM_CHEEKY, 0x000a) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_4000U) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_4500U) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
+ #if defined(CONFIG_MOUSE_SYNAPTICS_USB) || defined(CONFIG_MOUSE_SYNAPTICS_USB_MODULE)
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_INT_TP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_CPAD) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_STICK) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_WP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_COMP_TP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_WTP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DPAD) },
+ #endif
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) },
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) },
if (hdev->product >= USB_DEVICE_ID_LOGITECH_HARMONY_FIRST &&
hdev->product <= USB_DEVICE_ID_LOGITECH_HARMONY_LAST)
return true;
+ /*
+ * The Keene FM transmitter USB device has the same USB ID as
+ * the Logitech AudioHub Speaker, but the hid driver should ignore it.
+ * Check if the name is that of the Keene device.
+ * For reference: the name of the AudioHub is
+ * "HOLTEK AudioHub Speaker".
+ */
+ if (hdev->product == USB_DEVICE_ID_LOGITECH_AUDIOHUB &&
+ !strcmp(hdev->name, "HOLTEK B-LINK USB Audio "))
+ return true;
break;
case USB_VENDOR_ID_SOUNDGRAPH:
if (hdev->product >= USB_DEVICE_ID_SOUNDGRAPH_IMON_FIRST &&
#define USB_VENDOR_ID_ACTIONSTAR 0x2101
#define USB_DEVICE_ID_ACTIONSTAR_1011 0x1011
- #define USB_VENDOR_ID_ADS_TECH 0x06e1
+ #define USB_VENDOR_ID_ADS_TECH 0x06e1
#define USB_DEVICE_ID_ADS_TECH_RADIO_SI470X 0xa155
#define USB_VENDOR_ID_AFATECH 0x15a4
#define USB_VENDOR_ID_ATMEL 0x03eb
#define USB_DEVICE_ID_ATMEL_MULTITOUCH 0x211c
+ #define USB_DEVICE_ID_ATMEL_MXT_DIGITIZER 0x2118
#define USB_VENDOR_ID_AVERMEDIA 0x07ca
#define USB_DEVICE_ID_AVER_FM_MR800 0xb800
#define USB_VENDOR_ID_CH 0x068e
#define USB_DEVICE_ID_CH_PRO_THROTTLE 0x00f1
#define USB_DEVICE_ID_CH_PRO_PEDALS 0x00f2
+ #define USB_DEVICE_ID_CH_FIGHTERSTICK 0x00f3
#define USB_DEVICE_ID_CH_COMBATSTICK 0x00f4
#define USB_DEVICE_ID_CH_FLIGHT_SIM_ECLIPSE_YOKE 0x0051
#define USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE 0x00ff
#define USB_DEVICE_ID_CHICONY_TACTICAL_PAD 0x0418
#define USB_DEVICE_ID_CHICONY_MULTI_TOUCH 0xb19d
#define USB_DEVICE_ID_CHICONY_WIRELESS 0x0618
+ #define USB_DEVICE_ID_CHICONY_WIRELESS2 0x1123
#define USB_VENDOR_ID_CHUNGHWAT 0x2247
#define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH 0x0001
#define USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER 0x0001
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_480D 0x480d
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_480E 0x480e
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7207 0x7207
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_720C 0x720c
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7224 0x7224
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_722A 0x722A
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_725E 0x725e
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7262 0x7262
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_726B 0x726b
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_72AA 0x72aa
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_72A1 0x72a1
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_72FA 0x72fa
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7302 0x7302
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349 0x7349
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001
#define USB_VENDOR_ID_ELECOM 0x056e
#define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34
#define USB_VENDOR_ID_ELO 0x04E7
+#define USB_DEVICE_ID_ELO_4000U 0x0009
#define USB_DEVICE_ID_ELO_TS2515 0x0022
#define USB_DEVICE_ID_ELO_TS2700 0x0020
+#define USB_DEVICE_ID_ELO_4500U 0x0030
#define USB_VENDOR_ID_EMS 0x2006
#define USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II 0x0118
#define USB_VENDOR_ID_EZKEY 0x0518
#define USB_DEVICE_ID_BTC_8193 0x0002
+ #define USB_VENDOR_ID_FRUCTEL 0x25B6
+ #define USB_DEVICE_ID_GAMETEL_MT_MODE 0x0002
+
#define USB_VENDOR_ID_GAMERON 0x0810
#define USB_DEVICE_ID_GAMERON_DUAL_PSX_ADAPTOR 0x0001
#define USB_DEVICE_ID_GAMERON_DUAL_PCS_ADAPTOR 0x0002
#define USB_VENDOR_ID_IDEACOM 0x1cb6
#define USB_DEVICE_ID_IDEACOM_IDC6650 0x6650
+ #define USB_DEVICE_ID_IDEACOM_IDC6651 0x6651
#define USB_VENDOR_ID_ILITEK 0x222a
#define USB_DEVICE_ID_ILITEK_MULTITOUCH 0x0001
#define USB_VENDOR_ID_KYE 0x0458
#define USB_DEVICE_ID_KYE_ERGO_525V 0x0087
#define USB_DEVICE_ID_KYE_GPEN_560 0x5003
+ #define USB_DEVICE_ID_KYE_EASYPEN_I405X 0x5010
+ #define USB_DEVICE_ID_KYE_MOUSEPEN_I608X 0x5011
+ #define USB_DEVICE_ID_KYE_EASYPEN_M610X 0x5013
#define USB_VENDOR_ID_LABTEC 0x1020
#define USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD 0x0006
#define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
#define USB_VENDOR_ID_LOGITECH 0x046d
+ #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
#define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101
#define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110
#define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f
#define USB_DEVICE_ID_ORTEK_PKB1700 0x1700
#define USB_DEVICE_ID_ORTEK_WKB2000 0x2000
+ #define USB_VENDOR_ID_PANASONIC 0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780 0x1044
+ #define USB_DEVICE_ID_PANABOARD_UBT880 0x104d
+
#define USB_VENDOR_ID_PANJIT 0x134c
#define USB_VENDOR_ID_PANTHERLORD 0x0810
#define USB_VENDOR_ID_SAITEK 0x06a3
#define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17
+ #define USB_DEVICE_ID_SAITEK_PS1000 0x0621
#define USB_VENDOR_ID_SAMSUNG 0x0419
#define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001
#define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800
#define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300
+ #define USB_VENDOR_ID_SYNAPTICS 0x06cb
+ #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001
+ #define USB_DEVICE_ID_SYNAPTICS_INT_TP 0x0002
+ #define USB_DEVICE_ID_SYNAPTICS_CPAD 0x0003
+ #define USB_DEVICE_ID_SYNAPTICS_TS 0x0006
+ #define USB_DEVICE_ID_SYNAPTICS_STICK 0x0007
+ #define USB_DEVICE_ID_SYNAPTICS_WP 0x0008
+ #define USB_DEVICE_ID_SYNAPTICS_COMP_TP 0x0009
+ #define USB_DEVICE_ID_SYNAPTICS_WTP 0x0010
+ #define USB_DEVICE_ID_SYNAPTICS_DPAD 0x0013
+
#define USB_VENDOR_ID_THRUSTMASTER 0x044f
+ #define USB_VENDOR_ID_TIVO 0x150a
+ #define USB_DEVICE_ID_TIVO_SLIDE_BT 0x1200
+ #define USB_DEVICE_ID_TIVO_SLIDE 0x1201
+
#define USB_VENDOR_ID_TOPSEED 0x0766
#define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
#define USB_VENDOR_ID_TOPSEED2 0x1784
#define USB_DEVICE_ID_TOPSEED2_RF_COMBO 0x0004
+ #define USB_DEVICE_ID_TOPSEED2_PERIPAD_701 0x0016
#define USB_VENDOR_ID_TOPMAX 0x0663
#define USB_DEVICE_ID_TOPMAX_COBRAPAD 0x0103
#define USB_VENDOR_ID_WALTOP 0x172f
#define USB_DEVICE_ID_WALTOP_SLIM_TABLET_5_8_INCH 0x0032
#define USB_DEVICE_ID_WALTOP_SLIM_TABLET_12_1_INCH 0x0034
+ #define USB_DEVICE_ID_WALTOP_Q_PAD 0x0037
+ #define USB_DEVICE_ID_WALTOP_PID_0038 0x0038
#define USB_DEVICE_ID_WALTOP_MEDIA_TABLET_10_6_INCH 0x0501
#define USB_DEVICE_ID_WALTOP_MEDIA_TABLET_14_1_INCH 0x0500
#define USB_DEVICE_ID_1_PHIDGETSERVO_20 0x8101
#define USB_DEVICE_ID_4_PHIDGETSERVO_20 0x8104
#define USB_DEVICE_ID_8_8_4_IF_KIT 0x8201
+ #define USB_DEVICE_ID_SUPER_JOY_BOX_3 0x8888
#define USB_DEVICE_ID_QUAD_USB_JOYPAD 0x8800
#define USB_DEVICE_ID_DUAL_USB_JOYPAD 0x8866
select SERIO_LIBPS2
select SERIO_I8042 if X86
select SERIO_GSCPS2 if GSC
+ select LEDS_CLASS if MOUSE_PS2_SYNAPTICS_LED
help
Say Y here if you have a PS/2 mouse connected to your system. This
includes the standard 2 or 3-button PS/2 mouse, as well as PS/2
If unsure, say Y.
+config MOUSE_PS2_SYNAPTICS_LED
+ bool "Support embedded LED on Synaptics devices"
+ depends on MOUSE_PS2_SYNAPTICS
+ select NEW_LEDS
+ help
+ Say Y here if you have a Synaptics device with an embedded LED.
+ This will enable the LED class driver to control the LED device.
+
config MOUSE_PS2_LIFEBOOK
bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EXPERT
default y
To compile this driver as a module, choose M here: the
module will be called synaptics_i2c.
+ config MOUSE_SYNAPTICS_USB
+ tristate "Synaptics USB device support"
+ depends on USB_ARCH_HAS_HCD
+ select USB
+ help
+ Say Y here if you want to use a Synaptics USB touchpad or pointing
+ stick.
+
+ While these devices emulate a USB mouse by default and can be used
+ with the standard usbhid driver, this driver, together with its X.Org
+ counterpart, allows you to fully utilize the capabilities of the device.
+ More information can be found at:
+ <http://jan-steinhoff.de/linux/synaptics-usb.html>
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_usb.
+
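As an illustrative fragment only (option names taken from the entry above), a configuration building the driver as a module could look like:

```
# Illustrative .config fragment: Synaptics USB pointing devices as a module
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_MOUSE_SYNAPTICS_USB=m
```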
endif
tristate "cy8ctmg110 touchscreen"
depends on I2C
depends on GPIOLIB
-
help
Say Y here if you have a cy8ctmg110 capacitive touchscreen on
an AAVA device.
To compile this driver as a module, choose M here: the
module will be called cy8ctmg110_ts.
+ config TOUCHSCREEN_CYTTSP_CORE
+ tristate "Cypress TTSP touchscreen"
+ help
+ Say Y here if you have a touchscreen using a controller from
+ the Cypress TrueTouch(tm) Standard Product family connected
+ to your system. You will also need to select appropriate
+ bus connection below.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called cyttsp_core.
+
+ config TOUCHSCREEN_CYTTSP_I2C
+ tristate "support I2C bus connection"
+ depends on TOUCHSCREEN_CYTTSP_CORE && I2C
+ help
+ Say Y here if the touchscreen is connected via I2C bus.
+
+ To compile this driver as a module, choose M here: the
+ module will be called cyttsp_i2c.
+
+ config TOUCHSCREEN_CYTTSP_SPI
+ tristate "support SPI bus connection"
+ depends on TOUCHSCREEN_CYTTSP_CORE && SPI_MASTER
+ help
+ Say Y here if the touchscreen is connected via SPI bus.
+
+ To compile this driver as a module, choose M here: the
+ module will be called cyttsp_spi.
+
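Under the entries above, selecting the TTSP core plus exactly one bus glue (here I2C, built as modules) could look like this illustrative .config fragment:

```
# Illustrative .config fragment: Cypress TTSP over I2C
CONFIG_TOUCHSCREEN_CYTTSP_CORE=m
CONFIG_TOUCHSCREEN_CYTTSP_I2C=m
# CONFIG_TOUCHSCREEN_CYTTSP_SPI is not set
```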
config TOUCHSCREEN_DA9034
tristate "Touchscreen support for Dialog Semiconductor DA9034"
depends on PMIC_DA903X
To compile this driver as a module, choose M here: the
module will be called fujitsu-ts.
+ config TOUCHSCREEN_ILI210X
+ tristate "Ilitek ILI210X based touchscreen"
+ depends on I2C
+ help
+ Say Y here if you have an ILI210X-based touchscreen
+ controller. This driver supports the ILI2102,
+ ILI2102s, ILI2103, ILI2103s and ILI2105 models.
+ Such chipsets can be found in Amazon Kindle Fire
+ touchscreens.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ili210x.
+
config TOUCHSCREEN_S3C2410
tristate "Samsung S3C2410/generic touchscreen input driver"
- depends on ARCH_S3C2410 || SAMSUNG_DEV_TS
+ depends on ARCH_S3C24XX || SAMSUNG_DEV_TS
select S3C_ADC
help
Say Y here if you have the s3c2410 touchscreen.
To compile this driver as a module, choose M here: the
module will be called elo.
+config TOUCHSCREEN_ELOUSB
+ tristate "Elo USB touchscreens"
+ select USB
+ help
+ Say Y here if you have an Elo USB touchscreen connected to
+ your system.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called elousb.
+
config TOUCHSCREEN_WACOM_W8001
tristate "Wacom W8001 penabled serial touchscreen"
select SERIO
To compile this driver as a module, choose M here: the
module will be called touchwin.
+ config TOUCHSCREEN_TI_TSCADC
+ tristate "TI Touchscreen Interface"
+ depends on ARCH_OMAP2PLUS
+ help
+ Say Y here if you have a 4/5/8-wire touchscreen controller
+ connected to the ADC controller on your TI AM335x SoC.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ti_tscadc.
+
config TOUCHSCREEN_ATMEL_TSADCC
tristate "Atmel Touchscreen Interface"
depends on ARCH_AT91SAM9RL || ARCH_AT91SAM9G45
- JASTEC USB Touch Controller/DigiTech DTR-02U
- Zytronic controllers
- Elo TouchSystems 2700 IntelliTouch
+ - EasyTouch USB Touch Controller from Data Modul
Have a look at <http://linux.chapter7.ch/touchkit/> for
a usage description and the required user-space stuff.
bool "NEXIO/iNexio device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
+ config TOUCHSCREEN_USB_EASYTOUCH
+ default y
+ bool "EasyTouch USB Touch controller device support" if EMBEDDED
+ depends on TOUCHSCREEN_USB_COMPOSITE
+ help
+ Say Y here if you have an EasyTouch USB Touch controller.
+ If unsure, say N.
+
config TOUCHSCREEN_TOUCHIT213
tristate "Sahara TouchIT-213 touchscreen"
select SERIO
obj-$(CONFIG_TOUCHSCREEN_ATMEL_TSADCC) += atmel_tsadcc.o
obj-$(CONFIG_TOUCHSCREEN_AUO_PIXCIR) += auo-pixcir-ts.o
obj-$(CONFIG_TOUCHSCREEN_BITSY) += h3600_ts_input.o
- obj-$(CONFIG_TOUCHSCREEN_BU21013) += bu21013_ts.o
+ obj-$(CONFIG_TOUCHSCREEN_BU21013) += bu21013_ts.o
obj-$(CONFIG_TOUCHSCREEN_CY8CTMG110) += cy8ctmg110_ts.o
+ obj-$(CONFIG_TOUCHSCREEN_CYTTSP_CORE) += cyttsp_core.o
+ obj-$(CONFIG_TOUCHSCREEN_CYTTSP_I2C) += cyttsp_i2c.o
+ obj-$(CONFIG_TOUCHSCREEN_CYTTSP_SPI) += cyttsp_spi.o
obj-$(CONFIG_TOUCHSCREEN_DA9034) += da9034-ts.o
obj-$(CONFIG_TOUCHSCREEN_DYNAPRO) += dynapro.o
obj-$(CONFIG_TOUCHSCREEN_HAMPSHIRE) += hampshire.o
obj-$(CONFIG_TOUCHSCREEN_GUNZE) += gunze.o
obj-$(CONFIG_TOUCHSCREEN_EETI) += eeti_ts.o
obj-$(CONFIG_TOUCHSCREEN_ELO) += elo.o
+obj-$(CONFIG_TOUCHSCREEN_ELOUSB) += elousb.o
obj-$(CONFIG_TOUCHSCREEN_EGALAX) += egalax_ts.o
obj-$(CONFIG_TOUCHSCREEN_FUJITSU) += fujitsu_ts.o
+ obj-$(CONFIG_TOUCHSCREEN_ILI210X) += ili210x.o
obj-$(CONFIG_TOUCHSCREEN_INEXIO) += inexio.o
obj-$(CONFIG_TOUCHSCREEN_INTEL_MID) += intel-mid-touch.o
obj-$(CONFIG_TOUCHSCREEN_LPC32XX) += lpc32xx_ts.o
obj-$(CONFIG_TOUCHSCREEN_S3C2410) += s3c2410_ts.o
obj-$(CONFIG_TOUCHSCREEN_ST1232) += st1232.o
obj-$(CONFIG_TOUCHSCREEN_STMPE) += stmpe-ts.o
+ obj-$(CONFIG_TOUCHSCREEN_TI_TSCADC) += ti_tscadc.o
obj-$(CONFIG_TOUCHSCREEN_TNETV107X) += tnetv107x-ts.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHIT213) += touchit213.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHRIGHT) += touchright.o
#include "core.h"
static u_int debug;
+u_int misdn_permitted_gid;
MODULE_AUTHOR("Karsten Keil");
MODULE_LICENSE("GPL");
module_param(debug, uint, S_IRUGO | S_IWUSR);
+module_param_named(gid, misdn_permitted_gid, uint, 0);
+MODULE_PARM_DESC(gid, "Unix group for accessing misdn socket (default 0)");
static u64 device_ids;
#define MAX_DEVICE_ID 63
}
static ssize_t _show_id(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct mISDNdevice *mdev = dev_to_mISDN(dev);
}
static ssize_t _show_nrbchan(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct mISDNdevice *mdev = dev_to_mISDN(dev);
}
static ssize_t _show_d_protocols(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct mISDNdevice *mdev = dev_to_mISDN(dev);
}
static ssize_t _show_b_protocols(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct mISDNdevice *mdev = dev_to_mISDN(dev);
}
static ssize_t _show_protocol(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct mISDNdevice *mdev = dev_to_mISDN(dev);
}
static ssize_t _show_name(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
strcpy(buf, dev_name(dev));
return strlen(buf);
#if 0 /* hangs */
static ssize_t _set_name(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+ const char *buf, size_t count)
{
int err = 0;
char *out = kmalloc(count + 1, GFP_KERNEL);
__ATTR(channelmap, S_IRUGO, _show_channelmap, NULL),
__ATTR(nrbchan, S_IRUGO, _show_nrbchan, NULL),
__ATTR(name, S_IRUGO, _show_name, NULL),
- /* __ATTR(name, S_IRUGO|S_IWUSR, _show_name, _set_name), */
+ /* __ATTR(name, S_IRUGO | S_IWUSR, _show_name, _set_name), */
{}
};
*get_mdevice(u_int id)
{
return dev_to_mISDN(class_find_device(&mISDN_class, NULL, &id,
- _get_mdevice));
+ _get_mdevice));
}
static int
int
mISDN_register_device(struct mISDNdevice *dev,
- struct device *parent, char *name)
+ struct device *parent, char *name)
{
int err;
dev_set_name(&dev->dev, "mISDN%d", dev->id);
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "mISDN_register %s %d\n",
- dev_name(&dev->dev), dev->id);
+ dev_name(&dev->dev), dev->id);
err = create_stack(dev);
if (err)
goto error1;
mISDN_unregister_device(struct mISDNdevice *dev) {
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "mISDN_unregister %s %d\n",
- dev_name(&dev->dev), dev->id);
+ dev_name(&dev->dev), dev->id);
/* sysfs_remove_link(&dev->dev.kobj, "device"); */
device_del(&dev->dev);
dev_set_drvdata(&dev->dev, NULL);
if (id < ISDN_P_B_START || id > 63) {
printk(KERN_WARNING "%s id not in range %d\n",
- __func__, id);
+ __func__, id);
return NULL;
}
m = 1 << (id & ISDN_P_B_MASK);
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "%s: %s/%x\n", __func__,
- bp->name, bp->Bprotocols);
+ bp->name, bp->Bprotocols);
old = get_Bprotocol4mask(bp->Bprotocols);
if (old) {
printk(KERN_WARNING
- "register duplicate protocol old %s/%x new %s/%x\n",
- old->name, old->Bprotocols, bp->name, bp->Bprotocols);
+ "register duplicate protocol old %s/%x new %s/%x\n",
+ old->name, old->Bprotocols, bp->name, bp->Bprotocols);
return -EBUSY;
}
write_lock_irqsave(&bp_lock, flags);
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "%s: %s/%x\n", __func__, bp->name,
- bp->Bprotocols);
+ bp->Bprotocols);
write_lock_irqsave(&bp_lock, flags);
list_del(&bp->list);
write_unlock_irqrestore(&bp_lock, flags);
int err;
printk(KERN_INFO "Modular ISDN core version %d.%d.%d\n",
- MISDN_MAJOR_VERSION, MISDN_MINOR_VERSION, MISDN_RELEASE);
+ MISDN_MAJOR_VERSION, MISDN_MINOR_VERSION, MISDN_RELEASE);
mISDN_init_clock(&debug);
mISDN_initstack(&debug);
err = class_register(&mISDN_class);
module_init(mISDNInit);
module_exit(mISDN_cleanup);
-
extern struct mISDNdevice *get_mdevice(u_int);
extern int get_mdevice_count(void);
+extern u_int misdn_permitted_gid;
/* stack status flag */
#define mISDN_STACK_ACTION_MASK 0x0000ffff
#define MGR_OPT_NETWORK 25
extern int connect_Bstack(struct mISDNdevice *, struct mISDNchannel *,
- u_int, struct sockaddr_mISDN *);
+ u_int, struct sockaddr_mISDN *);
extern int connect_layer1(struct mISDNdevice *, struct mISDNchannel *,
- u_int, struct sockaddr_mISDN *);
+ u_int, struct sockaddr_mISDN *);
extern int create_l2entity(struct mISDNdevice *, struct mISDNchannel *,
- u_int, struct sockaddr_mISDN *);
+ u_int, struct sockaddr_mISDN *);
extern int create_stack(struct mISDNdevice *);
extern int create_teimanager(struct mISDNdevice *);
extern int l1_init(u_int *);
extern void l1_cleanup(void);
- extern int Isdnl2_Init(u_int *);
+ extern int Isdnl2_Init(u_int *);
extern void Isdnl2_cleanup(void);
extern void mISDN_init_clock(u_int *);
static int
mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
- struct msghdr *msg, size_t len, int flags)
+ struct msghdr *msg, size_t len, int flags)
{
struct sk_buff *skb;
struct sock *sk = sock->sk;
if (*debug & DEBUG_SOCKET)
printk(KERN_DEBUG "%s: len %d, flags %x ch.nr %d, proto %x\n",
- __func__, (int)len, flags, _pms(sk)->ch.nr,
- sk->sk_protocol);
+ __func__, (int)len, flags, _pms(sk)->ch.nr,
+ sk->sk_protocol);
if (flags & (MSG_OOB))
return -EOPNOTSUPP;
} else {
if (msg->msg_namelen)
printk(KERN_WARNING "%s: too small namelen %d\n",
- __func__, msg->msg_namelen);
+ __func__, msg->msg_namelen);
msg->msg_namelen = 0;
}
return -ENOSPC;
}
memcpy(skb_push(skb, MISDN_HEADER_LEN), mISDN_HEAD_P(skb),
- MISDN_HEADER_LEN);
+ MISDN_HEADER_LEN);
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
static int
mISDN_sock_sendmsg(struct kiocb *iocb, struct socket *sock,
- struct msghdr *msg, size_t len)
+ struct msghdr *msg, size_t len)
{
struct sock *sk = sock->sk;
struct sk_buff *skb;
if (*debug & DEBUG_SOCKET)
printk(KERN_DEBUG "%s: len %d flags %x ch %d proto %x\n",
- __func__, (int)len, msg->msg_flags, _pms(sk)->ch.nr,
- sk->sk_protocol);
+ __func__, (int)len, msg->msg_flags, _pms(sk)->ch.nr,
+ sk->sk_protocol);
if (msg->msg_flags & MSG_OOB)
return -EOPNOTSUPP;
- if (msg->msg_flags & ~(MSG_DONTWAIT|MSG_NOSIGNAL|MSG_ERRQUEUE))
+ if (msg->msg_flags & ~(MSG_DONTWAIT | MSG_NOSIGNAL | MSG_ERRQUEUE))
return -EINVAL;
if (len < MISDN_HEADER_LEN)
if (*debug & DEBUG_SOCKET)
printk(KERN_DEBUG "%s: ID:%x\n",
- __func__, mISDN_HEAD_ID(skb));
+ __func__, mISDN_HEAD_ID(skb));
err = -ENODEV;
if (!_pms(sk)->ch.peer)
}
if ((sk->sk_protocol & ~ISDN_P_B_MASK) == ISDN_P_B_START) {
list_for_each_entry_safe(bchan, next,
- &_pms(sk)->dev->bchannels, list) {
+ &_pms(sk)->dev->bchannels, list) {
if (bchan->nr == cq.channel) {
err = bchan->ctrl(bchan,
- CONTROL_CHANNEL, &cq);
+ CONTROL_CHANNEL, &cq);
break;
}
}
} else
err = _pms(sk)->dev->D.ctrl(&_pms(sk)->dev->D,
- CONTROL_CHANNEL, &cq);
+ CONTROL_CHANNEL, &cq);
if (err)
break;
if (copy_to_user(p, &cq, sizeof(cq)))
break;
}
err = _pms(sk)->dev->teimgr->ctrl(_pms(sk)->dev->teimgr,
- CONTROL_CHANNEL, val);
+ CONTROL_CHANNEL, val);
break;
case IMHOLD_L1:
if (sk->sk_protocol != ISDN_P_LAPD_NT
- && sk->sk_protocol != ISDN_P_LAPD_TE) {
+ && sk->sk_protocol != ISDN_P_LAPD_TE) {
err = -EINVAL;
break;
}
break;
}
err = _pms(sk)->dev->teimgr->ctrl(_pms(sk)->dev->teimgr,
- CONTROL_CHANNEL, val);
+ CONTROL_CHANNEL, val);
break;
default:
err = -EINVAL;
static int
data_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
- int err = 0, id;
+ int err = 0, id;
struct sock *sk = sock->sk;
struct mISDNdevice *dev;
struct mISDNversion ver;
di.Bprotocols = dev->Bprotocols | get_all_Bprotocols();
di.protocol = dev->D.protocol;
memcpy(di.channelmap, dev->channelmap,
- sizeof(di.channelmap));
+ sizeof(di.channelmap));
di.nrbchan = dev->nrbchan;
strcpy(di.name, dev_name(&dev->dev));
if (copy_to_user((void __user *)arg, &di, sizeof(di)))
default:
if (sk->sk_state == MISDN_BOUND)
err = data_sock_ioctl_bound(sk, cmd,
- (void __user *)arg);
+ (void __user *)arg);
else
err = -ENOTCONN;
}
}
static int data_sock_setsockopt(struct socket *sock, int level, int optname,
- char __user *optval, unsigned int len)
+ char __user *optval, unsigned int len)
{
struct sock *sk = sock->sk;
int err = 0, opt = 0;
if (*debug & DEBUG_SOCKET)
printk(KERN_DEBUG "%s(%p, %d, %x, %p, %d)\n", __func__, sock,
- level, optname, optval, len);
+ level, optname, optval, len);
lock_sock(sk);
}
static int data_sock_getsockopt(struct socket *sock, int level, int optname,
- char __user *optval, int __user *optlen)
+ char __user *optval, int __user *optlen)
{
struct sock *sk = sock->sk;
int len, opt;
if (csk->sk_protocol >= ISDN_P_B_START)
continue;
if (IS_ISDN_P_TE(csk->sk_protocol)
- == IS_ISDN_P_TE(sk->sk_protocol))
+ == IS_ISDN_P_TE(sk->sk_protocol))
continue;
read_unlock_bh(&data_sockets.lock);
err = -EBUSY;
case ISDN_P_NT_E1:
mISDN_sock_unlink(&data_sockets, sk);
err = connect_layer1(_pms(sk)->dev, &_pms(sk)->ch,
- sk->sk_protocol, maddr);
+ sk->sk_protocol, maddr);
if (err)
mISDN_sock_link(&data_sockets, sk);
break;
case ISDN_P_LAPD_TE:
case ISDN_P_LAPD_NT:
err = create_l2entity(_pms(sk)->dev, &_pms(sk)->ch,
- sk->sk_protocol, maddr);
+ sk->sk_protocol, maddr);
break;
case ISDN_P_B_RAW:
case ISDN_P_B_HDLC:
case ISDN_P_B_L2DSP:
case ISDN_P_B_L2DSPHDLC:
err = connect_Bstack(_pms(sk)->dev, &_pms(sk)->ch,
- sk->sk_protocol, maddr);
+ sk->sk_protocol, maddr);
break;
default:
err = -EPROTONOSUPPORT;
static int
data_sock_getname(struct socket *sock, struct sockaddr *addr,
- int *addr_len, int peer)
+ int *addr_len, int peer)
{
- struct sockaddr_mISDN *maddr = (struct sockaddr_mISDN *) addr;
+ struct sockaddr_mISDN *maddr = (struct sockaddr_mISDN *) addr;
struct sock *sk = sock->sk;
if (!_pms(sk)->dev)
{
struct sock *sk;
+ if (!capable(CAP_SYS_ADMIN) && (misdn_permitted_gid != current_gid())
+ && (!in_group_p(misdn_permitted_gid)))
+ return -EPERM;
+
if (sock->type != SOCK_DGRAM)
return -ESOCKTNOSUPPORT;
static int
base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
- int err = 0, id;
+ int err = 0, id;
struct mISDNdevice *dev;
struct mISDNversion ver;
di.Bprotocols = dev->Bprotocols | get_all_Bprotocols();
di.protocol = dev->D.protocol;
memcpy(di.channelmap, dev->channelmap,
- sizeof(di.channelmap));
+ sizeof(di.channelmap));
di.nrbchan = dev->nrbchan;
strcpy(di.name, dev_name(&dev->dev));
if (copy_to_user((void __user *)arg, &di, sizeof(di)))
err = -ENODEV;
break;
case IMSETDEVNAME:
- {
- struct mISDN_devrename dn;
- if(!capable(CAP_SYS_ADMIN)
- && (misdn_permitted_gid != current_gid())
- && (!in_group_p(misdn_permitted_gid)))
- return -EPERM;
- if (copy_from_user(&dn, (void __user *)arg,
- sizeof(dn))) {
- err = -EFAULT;
- break;
- }
- dev = get_mdevice(dn.id);
- if (dev)
- err = device_rename(&dev->dev, dn.name);
- else
- err = -ENODEV;
+ {
+ struct mISDN_devrename dn;
+ if (!capable(CAP_SYS_ADMIN)
+ && (misdn_permitted_gid != current_gid())
+ && (!in_group_p(misdn_permitted_gid)))
+ return -EPERM;
+ if (copy_from_user(&dn, (void __user *)arg,
+ sizeof(dn))) {
+ err = -EFAULT;
+ break;
}
- break;
+ dev = get_mdevice(dn.id);
+ if (dev)
+ err = device_rename(&dev->dev, dn.name);
+ else
+ err = -ENODEV;
+ }
+ break;
default:
err = -EINVAL;
}
{
int err = -EPROTONOSUPPORT;
- switch (proto) {
+ switch (proto) {
case ISDN_P_BASE:
err = base_sock_create(net, sock, proto);
break;
{
sock_unregister(PF_ISDN);
}
-
config DM_MIRROR
tristate "Mirror target"
depends on BLK_DEV_DM
+ select DM_REGION_HASH_LOG
---help---
Allow volume managers to mirror logical volumes, also
needed for live data migration tools such as 'pvmove'.
config DM_RAID
- tristate "RAID 1/4/5/6 target (EXPERIMENTAL)"
- depends on BLK_DEV_DM && EXPERIMENTAL
+ tristate "RAID 1/4/5/6 target"
+ depends on BLK_DEV_DM
select MD_RAID1
select MD_RAID456
select BLK_DEV_MD
If unsure, say N.
+config DM_REGION_HASH_LOG
+ tristate
+ default n
+
+config DM_RAID45
+ tristate "RAID 4/5 target (EXPERIMENTAL)"
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select ASYNC_XOR
+ select DM_REGION_HASH_LOG
+ ---help---
+ A target that supports RAID4 and RAID5 mappings.
+
+ If unsure, say N.
+
config DM_UEVENT
- bool "DM uevents (EXPERIMENTAL)"
- depends on BLK_DEV_DM && EXPERIMENTAL
+ bool "DM uevents"
+ depends on BLK_DEV_DM
---help---
Generate udev events for DM events.
---help---
A target that intermittently fails I/O for debugging purposes.
+ config DM_VERITY
+ tristate "Verity target support (EXPERIMENTAL)"
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select CRYPTO
+ select CRYPTO_HASH
+ select DM_BUFIO
+ ---help---
+ This device-mapper target creates a read-only device that
+ transparently validates the data on one underlying device against
+ a pre-generated tree of cryptographic checksums stored on a second
+ device.
+
+ You'll need to activate the digests you're going to use in the
+ cryptoapi configuration.
+
+ To compile this code as a module, choose M here: the module will
+ be called dm-verity.
+
+ If unsure, say N.
+
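The help text above describes validating data against a pre-generated tree of cryptographic checksums. As a purely illustrative sketch of that idea (a toy Merkle tree in Python, not the kernel's on-disk hash format), one might write:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest, standing in for the configured cryptoapi hash."""
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Hash every data block, then hash pairs of digests upward to a root."""
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = []
        for i in range(0, len(prev), 2):
            # pair up digests; an odd tail is paired with itself
            right = prev[i + 1] if i + 1 < len(prev) else prev[i]
            nxt.append(h(prev[i] + right))
        levels.append(nxt)
    return levels  # levels[-1][0] is the root digest

def verify_block(block, index, levels, root):
    """Re-hash one block and walk its path to the root, as a read would."""
    digest = h(block)
    for level in levels[:-1]:
        if digest != level[index]:
            return False  # block (or a stored digest) was tampered with
        sib = index ^ 1
        sibling = level[sib] if sib < len(level) else level[index]
        digest = h(digest + sibling) if index % 2 == 0 else h(sibling + digest)
        index //= 2
    return digest == root
```

A read of block i recomputes its digest and checks it against the stored tree, so only the root digest needs to be trusted out of band.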
endif # MD
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
-obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o
+obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o dm-least-pending.o
obj-$(CONFIG_DM_MULTIPATH_QL) += dm-queue-length.o
obj-$(CONFIG_DM_MULTIPATH_ST) += dm-service-time.o
obj-$(CONFIG_DM_SNAPSHOT) += dm-snapshot.o
obj-$(CONFIG_DM_PERSISTENT_DATA) += persistent-data/
-obj-$(CONFIG_DM_MIRROR) += dm-mirror.o dm-log.o dm-region-hash.o
+obj-$(CONFIG_DM_MIRROR) += dm-mirror.o
+obj-$(CONFIG_DM_REGION_HASH_LOG) += dm-log.o dm-region-hash.o
obj-$(CONFIG_DM_LOG_USERSPACE) += dm-log-userspace.o
obj-$(CONFIG_DM_ZERO) += dm-zero.o
obj-$(CONFIG_DM_RAID) += dm-raid.o
obj-$(CONFIG_DM_THIN_PROVISIONING) += dm-thin-pool.o
+ obj-$(CONFIG_DM_VERITY) += dm-verity.o
+obj-$(CONFIG_DM_RAID45) += dm-raid45.o dm-memcache.o
ifeq ($(CONFIG_DM_UEVENT),y)
dm-mod-objs += dm-uevent.o
struct list_head pgpaths;
};
+#define FEATURE_NO_PARTITIONS 1
+
/* Multipath context */
struct multipath {
struct list_head list;
unsigned pg_init_retries; /* Number of times to retry pg_init */
unsigned pg_init_count; /* Number of times pg_init called */
unsigned pg_init_delay_msecs; /* Number of msecs before pg_init retry */
+ unsigned features; /* Additional selected features */
struct work_struct process_queued_ios;
struct list_head queued_ios;
static void free_pgpaths(struct list_head *pgpaths, struct dm_target *ti)
{
struct pgpath *pgpath, *tmp;
list_for_each_entry_safe(pgpath, tmp, pgpaths, list) {
list_del(&pgpath->list);
- if (m->hw_handler_name)
- scsi_dh_detach(bdev_get_queue(pgpath->path.dev->bdev));
dm_put_device(ti, pgpath->path.dev);
free_pgpath(pgpath);
}
kfree(m);
}
+ static int set_mapinfo(struct multipath *m, union map_info *info)
+ {
+ struct dm_mpath_io *mpio;
+
+ mpio = mempool_alloc(m->mpio_pool, GFP_ATOMIC);
+ if (!mpio)
+ return -ENOMEM;
+
+ memset(mpio, 0, sizeof(*mpio));
+ info->ptr = mpio;
+
+ return 0;
+ }
+
+ static void clear_mapinfo(struct multipath *m, union map_info *info)
+ {
+ struct dm_mpath_io *mpio = info->ptr;
+
+ info->ptr = NULL;
+ mempool_free(mpio, m->mpio_pool);
+ }
/*-----------------------------------------------
* Path selection
m->current_pgpath = path_to_pgpath(path);
+ if (!m->current_pgpath->path.dev) {
+ m->current_pgpath = NULL;
+ return -ENODEV;
+ }
+
if (m->current_pg != pg)
__switch_pg(m, m->current_pgpath);
}
static int map_io(struct multipath *m, struct request *clone,
- struct dm_mpath_io *mpio, unsigned was_queued)
+ union map_info *map_context, unsigned was_queued)
{
int r = DM_MAPIO_REMAPPED;
size_t nr_bytes = blk_rq_bytes(clone);
unsigned long flags;
struct pgpath *pgpath;
struct block_device *bdev;
+ struct dm_mpath_io *mpio = map_context->ptr;
spin_lock_irqsave(&m->lock, flags);
{
int r;
unsigned long flags;
- struct dm_mpath_io *mpio;
union map_info *info;
struct request *clone, *n;
LIST_HEAD(cl);
list_del_init(&clone->queuelist);
info = dm_get_rq_mapinfo(clone);
- mpio = info->ptr;
- r = map_io(m, clone, mpio, 1);
+ r = map_io(m, clone, info, 1);
if (r < 0) {
- mempool_free(mpio, m->mpio_pool);
+ clear_mapinfo(m, info);
dm_kill_unmapped_request(clone, r);
} else if (r == DM_MAPIO_REMAPPED)
dm_dispatch_request(clone);
else if (r == DM_MAPIO_REQUEUE) {
- mempool_free(mpio, m->mpio_pool);
+ clear_mapinfo(m, info);
dm_requeue_unmapped_request(clone);
}
}
{
int r;
struct pgpath *p;
+ char *path;
struct multipath *m = ti->private;
/* we need at least a path arg */
if (!p)
return ERR_PTR(-ENOMEM);
- r = dm_get_device(ti, dm_shift_arg(as), dm_table_get_mode(ti->table),
+ path = dm_shift_arg(as);
+ r = dm_get_device(ti, path, dm_table_get_mode(ti->table),
&p->path.dev);
if (r) {
- ti->error = "error getting device";
- goto bad;
+ unsigned major, minor;
+
+ /* Try to add a failed device */
+ if (r == -ENXIO && sscanf(path, "%u:%u", &major, &minor) == 2) {
+ dev_t dev;
+
+ /* Extract the major/minor numbers */
+ dev = MKDEV(major, minor);
+ if (MAJOR(dev) != major || MINOR(dev) != minor) {
+ /* Nice try, didn't work */
+ DMWARN("Invalid device path %s", path);
+ ti->error = "error converting devnum";
+ goto bad;
+ }
+ DMWARN("adding disabled device %d:%d", major, minor);
+ p->path.dev = NULL;
+ format_dev_t(p->path.pdev, dev);
+ p->is_active = 0;
+ } else {
+ ti->error = "error getting device";
+ goto bad;
+ }
+ } else {
+ memcpy(p->path.pdev, p->path.dev->name, 16);
}
- if (m->hw_handler_name) {
+ if (p->path.dev) {
struct request_queue *q = bdev_get_queue(p->path.dev->bdev);
- r = scsi_dh_attach(q, m->hw_handler_name);
- if (r == -EBUSY) {
- /*
- * Already attached to different hw_handler,
- * try to reattach with correct one.
- */
- scsi_dh_detach(q);
+ if (m->hw_handler_name) {
r = scsi_dh_attach(q, m->hw_handler_name);
- }
-
- if (r < 0) {
- ti->error = "error attaching hardware handler";
- dm_put_device(ti, p->path.dev);
- goto bad;
+ if (r == -EBUSY) {
+ /*
+ * Already attached to different hw_handler,
+ * try to reattach with correct one.
+ */
+ scsi_dh_detach(q);
+ r = scsi_dh_attach(q, m->hw_handler_name);
+ }
+ if (r < 0) {
+ ti->error = "error attaching hardware handler";
+ dm_put_device(ti, p->path.dev);
+ goto bad;
+ }
+ } else {
+ /* Play safe and detach hardware handler */
+ scsi_dh_detach(q);
}
if (m->hw_handler_params) {
goto bad;
}
+ if (!p->is_active) {
+ ps->type->fail_path(ps, &p->path);
+ p->fail_count++;
+ m->nr_valid_paths--;
+ }
return p;
bad:
continue;
}
+ if (!strcasecmp(arg_name, "no_partitions")) {
+ m->features |= FEATURE_NO_PARTITIONS;
+ continue;
+ }
if (!strcasecmp(arg_name, "pg_init_retries") &&
(argc >= 1)) {
r = dm_read_arg(_args + 1, as, &m->pg_init_retries, &ti->error);
union map_info *map_context)
{
int r;
- struct dm_mpath_io *mpio;
struct multipath *m = (struct multipath *) ti->private;
- mpio = mempool_alloc(m->mpio_pool, GFP_ATOMIC);
- if (!mpio)
+ if (set_mapinfo(m, map_context) < 0)
/* ENOMEM, requeue */
return DM_MAPIO_REQUEUE;
- memset(mpio, 0, sizeof(*mpio));
- map_context->ptr = mpio;
clone->cmd_flags |= REQ_FAILFAST_TRANSPORT;
- r = map_io(m, clone, mpio, 0);
+ r = map_io(m, clone, map_context, 0);
if (r < 0 || r == DM_MAPIO_REQUEUE)
- mempool_free(mpio, m->mpio_pool);
+ clear_mapinfo(m, map_context);
return r;
}
if (!pgpath->is_active)
goto out;
- DMWARN("Failing path %s.", pgpath->path.dev->name);
+ DMWARN("Failing path %s.", pgpath->path.pdev);
pgpath->pg->ps.type->fail_path(&pgpath->pg->ps, &pgpath->path);
pgpath->is_active = 0;
m->current_pgpath = NULL;
dm_path_uevent(DM_UEVENT_PATH_FAILED, m->ti,
- pgpath->path.dev->name, m->nr_valid_paths);
+ pgpath->path.pdev, m->nr_valid_paths);
schedule_work(&m->trigger_event);
if (pgpath->is_active)
goto out;
+ if (!pgpath->path.dev) {
+ DMWARN("Cannot reinstate disabled path %s", pgpath->path.pdev);
+ r = -ENODEV;
+ goto out;
+ }
+
if (!pgpath->pg->ps.type->reinstate_path) {
DMWARN("Reinstate path not supported by path selector %s",
pgpath->pg->ps.type->name);
}
dm_path_uevent(DM_UEVENT_PATH_REINSTATED, m->ti,
- pgpath->path.dev->name, m->nr_valid_paths);
+ pgpath->path.pdev, m->nr_valid_paths);
schedule_work(&m->trigger_event);
struct pgpath *pgpath;
struct priority_group *pg;
+ if (!dev)
+ return 0;
+
list_for_each_entry(pg, &m->priority_groups, list) {
list_for_each_entry(pgpath, &pg->pgpaths, list) {
if (pgpath->path.dev == dev)
struct priority_group *pg;
unsigned pgnum;
unsigned long flags;
+ char dummy;
- if (!pgstr || (sscanf(pgstr, "%u", &pgnum) != 1) || !pgnum ||
+ if (!pgstr || (sscanf(pgstr, "%u%c", &pgnum, &dummy) != 1) || !pgnum ||
(pgnum > m->nr_priority_groups)) {
DMWARN("invalid PG number supplied to switch_pg_num");
return -EINVAL;
{
struct priority_group *pg;
unsigned pgnum;
+ char dummy;
- if (!pgstr || (sscanf(pgstr, "%u", &pgnum) != 1) || !pgnum ||
+ if (!pgstr || (sscanf(pgstr, "%u%c", &pgnum, &dummy) != 1) || !pgnum ||
(pgnum > m->nr_priority_groups)) {
DMWARN("invalid PG number supplied to bypass_pg");
return -EINVAL;
errors = 0;
break;
}
- DMERR("Could not failover the device: Handler scsi_dh_%s "
- "Error %d.", m->hw_handler_name, errors);
+ DMERR("Could not failover device %s: Handler scsi_dh_%s "
+ "was not loaded.", pgpath->path.dev->name,
+ m->hw_handler_name);
/*
* Fail path for now, so we do not ping pong
*/
*/
bypass_pg(m, pg, 1);
break;
+ case SCSI_DH_DEV_OFFLINED:
+ DMWARN("Device %s offlined.", pgpath->path.dev->name);
+ errors = 0;
+ break;
case SCSI_DH_RETRY:
/* Wait before retrying. */
delay_retry = 1;
spin_lock_irqsave(&m->lock, flags);
if (errors) {
if (pgpath == m->current_pgpath) {
- DMERR("Could not failover device. Error %d.", errors);
+ DMERR("Could not failover device %s, error %d.",
+ pgpath->path.dev->name, errors);
m->current_pgpath = NULL;
m->current_pg = NULL;
}
struct pgpath *pgpath =
container_of(work, struct pgpath, activate_path.work);
- scsi_dh_activate(bdev_get_queue(pgpath->path.dev->bdev),
- pg_init_done, pgpath);
+ if (pgpath->path.dev)
+ scsi_dh_activate(bdev_get_queue(pgpath->path.dev->bdev),
+ pg_init_done, pgpath);
}
/*
struct path_selector *ps;
int r;
+ BUG_ON(!mpio);
+
r = do_end_io(m, clone, error, mpio);
if (pgpath) {
ps = &pgpath->pg->ps;
if (ps->type->end_io)
ps->type->end_io(ps, &pgpath->path, mpio->nr_bytes);
}
- mempool_free(mpio, m->mpio_pool);
+ clear_mapinfo(m, map_context);
return r;
}
else {
DMEMIT("%u ", m->queue_if_no_path +
(m->pg_init_retries > 0) * 2 +
- (m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT) * 2);
+ (m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT) * 2 +
+ (m->features & FEATURE_NO_PARTITIONS));
if (m->queue_if_no_path)
DMEMIT("queue_if_no_path ");
if (m->pg_init_retries)
DMEMIT("pg_init_retries %u ", m->pg_init_retries);
+ if (m->features & FEATURE_NO_PARTITIONS)
+ DMEMIT("no_partitions ");
if (m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT)
DMEMIT("pg_init_delay_msecs %u ", m->pg_init_delay_msecs);
}
pg->ps.type->info_args);
list_for_each_entry(p, &pg->pgpaths, list) {
- DMEMIT("%s %s %u ", p->path.dev->name,
+ DMEMIT("%s %s %u ", p->path.pdev,
p->is_active ? "A" : "F",
p->fail_count);
if (pg->ps.type->status)
pg->ps.type->table_args);
list_for_each_entry(p, &pg->pgpaths, list) {
- DMEMIT("%s ", p->path.dev->name);
+ DMEMIT("%s ", p->path.pdev);
if (pg->ps.type->status)
sz += pg->ps.type->status(&pg->ps,
&p->path, type, result + sz,
if (!m->current_pgpath)
__choose_pgpath(m, 0);
- if (m->current_pgpath) {
+ if (m->current_pgpath && m->current_pgpath->path.dev) {
bdev = m->current_pgpath->path.dev->bdev;
mode = m->current_pgpath->path.dev->mode;
}
vfree(t->highs);
/* free the device list */
- if (t->devices.next != &t->devices)
- free_devices(&t->devices);
+ free_devices(&t->devices);
dm_free_md_mempools(t->mempools);
dd_new = dd_old = *dd;
- dd_new.dm_dev.mode |= new_mode;
+ dd_new.dm_dev.mode = new_mode;
dd_new.dm_dev.bdev = NULL;
r = open_dev(&dd_new, dd->dm_dev.bdev->bd_dev, md);
- if (r)
+ if (r == -EROFS) {
+ dd_new.dm_dev.mode &= ~FMODE_WRITE;
+ r = open_dev(&dd_new, dd->dm_dev.bdev->bd_dev, md);
+ }
+ if (r)
return r;
- dd->dm_dev.mode |= new_mode;
+ dd->dm_dev.mode = new_mode;
close_dev(&dd_old, md);
return 0;
struct dm_dev_internal *dd;
unsigned int major, minor;
struct dm_table *t = ti->table;
+ char dummy;
BUG_ON(!t);
- if (sscanf(path, "%u:%u", &major, &minor) == 2) {
+ if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) {
/* Extract the major/minor numbers */
dev = MKDEV(major, minor);
if (MAJOR(dev) != major || MINOR(dev) != minor)
dd->dm_dev.mode = mode;
dd->dm_dev.bdev = NULL;
- if ((r = open_dev(dd, dev, t->md))) {
+ r = open_dev(dd, dev, t->md);
+ if (r == -EROFS) {
+ dd->dm_dev.mode &= ~FMODE_WRITE;
+ r = open_dev(dd, dev, t->md);
+ }
+ if (r) {
kfree(dd);
return r;
}
+ if (dd->dm_dev.mode != mode)
+ t->mode = dd->dm_dev.mode;
+
format_dev_t(dd->dm_dev.name, dev);
atomic_set(&dd->count, 0);
list_add(&dd->list, &t->devices);
- } else if (dd->dm_dev.mode != (mode | dd->dm_dev.mode)) {
+ } else if (dd->dm_dev.mode != mode) {
r = upgrade_mode(dd, mode, t->md);
if (r)
return r;
*/
void dm_put_device(struct dm_target *ti, struct dm_dev *d)
{
- struct dm_dev_internal *dd = container_of(d, struct dm_dev_internal,
- dm_dev);
+ struct dm_dev_internal *dd;
+
+ if (!d)
+ return;
+ dd = container_of(d, struct dm_dev_internal, dm_dev);
if (atomic_dec_and_test(&dd->count)) {
close_dev(dd, ti->table->md);
list_del(&dd->list);
unsigned *value, char **error, unsigned grouped)
{
const char *arg_str = dm_shift_arg(arg_set);
+ char dummy;
if (!arg_str ||
- (sscanf(arg_str, "%u", value) != 1) ||
+ (sscanf(arg_str, "%u%c", value, &dummy) != 1) ||
(*value < arg->min) ||
(*value > arg->max) ||
(grouped && arg_set->argc < *value)) {
static int dm_blk_open(struct block_device *bdev, fmode_t mode)
{
struct mapped_device *md;
+ int retval = 0;
spin_lock(&_minor_lock);
md = bdev->bd_disk->private_data;
- if (!md)
+ if (!md) {
+ retval = -ENXIO;
goto out;
+ }
if (test_bit(DMF_FREEING, &md->flags) ||
dm_deleting_md(md)) {
md = NULL;
+ retval = -ENXIO;
+ goto out;
+ }
+ if (get_disk_ro(md->disk) && (mode & FMODE_WRITE)) {
+ md = NULL;
+ retval = -EROFS;
goto out;
}
out:
spin_unlock(&_minor_lock);
- return md ? 0 : -ENXIO;
+ return retval;
}
static int dm_blk_close(struct gendisk *disk, fmode_t mode)
if (!map || !dm_table_get_size(map))
goto out;
- /* We only support devices that have a single target */
- if (dm_table_get_num_targets(map) != 1)
- goto out;
-
- tgt = dm_table_get_target(map, 0);
-
if (dm_suspended_md(md)) {
r = -EAGAIN;
goto out;
}
- if (tgt->type->ioctl)
- r = tgt->type->ioctl(tgt, cmd, arg);
+ if (cmd == BLKRRPART) {
+ /* Emulate Re-read partitions table */
+ kobject_uevent(&disk_to_dev(md->disk)->kobj, KOBJ_CHANGE);
+ r = 0;
+ } else {
+ /* We only support devices that have a single target */
+ if (dm_table_get_num_targets(map) != 1)
+ goto out;
+
+ tgt = dm_table_get_target(map, 0);
+
+ if (tgt->type->ioctl)
+ r = tgt->type->ioctl(tgt, cmd, arg);
+ }
out:
dm_table_put(map);
/*
* Store bio_set for cleanup.
*/
+ clone->bi_end_io = NULL;
clone->bi_private = md->bs;
bio_put(clone);
free_tio(md, tio);
clear_bit(DMF_MERGE_IS_OPTIONAL, &md->flags);
write_unlock_irqrestore(&md->map_lock, flags);
+ dm_table_get(md->map);
+ if (!(dm_table_get_mode(t) & FMODE_WRITE))
+ set_disk_ro(md->disk, 1);
+ else
+ set_disk_ro(md->disk, 0);
+ dm_table_put(md->map);
+
return old_map;
}
{
return md->disk;
}
+EXPORT_SYMBOL_GPL(dm_disk);
struct kobject *dm_kobject(struct mapped_device *md)
{
config INTEL_MID_PTI
tristate "Parallel Trace Interface for MIPI P1149.7 cJTAG standard"
- depends on PCI
+ depends on X86_INTEL_MID
default n
help
The PTI (Parallel Trace Interface) driver directs
config VMWARE_BALLOON
tristate "VMware Balloon Driver"
- depends on X86 && !XEN
+ depends on X86
help
This is VMware physical memory management driver which acts
like a "balloon" that can be inflated to reclaim physical pages
dma_addr_t mapping;
/* Note the receive buffer must be longword aligned.
- dev_alloc_skb() provides 16 byte alignment. But do *not*
+ netdev_alloc_skb() provides 16 byte alignment. But do *not*
use skb_reserve() to align the IP header! */
- struct sk_buff *skb = dev_alloc_skb(PKT_BUF_SZ);
+ struct sk_buff *skb = netdev_alloc_skb(dev, PKT_BUF_SZ);
tp->rx_buffers[i].skb = skb;
if (skb == NULL)
break;
mapping = pci_map_single(tp->pdev, skb->data,
PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
tp->rx_buffers[i].mapping = mapping;
- skb->dev = dev; /* Mark as being used by this device. */
tp->rx_ring[i].status = cpu_to_le32(DescOwned); /* Owned by Tulip chip */
tp->rx_ring[i].buffer1 = cpu_to_le32(mapping);
}
/* alloc_etherdev ensures aligned and zeroed private structures */
dev = alloc_etherdev (sizeof (*tp));
- if (!dev) {
- pr_err("ether device alloc failed, aborting\n");
+ if (!dev)
return -ENOMEM;
- }
SET_NETDEV_DEV(dev, &pdev->dev);
if (pci_resource_len (pdev, 0) < tulip_tbl[chip_idx].io_size) {
return;
tp = netdev_priv(dev);
+
+ /* shoot NIC in the head before deallocating descriptors */
+ pci_disable_device(tp->pdev);
+
unregister_netdev(dev);
pci_free_consistent (pdev,
sizeof (struct tulip_rx_desc) * RX_RING_SIZE +
/*
- * linux/drivers/net/ehea/ehea_main.c
+ * linux/drivers/net/ethernet/ibm/ehea/ehea_main.c
*
* eHEA ethernet device driver for IBM eServer System p
*
static int __devexit ehea_remove(struct platform_device *dev);
+static struct of_device_id ehea_module_device_table[] = {
+ {
+ .name = "lhea",
+ .compatible = "IBM,lhea",
+ },
+ {
+ .type = "network",
+ .compatible = "IBM,lhea-ethernet",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, ehea_module_device_table);
+
static struct of_device_id ehea_device_table[] = {
{
.name = "lhea",
},
{},
};
-MODULE_DEVICE_TABLE(of, ehea_device_table);
static struct of_platform_driver ehea_driver = {
.driver = {
dev = alloc_etherdev_mq(sizeof(struct ehea_port), EHEA_MAX_PORT_RES);
if (!dev) {
- pr_err("no mem for net_device\n");
ret = -ENOMEM;
goto out_err;
}
static void b43_time_lock(struct b43_wldev *dev)
{
- u32 macctl;
-
- macctl = b43_read32(dev, B43_MMIO_MACCTL);
- macctl |= B43_MACCTL_TBTTHOLD;
- b43_write32(dev, B43_MMIO_MACCTL, macctl);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~0, B43_MACCTL_TBTTHOLD);
/* Commit the write */
b43_read32(dev, B43_MMIO_MACCTL);
}
static void b43_time_unlock(struct b43_wldev *dev)
{
- u32 macctl;
-
- macctl = b43_read32(dev, B43_MMIO_MACCTL);
- macctl &= ~B43_MACCTL_TBTTHOLD;
- b43_write32(dev, B43_MMIO_MACCTL, macctl);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_TBTTHOLD, 0);
/* Commit the write */
b43_read32(dev, B43_MMIO_MACCTL);
}
static void b43_print_fw_helptext(struct b43_wl *wl, bool error)
{
const char text[] =
- "You must go to " \
- "http://wireless.kernel.org/en/users/Drivers/b43#devicefirmware " \
- "and download the correct firmware for this driver version. " \
- "Please carefully read all instructions on this website.\n";
+ "Please open a terminal and enter the command " \
+ "\"sudo /usr/sbin/install_bcm43xx_firmware\" to download " \
+ "the correct firmware for this driver version. " \
+ "For an off-line installation, go to " \
+ "http://en.opensuse.org/HCL/Network_Adapters_(Wireless)/" \
+ "Broadcom_BCM43xx and follow the instructions in the " \
+ "\"Installing firmware from RPM packages\" section.\n";
if (error)
b43err(wl, text);
return err;
}
- static int b43_request_firmware(struct b43_wldev *dev)
+ static int b43_one_core_attach(struct b43_bus_dev *dev, struct b43_wl *wl);
+ static void b43_one_core_detach(struct b43_bus_dev *dev);
+
+ static void b43_request_firmware(struct work_struct *work)
{
+ struct b43_wl *wl = container_of(work,
+ struct b43_wl, firmware_load);
+ struct b43_wldev *dev = wl->current_dev;
struct b43_request_fw_context *ctx;
unsigned int i;
int err;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
- return -ENOMEM;
+ return;
ctx->dev = dev;
ctx->req_type = B43_FWTYPE_PROPRIETARY;
err = b43_try_request_fw(ctx);
if (!err)
- goto out; /* Successfully loaded it. */
- err = ctx->fatal_failure;
- if (err)
+ goto start_ieee80211; /* Successfully loaded it. */
+ /* Was fw version known? */
+ if (ctx->fatal_failure)
goto out;
+ /* proprietary fw not found, try open source */
ctx->req_type = B43_FWTYPE_OPENSOURCE;
err = b43_try_request_fw(ctx);
if (!err)
- goto out; /* Successfully loaded it. */
- err = ctx->fatal_failure;
- if (err)
+ goto start_ieee80211; /* Successfully loaded it. */
+ if (ctx->fatal_failure)
goto out;
/* Could not find a usable firmware. Print the errors. */
b43err(dev->wl, errmsg);
}
b43_print_fw_helptext(dev->wl, 1);
- err = -ENOENT;
+ goto out;
+
+ start_ieee80211:
+ err = ieee80211_register_hw(wl->hw);
+ if (err)
+ goto err_one_core_detach;
+ b43_leds_register(wl->current_dev);
+ goto out;
+
+ err_one_core_detach:
+ b43_one_core_detach(dev->dev);
out:
kfree(ctx);
- return err;
}
static int b43_upload_microcode(struct b43_wldev *dev)
b43_write32(dev, B43_MMIO_GEN_IRQ_REASON, B43_IRQ_ALL);
/* Start the microcode PSM */
- macctl = b43_read32(dev, B43_MMIO_MACCTL);
- macctl &= ~B43_MACCTL_PSM_JMP0;
- macctl |= B43_MACCTL_PSM_RUN;
- b43_write32(dev, B43_MMIO_MACCTL, macctl);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_PSM_JMP0,
+ B43_MACCTL_PSM_RUN);
/* Wait for the microcode to load and respond */
i = 0;
return 0;
error:
- macctl = b43_read32(dev, B43_MMIO_MACCTL);
- macctl &= ~B43_MACCTL_PSM_RUN;
- macctl |= B43_MACCTL_PSM_JMP0;
- b43_write32(dev, B43_MMIO_MACCTL, macctl);
+ /* Stop the microcode PSM. */
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_PSM_RUN,
+ B43_MACCTL_PSM_JMP0);
return err;
}
struct ssb_device *gpiodev;
u32 mask, set;
- b43_write32(dev, B43_MMIO_MACCTL, b43_read32(dev, B43_MMIO_MACCTL)
- & ~B43_MACCTL_GPOUTSMSK);
-
- b43_write16(dev, B43_MMIO_GPIO_MASK, b43_read16(dev, B43_MMIO_GPIO_MASK)
- | 0x000F);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_GPOUTSMSK, 0);
+ b43_maskset16(dev, B43_MMIO_GPIO_MASK, ~0, 0xF);
mask = 0x0000001F;
set = 0x0000000F;
mask |= 0x0060;
set |= 0x0060;
}
+ if (dev->dev->chip_id == 0x5354)
+ set &= 0xff02;
if (0 /* FIXME: conditional unknown */ ) {
b43_write16(dev, B43_MMIO_GPIO_MASK,
b43_read16(dev, B43_MMIO_GPIO_MASK)
dev->mac_suspended--;
B43_WARN_ON(dev->mac_suspended < 0);
if (dev->mac_suspended == 0) {
- b43_write32(dev, B43_MMIO_MACCTL,
- b43_read32(dev, B43_MMIO_MACCTL)
- | B43_MACCTL_ENABLED);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~0, B43_MACCTL_ENABLED);
b43_write32(dev, B43_MMIO_GEN_IRQ_REASON,
B43_IRQ_MAC_SUSPENDED);
/* Commit writes */
if (dev->mac_suspended == 0) {
b43_power_saving_ctl_bits(dev, B43_PS_AWAKE);
- b43_write32(dev, B43_MMIO_MACCTL,
- b43_read32(dev, B43_MMIO_MACCTL)
- & ~B43_MACCTL_ENABLED);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_ENABLED, 0);
/* force pci to flush the write */
b43_read32(dev, B43_MMIO_MACCTL);
for (i = 35; i; i--) {
* so always disable it. If we want to implement PMQ,
* we need to enable it here (clear DISCPMQ) in AP mode.
*/
- if (0 /* ctl & B43_MACCTL_AP */) {
- b43_write32(dev, B43_MMIO_MACCTL,
- b43_read32(dev, B43_MMIO_MACCTL)
- & ~B43_MACCTL_DISCPMQ);
- } else {
- b43_write32(dev, B43_MMIO_MACCTL,
- b43_read32(dev, B43_MMIO_MACCTL)
- | B43_MACCTL_DISCPMQ);
- }
+ if (0 /* ctl & B43_MACCTL_AP */)
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_DISCPMQ, 0);
+ else
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~0, B43_MACCTL_DISCPMQ);
}
static void b43_rate_memory_write(struct b43_wldev *dev, u16 rate, int is_ofdm)
macctl |= B43_MACCTL_INFRA;
b43_write32(dev, B43_MMIO_MACCTL, macctl);
- err = b43_request_firmware(dev);
- if (err)
- goto out;
err = b43_upload_microcode(dev);
if (err)
goto out; /* firmware is released later */
if (dev->dev->core_rev < 5)
b43_write32(dev, 0x010C, 0x01000000);
- b43_write32(dev, B43_MMIO_MACCTL, b43_read32(dev, B43_MMIO_MACCTL)
- & ~B43_MACCTL_INFRA);
- b43_write32(dev, B43_MMIO_MACCTL, b43_read32(dev, B43_MMIO_MACCTL)
- | B43_MACCTL_INFRA);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_INFRA, 0);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~0, B43_MACCTL_INFRA);
/* Probe Response Timeout value */
/* FIXME: Default to 0, has to be set by ioctl probably... :-/ */
mutex_unlock(&wl->mutex);
cancel_delayed_work_sync(&dev->periodic_work);
cancel_work_sync(&wl->tx_work);
+ cancel_work_sync(&wl->firmware_load);
mutex_lock(&wl->mutex);
dev = wl->current_dev;
if (!dev || b43_status(dev) < B43_STAT_STARTED) {
/* Locking: wl->mutex */
static void b43_wireless_core_exit(struct b43_wldev *dev)
{
- u32 macctl;
-
B43_WARN_ON(dev && b43_status(dev) > B43_STAT_INITIALIZED);
if (!dev || b43_status(dev) != B43_STAT_INITIALIZED)
return;
b43_set_status(dev, B43_STAT_UNINIT);
/* Stop the microcode PSM. */
- macctl = b43_read32(dev, B43_MMIO_MACCTL);
- macctl &= ~B43_MACCTL_PSM_RUN;
- macctl |= B43_MACCTL_PSM_JMP0;
- b43_write32(dev, B43_MMIO_MACCTL, macctl);
+ b43_maskset32(dev, B43_MMIO_MACCTL, ~B43_MACCTL_PSM_RUN,
+ B43_MACCTL_PSM_JMP0);
b43_dma_free(dev);
b43_pio_free(dev);
if (err)
goto bcma_err_wireless_exit;
- err = ieee80211_register_hw(wl->hw);
- if (err)
- goto bcma_err_one_core_detach;
- b43_leds_register(wl->current_dev);
+ /* setup and start work to load firmware */
+ INIT_WORK(&wl->firmware_load, b43_request_firmware);
+ schedule_work(&wl->firmware_load);
bcma_out:
return err;
- bcma_err_one_core_detach:
- b43_one_core_detach(dev);
bcma_err_wireless_exit:
ieee80211_free_hw(wl->hw);
return err;
if (err)
goto err_wireless_exit;
- if (first) {
- err = ieee80211_register_hw(wl->hw);
- if (err)
- goto err_one_core_detach;
- b43_leds_register(wl->current_dev);
- }
+ /* setup and start work to load firmware */
+ INIT_WORK(&wl->firmware_load, b43_request_firmware);
+ schedule_work(&wl->firmware_load);
out:
return err;
- err_one_core_detach:
- b43_one_core_detach(dev);
err_wireless_exit:
if (first)
b43_wireless_exit(dev, wl);
* and sends a CRQ message back to inform the client that the request has
* completed.
*
- * Note that some of the underlying infrastructure is different between
- * machines conforming to the "RS/6000 Platform Architecture" (RPA) and
- * the older iSeries hypervisor models. To support both, some low level
- * routines have been broken out into rpa_vscsi.c and iseries_vscsi.c.
- * The Makefile should pick one, not two, not zero, of these.
- *
- * TODO: This is currently pretty tied to the IBM i/pSeries hypervisor
+ * TODO: This is currently pretty tied to the IBM pSeries hypervisor
* interfaces. It would be really nice to abstract this above an RDMA
* layer.
*/
static int max_events = IBMVSCSI_MAX_REQUESTS_DEFAULT + 2;
static int fast_fail = 1;
static int client_reserve = 1;
+/* host data buffer size */
+#define buff_size 4096
static struct scsi_transport_template *ibmvscsi_transport_template;
static struct ibmvscsi_ops *ibmvscsi_ops;
+#define IBMVSCSI_PROC_NAME "ibmvscsi"
+/* The module is named ibmvscsic; keep ibmvscsi as an alias for it */
+MODULE_ALIAS(IBMVSCSI_PROC_NAME);
MODULE_DESCRIPTION("IBM Virtual SCSI");
MODULE_AUTHOR("Dave Boutcher");
MODULE_LICENSE("GPL");
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
int len;
- len = snprintf(buf, PAGE_SIZE, "%s\n",
+ len = snprintf(buf, buff_size, "%s\n",
hostdata->madapter_info.srp_version);
return len;
}
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
int len;
- len = snprintf(buf, PAGE_SIZE, "%s\n",
+ len = snprintf(buf, buff_size, "%s\n",
hostdata->madapter_info.partition_name);
return len;
}
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
int len;
- len = snprintf(buf, PAGE_SIZE, "%d\n",
+ len = snprintf(buf, buff_size, "%d\n",
hostdata->madapter_info.partition_number);
return len;
}
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
int len;
- len = snprintf(buf, PAGE_SIZE, "%d\n",
+ len = snprintf(buf, buff_size, "%d\n",
hostdata->madapter_info.mad_version);
return len;
}
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
int len;
- len = snprintf(buf, PAGE_SIZE, "%d\n", hostdata->madapter_info.os_type);
+ len = snprintf(buf, buff_size, "%d\n", hostdata->madapter_info.os_type);
return len;
}
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
/* returns null-terminated host config data */
- if (ibmvscsi_do_host_config(hostdata, buf, PAGE_SIZE) == 0)
+ if (ibmvscsi_do_host_config(hostdata, buf, buff_size) == 0)
return strlen(buf);
else
return 0;
static struct scsi_host_template driver_template = {
.module = THIS_MODULE,
.name = "IBM POWER Virtual SCSI Adapter " IBMVSCSI_VERSION,
- .proc_name = "ibmvscsi",
+ .proc_name = IBMVSCSI_PROC_NAME,
.queuecommand = ibmvscsi_queuecommand,
.eh_abort_handler = ibmvscsi_eh_abort_handler,
.eh_device_reset_handler = ibmvscsi_eh_device_reset_handler,
.probe = ibmvscsi_probe,
.remove = ibmvscsi_remove,
.get_desired_dma = ibmvscsi_get_desired_dma,
- .driver = {
- .name = IBMVSCSI_PROC_NAME,
- .owner = THIS_MODULE,
- .pm = &ibmvscsi_pm_ops,
- }
- .name = "ibmvscsi",
++ .name = IBMVSCSI_PROC_NAME,
+ .pm = &ibmvscsi_pm_ops,
};
static struct srp_function_template ibmvscsi_transport_functions = {
driver_template.can_queue = max_requests;
max_events = max_requests + 2;
- if (firmware_has_feature(FW_FEATURE_ISERIES))
- ibmvscsi_ops = &iseriesvscsi_ops;
- else if (firmware_has_feature(FW_FEATURE_VIO))
+ if (firmware_has_feature(FW_FEATURE_VIO))
ibmvscsi_ops = &rpavscsi_ops;
else
return -ENODEV;
#include <linux/interrupt.h>
#include <linux/blkdev.h>
#include <linux/delay.h>
+#include <linux/netlink.h>
+#include <net/netlink.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_device.h>
+ #include <scsi/scsi_driver.h>
#include <scsi/scsi_eh.h>
#include <scsi/scsi_transport.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_ioctl.h>
+#include <scsi/scsi_netlink_ml.h>
#include "scsi_priv.h"
#include "scsi_logging.h"
#include <trace/events/scsi.h>
#define SENSE_TIMEOUT (10*HZ)
+#define TEST_UNIT_READY_TIMEOUT (30*HZ)
/*
* These should *probably* be handled by the host itself.
else if (host->hostt->eh_timed_out)
rtn = host->hostt->eh_timed_out(scmd);
+ scmd->result |= DID_TIME_OUT << 16;
+
if (unlikely(rtn == BLK_EH_NOT_HANDLED &&
- !scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD))) {
- scmd->result |= DID_TIME_OUT << 16;
+ !scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD)))
rtn = BLK_EH_HANDLED;
- }
return rtn;
}
}
#endif
+#ifdef CONFIG_SCSI_NETLINK
+/**
+ * scsi_post_sense_event - called to post a 'Sense Code' event
+ *
+ * @sdev: SCSI device the sense code occurred on
+ * @sshdr: SCSI sense code
+ *
+ * Returns nothing; if the event cannot be posted, the dropped
+ * message is logged via sdev_printk().
+ *
+ */
+static void scsi_post_sense_event(struct scsi_device *sdev,
+ struct scsi_sense_hdr *sshdr)
+{
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+ struct scsi_nl_sense_msg *msg;
+ u32 len, skblen;
+ int err;
+
+ if (!scsi_nl_sock) {
+ err = -ENOENT;
+ goto send_fail;
+ }
+
+ len = SCSI_NL_MSGALIGN(sizeof(*msg));
+ skblen = NLMSG_SPACE(len);
+
+ skb = alloc_skb(skblen, GFP_ATOMIC);
+ if (!skb) {
+ err = -ENOBUFS;
+ goto send_fail;
+ }
+
+ nlh = nlmsg_put(skb, 0, 0, SCSI_TRANSPORT_MSG,
+ skblen - sizeof(*nlh), 0);
+ if (!nlh) {
+ err = -ENOBUFS;
+ goto send_fail_skb;
+ }
+ msg = NLMSG_DATA(nlh);
+
+ INIT_SCSI_NL_HDR(&msg->snlh, SCSI_NL_TRANSPORT_ML,
+ ML_NL_SCSI_SENSE, len);
+ msg->host_no = sdev->host->host_no;
+ msg->channel = sdev->channel;
+ msg->id = sdev->id;
+ msg->lun = sdev->lun;
+ msg->sense = (sshdr->response_code << 24) | (sshdr->sense_key << 16) |
+ (sshdr->asc << 8) | sshdr->ascq;
+
+ err = nlmsg_multicast(scsi_nl_sock, skb, 0, SCSI_NL_GRP_ML_EVENTS,
+ GFP_KERNEL);
+ if (err && (err != -ESRCH))
+ /* nlmsg_multicast already kfree_skb'd */
+ goto send_fail;
+
+ return;
+
+send_fail_skb:
+ kfree_skb(skb);
+send_fail:
+ sdev_printk(KERN_WARNING, sdev,
+ "Dropped SCSI Msg %02x/%02x/%02x/%02x: err %d\n",
+ sshdr->response_code, sshdr->sense_key,
+ sshdr->asc, sshdr->ascq, err);
+ return;
+}
+#else
+static inline void scsi_post_sense_event(struct scsi_device *sdev,
+ struct scsi_sense_hdr *sshdr) {}
+#endif
+
/**
* scsi_check_sense - Examine scsi cmd sense
* @scmd: Cmd to have sense checked.
if (scsi_sense_is_deferred(&sshdr))
return NEEDS_RETRY;
+ scsi_post_sense_event(sdev, &sshdr);
+
if (sdev->scsi_dh_data && sdev->scsi_dh_data->scsi_dh &&
sdev->scsi_dh_data->scsi_dh->check_sense) {
int rc;
* if the device is in the process of becoming ready, we
* should retry.
*/
- if ((sshdr.asc == 0x04) && (sshdr.ascq == 0x01))
+ if ((sshdr.asc == 0x04) &&
+ (sshdr.ascq == 0x01 || sshdr.ascq == 0x0a))
return NEEDS_RETRY;
/*
* if the device is not started, we need to wake
return TARGET_ERROR;
case ILLEGAL_REQUEST:
+ if (sshdr.asc == 0x20 || /* Invalid command operation code */
+ sshdr.asc == 0x21 || /* Logical block address out of range */
+ sshdr.asc == 0x24 || /* Invalid field in cdb */
+ sshdr.asc == 0x26) { /* Parameter value invalid */
+ return TARGET_ERROR;
+ }
+ return SUCCESS;
+
default:
return SUCCESS;
}
int cmnd_size, int timeout, unsigned sense_bytes)
{
struct scsi_device *sdev = scmd->device;
+ struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
struct Scsi_Host *shost = sdev->host;
DECLARE_COMPLETION_ONSTACK(done);
unsigned long timeleft;
}
scsi_eh_restore_cmnd(scmd, &ses);
+
+ if (sdrv->eh_action)
+ rtn = sdrv->eh_action(scmd, cmnd, cmnd_size, rtn);
+
return rtn;
}
int retry_cnt = 1, rtn;
retry_tur:
- rtn = scsi_send_eh_cmnd(scmd, tur_command, 6, SENSE_TIMEOUT, 0);
+ rtn = scsi_send_eh_cmnd(scmd, tur_command, 6,
+ TEST_UNIT_READY_TIMEOUT, 0);
SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd %p rtn %x\n",
__func__, scmd, rtn));
* Need to modify host byte to signal a
* permanent target failure
*/
- scmd->result |= (DID_TARGET_FAILURE << 16);
+ set_host_byte(scmd, DID_TARGET_FAILURE);
rtn = SUCCESS;
}
/* if rtn == FAILED, we have no sense information;
case RESERVATION_CONFLICT:
sdev_printk(KERN_INFO, scmd->device,
"reservation conflict\n");
- scmd->result |= (DID_NEXUS_FAILURE << 16);
+ set_host_byte(scmd, DID_NEXUS_FAILURE);
return SUCCESS; /* causes immediate i/o error */
default:
return FAILED;
error = -ENOLINK;
break;
case DID_TARGET_FAILURE:
- cmd->result |= (DID_OK << 16);
+ set_host_byte(cmd, DID_OK);
error = -EREMOTEIO;
break;
case DID_NEXUS_FAILURE:
- cmd->result |= (DID_OK << 16);
+ set_host_byte(cmd, DID_OK);
error = -EBADE;
break;
default:
cmd->cmnd[0] == WRITE_SAME)) {
description = "Discard failure";
action = ACTION_FAIL;
+ error = -EREMOTEIO;
} else
action = ACTION_FAIL;
break;
spin_lock_irq(q->queue_lock);
}
+struct scsi_device *scsi_device_from_queue(struct request_queue *q)
+{
+ struct scsi_device *sdev = NULL;
+
+ if (q->request_fn == scsi_request_fn)
+ sdev = q->queuedata;
+
+ return sdev;
+}
+EXPORT_SYMBOL_GPL(scsi_device_from_queue);
+
u64 scsi_calculate_bounce_limit(struct Scsi_Host *shost)
{
struct device *host_dev;
if (*len > sg_len)
*len = sg_len;
- return kmap_atomic(page, KM_BIO_SRC_IRQ);
+ return kmap_atomic(page);
}
EXPORT_SYMBOL(scsi_kmap_atomic_sg);
*/
void scsi_kunmap_atomic_sg(void *virt)
{
- kunmap_atomic(virt, KM_BIO_SRC_IRQ);
+ kunmap_atomic(virt);
}
EXPORT_SYMBOL(scsi_kunmap_atomic_sg);
* and displaying garbage for the Vendor, Product, or Revision
* strings.
*/
- if (sdev->inquiry_len < 36) {
+ if (sdev->inquiry_len < 36 && printk_ratelimit()) {
printk(KERN_INFO "scsi scan: INQUIRY result too short (%d),"
" using 36\n", sdev->inquiry_len);
sdev->inquiry_len = 36;
* LUNs even if it's older than SCSI-3.
* If BLIST_NOREPORTLUN is set, return 1 always.
* If BLIST_NOLUN is set, return 0 always.
+ * If starget->no_report_luns is set, return 1 always.
*
* Return:
* 0: scan completed (or no memory, so further scanning is futile)
* Only support SCSI-3 and up devices if BLIST_NOREPORTLUN is not set.
* Also allow SCSI-2 if BLIST_REPORTLUN2 is set and host adapter does
* support more than 8 LUNs.
+ * Don't attempt if the target doesn't support REPORT LUNS.
*/
if (bflags & BLIST_NOREPORTLUN)
return 1;
return 1;
if (bflags & BLIST_NOLUN)
return 0;
+ if (starget->no_report_luns)
+ return 1;
if (!(sdev = scsi_device_lookup_by_target(starget, 0))) {
sdev = scsi_alloc_sdev(starget, 0, NULL);
static int sd_resume(struct device *);
static void sd_rescan(struct device *);
static int sd_done(struct scsi_cmnd *);
+ static int sd_eh_action(struct scsi_cmnd *, unsigned char *, int, int);
static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer);
static void scsi_disk_release(struct device *cdev);
static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *);
return count;
}
+ static ssize_t
+ sd_show_max_medium_access_timeouts(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct scsi_disk *sdkp = to_scsi_disk(dev);
+
+ return snprintf(buf, 20, "%u\n", sdkp->max_medium_access_timeouts);
+ }
+
+ static ssize_t
+ sd_store_max_medium_access_timeouts(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+ struct scsi_disk *sdkp = to_scsi_disk(dev);
+ int err;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ err = kstrtouint(buf, 10, &sdkp->max_medium_access_timeouts);
+
+ return err ? err : count;
+ }
+
static struct device_attribute sd_disk_attrs[] = {
__ATTR(cache_type, S_IRUGO|S_IWUSR, sd_show_cache_type,
sd_store_cache_type),
__ATTR(thin_provisioning, S_IRUGO, sd_show_thin_provisioning, NULL),
__ATTR(provisioning_mode, S_IRUGO|S_IWUSR, sd_show_provisioning_mode,
sd_store_provisioning_mode),
+ __ATTR(max_medium_access_timeouts, S_IRUGO|S_IWUSR,
+ sd_show_max_medium_access_timeouts,
+ sd_store_max_medium_access_timeouts),
__ATTR_NULL,
};
},
.rescan = sd_rescan,
.done = sd_done,
+ .eh_action = sd_eh_action,
};
/*
max(sdkp->physical_block_size,
sdkp->unmap_granularity * logical_block_size);
+ sdkp->provisioning_mode = mode;
+
switch (mode) {
case SD_LBP_DISABLE:
q->limits.max_discard_sectors = max_blocks * (logical_block_size >> 9);
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
-
- sdkp->provisioning_mode = mode;
}
/**
}
/**
- * sd_init_command - build a scsi (read or write) command from
+ * sd_prep_fn - build a scsi (read or write) command from
* information in the request structure.
* @SCpnt: pointer to mid-level's per scsi command structure that
* contains request and into which the scsi command is written
ret = BLKPREP_KILL;
SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt,
- "sd_init_command: block=%llu, "
+ "sd_prep_fn: block=%llu, "
"count=%d\n",
(unsigned long long)block,
this_count));
retval = -ENODEV;
if (scsi_block_when_processing_errors(sdp)) {
+ retval = scsi_autopm_get_device(sdp);
+ if (retval)
+ goto out;
+
sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES,
sshdr);
+ scsi_autopm_put_device(sdp);
}
/* failed to execute TUR, assume media not present */
.unlock_native_capacity = sd_unlock_native_capacity,
};
+ /**
+ * sd_eh_action - error handling callback
+ * @scmd: sd-issued command that has failed
+ * @eh_cmnd: The command that was sent during error handling
+ * @eh_cmnd_len: Length of eh_cmnd in bytes
+ * @eh_disp: The recovery disposition suggested by the midlayer
+ *
+ * This function is called by the SCSI midlayer upon completion of
+ * an error handling command (TEST UNIT READY, START STOP UNIT,
+ * etc.) The command sent to the device by the error handler is
+ * stored in eh_cmnd. The result of sending the eh command is
+ * passed in eh_disp.
+ **/
+ static int sd_eh_action(struct scsi_cmnd *scmd, unsigned char *eh_cmnd,
+ int eh_cmnd_len, int eh_disp)
+ {
+ struct scsi_disk *sdkp = scsi_disk(scmd->request->rq_disk);
+
+ if (!scsi_device_online(scmd->device) ||
+ !scsi_medium_access_command(scmd))
+ return eh_disp;
+
+ /*
+ * The device has timed out executing a medium access command.
+ * However, the TEST UNIT READY command sent during error
+ * handling completed successfully. Either the device is in the
+	 * process of recovering, or it has suffered an internal failure
+ * that prevents access to the storage medium.
+ */
+ if (host_byte(scmd->result) == DID_TIME_OUT && eh_disp == SUCCESS &&
+ eh_cmnd_len && eh_cmnd[0] == TEST_UNIT_READY)
+ sdkp->medium_access_timed_out++;
+
+ /*
+ * If the device keeps failing read/write commands but TEST UNIT
+	 * READY always completes successfully, we assume that medium
+ * access is no longer possible and take the device offline.
+ */
+ if (sdkp->medium_access_timed_out >= sdkp->max_medium_access_timeouts) {
+ scmd_printk(KERN_ERR, scmd,
+ "Medium access timeout failure. Offlining disk!\n");
+ scsi_device_set_state(scmd->device, SDEV_OFFLINE);
+
+ return FAILED;
+ }
+
+ return eh_disp;
+ }
+
static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
{
u64 start_lba = blk_rq_pos(scmd->request);
(!sense_valid || sense_deferred))
goto out;
+ sdkp->medium_access_timed_out = 0;
+
switch (sshdr.sense_key) {
case HARDWARE_ERROR:
case MEDIUM_ERROR:
* Yes, this sense key/ASC combination shouldn't
* occur here. It's characteristic of these devices.
*/
- } else if (sense_valid &&
- sshdr.sense_key == UNIT_ATTENTION &&
+ } else if (sshdr.sense_key == UNIT_ATTENTION &&
sshdr.asc == 0x28) {
if (!spintime) {
spintime_expire = jiffies + 5 * HZ;
* some USB ones crash on receiving them, and the pages
* we currently ask for are for SPC-3 and beyond
*/
- if (sdp->scsi_level > SCSI_SPC_2)
+ if (sdp->scsi_level > SCSI_SPC_2 && !sdp->skip_vpd_pages)
return 1;
return 0;
}
sdkp->RCD = 0;
sdkp->ATO = 0;
sdkp->first_scan = 1;
+ sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS;
sd_revalidate_disk(gd);
put_device(&sdkp->dev);
}
+static int sd_get_index(int *index)
+{
+ int error = -ENOMEM;
+ do {
+ if (!ida_pre_get(&sd_index_ida, GFP_KERNEL))
+ break;
+
+ spin_lock(&sd_index_lock);
+ error = ida_get_new(&sd_index_ida, index);
+ spin_unlock(&sd_index_lock);
+ } while (error == -EAGAIN);
+
+ return error;
+}
/**
* sd_probe - called during driver initialization and whenever a
* new scsi device is attached to the system. It is called once
* (e.g. /dev/sda). More precisely it is the block device major
* and minor number that is chosen here.
*
- * Assume sd_attach is not re-entrant (for time being)
- * Also think about sd_attach() and sd_remove() running coincidentally.
+ * Assume sd_probe is not re-entrant (for time being)
+ * Also think about sd_probe() and sd_remove() running coincidentally.
**/
static int sd_probe(struct device *dev)
{
goto out;
SCSI_LOG_HLQUEUE(3, sdev_printk(KERN_INFO, sdp,
- "sd_attach\n"));
+ "sd_probe\n"));
error = -ENOMEM;
sdkp = kzalloc(sizeof(*sdkp), GFP_KERNEL);
if (!gd)
goto out_free;
- do {
- if (!ida_pre_get(&sd_index_ida, GFP_KERNEL))
- goto out_put;
-
- spin_lock(&sd_index_lock);
- error = ida_get_new(&sd_index_ida, &index);
- spin_unlock(&sd_index_lock);
- } while (error == -EAGAIN);
-
+ error = sd_get_index(&index);
if (error) {
sdev_printk(KERN_WARNING, sdp, "sd_probe: memory exhausted.\n");
goto out_put;
return ret;
}
+/*
+ * Each major covers 16 disks, and each disk uses 16 minors: one for the
+ * whole disk and 15 for its partitions. Mark every disk index busy so
+ * that sd_probe() cannot hand out an index on an unclaimed major.
+ */
+static int __init init_sd_ida(int *error)
+{
+ int *index, i, j, err;
+
+ index = kmalloc(SD_MAJORS * (256 / SD_MINORS) * sizeof(int), GFP_KERNEL);
+ if (!index)
+ return -ENOMEM;
+
+ /* Mark minors for all majors as busy */
+	for (i = 0; i < SD_MAJORS; i++) {
+ for (j = 0; j < (256 / SD_MINORS); j++) {
+ err = sd_get_index(&index[i * (256 / SD_MINORS) + j]);
+ if (err) {
+ kfree(index);
+ return err;
+ }
+ }
+ }
+
+ /* Mark minors for claimed majors as free */
+	for (i = 0; i < SD_MAJORS; i++) {
+ if (error[i])
+ continue;
+ for (j = 0; j < (256 / SD_MINORS); j++)
+ ida_remove(&sd_index_ida, index[i * (256 / SD_MINORS) + j]);
+ }
+ kfree(index);
+ return 0;
+}
+
/**
* init_sd - entry point for this driver (both when built in or when
* a module).
static int __init init_sd(void)
{
int majors = 0, i, err;
+ int error[SD_MAJORS];
SCSI_LOG_HLQUEUE(3, printk("init_sd: sd driver entry point\n"));
for (i = 0; i < SD_MAJORS; i++)
- if (register_blkdev(sd_major(i), "sd") == 0)
+ {
+ error[i] = register_blkdev(sd_major(i), "sd");
+ if (error[i] == 0)
majors++;
+ }
if (!majors)
return -ENODEV;
+ if (majors < SD_MAJORS) {
+ err = init_sd_ida(error);
+ if (err)
+ return err;
+ }
+
err = class_register(&sd_disk_class);
if (err)
goto err_out;
return NULL;
}
+ /* Disgusting wrapper functions */
+ static inline unsigned long sg_kmap_atomic(struct scatterlist *sgl, int idx)
+ {
+ void *addr = kmap_atomic(sg_page(sgl + idx));
+ return (unsigned long)addr;
+ }
+
+ static inline void sg_kunmap_atomic(unsigned long addr)
+ {
+ kunmap_atomic((void *)addr);
+ }
+
+
/* Assume the original sgl has enough room */
static unsigned int copy_from_bounce_buffer(struct scatterlist *orig_sgl,
struct scatterlist *bounce_sgl,
local_irq_save(flags);
for (i = 0; i < orig_sgl_count; i++) {
- dest_addr = (unsigned long)kmap_atomic(sg_page((&orig_sgl[i])),
- KM_IRQ0) + orig_sgl[i].offset;
+		dest_addr = sg_kmap_atomic(orig_sgl, i) + orig_sgl[i].offset;
dest = dest_addr;
destlen = orig_sgl[i].length;
if (bounce_addr == 0)
- bounce_addr =
- (unsigned long)kmap_atomic(sg_page((&bounce_sgl[j])),
- KM_IRQ0);
+			bounce_addr = sg_kmap_atomic(bounce_sgl, j);
while (destlen) {
src = bounce_addr + bounce_sgl[j].offset;
if (bounce_sgl[j].offset == bounce_sgl[j].length) {
/* full */
- kunmap_atomic((void *)bounce_addr, KM_IRQ0);
+ sg_kunmap_atomic(bounce_addr);
j++;
/*
/*
* We are done; cleanup and return.
*/
- kunmap_atomic((void *)(dest_addr -
- orig_sgl[i].offset),
- KM_IRQ0);
+ sg_kunmap_atomic(dest_addr - orig_sgl[i].offset);
local_irq_restore(flags);
return total_copied;
}
/* if we need to use another bounce buffer */
if (destlen || i != orig_sgl_count - 1)
- bounce_addr =
- (unsigned long)kmap_atomic(
- sg_page((&bounce_sgl[j])), KM_IRQ0);
+					bounce_addr = sg_kmap_atomic(bounce_sgl, j);
} else if (destlen == 0 && i == orig_sgl_count - 1) {
/* unmap the last bounce that is < PAGE_SIZE */
- kunmap_atomic((void *)bounce_addr, KM_IRQ0);
+ sg_kunmap_atomic(bounce_addr);
}
}
- kunmap_atomic((void *)(dest_addr - orig_sgl[i].offset),
- KM_IRQ0);
+ sg_kunmap_atomic(dest_addr - orig_sgl[i].offset);
}
local_irq_restore(flags);
local_irq_save(flags);
for (i = 0; i < orig_sgl_count; i++) {
- src_addr = (unsigned long)kmap_atomic(sg_page((&orig_sgl[i])),
- KM_IRQ0) + orig_sgl[i].offset;
+		src_addr = sg_kmap_atomic(orig_sgl, i) + orig_sgl[i].offset;
src = src_addr;
srclen = orig_sgl[i].length;
if (bounce_addr == 0)
- bounce_addr =
- (unsigned long)kmap_atomic(sg_page((&bounce_sgl[j])),
- KM_IRQ0);
+			bounce_addr = sg_kmap_atomic(bounce_sgl, j);
while (srclen) {
/* assume bounce offset always == 0 */
if (bounce_sgl[j].length == PAGE_SIZE) {
/* full..move to next entry */
- kunmap_atomic((void *)bounce_addr, KM_IRQ0);
+ sg_kunmap_atomic(bounce_addr);
j++;
/* if we need to use another bounce buffer */
if (srclen || i != orig_sgl_count - 1)
- bounce_addr =
- (unsigned long)kmap_atomic(
- sg_page((&bounce_sgl[j])), KM_IRQ0);
+					bounce_addr = sg_kmap_atomic(bounce_sgl, j);
} else if (srclen == 0 && i == orig_sgl_count - 1) {
/* unmap the last bounce that is < PAGE_SIZE */
- kunmap_atomic((void *)bounce_addr, KM_IRQ0);
+ sg_kunmap_atomic(bounce_addr);
}
}
- kunmap_atomic((void *)(src_addr - orig_sgl[i].offset), KM_IRQ0);
+ sg_kunmap_atomic(src_addr - orig_sgl[i].offset);
}
local_irq_restore(flags);
/*
* If there is an error; offline the device since all
* error recovery strategies would have already been
- * deployed on the host side.
+ * deployed on the host side. However, if the command
+	 * was a pass-through command, handle it appropriately.
*/
- if (vm_srb->srb_status == SRB_STATUS_ERROR)
- scmnd->result = DID_TARGET_FAILURE << 16;
- else
+ switch (vm_srb->srb_status) {
+ case SRB_STATUS_ERROR:
+ switch (scmnd->cmnd[0]) {
+ case ATA_16:
+ case ATA_12:
+ scmnd->result = DID_PASSTHROUGH << 16;
+ break;
+ default:
+ scmnd->result = DID_TARGET_FAILURE << 16;
+ }
+ break;
+ default:
scmnd->result = vm_srb->scsi_status;
+ }
+
/*
* If the LUN is invalid; remove the device.
#include <linux/uaccess.h>
#include <linux/module.h>
+#include <linux/bootsplash.h>
- #include <asm/system.h>
-
/* number of characters left in xmit buffer before select has we have room */
#define WAKEUP_CHARS 256
tty->minimum_to_wake = (minimum - (b - buf));
if (!input_available_p(tty, 0)) {
+ dev_t i_rdev = file->f_dentry->d_inode->i_rdev;
+
+ if (i_rdev == MKDEV(TTY_MAJOR, 0) ||
+ i_rdev == MKDEV(TTY_MAJOR, 1) ||
+ i_rdev == MKDEV(TTYAUX_MAJOR, 0) ||
+ i_rdev == MKDEV(TTYAUX_MAJOR, 1)) {
+ SPLASH_VERBOSE();
+ }
+
if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
retval = -EIO;
break;
#include <linux/nmi.h>
#include <linux/mutex.h>
#include <linux/slab.h>
+ #ifdef CONFIG_SPARC
+ #include <linux/sunserialcore.h>
+ #endif
#include <asm/io.h>
#include <asm/irq.h>
#include "8250.h"
- #ifdef CONFIG_SPARC
- #include "../suncore.h"
- #endif
-
/*
* Configuration:
* share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
- /*
- * We default to IRQ0 for the "no irq" hack. Some
- * machine types want others as well - they're free
- * to redefine this in their header file.
- */
- #define is_real_interrupt(irq) ((irq) != 0)
-
#ifdef CONFIG_SERIAL_8250_DETECT_IRQ
#define CONFIG_SERIAL_DETECT_IRQ 1
#endif
#define CONFIG_SERIAL_MANY_PORTS 1
#endif
+#define arch_8250_sysrq_via_ctrl_o(a,b) 0
+
/*
* HUB6 is always on. This will be removed once the header
* files have been cleaned.
}
static void
- serial_out_sync(struct uart_8250_port *up, int offset, int value)
+ serial_port_out_sync(struct uart_port *p, int offset, int value)
{
- struct uart_port *p = &up->port;
switch (p->iotype) {
case UPIO_MEM:
case UPIO_MEM32:
}
}
/* Uart divisor latch read */
static inline int _serial_dl_read(struct uart_8250_port *up)
{
- return serial_inp(up, UART_DLL) | serial_inp(up, UART_DLM) << 8;
+ return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
}
/* Uart divisor latch write */
static inline void _serial_dl_write(struct uart_8250_port *up, int value)
{
- serial_outp(up, UART_DLL, value & 0xff);
- serial_outp(up, UART_DLM, value >> 8 & 0xff);
+ serial_out(up, UART_DLL, value & 0xff);
+ serial_out(up, UART_DLM, value >> 8 & 0xff);
}
#if defined(CONFIG_MIPS_ALCHEMY)
static void serial8250_clear_fifos(struct uart_8250_port *p)
{
if (p->capabilities & UART_CAP_FIFO) {
- serial_outp(p, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_outp(p, UART_FCR, UART_FCR_ENABLE_FIFO |
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_outp(p, UART_FCR, 0);
+ serial_out(p, UART_FCR, 0);
}
}
{
if (p->capabilities & UART_CAP_SLEEP) {
if (p->capabilities & UART_CAP_EFR) {
- serial_outp(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_outp(p, UART_EFR, UART_EFR_ECB);
- serial_outp(p, UART_LCR, 0);
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, UART_EFR_ECB);
+ serial_out(p, UART_LCR, 0);
}
- serial_outp(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
+ serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
if (p->capabilities & UART_CAP_EFR) {
- serial_outp(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_outp(p, UART_EFR, 0);
- serial_outp(p, UART_LCR, 0);
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, 0);
+ serial_out(p, UART_LCR, 0);
}
}
}
unsigned char mode;
int result;
- mode = serial_inp(up, UART_RSA_MSR);
+ mode = serial_in(up, UART_RSA_MSR);
result = mode & UART_RSA_MSR_FIFO;
if (!result) {
- serial_outp(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
- mode = serial_inp(up, UART_RSA_MSR);
+ serial_out(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
result = mode & UART_RSA_MSR_FIFO;
}
spin_unlock_irq(&up->port.lock);
}
if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)
- serial_outp(up, UART_RSA_FRR, 0);
+ serial_out(up, UART_RSA_FRR, 0);
}
}
up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) {
spin_lock_irq(&up->port.lock);
- mode = serial_inp(up, UART_RSA_MSR);
+ mode = serial_in(up, UART_RSA_MSR);
result = !(mode & UART_RSA_MSR_FIFO);
if (!result) {
- serial_outp(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
- mode = serial_inp(up, UART_RSA_MSR);
+ serial_out(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
result = !(mode & UART_RSA_MSR_FIFO);
}
unsigned short old_dl;
int count;
- old_lcr = serial_inp(up, UART_LCR);
- serial_outp(up, UART_LCR, 0);
- old_fcr = serial_inp(up, UART_FCR);
- old_mcr = serial_inp(up, UART_MCR);
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO |
+ old_lcr = serial_in(up, UART_LCR);
+ serial_out(up, UART_LCR, 0);
+ old_fcr = serial_in(up, UART_FCR);
+ old_mcr = serial_in(up, UART_MCR);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_outp(up, UART_MCR, UART_MCR_LOOP);
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_out(up, UART_MCR, UART_MCR_LOOP);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
old_dl = serial_dl_read(up);
serial_dl_write(up, 0x0001);
- serial_outp(up, UART_LCR, 0x03);
+ serial_out(up, UART_LCR, 0x03);
for (count = 0; count < 256; count++)
- serial_outp(up, UART_TX, count);
+ serial_out(up, UART_TX, count);
mdelay(20);/* FIXME - schedule_timeout */
- for (count = 0; (serial_inp(up, UART_LSR) & UART_LSR_DR) &&
+ for (count = 0; (serial_in(up, UART_LSR) & UART_LSR_DR) &&
(count < 256); count++)
- serial_inp(up, UART_RX);
- serial_outp(up, UART_FCR, old_fcr);
- serial_outp(up, UART_MCR, old_mcr);
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_in(up, UART_RX);
+ serial_out(up, UART_FCR, old_fcr);
+ serial_out(up, UART_MCR, old_mcr);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
serial_dl_write(up, old_dl);
- serial_outp(up, UART_LCR, old_lcr);
+ serial_out(up, UART_LCR, old_lcr);
return count;
}
unsigned char old_dll, old_dlm, old_lcr;
unsigned int id;
- old_lcr = serial_inp(p, UART_LCR);
- serial_outp(p, UART_LCR, UART_LCR_CONF_MODE_A);
+ old_lcr = serial_in(p, UART_LCR);
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
- old_dll = serial_inp(p, UART_DLL);
- old_dlm = serial_inp(p, UART_DLM);
+ old_dll = serial_in(p, UART_DLL);
+ old_dlm = serial_in(p, UART_DLM);
- serial_outp(p, UART_DLL, 0);
- serial_outp(p, UART_DLM, 0);
+ serial_out(p, UART_DLL, 0);
+ serial_out(p, UART_DLM, 0);
- id = serial_inp(p, UART_DLL) | serial_inp(p, UART_DLM) << 8;
+ id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
- serial_outp(p, UART_DLL, old_dll);
- serial_outp(p, UART_DLM, old_dlm);
- serial_outp(p, UART_LCR, old_lcr);
+ serial_out(p, UART_DLL, old_dll);
+ serial_out(p, UART_DLM, old_dlm);
+ serial_out(p, UART_LCR, old_lcr);
return id;
}
up->port.type = PORT_8250;
scratch = serial_in(up, UART_SCR);
- serial_outp(up, UART_SCR, 0xa5);
+ serial_out(up, UART_SCR, 0xa5);
status1 = serial_in(up, UART_SCR);
- serial_outp(up, UART_SCR, 0x5a);
+ serial_out(up, UART_SCR, 0x5a);
status2 = serial_in(up, UART_SCR);
- serial_outp(up, UART_SCR, scratch);
+ serial_out(up, UART_SCR, scratch);
if (status1 == 0xa5 && status2 == 0x5a)
up->port.type = PORT_16450;
} else {
status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
- serial_outp(up, 0x04, status);
+ serial_out(up, 0x04, status);
}
return 1;
}
* Check for presence of the EFR when DLAB is set.
* Only ST16C650V1 UARTs pass this test.
*/
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
if (serial_in(up, UART_EFR) == 0) {
- serial_outp(up, UART_EFR, 0xA8);
+ serial_out(up, UART_EFR, 0xA8);
if (serial_in(up, UART_EFR) != 0) {
DEBUG_AUTOCONF("EFRv1 ");
up->port.type = PORT_16650;
} else {
DEBUG_AUTOCONF("Motorola 8xxx DUART ");
}
- serial_outp(up, UART_EFR, 0);
+ serial_out(up, UART_EFR, 0);
return;
}
* Maybe it requires 0xbf to be written to the LCR.
* (other ST16C650V2 UARTs, TI16C752A, etc)
*/
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
if (serial_in(up, UART_EFR) == 0 && !broken_efr(up)) {
DEBUG_AUTOCONF("EFRv2 ");
autoconfig_has_efr(up);
* switch back to bank 2, read it from EXCR1 again and check
* it's changed. If so, set baud_base in EXCR2 to 921600. -- dwmw2
*/
- serial_outp(up, UART_LCR, 0);
+ serial_out(up, UART_LCR, 0);
status1 = serial_in(up, UART_MCR);
- serial_outp(up, UART_LCR, 0xE0);
+ serial_out(up, UART_LCR, 0xE0);
status2 = serial_in(up, 0x02); /* EXCR1 */
if (!((status2 ^ status1) & UART_MCR_LOOP)) {
- serial_outp(up, UART_LCR, 0);
- serial_outp(up, UART_MCR, status1 ^ UART_MCR_LOOP);
- serial_outp(up, UART_LCR, 0xE0);
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1 ^ UART_MCR_LOOP);
+ serial_out(up, UART_LCR, 0xE0);
status2 = serial_in(up, 0x02); /* EXCR1 */
- serial_outp(up, UART_LCR, 0);
- serial_outp(up, UART_MCR, status1);
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1);
if ((status2 ^ status1) & UART_MCR_LOOP) {
unsigned short quot;
- serial_outp(up, UART_LCR, 0xE0);
+ serial_out(up, UART_LCR, 0xE0);
quot = serial_dl_read(up);
quot <<= 3;
if (ns16550a_goto_highspeed(up))
serial_dl_write(up, quot);
- serial_outp(up, UART_LCR, 0);
+ serial_out(up, UART_LCR, 0);
up->port.uartclk = 921600*16;
up->port.type = PORT_NS16550A;
* Try setting it with and without DLAB set. Cheap clones
* set bit 5 without DLAB set.
*/
- serial_outp(up, UART_LCR, 0);
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
status1 = serial_in(up, UART_IIR) >> 5;
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_A);
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
status2 = serial_in(up, UART_IIR) >> 5;
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_outp(up, UART_LCR, 0);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, 0);
DEBUG_AUTOCONF("iir1=%d iir2=%d ", status1, status2);
 * already a 1 and maybe locked there before we even start.
*/
iersave = serial_in(up, UART_IER);
- serial_outp(up, UART_IER, iersave & ~UART_IER_UUE);
+ serial_out(up, UART_IER, iersave & ~UART_IER_UUE);
if (!(serial_in(up, UART_IER) & UART_IER_UUE)) {
/*
* OK it's in a known zero state, try writing and reading
* without disturbing the current state of the other bits.
*/
- serial_outp(up, UART_IER, iersave | UART_IER_UUE);
+ serial_out(up, UART_IER, iersave | UART_IER_UUE);
if (serial_in(up, UART_IER) & UART_IER_UUE) {
/*
* It's an Xscale.
*/
DEBUG_AUTOCONF("Couldn't force IER_UUE to 0 ");
}
- serial_outp(up, UART_IER, iersave);
+ serial_out(up, UART_IER, iersave);
/*
* Exar uarts have EFR in a weird location
{
unsigned char status1, scratch, scratch2, scratch3;
unsigned char save_lcr, save_mcr;
+ struct uart_port *port = &up->port;
unsigned long flags;
- if (!up->port.iobase && !up->port.mapbase && !up->port.membase)
+ if (!port->iobase && !port->mapbase && !port->membase)
return;
DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04lx, 0x%p): ",
- serial_index(&up->port), up->port.iobase, up->port.membase);
+ serial_index(port), port->iobase, port->membase);
/*
* We really do need global IRQs disabled here - we're going to
* be frobbing the chips IRQ enable register to see if it exists.
*/
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
up->capabilities = 0;
up->bugs = 0;
- if (!(up->port.flags & UPF_BUGGY_UART)) {
+ if (!(port->flags & UPF_BUGGY_UART)) {
/*
* Do a simple existence test first; if we fail this,
* there's no point trying anything else.
* Note: this is safe as long as MCR bit 4 is clear
* and the device is in "PC" mode.
*/
- scratch = serial_inp(up, UART_IER);
- serial_outp(up, UART_IER, 0);
+ scratch = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
#ifdef __i386__
outb(0xff, 0x080);
#endif
* Mask out IER[7:4] bits for test as some UARTs (e.g. TL
* 16C754B) allow only to modify them if an EFR bit is set.
*/
- scratch2 = serial_inp(up, UART_IER) & 0x0f;
- serial_outp(up, UART_IER, 0x0F);
+ scratch2 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, 0x0F);
#ifdef __i386__
outb(0, 0x080);
#endif
- scratch3 = serial_inp(up, UART_IER) & 0x0f;
- serial_outp(up, UART_IER, scratch);
+ scratch3 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, scratch);
if (scratch2 != 0 || scratch3 != 0x0F) {
/*
* We failed; there's nothing here
* manufacturer would be stupid enough to design a board
* that conflicts with COM 1-4 --- we hope!
*/
- if (!(up->port.flags & UPF_SKIP_TEST)) {
- serial_outp(up, UART_MCR, UART_MCR_LOOP | 0x0A);
- status1 = serial_inp(up, UART_MSR) & 0xF0;
- serial_outp(up, UART_MCR, save_mcr);
+ if (!(port->flags & UPF_SKIP_TEST)) {
+ serial_out(up, UART_MCR, UART_MCR_LOOP | 0x0A);
+ status1 = serial_in(up, UART_MSR) & 0xF0;
+ serial_out(up, UART_MCR, save_mcr);
if (status1 != 0x90) {
DEBUG_AUTOCONF("LOOP test failed (%02x) ",
status1);
* We also initialise the EFR (if any) to zero for later. The
* EFR occupies the same register location as the FCR and IIR.
*/
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_outp(up, UART_EFR, 0);
- serial_outp(up, UART_LCR, 0);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_EFR, 0);
+ serial_out(up, UART_LCR, 0);
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
scratch = serial_in(up, UART_IIR) >> 6;
DEBUG_AUTOCONF("iir=%d ", scratch);
autoconfig_8250(up);
break;
case 1:
- up->port.type = PORT_UNKNOWN;
+ port->type = PORT_UNKNOWN;
break;
case 2:
- up->port.type = PORT_16550;
+ port->type = PORT_16550;
break;
case 3:
autoconfig_16550a(up);
/*
* Only probe for RSA ports if we got the region.
*/
- if (up->port.type == PORT_16550A && probeflags & PROBE_RSA) {
+ if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
int i;
for (i = 0 ; i < probe_rsa_count; ++i) {
- if (probe_rsa[i] == up->port.iobase &&
- __enable_rsa(up)) {
- up->port.type = PORT_RSA;
+ if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
+ port->type = PORT_RSA;
break;
}
}
}
#endif
- serial_outp(up, UART_LCR, save_lcr);
+ serial_out(up, UART_LCR, save_lcr);
- if (up->capabilities != uart_config[up->port.type].flags) {
+ if (up->capabilities != uart_config[port->type].flags) {
printk(KERN_WARNING
"ttyS%d: detected caps %08x should be %08x\n",
- serial_index(&up->port), up->capabilities,
- uart_config[up->port.type].flags);
+ serial_index(port), up->capabilities,
+ uart_config[port->type].flags);
}
- up->port.fifosize = uart_config[up->port.type].fifo_size;
- up->capabilities = uart_config[up->port.type].flags;
- up->tx_loadsz = uart_config[up->port.type].tx_loadsz;
+ port->fifosize = uart_config[up->port.type].fifo_size;
+ up->capabilities = uart_config[port->type].flags;
+ up->tx_loadsz = uart_config[port->type].tx_loadsz;
- if (up->port.type == PORT_UNKNOWN)
+ if (port->type == PORT_UNKNOWN)
goto out;
/*
* Reset the UART.
*/
#ifdef CONFIG_SERIAL_8250_RSA
- if (up->port.type == PORT_RSA)
- serial_outp(up, UART_RSA_FRR, 0);
+ if (port->type == PORT_RSA)
+ serial_out(up, UART_RSA_FRR, 0);
#endif
- serial_outp(up, UART_MCR, save_mcr);
+ serial_out(up, UART_MCR, save_mcr);
serial8250_clear_fifos(up);
serial_in(up, UART_RX);
if (up->capabilities & UART_CAP_UUE)
- serial_outp(up, UART_IER, UART_IER_UUE);
+ serial_out(up, UART_IER, UART_IER_UUE);
else
- serial_outp(up, UART_IER, 0);
+ serial_out(up, UART_IER, 0);
out:
- spin_unlock_irqrestore(&up->port.lock, flags);
- DEBUG_AUTOCONF("type=%s\n", uart_config[up->port.type].name);
+ spin_unlock_irqrestore(&port->lock, flags);
+ DEBUG_AUTOCONF("type=%s\n", uart_config[port->type].name);
}
static void autoconfig_irq(struct uart_8250_port *up)
{
+ struct uart_port *port = &up->port;
unsigned char save_mcr, save_ier;
unsigned char save_ICP = 0;
unsigned int ICP = 0;
unsigned long irqs;
int irq;
- if (up->port.flags & UPF_FOURPORT) {
- ICP = (up->port.iobase & 0xfe0) | 0x1f;
+ if (port->flags & UPF_FOURPORT) {
+ ICP = (port->iobase & 0xfe0) | 0x1f;
save_ICP = inb_p(ICP);
outb_p(0x80, ICP);
- (void) inb_p(ICP);
+ inb_p(ICP);
}
/* forget possible initially masked and pending IRQ */
probe_irq_off(probe_irq_on());
- save_mcr = serial_inp(up, UART_MCR);
- save_ier = serial_inp(up, UART_IER);
- serial_outp(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
+ save_mcr = serial_in(up, UART_MCR);
+ save_ier = serial_in(up, UART_IER);
+ serial_out(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
irqs = probe_irq_on();
- serial_outp(up, UART_MCR, 0);
+ serial_out(up, UART_MCR, 0);
udelay(10);
- if (up->port.flags & UPF_FOURPORT) {
- serial_outp(up, UART_MCR,
+ if (port->flags & UPF_FOURPORT) {
+ serial_out(up, UART_MCR,
UART_MCR_DTR | UART_MCR_RTS);
} else {
- serial_outp(up, UART_MCR,
+ serial_out(up, UART_MCR,
UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2);
}
- serial_outp(up, UART_IER, 0x0f); /* enable all intrs */
- (void)serial_inp(up, UART_LSR);
- (void)serial_inp(up, UART_RX);
- (void)serial_inp(up, UART_IIR);
- (void)serial_inp(up, UART_MSR);
- serial_outp(up, UART_TX, 0xFF);
+ serial_out(up, UART_IER, 0x0f); /* enable all intrs */
+ serial_in(up, UART_LSR);
+ serial_in(up, UART_RX);
+ serial_in(up, UART_IIR);
+ serial_in(up, UART_MSR);
+ serial_out(up, UART_TX, 0xFF);
udelay(20);
irq = probe_irq_off(irqs);
- serial_outp(up, UART_MCR, save_mcr);
- serial_outp(up, UART_IER, save_ier);
+ serial_out(up, UART_MCR, save_mcr);
+ serial_out(up, UART_IER, save_ier);
- if (up->port.flags & UPF_FOURPORT)
+ if (port->flags & UPF_FOURPORT)
outb_p(save_ICP, ICP);
- up->port.irq = (irq > 0) ? irq : 0;
+ port->irq = (irq > 0) ? irq : 0;
}
static inline void __stop_tx(struct uart_8250_port *p)
/*
* We really want to stop the transmitter from sending.
*/
- if (up->port.type == PORT_16C950) {
+ if (port->type == PORT_16C950) {
up->acr |= UART_ACR_TXDIS;
serial_icr_write(up, UART_ACR, up->acr);
}
if (!(up->ier & UART_IER_THRI)) {
up->ier |= UART_IER_THRI;
- serial_out(up, UART_IER, up->ier);
+ serial_port_out(port, UART_IER, up->ier);
if (up->bugs & UART_BUG_TXEN) {
unsigned char lsr;
lsr = serial_in(up, UART_LSR);
up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- if ((up->port.type == PORT_RM9000) ?
+ if ((port->type == PORT_RM9000) ?
(lsr & UART_LSR_THRE) :
(lsr & UART_LSR_TEMT))
serial8250_tx_chars(up);
/*
* Re-enable the transmitter if we disabled it.
*/
- if (up->port.type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
+ if (port->type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
up->acr &= ~UART_ACR_TXDIS;
serial_icr_write(up, UART_ACR, up->acr);
}
up->ier &= ~UART_IER_RLSI;
up->port.read_status_mask &= ~UART_LSR_DR;
- serial_out(up, UART_IER, up->ier);
+ serial_port_out(port, UART_IER, up->ier);
}
static void serial8250_enable_ms(struct uart_port *port)
return;
up->ier |= UART_IER_MSI;
- serial_out(up, UART_IER, up->ier);
+ serial_port_out(port, UART_IER, up->ier);
}
/*
unsigned char
serial8250_rx_chars(struct uart_8250_port *up, unsigned char lsr)
{
- struct tty_struct *tty = up->port.state->port.tty;
+ struct uart_port *port = &up->port;
+ struct tty_struct *tty = port->state->port.tty;
unsigned char ch;
int max_count = 256;
char flag;
do {
- if (likely(lsr & UART_LSR_DR))
- ch = serial_inp(up, UART_RX);
+ if (likely(lsr & UART_LSR_DR)) {
+ ch = serial_in(up, UART_RX);
+ if (arch_8250_sysrq_via_ctrl_o(ch, port))
+ goto ignore_char;
+ }
else
/*
* Intel 82571 has a Serial Over LAN device that will
ch = 0;
flag = TTY_NORMAL;
- up->port.icount.rx++;
+ port->icount.rx++;
lsr |= up->lsr_saved_flags;
up->lsr_saved_flags = 0;
*/
if (lsr & UART_LSR_BI) {
lsr &= ~(UART_LSR_FE | UART_LSR_PE);
- up->port.icount.brk++;
+ port->icount.brk++;
/*
* If tegra port then clear the rx fifo to
* accept another break/character.
*/
- if (up->port.type == PORT_TEGRA)
+ if (port->type == PORT_TEGRA)
clear_rx_fifo(up);
/*
* may get masked by ignore_status_mask
* or read_status_mask.
*/
- if (uart_handle_break(&up->port))
+ if (uart_handle_break(port))
goto ignore_char;
} else if (lsr & UART_LSR_PE)
- up->port.icount.parity++;
+ port->icount.parity++;
else if (lsr & UART_LSR_FE)
- up->port.icount.frame++;
+ port->icount.frame++;
if (lsr & UART_LSR_OE)
- up->port.icount.overrun++;
+ port->icount.overrun++;
/*
* Mask off conditions which should be ignored.
*/
- lsr &= up->port.read_status_mask;
+ lsr &= port->read_status_mask;
if (lsr & UART_LSR_BI) {
DEBUG_INTR("handling break....");
else if (lsr & UART_LSR_FE)
flag = TTY_FRAME;
}
- if (uart_handle_sysrq_char(&up->port, ch))
+ if (uart_handle_sysrq_char(port, ch))
goto ignore_char;
- uart_insert_char(&up->port, lsr, UART_LSR_OE, ch, flag);
+ uart_insert_char(port, lsr, UART_LSR_OE, ch, flag);
ignore_char:
- lsr = serial_inp(up, UART_LSR);
+ lsr = serial_in(up, UART_LSR);
} while ((lsr & (UART_LSR_DR | UART_LSR_BI)) && (max_count-- > 0));
- spin_unlock(&up->port.lock);
+ spin_unlock(&port->lock);
tty_flip_buffer_push(tty);
- spin_lock(&up->port.lock);
+ spin_lock(&port->lock);
return lsr;
}
EXPORT_SYMBOL_GPL(serial8250_rx_chars);
void serial8250_tx_chars(struct uart_8250_port *up)
{
- struct circ_buf *xmit = &up->port.state->xmit;
+ struct uart_port *port = &up->port;
+ struct circ_buf *xmit = &port->state->xmit;
int count;
- if (up->port.x_char) {
- serial_outp(up, UART_TX, up->port.x_char);
- up->port.icount.tx++;
- up->port.x_char = 0;
+ if (port->x_char) {
+ serial_out(up, UART_TX, port->x_char);
+ port->icount.tx++;
+ port->x_char = 0;
return;
}
- if (uart_tx_stopped(&up->port)) {
- serial8250_stop_tx(&up->port);
+ if (uart_tx_stopped(port)) {
+ serial8250_stop_tx(port);
return;
}
if (uart_circ_empty(xmit)) {
do {
serial_out(up, UART_TX, xmit->buf[xmit->tail]);
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
- up->port.icount.tx++;
+ port->icount.tx++;
if (uart_circ_empty(xmit))
break;
} while (--count > 0);
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(&up->port);
+ uart_write_wakeup(port);
DEBUG_INTR("THRE...");
unsigned int serial8250_modem_status(struct uart_8250_port *up)
{
+ struct uart_port *port = &up->port;
unsigned int status = serial_in(up, UART_MSR);
status |= up->msr_saved_flags;
up->msr_saved_flags = 0;
if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
- up->port.state != NULL) {
+ port->state != NULL) {
if (status & UART_MSR_TERI)
- up->port.icount.rng++;
+ port->icount.rng++;
if (status & UART_MSR_DDSR)
- up->port.icount.dsr++;
+ port->icount.dsr++;
if (status & UART_MSR_DDCD)
- uart_handle_dcd_change(&up->port, status & UART_MSR_DCD);
+ uart_handle_dcd_change(port, status & UART_MSR_DCD);
if (status & UART_MSR_DCTS)
- uart_handle_cts_change(&up->port, status & UART_MSR_CTS);
+ uart_handle_cts_change(port, status & UART_MSR_CTS);
- wake_up_interruptible(&up->port.state->port.delta_msr_wait);
+ wake_up_interruptible(&port->state->port.delta_msr_wait);
}
return status;
if (iir & UART_IIR_NO_INT)
return 0;
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
- status = serial_inp(up, UART_LSR);
+ status = serial_port_in(port, UART_LSR);
DEBUG_INTR("status = %x...", status);
if (status & UART_LSR_THRE)
serial8250_tx_chars(up);
- spin_unlock_irqrestore(&up->port.lock, flags);
+ spin_unlock_irqrestore(&port->lock, flags);
return 1;
}
EXPORT_SYMBOL_GPL(serial8250_handle_irq);
static int serial8250_default_handle_irq(struct uart_port *port)
{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned int iir = serial_in(up, UART_IIR);
+ unsigned int iir = serial_port_in(port, UART_IIR);
return serial8250_handle_irq(port, iir);
}
* Must disable interrupts or else we risk racing with the interrupt
* based handler.
*/
- if (is_real_interrupt(up->port.irq)) {
+ if (up->port.irq) {
ier = serial_in(up, UART_IER);
serial_out(up, UART_IER, 0);
}
if (!(iir & UART_IIR_NO_INT))
serial8250_tx_chars(up);
- if (is_real_interrupt(up->port.irq))
+ if (up->port.irq)
serial_out(up, UART_IER, ier);
spin_unlock_irqrestore(&up->port.lock, flags);
unsigned long flags;
unsigned int lsr;
- spin_lock_irqsave(&up->port.lock, flags);
- lsr = serial_in(up, UART_LSR);
+ spin_lock_irqsave(&port->lock, flags);
+ lsr = serial_port_in(port, UART_LSR);
up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- spin_unlock_irqrestore(&up->port.lock, flags);
+ spin_unlock_irqrestore(&port->lock, flags);
return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
}
mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
- serial_out(up, UART_MCR, mcr);
+ serial_port_out(port, UART_MCR, mcr);
}
static void serial8250_break_ctl(struct uart_port *port, int break_state)
container_of(port, struct uart_8250_port, port);
unsigned long flags;
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
if (break_state == -1)
up->lcr |= UART_LCR_SBC;
else
up->lcr &= ~UART_LCR_SBC;
- serial_out(up, UART_LCR, up->lcr);
- spin_unlock_irqrestore(&up->port.lock, flags);
+ serial_port_out(port, UART_LCR, up->lcr);
+ spin_unlock_irqrestore(&port->lock, flags);
}
/*
static int serial8250_get_poll_char(struct uart_port *port)
{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned char lsr = serial_inp(up, UART_LSR);
+ unsigned char lsr = serial_port_in(port, UART_LSR);
if (!(lsr & UART_LSR_DR))
return NO_POLL_CHAR;
- return serial_inp(up, UART_RX);
+ return serial_port_in(port, UART_RX);
}
/*
* First save the IER then disable the interrupts
*/
- ier = serial_in(up, UART_IER);
+ ier = serial_port_in(port, UART_IER);
if (up->capabilities & UART_CAP_UUE)
- serial_out(up, UART_IER, UART_IER_UUE);
+ serial_port_out(port, UART_IER, UART_IER_UUE);
else
- serial_out(up, UART_IER, 0);
+ serial_port_out(port, UART_IER, 0);
wait_for_xmitr(up, BOTH_EMPTY);
/*
* Send the character out.
* If a LF, also do CR...
*/
- serial_out(up, UART_TX, c);
+ serial_port_out(port, UART_TX, c);
if (c == 10) {
wait_for_xmitr(up, BOTH_EMPTY);
- serial_out(up, UART_TX, 13);
+ serial_port_out(port, UART_TX, 13);
}
/*
* and restore the IER
*/
wait_for_xmitr(up, BOTH_EMPTY);
- serial_out(up, UART_IER, ier);
+ serial_port_out(port, UART_IER, ier);
}
#endif /* CONFIG_CONSOLE_POLL */
unsigned char lsr, iir;
int retval;
- up->port.fifosize = uart_config[up->port.type].fifo_size;
+ port->fifosize = uart_config[up->port.type].fifo_size;
up->tx_loadsz = uart_config[up->port.type].tx_loadsz;
up->capabilities = uart_config[up->port.type].flags;
up->mcr = 0;
- if (up->port.iotype != up->cur_iotype)
+ if (port->iotype != up->cur_iotype)
set_io_from_upio(port);
- if (up->port.type == PORT_16C950) {
+ if (port->type == PORT_16C950) {
/* Wake up and initialize UART */
up->acr = 0;
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_outp(up, UART_EFR, UART_EFR_ECB);
- serial_outp(up, UART_IER, 0);
- serial_outp(up, UART_LCR, 0);
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out(port, UART_LCR, 0);
serial_icr_write(up, UART_CSR, 0); /* Reset the UART */
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_outp(up, UART_EFR, UART_EFR_ECB);
- serial_outp(up, UART_LCR, 0);
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_LCR, 0);
}
#ifdef CONFIG_SERIAL_8250_RSA
/*
* Clear the interrupt registers.
*/
- (void) serial_inp(up, UART_LSR);
- (void) serial_inp(up, UART_RX);
- (void) serial_inp(up, UART_IIR);
- (void) serial_inp(up, UART_MSR);
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
/*
* At this point, there's no way the LSR could still be 0xff;
* if it is, then bail out, because there's likely no UART
* here.
*/
- if (!(up->port.flags & UPF_BUGGY_UART) &&
- (serial_inp(up, UART_LSR) == 0xff)) {
+ if (!(port->flags & UPF_BUGGY_UART) &&
+ (serial_port_in(port, UART_LSR) == 0xff)) {
printk_ratelimited(KERN_INFO "ttyS%d: LSR safety check engaged!\n",
- serial_index(&up->port));
+ serial_index(port));
return -ENODEV;
}
/*
* For a XR16C850, we need to set the trigger levels
*/
- if (up->port.type == PORT_16850) {
+ if (port->type == PORT_16850) {
unsigned char fctr;
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- fctr = serial_inp(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
- serial_outp(up, UART_FCTR, fctr | UART_FCTR_TRGD | UART_FCTR_RX);
- serial_outp(up, UART_TRG, UART_TRG_96);
- serial_outp(up, UART_FCTR, fctr | UART_FCTR_TRGD | UART_FCTR_TX);
- serial_outp(up, UART_TRG, UART_TRG_96);
+ fctr = serial_in(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_RX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_TX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
- serial_outp(up, UART_LCR, 0);
+ serial_port_out(port, UART_LCR, 0);
}
- if (is_real_interrupt(up->port.irq)) {
+ if (port->irq) {
unsigned char iir1;
/*
* Test for UARTs that do not reassert THRE when the
* interrupt is enabled. Delays are necessary to
* allow register changes to become visible.
*/
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
if (up->port.irqflags & IRQF_SHARED)
- disable_irq_nosync(up->port.irq);
+ disable_irq_nosync(port->irq);
wait_for_xmitr(up, UART_LSR_THRE);
- serial_out_sync(up, UART_IER, UART_IER_THRI);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
udelay(1); /* allow THRE to set */
- iir1 = serial_in(up, UART_IIR);
- serial_out(up, UART_IER, 0);
- serial_out_sync(up, UART_IER, UART_IER_THRI);
+ iir1 = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
udelay(1); /* allow a working UART time to re-assert THRE */
- iir = serial_in(up, UART_IIR);
- serial_out(up, UART_IER, 0);
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
- if (up->port.irqflags & IRQF_SHARED)
- enable_irq(up->port.irq);
- spin_unlock_irqrestore(&up->port.lock, flags);
+ if (port->irqflags & IRQF_SHARED)
+ enable_irq(port->irq);
+ spin_unlock_irqrestore(&port->lock, flags);
/*
* If the interrupt is not reasserted, setup a timer to
* hardware interrupt, we use a timer-based system. The original
* driver used to do this with IRQ0.
*/
- if (!is_real_interrupt(up->port.irq)) {
+ if (!port->irq) {
up->timer.data = (unsigned long)up;
mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
} else {
/*
* Now, initialize the UART
*/
- serial_outp(up, UART_LCR, UART_LCR_WLEN8);
+ serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
if (up->port.flags & UPF_FOURPORT) {
- if (!is_real_interrupt(up->port.irq))
+ if (!up->port.irq)
up->port.mctrl |= TIOCM_OUT1;
} else
/*
* Most PC uarts need OUT2 raised to enable interrupts.
*/
- if (is_real_interrupt(up->port.irq))
+ if (port->irq)
up->port.mctrl |= TIOCM_OUT2;
- serial8250_set_mctrl(&up->port, up->port.mctrl);
+ serial8250_set_mctrl(port, port->mctrl);
/* Serial over LAN (SoL) hack:
Intel 8257x Gigabit Ethernet chips have a
* Do a quick test to see if we receive an
* interrupt when we enable the TX irq.
*/
- serial_outp(up, UART_IER, UART_IER_THRI);
- lsr = serial_in(up, UART_LSR);
- iir = serial_in(up, UART_IIR);
- serial_outp(up, UART_IER, 0);
+ serial_port_out(port, UART_IER, UART_IER_THRI);
+ lsr = serial_port_in(port, UART_LSR);
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
if (lsr & UART_LSR_TEMT && iir & UART_IIR_NO_INT) {
if (!(up->bugs & UART_BUG_TXEN)) {
}
dont_test_tx_en:
- spin_unlock_irqrestore(&up->port.lock, flags);
+ spin_unlock_irqrestore(&port->lock, flags);
/*
* Clear the interrupt registers again for luck, and clear the
* saved flags to avoid getting false values from polling
* routines or the previous session.
*/
- serial_inp(up, UART_LSR);
- serial_inp(up, UART_RX);
- serial_inp(up, UART_IIR);
- serial_inp(up, UART_MSR);
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
up->lsr_saved_flags = 0;
up->msr_saved_flags = 0;
* anyway, so we don't enable them here.
*/
up->ier = UART_IER_RLSI | UART_IER_RDI;
- serial_outp(up, UART_IER, up->ier);
+ serial_port_out(port, UART_IER, up->ier);
- if (up->port.flags & UPF_FOURPORT) {
+ if (port->flags & UPF_FOURPORT) {
unsigned int icp;
/*
* Enable interrupts on the AST Fourport board
*/
- icp = (up->port.iobase & 0xfe0) | 0x01f;
+ icp = (port->iobase & 0xfe0) | 0x01f;
outb_p(0x80, icp);
- (void) inb_p(icp);
+ inb_p(icp);
}
return 0;
* Disable interrupts from this port
*/
up->ier = 0;
- serial_outp(up, UART_IER, 0);
+ serial_port_out(port, UART_IER, 0);
- spin_lock_irqsave(&up->port.lock, flags);
- if (up->port.flags & UPF_FOURPORT) {
+ spin_lock_irqsave(&port->lock, flags);
+ if (port->flags & UPF_FOURPORT) {
/* reset interrupts on the AST Fourport board */
- inb((up->port.iobase & 0xfe0) | 0x1f);
- up->port.mctrl |= TIOCM_OUT1;
+ inb((port->iobase & 0xfe0) | 0x1f);
+ port->mctrl |= TIOCM_OUT1;
} else
- up->port.mctrl &= ~TIOCM_OUT2;
+ port->mctrl &= ~TIOCM_OUT2;
- serial8250_set_mctrl(&up->port, up->port.mctrl);
- spin_unlock_irqrestore(&up->port.lock, flags);
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
/*
* Disable break condition and FIFOs
*/
- serial_out(up, UART_LCR, serial_inp(up, UART_LCR) & ~UART_LCR_SBC);
+ serial_port_out(port, UART_LCR,
+ serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
serial8250_clear_fifos(up);
#ifdef CONFIG_SERIAL_8250_RSA
* Read data port to reset things, and then unlink from
* the IRQ chain.
*/
- (void) serial_in(up, UART_RX);
+ serial_port_in(port, UART_RX);
del_timer_sync(&up->timer);
up->timer.function = serial8250_timeout;
- if (is_real_interrupt(up->port.irq))
+ if (port->irq)
serial_unlink_irq_chain(up);
}
if (up->bugs & UART_BUG_QUOT && (quot & 0xff) == 0)
quot++;
- if (up->capabilities & UART_CAP_FIFO && up->port.fifosize > 1) {
+ if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
if (baud < 2400)
fcr = UART_FCR_ENABLE_FIFO | UART_FCR_TRIGGER_1;
else
- fcr = uart_config[up->port.type].fcr;
+ fcr = uart_config[port->type].fcr;
}
/*
* have sufficient FIFO entries for the latency of the remote
* UART to respond. IOW, at least 32 bytes of FIFO.
*/
- if (up->capabilities & UART_CAP_AFE && up->port.fifosize >= 32) {
+ if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
up->mcr &= ~UART_MCR_AFE;
if (termios->c_cflag & CRTSCTS)
up->mcr |= UART_MCR_AFE;
* Ok, we're now changing the port state. Do it with
* interrupts disabled.
*/
- spin_lock_irqsave(&up->port.lock, flags);
+ spin_lock_irqsave(&port->lock, flags);
/*
* Update the per-port timeout.
*/
uart_update_timeout(port, termios->c_cflag, baud);
- up->port.read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
+ port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
if (termios->c_iflag & INPCK)
- up->port.read_status_mask |= UART_LSR_FE | UART_LSR_PE;
+ port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
if (termios->c_iflag & (BRKINT | PARMRK))
- up->port.read_status_mask |= UART_LSR_BI;
+ port->read_status_mask |= UART_LSR_BI;
/*
* Characters to ignore
*/
- up->port.ignore_status_mask = 0;
+ port->ignore_status_mask = 0;
if (termios->c_iflag & IGNPAR)
- up->port.ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
+ port->ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
if (termios->c_iflag & IGNBRK) {
- up->port.ignore_status_mask |= UART_LSR_BI;
+ port->ignore_status_mask |= UART_LSR_BI;
/*
* If we're ignoring parity and break indicators,
* ignore overruns too (for real raw support).
*/
if (termios->c_iflag & IGNPAR)
- up->port.ignore_status_mask |= UART_LSR_OE;
+ port->ignore_status_mask |= UART_LSR_OE;
}
/*
* ignore all characters if CREAD is not set
*/
if ((termios->c_cflag & CREAD) == 0)
- up->port.ignore_status_mask |= UART_LSR_DR;
+ port->ignore_status_mask |= UART_LSR_DR;
/*
* CTS flow control flag and modem status interrupts
if (up->capabilities & UART_CAP_RTOIE)
up->ier |= UART_IER_RTOIE;
- serial_out(up, UART_IER, up->ier);
+ serial_port_out(port, UART_IER, up->ier);
if (up->capabilities & UART_CAP_EFR) {
unsigned char efr = 0;
if (termios->c_cflag & CRTSCTS)
efr |= UART_EFR_CTS;
- serial_outp(up, UART_LCR, UART_LCR_CONF_MODE_B);
- if (up->port.flags & UPF_EXAR_EFR)
- serial_outp(up, UART_XR_EFR, efr);
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ if (port->flags & UPF_EXAR_EFR)
+ serial_port_out(port, UART_XR_EFR, efr);
else
- serial_outp(up, UART_EFR, efr);
+ serial_port_out(port, UART_EFR, efr);
}
#ifdef CONFIG_ARCH_OMAP
if (cpu_is_omap1510() && is_omap_port(up)) {
if (baud == 115200) {
quot = 1;
- serial_out(up, UART_OMAP_OSC_12M_SEL, 1);
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 1);
} else
- serial_out(up, UART_OMAP_OSC_12M_SEL, 0);
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 0);
}
#endif
- if (up->capabilities & UART_NATSEMI) {
- /* Switch to bank 2 not bank 1, to avoid resetting EXCR2 */
- serial_outp(up, UART_LCR, 0xe0);
- } else {
- serial_outp(up, UART_LCR, cval | UART_LCR_DLAB);/* set DLAB */
- }
+ /*
+ * For NatSemi, switch to bank 2 not bank 1, to avoid resetting EXCR2,
+ * otherwise just set DLAB
+ */
+ if (up->capabilities & UART_NATSEMI)
+ serial_port_out(port, UART_LCR, 0xe0);
+ else
+ serial_port_out(port, UART_LCR, cval | UART_LCR_DLAB);
serial_dl_write(up, quot);
* LCR DLAB must be set to enable 64-byte FIFO mode. If the FCR
* is written without DLAB set, this mode will be disabled.
*/
- if (up->port.type == PORT_16750)
- serial_outp(up, UART_FCR, fcr);
+ if (port->type == PORT_16750)
+ serial_port_out(port, UART_FCR, fcr);
- serial_outp(up, UART_LCR, cval); /* reset DLAB */
+ serial_port_out(port, UART_LCR, cval); /* reset DLAB */
up->lcr = cval; /* Save LCR */
- if (up->port.type != PORT_16750) {
- if (fcr & UART_FCR_ENABLE_FIFO) {
- /* emulated UARTs (Lucent Venus 167x) need two steps */
- serial_outp(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- }
- serial_outp(up, UART_FCR, fcr); /* set fcr */
- }
- serial8250_set_mctrl(&up->port, up->port.mctrl);
- spin_unlock_irqrestore(&up->port.lock, flags);
+ if (port->type != PORT_16750) {
+ /* emulated UARTs (Lucent Venus 167x) need two steps */
+ if (fcr & UART_FCR_ENABLE_FIFO)
+ serial_port_out(port, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_port_out(port, UART_FCR, fcr); /* set fcr */
+ }
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
/* Don't rewrite B0 */
if (tty_termios_baud_rate(termios))
tty_termios_encode_baud_rate(termios, baud, baud);
static int serial8250_request_std_resource(struct uart_8250_port *up)
{
unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
int ret = 0;
- switch (up->port.iotype) {
+ switch (port->iotype) {
case UPIO_AU:
case UPIO_TSI:
case UPIO_MEM32:
case UPIO_MEM:
- if (!up->port.mapbase)
+ if (!port->mapbase)
break;
- if (!request_mem_region(up->port.mapbase, size, "serial")) {
+ if (!request_mem_region(port->mapbase, size, "serial")) {
ret = -EBUSY;
break;
}
- if (up->port.flags & UPF_IOREMAP) {
- up->port.membase = ioremap_nocache(up->port.mapbase,
- size);
- if (!up->port.membase) {
- release_mem_region(up->port.mapbase, size);
+ if (port->flags & UPF_IOREMAP) {
+ port->membase = ioremap_nocache(port->mapbase, size);
+ if (!port->membase) {
+ release_mem_region(port->mapbase, size);
ret = -ENOMEM;
}
}
case UPIO_HUB6:
case UPIO_PORT:
- if (!request_region(up->port.iobase, size, "serial"))
+ if (!request_region(port->iobase, size, "serial"))
ret = -EBUSY;
break;
}
static void serial8250_release_std_resource(struct uart_8250_port *up)
{
unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
- switch (up->port.iotype) {
+ switch (port->iotype) {
case UPIO_AU:
case UPIO_TSI:
case UPIO_MEM32:
case UPIO_MEM:
- if (!up->port.mapbase)
+ if (!port->mapbase)
break;
- if (up->port.flags & UPF_IOREMAP) {
- iounmap(up->port.membase);
- up->port.membase = NULL;
+ if (port->flags & UPF_IOREMAP) {
+ iounmap(port->membase);
+ port->membase = NULL;
}
- release_mem_region(up->port.mapbase, size);
+ release_mem_region(port->mapbase, size);
break;
case UPIO_HUB6:
case UPIO_PORT:
- release_region(up->port.iobase, size);
+ release_region(port->iobase, size);
break;
}
}
{
unsigned long start = UART_RSA_BASE << up->port.regshift;
unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
int ret = -EINVAL;
- switch (up->port.iotype) {
+ switch (port->iotype) {
case UPIO_HUB6:
case UPIO_PORT:
- start += up->port.iobase;
+ start += port->iobase;
if (request_region(start, size, "serial-rsa"))
ret = 0;
else
{
unsigned long offset = UART_RSA_BASE << up->port.regshift;
unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
- switch (up->port.iotype) {
+ switch (port->iotype) {
case UPIO_HUB6:
case UPIO_PORT:
- release_region(up->port.iobase + offset, size);
+ release_region(port->iobase + offset, size);
break;
}
}
container_of(port, struct uart_8250_port, port);
serial8250_release_std_resource(up);
- if (up->port.type == PORT_RSA)
+ if (port->type == PORT_RSA)
serial8250_release_rsa_resource(up);
}
int ret = 0;
ret = serial8250_request_std_resource(up);
- if (ret == 0 && up->port.type == PORT_RSA) {
+ if (ret == 0 && port->type == PORT_RSA) {
ret = serial8250_request_rsa_resource(up);
if (ret < 0)
serial8250_release_std_resource(up);
if (ret < 0)
probeflags &= ~PROBE_RSA;
- if (up->port.iotype != up->cur_iotype)
+ if (port->iotype != up->cur_iotype)
set_io_from_upio(port);
if (flags & UART_CONFIG_TYPE)
autoconfig(up, probeflags);
/* if access method is AU, it is a 16550 with a quirk */
- if (up->port.type == PORT_16550A && up->port.iotype == UPIO_AU)
+ if (port->type == PORT_16550A && port->iotype == UPIO_AU)
up->bugs |= UART_BUG_NOMSR;
- if (up->port.type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
+ if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
autoconfig_irq(up);
- if (up->port.type != PORT_RSA && probeflags & PROBE_RSA)
+ if (port->type != PORT_RSA && probeflags & PROBE_RSA)
serial8250_release_rsa_resource(up);
- if (up->port.type == PORT_UNKNOWN)
+ if (port->type == PORT_UNKNOWN)
serial8250_release_std_resource(up);
}
for (i = 0; i < nr_uarts; i++) {
struct uart_8250_port *up = &serial8250_ports[i];
+ struct uart_port *port = &up->port;
- up->port.line = i;
- spin_lock_init(&up->port.lock);
+ port->line = i;
+ spin_lock_init(&port->lock);
init_timer(&up->timer);
up->timer.function = serial8250_timeout;
up->mcr_mask = ~ALPHA_KLUDGE_MCR;
up->mcr_force = ALPHA_KLUDGE_MCR;
- up->port.ops = &serial8250_pops;
+ port->ops = &serial8250_pops;
}
if (share_irqs)
for (i = 0, up = serial8250_ports;
i < ARRAY_SIZE(old_serial_port) && i < nr_uarts;
i++, up++) {
- up->port.iobase = old_serial_port[i].port;
- up->port.irq = irq_canonicalize(old_serial_port[i].irq);
- up->port.irqflags = old_serial_port[i].irqflags;
- up->port.uartclk = old_serial_port[i].baud_base * 16;
- up->port.flags = old_serial_port[i].flags;
- up->port.hub6 = old_serial_port[i].hub6;
- up->port.membase = old_serial_port[i].iomem_base;
- up->port.iotype = old_serial_port[i].io_type;
- up->port.regshift = old_serial_port[i].iomem_reg_shift;
- set_io_from_upio(&up->port);
- up->port.irqflags |= irqflag;
+ struct uart_port *port = &up->port;
+
+ port->iobase = old_serial_port[i].port;
+ port->irq = irq_canonicalize(old_serial_port[i].irq);
+ port->irqflags = old_serial_port[i].irqflags;
+ port->uartclk = old_serial_port[i].baud_base * 16;
+ port->flags = old_serial_port[i].flags;
+ port->hub6 = old_serial_port[i].hub6;
+ port->membase = old_serial_port[i].iomem_base;
+ port->iotype = old_serial_port[i].io_type;
+ port->regshift = old_serial_port[i].iomem_reg_shift;
+ set_io_from_upio(port);
+ port->irqflags |= irqflag;
if (serial8250_isa_config != NULL)
serial8250_isa_config(i, &up->port, &up->capabilities);
container_of(port, struct uart_8250_port, port);
wait_for_xmitr(up, UART_LSR_THRE);
- serial_out(up, UART_TX, ch);
+ serial_port_out(port, UART_TX, ch);
}
/*
serial8250_console_write(struct console *co, const char *s, unsigned int count)
{
struct uart_8250_port *up = &serial8250_ports[co->index];
+ struct uart_port *port = &up->port;
unsigned long flags;
unsigned int ier;
int locked = 1;
touch_nmi_watchdog();
local_irq_save(flags);
- if (up->port.sysrq) {
+ if (port->sysrq) {
/* serial8250_handle_irq() already took the lock */
locked = 0;
} else if (oops_in_progress) {
- locked = spin_trylock(&up->port.lock);
+ locked = spin_trylock(&port->lock);
} else
- spin_lock(&up->port.lock);
+ spin_lock(&port->lock);
/*
* First save the IER then disable the interrupts
*/
- ier = serial_in(up, UART_IER);
+ ier = serial_port_in(port, UART_IER);
if (up->capabilities & UART_CAP_UUE)
- serial_out(up, UART_IER, UART_IER_UUE);
+ serial_port_out(port, UART_IER, UART_IER_UUE);
else
- serial_out(up, UART_IER, 0);
+ serial_port_out(port, UART_IER, 0);
- uart_console_write(&up->port, s, count, serial8250_console_putchar);
+ uart_console_write(port, s, count, serial8250_console_putchar);
/*
* Finally, wait for transmitter to become empty
* and restore the IER
*/
wait_for_xmitr(up, BOTH_EMPTY);
- serial_out(up, UART_IER, ier);
+ serial_port_out(port, UART_IER, ier);
/*
* The receive handling will happen properly because the
serial8250_modem_status(up);
if (locked)
- spin_unlock(&up->port.lock);
+ spin_unlock(&port->lock);
local_irq_restore(flags);
}
void serial8250_resume_port(int line)
{
struct uart_8250_port *up = &serial8250_ports[line];
+ struct uart_port *port = &up->port;
if (up->capabilities & UART_NATSEMI) {
/* Ensure it's still in high speed mode */
- serial_outp(up, UART_LCR, 0xE0);
+ serial_port_out(port, UART_LCR, 0xE0);
ns16550a_goto_highspeed(up);
- serial_outp(up, UART_LCR, 0);
- up->port.uartclk = 921600*16;
+ serial_port_out(port, UART_LCR, 0);
+ port->uartclk = 921600*16;
}
- uart_resume_port(&serial8250_reg, &up->port);
+ uart_resume_port(&serial8250_reg, port);
}
/*
#include <linux/reboot.h>
#include <linux/notifier.h>
#include <linux/jiffies.h>
+ #include <linux/uaccess.h>
#include <asm/irq_regs.h>
+#include <linux/bootsplash.h>
+
extern void ctrl_alt_del(void);
/*
/*
* Some laptops take the 789uiojklm,. keys as number pad when NumLock is on.
* This seems a good reason to start with NumLock off. On HIL keyboards
- * of PARISC machines however there is no NumLock key and everyone expects the keypad
- * to be used for numbers.
+ * of PARISC machines however there is no NumLock key and everyone expects the
+ * keypad to be used for numbers.
*/
#if defined(CONFIG_PARISC) && (defined(CONFIG_KEYBOARD_HIL) || defined(CONFIG_KEYBOARD_HIL_OLD))
#define KBD_DEFLOCK 0
- void compute_shiftstate(void);
-
/*
* Handler Tables.
*/
* Variables exported for vt_ioctl.c
*/
- /* maximum values each key_handler can handle */
- const int max_vals[] = {
- 255, ARRAY_SIZE(func_table) - 1, ARRAY_SIZE(fn_handler) - 1, NR_PAD - 1,
- NR_DEAD - 1, 255, 3, NR_SHIFT - 1, 255, NR_ASCII - 1, NR_LOCK - 1,
- 255, NR_LOCK - 1, 255, NR_BRL - 1
- };
-
- const int NR_TYPES = ARRAY_SIZE(max_vals);
-
- struct kbd_struct kbd_table[MAX_NR_CONSOLES];
- EXPORT_SYMBOL_GPL(kbd_table);
- static struct kbd_struct *kbd = kbd_table;
-
struct vt_spawn_console vt_spawn_con = {
.lock = __SPIN_LOCK_UNLOCKED(vt_spawn_con.lock),
.pid = NULL,
.sig = 0,
};
- /*
- * Variables exported for vt.c
- */
-
- int shift_state = 0;
/*
* Internal Data.
*/
+ static struct kbd_struct kbd_table[MAX_NR_CONSOLES];
+ static struct kbd_struct *kbd = kbd_table;
+
+ /* maximum values each key_handler can handle */
+ static const int max_vals[] = {
+ 255, ARRAY_SIZE(func_table) - 1, ARRAY_SIZE(fn_handler) - 1, NR_PAD - 1,
+ NR_DEAD - 1, 255, 3, NR_SHIFT - 1, 255, NR_ASCII - 1, NR_LOCK - 1,
+ 255, NR_LOCK - 1, 255, NR_BRL - 1
+ };
+
+ static const int NR_TYPES = ARRAY_SIZE(max_vals);
+
static struct input_handler kbd_handler;
static DEFINE_SPINLOCK(kbd_event_lock);
static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)]; /* keyboard key bitmap */
static unsigned int diacr;
static char rep; /* flag telling character repeat */
+ static int shift_state = 0;
+
static unsigned char ledstate = 0xff; /* undefined */
static unsigned char ledioctl;
return d->error == 0; /* stop as soon as we successfully get one */
}
- int getkeycode(unsigned int scancode)
+ static int getkeycode(unsigned int scancode)
{
struct getset_keycode_data d = {
.ke = {
return d->error == 0; /* stop as soon as we successfully set one */
}
- int setkeycode(unsigned int scancode, unsigned int keycode)
+ static int setkeycode(unsigned int scancode, unsigned int keycode)
{
struct getset_keycode_data d = {
.ke = {
/*
* Called after returning from RAW mode or when changing consoles - recompute
* shift_down[] and shift_state from key_down[]. Maybe called when keymap is
- * undefined, so that shiftkey release is seen
+ * undefined, so that shiftkey release is seen. The caller must hold the
+ * kbd_event_lock.
*/
- void compute_shiftstate(void)
+
+ static void do_compute_shiftstate(void)
{
unsigned int i, j, k, sym, val;
}
}
+ /* We still have to export this method to vt.c */
+ void compute_shiftstate(void)
+ {
+ unsigned long flags;
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ do_compute_shiftstate();
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
/*
* We have a combining character DIACR here, followed by the character CH.
* If the combination occurs in the table, return the corresponding value.
static void fn_null(struct vc_data *vc)
{
- compute_shiftstate();
+ do_compute_shiftstate();
}
/*
void setledstate(struct kbd_struct *kbd, unsigned int led)
{
+ unsigned long flags;
+ spin_lock_irqsave(&kbd_event_lock, flags);
if (!(led & ~7)) {
ledioctl = led;
kbd->ledmode = LED_SHOW_IOCTL;
kbd->ledmode = LED_SHOW_FLAGS;
set_leds();
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
}
static inline unsigned char getleds(void)
return 0;
}
+ /**
+ * vt_get_leds - helper for braille console
+ * @console: console to read
+ * @flag: flag we want to check
+ *
+ * Check the status of a keyboard led flag and report it back
+ */
+ int vt_get_leds(int console, int flag)
+ {
+ unsigned long flags;
+ struct kbd_struct *kbd = kbd_table + console;
+ int ret;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ ret = vc_kbd_led(kbd, flag);
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(vt_get_leds);
+
+ /**
+ * vt_set_led_state - set LED state of a console
+ * @console: console to set
+ * @leds: LED bits
+ *
+ * Set the LEDs on a console. This is a wrapper for the VT layer
+ * so that we can keep kbd knowledge internal
+ */
+ void vt_set_led_state(int console, int leds)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ setledstate(kbd, leds);
+ }
+
+ /**
+ * vt_kbd_con_start - Keyboard side of console start
+ * @console: console
+ *
+ * Handle console start. This is a wrapper for the VT layer
+ * so that we can keep kbd knowledge internal
+ */
+ void vt_kbd_con_start(int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ clr_vc_kbd_led(kbd, VC_SCROLLOCK);
+ set_leds();
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
+ /**
+ * vt_kbd_con_stop - Keyboard side of console stop
+ * @console: console
+ *
+ * Handle console stop. This is a wrapper for the VT layer
+ * so that we can keep kbd knowledge internal
+ */
+ void vt_kbd_con_stop(int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ set_vc_kbd_led(kbd, VC_SCROLLOCK);
+ set_leds();
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
/*
* This is the tasklet that updates LED state on all keyboards
* attached to the box. The reason we use tasklet is that we
pr_warning("can't emulate rawmode for keycode %d\n",
keycode);
+ /* This code has to be redone for some non-x86 platforms */
+ if (down == 1 && (keycode == 0x3c || keycode == 0x01)) {
+ /* F2 and ESC on PC keyboard */
+ if (splash_verbose())
+ return;
+ }
+
#ifdef CONFIG_SPARC
if (keycode == KEY_A && sparc_l1_a_state) {
sparc_l1_a_state = false;
if (rc == NOTIFY_STOP || !key_map) {
atomic_notifier_call_chain(&keyboard_notifier_list,
KBD_UNBOUND_KEYCODE, &param);
- compute_shiftstate();
+ do_compute_shiftstate();
kbd->slockstate = 0;
return;
}
static const struct input_device_id kbd_ids[] = {
{
- .flags = INPUT_DEVICE_ID_MATCH_EVBIT,
- .evbit = { BIT_MASK(EV_KEY) },
- },
+ .flags = INPUT_DEVICE_ID_MATCH_EVBIT,
+ .evbit = { BIT_MASK(EV_KEY) },
+ },
{
- .flags = INPUT_DEVICE_ID_MATCH_EVBIT,
- .evbit = { BIT_MASK(EV_SND) },
- },
+ .flags = INPUT_DEVICE_ID_MATCH_EVBIT,
+ .evbit = { BIT_MASK(EV_SND) },
+ },
{ }, /* Terminating entry */
};
int i;
int error;
- for (i = 0; i < MAX_NR_CONSOLES; i++) {
+ for (i = 0; i < MAX_NR_CONSOLES; i++) {
kbd_table[i].ledflagstate = KBD_DEFLEDS;
kbd_table[i].default_ledflagstate = KBD_DEFLEDS;
kbd_table[i].ledmode = LED_SHOW_FLAGS;
return 0;
}
+
+ /* Ioctl support code */
+
+ /**
+ * vt_do_diacrit - diacritical table updates
+ * @cmd: ioctl request
+ * @up: pointer to user data for ioctl
+ * @perm: permissions check computed by caller
+ *
+ * Update the diacritical tables atomically and safely. Lock them
+ * against simultaneous keypresses
+ */
+ int vt_do_diacrit(unsigned int cmd, void __user *up, int perm)
+ {
+ struct kbdiacrs __user *a = up;
+ unsigned long flags;
+ int asize;
+ int ret = 0;
+
+ switch (cmd) {
+ case KDGKBDIACR:
+ {
+ struct kbdiacr *diacr;
+ int i;
+
+ diacr = kmalloc(MAX_DIACR * sizeof(struct kbdiacr),
+ GFP_KERNEL);
+ if (diacr == NULL)
+ return -ENOMEM;
+
+ /* Lock the diacriticals table, make a copy and then
+ copy it after we unlock */
+ spin_lock_irqsave(&kbd_event_lock, flags);
+
+ asize = accent_table_size;
+ for (i = 0; i < asize; i++) {
+ diacr[i].diacr = conv_uni_to_8bit(
+ accent_table[i].diacr);
+ diacr[i].base = conv_uni_to_8bit(
+ accent_table[i].base);
+ diacr[i].result = conv_uni_to_8bit(
+ accent_table[i].result);
+ }
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+
+ if (put_user(asize, &a->kb_cnt))
+ ret = -EFAULT;
+ else if (copy_to_user(a->kbdiacr, diacr,
+ asize * sizeof(struct kbdiacr)))
+ ret = -EFAULT;
+ kfree(diacr);
+ return ret;
+ }
+ case KDGKBDIACRUC:
+ {
+ struct kbdiacrsuc __user *a = up;
+ void *buf;
+
+ buf = kmalloc(MAX_DIACR * sizeof(struct kbdiacruc),
+ GFP_KERNEL);
+ if (buf == NULL)
+ return -ENOMEM;
+
+ /* Lock the diacriticals table, make a copy and then
+ copy it after we unlock */
+ spin_lock_irqsave(&kbd_event_lock, flags);
+
+ asize = accent_table_size;
+ memcpy(buf, accent_table, asize * sizeof(struct kbdiacruc));
+
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+
+ if (put_user(asize, &a->kb_cnt))
+ ret = -EFAULT;
+ else if (copy_to_user(a->kbdiacruc, buf,
+ asize*sizeof(struct kbdiacruc)))
+ ret = -EFAULT;
+ kfree(buf);
+ return ret;
+ }
+
+ case KDSKBDIACR:
+ {
+ struct kbdiacrs __user *a = up;
+ struct kbdiacr *diacr = NULL;
+ unsigned int ct;
+ int i;
+
+ if (!perm)
+ return -EPERM;
+ if (get_user(ct, &a->kb_cnt))
+ return -EFAULT;
+ if (ct >= MAX_DIACR)
+ return -EINVAL;
+
+ if (ct) {
+ diacr = kmalloc(sizeof(struct kbdiacr) * ct,
+ GFP_KERNEL);
+ if (diacr == NULL)
+ return -ENOMEM;
+
+ if (copy_from_user(diacr, a->kbdiacr,
+ sizeof(struct kbdiacr) * ct)) {
+ kfree(diacr);
+ return -EFAULT;
+ }
+ }
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ accent_table_size = ct;
+ for (i = 0; i < ct; i++) {
+ accent_table[i].diacr =
+ conv_8bit_to_uni(diacr[i].diacr);
+ accent_table[i].base =
+ conv_8bit_to_uni(diacr[i].base);
+ accent_table[i].result =
+ conv_8bit_to_uni(diacr[i].result);
+ }
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ kfree(diacr);
+ return 0;
+ }
+
+ case KDSKBDIACRUC:
+ {
+ struct kbdiacrsuc __user *a = up;
+ unsigned int ct;
+ void *buf = NULL;
+
+ if (!perm)
+ return -EPERM;
+
+ if (get_user(ct, &a->kb_cnt))
+ return -EFAULT;
+
+ if (ct >= MAX_DIACR)
+ return -EINVAL;
+
+ if (ct) {
+ buf = kmalloc(ct * sizeof(struct kbdiacruc),
+ GFP_KERNEL);
+ if (buf == NULL)
+ return -ENOMEM;
+
+ if (copy_from_user(buf, a->kbdiacruc,
+ ct * sizeof(struct kbdiacruc))) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ }
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ if (ct)
+ memcpy(accent_table, buf,
+ ct * sizeof(struct kbdiacruc));
+ accent_table_size = ct;
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ kfree(buf);
+ return 0;
+ }
+ }
+ return ret;
+ }
+
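Both KDGKBDIACR and KDGKBDIACRUC follow the same snapshot discipline: copy the accent table into a private buffer while holding the spinlock, then perform the possibly-sleeping copy_to_user() only after unlocking. A hedged user-space sketch of that discipline, with a pthread mutex in place of the spinlock (names invented for illustration):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define TABLE_SIZE 8

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int table[TABLE_SIZE];
static int table_len;

/*
 * Snapshot the table under the lock, then do the slow transfer
 * from the private copy with the lock dropped (in the kernel the
 * second memcpy would be copy_to_user(), which may sleep and so
 * must not run under a spinlock).
 */
static int read_table(int *dst)
{
	int snap[TABLE_SIZE];
	int n;

	pthread_mutex_lock(&table_lock);
	n = table_len;
	memcpy(snap, table, n * sizeof(int));
	pthread_mutex_unlock(&table_lock);

	memcpy(dst, snap, n * sizeof(int));
	return n;
}
```

The cost is a temporary allocation per ioctl; the benefit is that keypress processing is never blocked on a user-space page fault.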
+ /**
+ * vt_do_kdskbmode - set keyboard mode ioctl
+ * @console: the console to use
+ * @arg: the requested mode
+ *
+ * Update the keyboard mode bits while holding the correct locks.
+ * Return 0 for success or an error code.
+ */
+ int vt_do_kdskbmode(int console, unsigned int arg)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ switch(arg) {
+ case K_RAW:
+ kbd->kbdmode = VC_RAW;
+ break;
+ case K_MEDIUMRAW:
+ kbd->kbdmode = VC_MEDIUMRAW;
+ break;
+ case K_XLATE:
+ kbd->kbdmode = VC_XLATE;
+ do_compute_shiftstate();
+ break;
+ case K_UNICODE:
+ kbd->kbdmode = VC_UNICODE;
+ do_compute_shiftstate();
+ break;
+ case K_OFF:
+ kbd->kbdmode = VC_OFF;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ return ret;
+ }
+
+ /**
+ * vt_do_kdskbmeta - set keyboard meta state
+ * @console: the console to use
+ * @arg: the requested meta state
+ *
+ * Update the keyboard meta bits while holding the correct locks.
+ * Return 0 for success or an error code.
+ */
+ int vt_do_kdskbmeta(int console, unsigned int arg)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ switch(arg) {
+ case K_METABIT:
+ clr_vc_kbd_mode(kbd, VC_META);
+ break;
+ case K_ESCPREFIX:
+ set_vc_kbd_mode(kbd, VC_META);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ return ret;
+ }
+
+ int vt_do_kbkeycode_ioctl(int cmd, struct kbkeycode __user *user_kbkc,
+ int perm)
+ {
+ struct kbkeycode tmp;
+ int kc = 0;
+
+ if (copy_from_user(&tmp, user_kbkc, sizeof(struct kbkeycode)))
+ return -EFAULT;
+ switch (cmd) {
+ case KDGETKEYCODE:
+ kc = getkeycode(tmp.scancode);
+ if (kc >= 0)
+ kc = put_user(kc, &user_kbkc->keycode);
+ break;
+ case KDSETKEYCODE:
+ if (!perm)
+ return -EPERM;
+ kc = setkeycode(tmp.scancode, tmp.keycode);
+ break;
+ }
+ return kc;
+ }
+
+ #define i (tmp.kb_index)
+ #define s (tmp.kb_table)
+ #define v (tmp.kb_value)
+
+ int vt_do_kdsk_ioctl(int cmd, struct kbentry __user *user_kbe, int perm,
+ int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ struct kbentry tmp;
+ ushort *key_map, *new_map, val, ov;
+ unsigned long flags;
+
+ if (copy_from_user(&tmp, user_kbe, sizeof(struct kbentry)))
+ return -EFAULT;
+
+ if (!capable(CAP_SYS_TTY_CONFIG))
+ perm = 0;
+
+ switch (cmd) {
+ case KDGKBENT:
+ /* Ensure another thread doesn't free it under us */
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ key_map = key_maps[s];
+ if (key_map) {
+ val = U(key_map[i]);
+ if (kbd->kbdmode != VC_UNICODE && KTYP(val) >= NR_TYPES)
+ val = K_HOLE;
+ } else
+ val = (i ? K_HOLE : K_NOSUCHMAP);
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ return put_user(val, &user_kbe->kb_value);
+ case KDSKBENT:
+ if (!perm)
+ return -EPERM;
+ if (!i && v == K_NOSUCHMAP) {
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ /* deallocate map */
+ key_map = key_maps[s];
+ if (s && key_map) {
+ key_maps[s] = NULL;
+ if (key_map[0] == U(K_ALLOCATED)) {
+ kfree(key_map);
+ keymap_count--;
+ }
+ }
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ break;
+ }
+
+ if (KTYP(v) < NR_TYPES) {
+ if (KVAL(v) > max_vals[KTYP(v)])
+ return -EINVAL;
+ } else
+ if (kbd->kbdmode != VC_UNICODE)
+ return -EINVAL;
+
+ /* ++Geert: non-PC keyboards may generate keycode zero */
+ #if !defined(__mc68000__) && !defined(__powerpc__)
+ /* assignment to entry 0 only tests validity of args */
+ if (!i)
+ break;
+ #endif
+
+ new_map = kmalloc(sizeof(plain_map), GFP_KERNEL);
+ if (!new_map)
+ return -ENOMEM;
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ key_map = key_maps[s];
+ if (key_map == NULL) {
+ int j;
+
+ if (keymap_count >= MAX_NR_OF_USER_KEYMAPS &&
+ !capable(CAP_SYS_RESOURCE)) {
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ kfree(new_map);
+ return -EPERM;
+ }
+ key_maps[s] = new_map;
+ key_map = new_map;
+ key_map[0] = U(K_ALLOCATED);
+ for (j = 1; j < NR_KEYS; j++)
+ key_map[j] = U(K_HOLE);
+ keymap_count++;
+ } else
+ kfree(new_map);
+
+ ov = U(key_map[i]);
+ if (v == ov)
+ goto out;
+ /*
+ * Attention Key.
+ */
+ if (((ov == K_SAK) || (v == K_SAK)) && !capable(CAP_SYS_ADMIN)) {
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ return -EPERM;
+ }
+ key_map[i] = U(v);
+ if (!s && (KTYP(ov) == KT_SHIFT || KTYP(v) == KT_SHIFT))
+ do_compute_shiftstate();
+ out:
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ break;
+ }
+ return 0;
+ }
+ #undef i
+ #undef s
+ #undef v
+
+ /* FIXME: This one needs untangling and locking */
+ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ {
+ struct kbsentry *kbs;
+ char *p;
+ u_char *q;
+ u_char __user *up;
+ int sz;
+ int delta;
+ char *first_free, *fj, *fnw;
+ int i, j, k;
+ int ret;
+
+ if (!capable(CAP_SYS_TTY_CONFIG))
+ perm = 0;
+
+ kbs = kmalloc(sizeof(*kbs), GFP_KERNEL);
+ if (!kbs) {
+ ret = -ENOMEM;
+ goto reterr;
+ }
+
+ /* we mostly copy too much here (512 bytes), but who cares ;) */
+ if (copy_from_user(kbs, user_kdgkb, sizeof(struct kbsentry))) {
+ ret = -EFAULT;
+ goto reterr;
+ }
+ kbs->kb_string[sizeof(kbs->kb_string)-1] = '\0';
+ i = kbs->kb_func;
+
+ switch (cmd) {
+ case KDGKBSENT:
+ sz = sizeof(kbs->kb_string) - 1; /* sz should have been
+ a struct member */
+ up = user_kdgkb->kb_string;
+ p = func_table[i];
+ if(p)
+ for ( ; *p && sz; p++, sz--)
+ if (put_user(*p, up++)) {
+ ret = -EFAULT;
+ goto reterr;
+ }
+ if (put_user('\0', up)) {
+ ret = -EFAULT;
+ goto reterr;
+ }
+ kfree(kbs);
+ return ((p && *p) ? -EOVERFLOW : 0);
+ case KDSKBSENT:
+ if (!perm) {
+ ret = -EPERM;
+ goto reterr;
+ }
+
+ q = func_table[i];
+ first_free = funcbufptr + (funcbufsize - funcbufleft);
+ for (j = i+1; j < MAX_NR_FUNC && !func_table[j]; j++)
+ ;
+ if (j < MAX_NR_FUNC)
+ fj = func_table[j];
+ else
+ fj = first_free;
+
+ delta = (q ? -strlen(q) : 1) + strlen(kbs->kb_string);
+ if (delta <= funcbufleft) { /* it fits in current buf */
+ if (j < MAX_NR_FUNC) {
+ memmove(fj + delta, fj, first_free - fj);
+ for (k = j; k < MAX_NR_FUNC; k++)
+ if (func_table[k])
+ func_table[k] += delta;
+ }
+ if (!q)
+ func_table[i] = fj;
+ funcbufleft -= delta;
+ } else { /* allocate a larger buffer */
+ sz = 256;
+ while (sz < funcbufsize - funcbufleft + delta)
+ sz <<= 1;
+ fnw = kmalloc(sz, GFP_KERNEL);
+ if(!fnw) {
+ ret = -ENOMEM;
+ goto reterr;
+ }
+
+ if (!q)
+ func_table[i] = fj;
+ if (fj > funcbufptr)
+ memmove(fnw, funcbufptr, fj - funcbufptr);
+ for (k = 0; k < j; k++)
+ if (func_table[k])
+ func_table[k] = fnw + (func_table[k] - funcbufptr);
+
+ if (first_free > fj) {
+ memmove(fnw + (fj - funcbufptr) + delta, fj, first_free - fj);
+ for (k = j; k < MAX_NR_FUNC; k++)
+ if (func_table[k])
+ func_table[k] = fnw + (func_table[k] - funcbufptr) + delta;
+ }
+ if (funcbufptr != func_buf)
+ kfree(funcbufptr);
+ funcbufptr = fnw;
+ funcbufleft = funcbufleft - delta + sz - funcbufsize;
+ funcbufsize = sz;
+ }
+ strcpy(func_table[i], kbs->kb_string);
+ break;
+ }
+ ret = 0;
+ reterr:
+ kfree(kbs);
+ return ret;
+ }
+
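The KDSKBSENT path grows the function-key string buffer geometrically: start at 256 bytes and double until the required size fits. That sizing rule is easy to isolate and check (the helper name is ours, not the kernel's):

```c
#include <assert.h>

/* Buffer sizing rule used by KDSKBSENT: double from 256 until it fits */
static int grow_size(int needed)
{
	int sz = 256;

	while (sz < needed)
		sz <<= 1;
	return sz;
}
```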
+ int vt_do_kdskled(int console, int cmd, unsigned long arg, int perm)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+ unsigned char ucval;
+
+ switch(cmd) {
+ /* the ioctls below read/set the flags usually shown in the leds */
+ /* don't use them - they will go away without warning */
+ case KDGKBLED:
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ ucval = kbd->ledflagstate | (kbd->default_ledflagstate << 4);
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ return put_user(ucval, (char __user *)arg);
+
+ case KDSKBLED:
+ if (!perm)
+ return -EPERM;
+ if (arg & ~0x77)
+ return -EINVAL;
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ kbd->ledflagstate = (arg & 7);
+ kbd->default_ledflagstate = ((arg >> 4) & 7);
+ set_leds();
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ break;
+
+ /* the ioctls below only set the lights, not the functions */
+ /* for those, see KDGKBLED and KDSKBLED above */
+ case KDGETLED:
+ ucval = getledstate();
+ return put_user(ucval, (char __user *)arg);
+
+ case KDSETLED:
+ if (!perm)
+ return -EPERM;
+ setledstate(kbd, arg);
+ return 0;
+ }
+ return -ENOIOCTLCMD;
+ }
+
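KDGKBLED/KDSKBLED pack the current and default LED flags into one byte — low nibble current, bits 4-6 default — and KDSKBLED rejects any argument outside the 0x77 mask. A small sketch of that packing (helper names are illustrative):

```c
#include <assert.h>

/* KDGKBLED byte: current flags in bits 0-2, default flags in bits 4-6 */
static unsigned char pack_leds(int flags, int def)
{
	return (unsigned char)((flags & 7) | ((def & 7) << 4));
}

static int unpack_flags(unsigned char v)
{
	return v & 7;
}

static int unpack_default(unsigned char v)
{
	return (v >> 4) & 7;
}

/* KDSKBLED rejects anything outside the two 3-bit nibbles */
static int valid_led_arg(unsigned long arg)
{
	return !(arg & ~0x77UL);
}
```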
+ int vt_do_kdgkbmode(int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ /* This is a spot read so needs no locking */
+ switch (kbd->kbdmode) {
+ case VC_RAW:
+ return K_RAW;
+ case VC_MEDIUMRAW:
+ return K_MEDIUMRAW;
+ case VC_UNICODE:
+ return K_UNICODE;
+ case VC_OFF:
+ return K_OFF;
+ default:
+ return K_XLATE;
+ }
+ }
+
+ /**
+ * vt_do_kdgkbmeta - report meta status
+ * @console: console to report
+ *
+ * Report the meta flag status of this console
+ */
+ int vt_do_kdgkbmeta(int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ /* Again a spot read so no locking */
+ return vc_kbd_mode(kbd, VC_META) ? K_ESCPREFIX : K_METABIT;
+ }
+
+ /**
+ * vt_reset_unicode - reset the unicode status
+ * @console: console being reset
+ *
+ * Restore the unicode console state to its default
+ */
+ void vt_reset_unicode(int console)
+ {
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ kbd_table[console].kbdmode = default_utf8 ? VC_UNICODE : VC_XLATE;
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
+ /**
+ * vt_get_shift_state - shift bit state
+ *
+ * Report the shift bits from the keyboard state. We have to export
+ * this to support some oddities in the vt layer.
+ */
+ int vt_get_shift_state(void)
+ {
+ /* Don't lock as this is a transient report */
+ return shift_state;
+ }
+
+ /**
+ * vt_reset_keyboard - reset keyboard state
+ * @console: console to reset
+ *
+ * Reset the keyboard bits for a console as part of a general console
+ * reset event
+ */
+ void vt_reset_keyboard(int console)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ set_vc_kbd_mode(kbd, VC_REPEAT);
+ clr_vc_kbd_mode(kbd, VC_CKMODE);
+ clr_vc_kbd_mode(kbd, VC_APPLIC);
+ clr_vc_kbd_mode(kbd, VC_CRLF);
+ kbd->lockstate = 0;
+ kbd->slockstate = 0;
+ kbd->ledmode = LED_SHOW_FLAGS;
+ kbd->ledflagstate = kbd->default_ledflagstate;
+ /* do not do set_leds here because this causes an endless tasklet loop
+ when the keyboard hasn't been initialized yet */
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
+ /**
+ * vt_get_kbd_mode_bit - read keyboard status bits
+ * @console: console to read from
+ * @bit: mode bit to read
+ *
+ * Report back a vt mode bit. We do this without locking so the
+ * caller must be sure that there are no synchronization needs
+ */
+
+ int vt_get_kbd_mode_bit(int console, int bit)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ return vc_kbd_mode(kbd, bit);
+ }
+
+ /**
+ * vt_set_kbd_mode_bit - set keyboard status bit
+ * @console: console to write to
+ * @bit: mode bit to set
+ *
+ * Set a vt mode bit. The kbd_event_lock is taken internally, so
+ * callers need no extra synchronization
+ */
+
+ void vt_set_kbd_mode_bit(int console, int bit)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ set_vc_kbd_mode(kbd, bit);
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
+
+ /**
+ * vt_clr_kbd_mode_bit - clear keyboard status bit
+ * @console: console to write to
+ * @bit: mode bit to clear
+ *
+ * Clear a vt mode bit. The kbd_event_lock is taken internally, so
+ * callers need no extra synchronization
+ */
+
+ void vt_clr_kbd_mode_bit(int console, int bit)
+ {
+ struct kbd_struct * kbd = kbd_table + console;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbd_event_lock, flags);
+ clr_vc_kbd_mode(kbd, bit);
+ spin_unlock_irqrestore(&kbd_event_lock, flags);
+ }
#include <linux/notifier.h>
#include <linux/device.h>
#include <linux/io.h>
- #include <asm/system.h>
#include <linux/uaccess.h>
#include <linux/kdb.h>
#include <linux/ctype.h>
* VT102 emulator
*/
- #define set_kbd(vc, x) set_vc_kbd_mode(kbd_table + (vc)->vc_num, (x))
- #define clr_kbd(vc, x) clr_vc_kbd_mode(kbd_table + (vc)->vc_num, (x))
- #define is_kbd(vc, x) vc_kbd_mode(kbd_table + (vc)->vc_num, (x))
+ #define set_kbd(vc, x) vt_set_kbd_mode_bit((vc)->vc_num, (x))
+ #define clr_kbd(vc, x) vt_clr_kbd_mode_bit((vc)->vc_num, (x))
+ #define is_kbd(vc, x) vt_get_kbd_mode_bit((vc)->vc_num, (x))
#define decarm VC_REPEAT
#define decckm VC_CKMODE
vc->vc_deccm = global_cursor_default;
vc->vc_decim = 0;
- set_kbd(vc, decarm);
- clr_kbd(vc, decckm);
- clr_kbd(vc, kbdapplic);
- clr_kbd(vc, lnm);
- kbd_table[vc->vc_num].lockstate = 0;
- kbd_table[vc->vc_num].slockstate = 0;
- kbd_table[vc->vc_num].ledmode = LED_SHOW_FLAGS;
- kbd_table[vc->vc_num].ledflagstate = kbd_table[vc->vc_num].default_ledflagstate;
- /* do not do set_leds here because this causes an endless tasklet loop
- when the keyboard hasn't been initialized yet */
+ vt_reset_keyboard(vc->vc_num);
vc->vc_cursor_type = cur_default;
vc->vc_complement_mask = vc->vc_s_complement_mask;
case 'q': /* DECLL - but only 3 leds */
/* map 0,1,2,3 to 0,1,2,4 */
if (vc->vc_par[0] < 4)
- setledstate(kbd_table + vc->vc_num,
+ vt_set_led_state(vc->vc_num,
(vc->vc_par[0] < 3) ? vc->vc_par[0] : 4);
return;
case 'r':
console_unlock();
break;
case TIOCL_SELLOADLUT:
+ console_lock();
ret = sel_loadlut(p);
+ console_unlock();
break;
case TIOCL_GETSHIFTSTATE:
* kernel-internal variable; programs not closely
* related to the kernel should not use this.
*/
- data = shift_state;
+ data = vt_get_shift_state();
ret = __put_user(data, p);
break;
case TIOCL_GETMOUSEREPORTING:
+ console_lock(); /* May be overkill */
data = mouse_reporting();
+ console_unlock();
ret = __put_user(data, p);
break;
case TIOCL_SETVESABLANK:
+ console_lock();
ret = set_vesa_blanking(p);
+ console_unlock();
break;
case TIOCL_GETKMSGREDIRECT:
data = vt_get_kmsg_redirect();
}
break;
case TIOCL_GETFGCONSOLE:
+ /* No locking needed as this is a transiently
+ correct return anyway if the caller hasn't
+ disabled switching */
ret = fg_console;
break;
case TIOCL_SCROLLCONSOLE:
if (get_user(lines, (s32 __user *)(p+4))) {
ret = -EFAULT;
} else {
+ /* Need the console lock here. Note that lots
+ of other calls need fixing before the lock
+ is actually useful ! */
+ console_lock();
scrollfront(vc_cons[fg_console].d, lines);
+ console_unlock();
ret = 0;
}
break;
console_num = tty->index;
if (!vc_cons_allocated(console_num))
return;
- set_vc_kbd_led(kbd_table + console_num, VC_SCROLLOCK);
- set_leds();
+ vt_kbd_con_stop(console_num);
}
/*
console_num = tty->index;
if (!vc_cons_allocated(console_num))
return;
- clr_vc_kbd_led(kbd_table + console_num, VC_SCROLLOCK);
- set_leds();
+ vt_kbd_con_start(console_num);
}
static void con_flush_chars(struct tty_struct *tty)
console_driver = alloc_tty_driver(MAX_NR_CONSOLES);
if (!console_driver)
panic("Couldn't allocate console driver\n");
- console_driver->owner = THIS_MODULE;
+
console_driver->name = "tty";
console_driver->name_base = 1;
console_driver->major = TTY_MAJOR;
int rc = -EINVAL;
int c;
- if (vc->vc_mode != KD_TEXT)
- return -EINVAL;
-
if (op->data) {
font.data = kmalloc(max_font_size, GFP_KERNEL);
if (!font.data)
font.data = NULL;
console_lock();
- if (vc->vc_sw->con_font_get)
+ if (vc->vc_mode != KD_TEXT)
+ rc = -EINVAL;
+ else if (vc->vc_sw->con_font_get)
rc = vc->vc_sw->con_font_get(vc, &font);
else
rc = -ENOSYS;
if (IS_ERR(font.data))
return PTR_ERR(font.data);
console_lock();
- if (vc->vc_sw->con_font_set)
+ if (vc->vc_mode != KD_TEXT)
+ rc = -EINVAL;
+ else if (vc->vc_sw->con_font_set)
rc = vc->vc_sw->con_font_set(vc, &font, op->flags);
else
rc = -ENOSYS;
char *s = name;
int rc;
- if (vc->vc_mode != KD_TEXT)
- return -EINVAL;
if (!op->data)
s = NULL;
name[MAX_FONT_NAME - 1] = 0;
console_lock();
+ if (vc->vc_mode != KD_TEXT) {
+ console_unlock();
+ return -EINVAL;
+ }
if (vc->vc_sw->con_font_default)
rc = vc->vc_sw->con_font_default(vc, &font, s);
else
int con = op->height;
int rc;
- if (vc->vc_mode != KD_TEXT)
- return -EINVAL;
console_lock();
- if (!vc->vc_sw->con_font_copy)
+ if (vc->vc_mode != KD_TEXT)
+ rc = -EINVAL;
+ else if (!vc->vc_sw->con_font_copy)
rc = -ENOSYS;
else if (con < 0 || !vc_cons_allocated(con))
rc = -ENOTTY;
notify_update(vc);
}
+#ifdef CONFIG_BOOTSPLASH
+void con_remap_def_color(struct vc_data *vc, int new_color)
+{
+ unsigned short *sbuf = screenpos(vc, 0, 1);
+ unsigned c, len = vc->vc_screenbuf_size >> 1;
+ unsigned int bits, old_color;
+
+ if (sbuf) {
+ old_color = vc->vc_def_color << 8;
+ new_color <<= 8;
+ while (len--) {
+ c = scr_readw(sbuf);
+ bits = (old_color ^ new_color) & 0xf000;
+ if (((c ^ old_color) & 0xf000) == 0)
+ c ^= bits;
+ bits = (old_color ^ new_color) & 0x0f00;
+ if (((c ^ old_color) & 0x0f00) == 0)
+ c ^= bits;
+ scr_writew(c, sbuf);
+ sbuf++;
+ }
+ new_color >>= 8;
+ }
+ vc->vc_def_color = vc->vc_color = new_color;
+ update_attr(vc);
+}
+#endif
+
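con_remap_def_color() walks every cell of the screen buffer and, where a colour nibble still matches the old default attribute, flips exactly the bits that differ between old and new defaults. The intended per-cell transform can be sketched and tested in isolation (the function name is illustrative, not from the patch):

```c
#include <assert.h>

/*
 * One VGA text cell: glyph in the low byte, attribute in the high byte
 * (background nibble at 0xf000, foreground at 0x0f00). Flip a nibble to
 * the new default only if it still matches the old default.
 */
static unsigned short remap_cell(unsigned short c,
				 unsigned short old_attr,
				 unsigned short new_attr)
{
	unsigned short bits;

	bits = (old_attr ^ new_attr) & 0xf000;	/* background */
	if (((c ^ old_attr) & 0xf000) == 0)
		c ^= bits;
	bits = (old_attr ^ new_attr) & 0x0f00;	/* foreground */
	if (((c ^ old_attr) & 0x0f00) == 0)
		c ^= bits;
	return c;
}
```

Cells already recoloured by an application keep their attribute; only cells still showing the old default are retargeted.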
/*
* Visible symbols for modules
*/
config FB_VESA
bool "VESA VGA graphics support"
- depends on (FB = y) && X86 && !XEN_UNPRIVILEGED_GUEST
+ depends on (FB = y) && X86
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
help
Say Y here if you want to control the backlight of your display.
+ config FB_I740
+ tristate "Intel740 support (EXPERIMENTAL)"
+ depends on EXPERIMENTAL && FB && PCI
+ select FB_MODE_HELPERS
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+ select VGASTATE
+ select FB_DDC
+ help
+ This driver supports graphics cards based on the Intel740 chip.
+
config FB_I810
tristate "Intel 810/815 support (EXPERIMENTAL)"
depends on EXPERIMENTAL && FB && PCI && X86_32 && AGP_INTEL
---help---
Driver for the on-chip SH-Mobile HDMI controller.
- config FB_SH_MOBILE_MERAM
- tristate "SuperH Mobile MERAM read ahead support for LCDC"
- depends on FB_SH_MOBILE_LCDC
- default y
- ---help---
- Enable MERAM support for the SH-Mobile LCD controller.
-
- This will allow for caching of the framebuffer to provide more
- reliable access under heavy main memory bus traffic situations.
- Up to 4 memory channels can be configured, allowing 4 RGB or
- 2 YCbCr framebuffers to be configured.
-
config FB_TMIO
tristate "Toshiba Mobile IO FrameBuffer support"
depends on FB && MFD_CORE
config FB_S3C2410
tristate "S3C2410 LCD framebuffer support"
- depends on FB && ARCH_S3C2410
+ depends on FB && ARCH_S3C24XX
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
+ select FB_CFB_REV_PIXELS_IN_BYTE
---help---
This is the frame buffer device driver for the TI LCD controller
found on DA8xx/OMAP-L1xx SoCs.
config XEN_FBDEV_FRONTEND
tristate "Xen virtual frame buffer support"
- depends on FB && PARAVIRT_XEN
+ depends on FB && XEN
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select FB_SYS_FOPS
select FB_DEFERRED_IO
+ select INPUT_XEN_KBDDEV_FRONTEND
select XEN_XENBUS_FRONTEND
default y
help
source "drivers/video/omap/Kconfig"
source "drivers/video/omap2/Kconfig"
-
+ source "drivers/video/exynos/Kconfig"
source "drivers/video/backlight/Kconfig"
if VT
source "drivers/video/logo/Kconfig"
endif
+if FB
+ source "drivers/video/bootsplash/Kconfig"
+endif
+
+ config FB_SH_MOBILE_MERAM
+ tristate "SuperH Mobile MERAM read ahead support"
+ depends on (SUPERH || ARCH_SHMOBILE)
+ select GENERIC_ALLOCATOR
+ ---help---
+ Enable MERAM support for the SuperH controller.
+
+ This will allow for caching of the framebuffer to provide more
+ reliable access under heavy main memory bus traffic situations.
+ Up to 4 memory channels can be configured, allowing 4 RGB or
+ 2 YCbCr framebuffers to be configured.
+
endmenu
obj-$(CONFIG_VT) += console/
obj-$(CONFIG_LOGO) += logo/
obj-y += backlight/
+obj-$(CONFIG_BOOTSPLASH) += bootsplash/
+ obj-$(CONFIG_EXYNOS_VIDEO) += exynos/
+
obj-$(CONFIG_FB_CFB_FILLRECT) += cfbfillrect.o
obj-$(CONFIG_FB_CFB_COPYAREA) += cfbcopyarea.o
obj-$(CONFIG_FB_CFB_IMAGEBLIT) += cfbimgblt.o
obj-$(CONFIG_FB_PM2) += pm2fb.o
obj-$(CONFIG_FB_PM3) += pm3fb.o
+ obj-$(CONFIG_FB_I740) += i740fb.o
obj-$(CONFIG_FB_MATROX) += matrox/
obj-$(CONFIG_FB_RIVA) += riva/
obj-$(CONFIG_FB_NVIDIA) += nvidia/
--- /dev/null
+/*
+ * linux/drivers/video/bootsplash/bootsplash.c -
+ * splash screen handling functions.
+ *
+ * (w) 2001-2004 by Volker Poplawski, <volker@poplawski.de>,
+ * Stefan Reinauer, <stepan@suse.de>,
+ * Steffen Winterfeldt, <snwint@suse.de>,
+ * Michael Schroeder <mls@suse.de>
+ * 2009-2011 Egbert Eich <eich@suse.de>
+ *
+ * Ideas & SuSE screen work by Ken Wimer, <wimer@suse.de>
+ *
+ * For more information on this code check http://www.bootsplash.org/
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/fb.h>
+#include <linux/vt_kern.h>
+#include <linux/vmalloc.h>
+#include <linux/unistd.h>
+#include <linux/syscalls.h>
+#include <linux/console.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+
+#include <asm/irq.h>
- #include <asm/system.h>
+
+#include "../console/fbcon.h"
+#include <linux/bootsplash.h>
+#include "decode-jpg.h"
+
+#ifndef DEBUG
+# define SPLASH_DEBUG(fmt, args...)
+#else
+# define SPLASH_DEBUG(fmt, args...) \
+ printk(KERN_WARNING "%s: " fmt "\n", __func__, ##args)
+#endif
+extern signed char con2fb_map[MAX_NR_CONSOLES];
+
+#define SPLASH_VERSION "3.2.0-2010/03/31"
+
+/* These errors have to match fbcon-jpegdec.h */
+static unsigned char *jpg_errors[] = {
+ "no SOI found",
+ "not 8 bit",
+ "height mismatch",
+ "width mismatch",
+ "bad width or height",
+ "too many COMPPs",
+ "illegal HV",
+ "quant table selector",
+ "picture is not YCBCR 221111",
+ "unknown CID in scan",
+ "dct not sequential",
+ "wrong marker",
+ "no EOI",
+ "bad tables",
+ "depth mismatch",
+ "scale error",
+ "out of memory"
+};
+
+static int splash_usesilent;
+static unsigned long splash_default = 0xf01;
+
+static int jpeg_get(unsigned char *buf, unsigned char *pic,
+ int width, int height, enum splash_color_format cf,
+ struct jpeg_decdata *decdata);
+static int splash_look_for_jpeg(struct vc_data *vc, int width, int height);
+
+static int __init splash_setup(char *options)
+{
+ splash_usesilent = 0;
+
+ if (!strncmp("silent", options, 6)) {
+ printk(KERN_INFO "bootsplash: silent mode.\n");
+ splash_usesilent = 1;
+ /* skip "silent," */
+ if (strlen(options) == 6)
+ return 0;
+ options += 7;
+ }
+ if (!strncmp("verbose", options, 7)) {
+ printk(KERN_INFO "bootsplash: verbose mode.\n");
+ splash_usesilent = 0;
+ if (strlen(options) == 7)
+ return 0;
+ options += 8;
+ }
+ if (strict_strtoul(options, 0, &splash_default) == -EINVAL)
+ splash_default = 0;
+
+ return 0;
+}
+
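splash_setup() accepts `silent` or `verbose`, optionally followed by a comma and a numeric default-picture value. A user-space approximation of that prefix parsing (strtoul in place of strict_strtoul; the struct and names are invented for the sketch, and the fallback default here is 0 rather than the kernel's 0xf01):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct splash_opts {
	int silent;
	unsigned long def;
};

/*
 * Parse "silent[,<num>]", "verbose[,<num>]" or a bare number,
 * mirroring the splash= handler above.
 */
static void parse_splash(const char *options, struct splash_opts *o)
{
	o->silent = 0;
	o->def = 0;

	if (!strncmp("silent", options, 6)) {
		o->silent = 1;
		if (strlen(options) == 6)
			return;
		options += 7;		/* skip "silent," */
	} else if (!strncmp("verbose", options, 7)) {
		if (strlen(options) == 7)
			return;
		options += 8;		/* skip "verbose," */
	}
	o->def = strtoul(options, NULL, 0);
}
```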
+__setup("splash=", splash_setup);
+
+
+static int splash_hasinter(unsigned char *buf, int num)
+{
+ unsigned char *bufend = buf + num * 12;
+ while (buf < bufend) {
+ if (buf[1] > 127) /* inter? */
+ return 1;
+ buf += buf[3] > 127 ? 24 : 12; /* blend? */
+ }
+ return 0;
+}
+
+static int boxextract(unsigned char *buf, unsigned short *dp,
+ unsigned char *cols, int *blendp)
+{
+ dp[0] = buf[0] | buf[1] << 8;
+ dp[1] = buf[2] | buf[3] << 8;
+ dp[2] = buf[4] | buf[5] << 8;
+ dp[3] = buf[6] | buf[7] << 8;
+ *(unsigned int *)(cols + 0) =
+ *(unsigned int *)(cols + 4) =
+ *(unsigned int *)(cols + 8) =
+ *(unsigned int *)(cols + 12) = *(unsigned int *)(buf + 8);
+ if (dp[1] > 32767) {
+ dp[1] = ~dp[1];
+ *(unsigned int *)(cols + 4) = *(unsigned int *)(buf + 12);
+ *(unsigned int *)(cols + 8) = *(unsigned int *)(buf + 16);
+ *(unsigned int *)(cols + 12) = *(unsigned int *)(buf + 20);
+ *blendp = 1;
+ return 24;
+ }
+ return 12;
+}
+
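boxextract() assembles each 16-bit coordinate from two little-endian bytes, and a box record occupies 12 bytes, or 24 when the high bit of the second coordinate flags a blend record (the same test splash_hasinter() applies to buf[3]). A minimal sketch of that decoding (helper names are ours):

```c
#include <assert.h>

/* One little-endian 16-bit value, as boxextract reads buf[0] | buf[1] << 8 */
static unsigned short le16(const unsigned char *p)
{
	return (unsigned short)(p[0] | (p[1] << 8));
}

/*
 * A box record is 12 bytes, or 24 when the second coordinate's high
 * bit marks a blend record (cf. buf[3] > 127 in splash_hasinter).
 */
static int box_record_len(const unsigned char *rec)
{
	return le16(rec + 2) > 32767 ? 24 : 12;
}
```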
+static void boxit(unsigned char *pic, int bytes, unsigned char *buf, int num,
+ int percent, int xoff, int yoff, int overpaint,
+ enum splash_color_format cf)
+{
+ int x, y, p, doblend, r, g, b, a, add;
+ unsigned int i = 0;
+ unsigned short data1[4];
+ unsigned char cols1[16];
+ unsigned short data2[4];
+ unsigned char cols2[16];
+ unsigned char *bufend;
+ union pt picp;
+ unsigned int stipple[32], sti, stin, stinn, stixs, stixe, stiys, stiye;
+ int xs, xe, ys, ye, xo, yo;
+ int octpp = splash_octpp(cf);
+
+ SPLASH_DEBUG();
+ if (num == 0 || percent < -1)
+ return;
+ bufend = buf + num * 12;
+ stipple[0] = 0xffffffff;
+ stin = 1;
+ stinn = 0;
+ stixs = stixe = 0;
+ stiys = stiye = 0;
+ while (buf < bufend) {
+ doblend = 0;
+ buf += boxextract(buf, data1, cols1, &doblend);
+ if (data1[0] == 32767 && data1[1] == 32767) {
+ /* box stipple */
+ if (stinn == 32)
+ continue;
+ if (stinn == 0) {
+ stixs = data1[2];
+ stixe = data1[3];
+ stiys = stiye = 0;
+ } else if (stinn == 4) {
+ stiys = data1[2];
+ stiye = data1[3];
+ }
+ stipple[stinn++] = (cols1[0] << 24) |
+ (cols1[1] << 16) |
+ (cols1[2] << 8) |
+ cols1[3] ;
+ stipple[stinn++] = (cols1[4] << 24) |
+ (cols1[5] << 16) |
+ (cols1[6] << 8) |
+ cols1[7] ;
+ stipple[stinn++] = (cols1[8] << 24) |
+ (cols1[9] << 16) |
+ (cols1[10] << 8) |
+ cols1[11] ;
+ stipple[stinn++] = (cols1[12] << 24) |
+ (cols1[13] << 16) |
+ (cols1[14] << 8) |
+ cols1[15] ;
+ stin = stinn;
+ continue;
+ }
+ stinn = 0;
+ if (data1[0] > 32767)
+ buf += boxextract(buf, data2, cols2, &doblend);
+ if (data1[0] == 32767 && data1[1] == 32766) {
+ /* box copy */
+ i = 12 * (short)data1[3];
+ doblend = 0;
+ i += boxextract(buf + i, data1, cols1, &doblend);
+ if (data1[0] > 32767)
+ boxextract(buf + i, data2, cols2, &doblend);
+ }
+ if (data1[0] == 32767)
+ continue;
+ if (data1[2] > 32767) {
+ if (overpaint)
+ continue;
+ data1[2] = ~data1[2];
+ }
+ if (data1[3] > 32767) {
+ if (percent == 65536)
+ continue;
+ data1[3] = ~data1[3];
+ }
+ if (data1[0] > 32767) {
+ if (percent < 0)
+ continue;
+ data1[0] = ~data1[0];
+ for (i = 0; i < 4; i++)
+ data1[i] = (data1[i] * (65536 - percent)
+ + data2[i] * percent) >> 16;
+ for (i = 0; i < 16; i++)
+ cols1[i] = (cols1[i] * (65536 - percent)
+ + cols2[i] * percent) >> 16;
+ }
+ *(unsigned int *)cols2 = *(unsigned int *)cols1;
+ a = cols2[3];
+ if (a == 0 && !doblend)
+ continue;
+
+ if (stixs >= 32768) {
+ xo = xs = (stixs ^ 65535) + data1[0];
+ xe = stixe ? stixe + data1[0] : data1[2];
+ } else if (stixe >= 32768) {
+ xs = stixs ? data1[2] - stixs : data1[0];
+ xe = data1[2] - (stixe ^ 65535);
+ xo = xe + 1;
+ } else {
+ xo = xs = stixs;
+ xe = stixe ? stixe : data1[2];
+ }
+ if (stiys >= 32768) {
+ yo = ys = (stiys ^ 65535) + data1[1];
+ ye = stiye ? stiye + data1[1] : data1[3];
+ } else if (stiye >= 32768) {
+ ys = stiys ? data1[3] - stiys : data1[1];
+ ye = data1[3] - (stiye ^ 65535);
+ yo = ye + 1;
+ } else {
+ yo = ys = stiys;
+ ye = stiye ? stiye : data1[3];
+ }
+ xo = 32 - (xo & 31);
+ yo = stin - (yo % stin);
+ if (xs < data1[0])
+ xs = data1[0];
+ if (xe > data1[2])
+ xe = data1[2];
+ if (ys < data1[1])
+ ys = data1[1];
+ if (ye > data1[3])
+ ye = data1[3];
+
+ for (y = ys; y <= ye; y++) {
+ sti = stipple[(y + yo) % stin];
+ x = (xs + xo) & 31;
+ if (x)
+ sti = (sti << x) | (sti >> (32 - x));
+ if (doblend) {
+ p = data1[3] - data1[1];
+ if (p != 0)
+ p = ((y - data1[1]) << 16) / p;
+ for (i = 0; i < 8; i++)
+ cols2[i + 8] = (cols1[i] * (65536 - p)
+ + cols1[i + 8] * p)
+ >> 16;
+ }
+ add = (xs & 1);
+			add ^= (add ^ y) & 1 ? 1 : 3;	/* 2x2 ordered dithering */
+ picp.ub = (pic + (xs + xoff) * octpp
+ + (y + yoff) * bytes);
+ for (x = xs; x <= xe; x++) {
+ if (!(sti & 0x80000000)) {
+ sti <<= 1;
+ switch (octpp) {
+ case 2:
+ picp.us++;
+ break;
+ case 3:
+ picp.ub += 3;
+ break;
+ case 4:
+ picp.ul++;
+ break;
+ }
+ add ^= 3;
+ continue;
+ }
+ sti = (sti << 1) | 1;
+ if (doblend) {
+ p = data1[2] - data1[0];
+ if (p != 0)
+ p = ((x - data1[0]) << 16) / p;
+ for (i = 0; i < 4; i++)
+ cols2[i] = (cols2[i + 8] * (65536 - p)
+ + cols2[i + 12] * p)
+ >> 16;
+ a = cols2[3];
+ }
+ r = cols2[0];
+ g = cols2[1];
+ b = cols2[2];
+#define CLAMP(x) ((x) >= 256 ? 255 : (x))
+#define BLEND(x, v, a) (((x) * (255 - (a)) + (v) * (a)) / 255)
+ switch (cf) {
+ case SPLASH_DEPTH_15:
+ if (a != 255) {
+ i = *picp.us;
+ r = BLEND((i>>7 & 0xf8), r, a);
+ g = BLEND((i>>2 & 0xf8), g, a);
+ b = BLEND((i<<3 & 0xf8), b, a);
+ }
+ r += add * 2 + 1;
+ g += add;
+ b += add * 2 + 1;
+ i = ((CLAMP(r) & 0xf8) << 7) |
+ ((CLAMP(g) & 0xf8) << 2) |
+ ((CLAMP(b)) >> 3);
+ *(picp.us++) = i;
+ break;
+ case SPLASH_DEPTH_16:
+ if (a != 255) {
+ i = *picp.us;
+ r = BLEND((i>>8 & 0xf8), r, a);
+ g = BLEND((i>>3 & 0xfc), g, a);
+ b = BLEND((i<<3 & 0xf8), b, a);
+ }
+ r += add * 2 + 1;
+ g += add;
+ b += add * 2 + 1;
+ i = ((CLAMP(r) & 0xf8) << 8) |
+ ((CLAMP(g) & 0xfc) << 3) |
+ ((CLAMP(b)) >> 3);
+ *(picp.us++) = i;
+ break;
+ case SPLASH_DEPTH_24_PACKED:
+ if (a != 255) {
+ i = *picp.ub;
+ r = BLEND((i & 0xff), r, a);
+ i = *(picp.ub + 1);
+ g = BLEND((i & 0xff), g, a);
+ i = *(picp.ub + 2);
+ b = BLEND((i & 0xff), b, a);
+ }
+ *(picp.ub++) = CLAMP(r);
+ *(picp.ub++) = CLAMP(g);
+ *(picp.ub++) = CLAMP(b);
+ break;
+ case SPLASH_DEPTH_24:
+ if (a != 255) {
+ i = *picp.ul;
+ r = BLEND((i>>16 & 0xff), r, a);
+ g = BLEND((i>>8 & 0xff), g, a);
+ b = BLEND((i & 0xff), b, a);
+ }
+ i = ((CLAMP(r) << 16)
+ | (CLAMP(g) << 8)
+ | (CLAMP(b)));
+ *(picp.ul++) = i;
+ break;
+ default:
+ break;
+ }
+ add ^= 3;
+ }
+ }
+ }
+}
+
+static void box_offsets(unsigned char *buf, int num,
+ int screen_w, int screen_h, int pic_w, int pic_h,
+ int *x_off, int *y_off)
+{
+ int a, doblend;
+ int x_min = pic_w, x_max = 0;
+ int y_min = pic_h, y_max = 0;
+ unsigned int i = 0;
+ unsigned short data1[4];
+ unsigned char cols1[16];
+ unsigned short data2[4];
+ unsigned char cols2[16];
+ unsigned char *bufend;
+ unsigned int stin, stinn, stixs, stixe, stiys, stiye;
+ int xs, xe, ys, ye;
+
+ SPLASH_DEBUG();
+
+	if ((screen_w == pic_w && screen_h == pic_h) || num == 0) {
+		*x_off = *y_off = 0;
+		return;
+	}
+
+ bufend = buf + num * 12;
+ stin = 1;
+ stinn = 0;
+ stixs = stixe = 0;
+ stiys = stiye = 0;
+
+ while (buf < bufend) {
+ doblend = 0;
+ buf += boxextract(buf, data1, cols1, &doblend);
+ if (data1[0] == 32767 && data1[1] == 32767) {
+ /* box stipple */
+ if (stinn == 32)
+ continue;
+ if (stinn == 0) {
+ stixs = data1[2];
+ stixe = data1[3];
+ stiys = stiye = 0;
+ } else if (stinn == 4) {
+ stiys = data1[2];
+ stiye = data1[3];
+ }
+ stin = stinn;
+ continue;
+ }
+ stinn = 0;
+ if (data1[0] > 32767)
+ buf += boxextract(buf, data2, cols2, &doblend);
+ if (data1[0] == 32767 && data1[1] == 32766) {
+ /* box copy */
+ i = 12 * (short)data1[3];
+ doblend = 0;
+ i += boxextract(buf + i, data1, cols1, &doblend);
+ if (data1[0] > 32767)
+ boxextract(buf + i, data2, cols2, &doblend);
+ }
+ if (data1[0] == 32767)
+ continue;
+ if (data1[2] > 32767)
+ data1[2] = ~data1[2];
+ if (data1[3] > 32767)
+ data1[3] = ~data1[3];
+ if (data1[0] > 32767) {
+ data1[0] = ~data1[0];
+ for (i = 0; i < 4; i++)
+ data1[i] = (data1[i] * (65536 - 1)
+ + data2[i] * 1) >> 16;
+ }
+ *(unsigned int *)cols2 = *(unsigned int *)cols1;
+ a = cols2[3];
+ if (a == 0 && !doblend)
+ continue;
+
+ if (stixs >= 32768) {
+ xs = (stixs ^ 65535) + data1[0];
+ xe = stixe ? stixe + data1[0] : data1[2];
+ } else if (stixe >= 32768) {
+ xs = stixs ? data1[2] - stixs : data1[0];
+ xe = data1[2] - (stixe ^ 65535);
+ } else {
+ xs = stixs;
+ xe = stixe ? stixe : data1[2];
+ }
+ if (stiys >= 32768) {
+ ys = (stiys ^ 65535) + data1[1];
+ ye = stiye ? stiye + data1[1] : data1[3];
+ } else if (stiye >= 32768) {
+ ys = stiys ? data1[3] - stiys : data1[1];
+ ye = data1[3] - (stiye ^ 65535);
+ } else {
+ ys = stiys;
+ ye = stiye ? stiye : data1[3];
+ }
+ if (xs < data1[0])
+ xs = data1[0];
+ if (xe > data1[2])
+ xe = data1[2];
+ if (ys < data1[1])
+ ys = data1[1];
+ if (ye > data1[3])
+ ye = data1[3];
+
+ if (xs < x_min)
+ x_min = xs;
+ if (xe > x_max)
+ x_max = xe;
+ if (ys < y_min)
+ y_min = ys;
+ if (ye > y_max)
+ y_max = ye;
+ }
+ {
+ int x_center = (x_min + x_max) / 2;
+ int y_center = (y_min + y_max) / 2;
+
+ if (screen_w == pic_w)
+ *x_off = 0;
+ else {
+ if (x_center < (pic_w + pic_w / 5) >> 1 &&
+ x_center > (pic_w - pic_w / 5) >> 1) {
+ *x_off = (screen_w - pic_w) >> 1;
+ } else {
+ int x = x_center * screen_w / pic_w;
+ *x_off = x - x_center;
+ if (x_min + *x_off < 0)
+ *x_off = 0;
+ if (x_max + *x_off > screen_w)
+ *x_off = screen_w - pic_w;
+ }
+ }
+ if (screen_h == pic_h)
+ *y_off = 0;
+ else {
+ if (y_center < (pic_h + pic_h / 5) >> 1 &&
+ y_center > (pic_h - pic_h / 5) >> 1)
+ *y_off = (screen_h - pic_h) >> 1;
+ else {
+ int x = y_center * screen_h / pic_h;
+ *y_off = x - y_center;
+ if (y_min + *y_off < 0)
+ *y_off = 0;
+				if (y_max + *y_off > screen_h)
+ *y_off = screen_h - pic_h;
+ }
+ }
+ }
+}
+
+static int splash_check_jpeg(unsigned char *jpeg,
+ int width, int height)
+{
+ int size, err;
+ unsigned char *mem;
+ struct jpeg_decdata *decdata; /* private decoder data */
+
+ size = ((width + 15) & ~15) * ((height + 15) & ~15) * 2;
+ mem = vmalloc(size);
+ if (!mem) {
+ printk(KERN_INFO "bootsplash: no memory for decoded picture.\n");
+ return -1;
+ }
+ decdata = vmalloc(sizeof(*decdata));
+ if (!decdata) {
+ printk(KERN_INFO "bootsplash: not enough memory.\n");
+ vfree(mem);
+ return -1;
+ }
+ /* test decode: use fixed depth of 16 */
+ err = jpeg_decode(jpeg, mem,
+ ((width + 15) & ~15), ((height + 15) & ~15),
+ SPLASH_DEPTH_16,
+ decdata);
+ if (err)
+ printk(KERN_INFO "bootsplash: "
+ "error while decompressing picture: %s (%d)\n",
+ jpg_errors[err - 1], err);
+ vfree(decdata);
+ vfree(mem);
+ return err ? -1 : 0;
+}
+
+static void splash_free(struct vc_data *vc, struct fb_info *info)
+{
+ struct splash_data *sd;
+ struct splash_data *next;
+ SPLASH_DEBUG();
+ for (sd = vc->vc_splash_data; sd; sd = next) {
+ next = sd->next;
+ sd->pic->ref_cnt--;
+ if (!sd->pic->ref_cnt) {
+ vfree(sd->pic->splash_pic);
+			kfree(sd->pic);	/* allocated with kzalloc() */
+ }
+ sd->imgd->ref_cnt--;
+ if (!sd->imgd->ref_cnt) {
+ vfree(sd->imgd->splash_sboxes);
+ vfree(sd->imgd);
+ }
+		kfree(sd);	/* allocated with kzalloc() */
+ }
+	vc->vc_splash_data = NULL;
+	if (info)
+		info->splash_data = NULL;
+}
+
+static int splash_mkpenguin(struct splash_data *data,
+ int pxo, int pyo, int pwi, int phe,
+ int pr, int pg, int pb)
+{
+ unsigned char *buf;
+ int i;
+
+ if (pwi == 0 || phe == 0)
+ return 0;
+
+ buf = (unsigned char *)data + sizeof(*data);
+
+ pwi += pxo - 1;
+ phe += pyo - 1;
+
+ *buf++ = pxo;
+ *buf++ = pxo >> 8;
+ *buf++ = pyo;
+ *buf++ = pyo >> 8;
+ *buf++ = pwi;
+ *buf++ = pwi >> 8;
+ *buf++ = phe;
+ *buf++ = phe >> 8;
+ *buf++ = pr;
+ *buf++ = pg;
+ *buf++ = pb;
+ *buf++ = 0;
+
+ for (i = 0; i < 12; i++, buf++)
+ *buf = buf[-12];
+
+ buf[-24] ^= 0xff;
+ buf[-23] ^= 0xff;
+ buf[-1] = 0xff;
+
+ return 2;
+}
+
+static const int splash_offsets[3][16] = {
+ /* len, unit, size, state, fgcol, col, xo, yo, wi, he
+ boxcnt, ssize, sboxcnt, percent, overok, palcnt */
+ /* V1 */
+ { 20, -1, 16, -1, -1, -1, 8, 10, 12, 14,
+ -1, -1, -1, -1, -1, -1 },
+ /* V2 */
+ { 35, 8, 12, 9, 10, 11, 16, 18, 20, 22,
+ -1, -1, -1, -1, -1, -1 },
+ /* V3 */
+ { 38, 8, 12, 9, 10, 11, 16, 18, 20, 22,
+ 24, 28, 32, 34, 36, 37 },
+};
+
+#define SPLASH_OFF_LEN offsets[0]
+#define SPLASH_OFF_UNIT offsets[1]
+#define SPLASH_OFF_SIZE offsets[2]
+#define SPLASH_OFF_STATE offsets[3]
+#define SPLASH_OFF_FGCOL offsets[4]
+#define SPLASH_OFF_COL offsets[5]
+#define SPLASH_OFF_XO offsets[6]
+#define SPLASH_OFF_YO offsets[7]
+#define SPLASH_OFF_WI offsets[8]
+#define SPLASH_OFF_HE offsets[9]
+#define SPLASH_OFF_BOXCNT offsets[10]
+#define SPLASH_OFF_SSIZE offsets[11]
+#define SPLASH_OFF_SBOXCNT offsets[12]
+#define SPLASH_OFF_PERCENT offsets[13]
+#define SPLASH_OFF_OVEROK offsets[14]
+#define SPLASH_OFF_PALCNT offsets[15]
+
+static inline int splash_getb(unsigned char *pos, int off)
+{
+ return off == -1 ? 0 : pos[off];
+}
+
+static inline int splash_gets(unsigned char *pos, int off)
+{
+ return off == -1 ? 0 : pos[off] | pos[off + 1] << 8;
+}
+
+static inline int splash_geti(unsigned char *pos, int off)
+{
+ return off == -1 ? 0 : (pos[off] |
+ pos[off + 1] << 8 |
+ pos[off + 2] << 16 |
+ pos[off + 3] << 24);
+}
+
+/* move the given splash_data to the current one */
+static void splash_pivot_current(struct vc_data *vc, struct splash_data *new)
+{
+ struct splash_data *sd;
+ struct splash_pic_data *pic;
+ int state, percent, silent;
+
+ sd = vc->vc_splash_data;
+ if (!sd || sd == new)
+ return;
+
+ state = sd->splash_state;
+ percent = sd->splash_percent;
+ silent = sd->splash_dosilent;
+	if (sd->pic->ref_cnt > 1) {
+		/* shared with another vc: drop our reference only */
+		sd->pic->ref_cnt--;
+		pic = kzalloc(sizeof(struct splash_pic_data), GFP_KERNEL);
+		if (!pic)
+			return;
+		sd->pic = pic;
+	} else {
+		vfree(sd->pic->splash_pic);
+	}
+	sd->pic->ref_cnt = 1;
+	sd->pic->splash_pic_size = 0;
+	sd->pic->splash_pic = NULL;
+ sd->splash_vc_text_wi = sd->imgd->splash_text_wi;
+ sd->splash_vc_text_he = sd->imgd->splash_text_he;
+ for (; sd->next; sd = sd->next) {
+ if (sd->next == new) {
+ sd->next = new->next;
+ new->next = vc->vc_splash_data;
+ vc->vc_splash_data = new;
+ /* copy the current states */
+ new->splash_state = state;
+ new->splash_percent = percent;
+ new->splash_dosilent = silent;
+ new->splash_vc_text_wi = new->imgd->splash_text_wi;
+ new->splash_vc_text_he = new->imgd->splash_text_he;
+
+ new->splash_boxes_xoff = 0;
+ new->splash_boxes_yoff = 0;
+ new->splash_sboxes_xoff = 0;
+ new->splash_sboxes_yoff = 0;
+
+			if (new->pic->ref_cnt > 1) {
+				struct splash_pic_data *pic;
+
+				/* shared: drop our reference only */
+				new->pic->ref_cnt--;
+				pic = kzalloc(sizeof(struct splash_pic_data),
+					      GFP_KERNEL);
+				if (!pic)
+					return;
+
+				new->pic = pic;
+			} else {
+				vfree(new->pic->splash_pic);
+			}
+			new->pic->ref_cnt = 1;
+			new->pic->splash_pic_size = 0;
+			new->pic->splash_pic = NULL;
+
+ return;
+ }
+ }
+}
+
+static int update_boxes(struct vc_data *vc,
+ const int *offsets,
+ unsigned char *ndata, int len, unsigned char * end,
+ int *update)
+{
+ int boxcnt;
+ int sboxcnt;
+ struct splash_data *sd;
+ struct splash_img_data *imgd;
+ int i;
+
+ sd = vc->vc_splash_data;
+	if (sd) {
+ int up = 0;
+ imgd = sd->imgd;
+ i = splash_getb(ndata, SPLASH_OFF_STATE);
+ if (i != 255) {
+ sd->splash_state = i; /*@!@*/
+ up = -1;
+ }
+ i = splash_getb(ndata, SPLASH_OFF_FGCOL);
+ if (i != 255) {
+ imgd->splash_fg_color = i;
+ up = -1;
+ }
+ i = splash_getb(ndata, SPLASH_OFF_COL);
+ if (i != 255) {
+ imgd->splash_color = i;
+ up = -1;
+ }
+ boxcnt = sboxcnt = 0;
+ if (ndata + len <= end) {
+ boxcnt = splash_gets(ndata, SPLASH_OFF_BOXCNT);
+ sboxcnt = splash_gets(ndata, SPLASH_OFF_SBOXCNT);
+ }
+ if (boxcnt) {
+ i = splash_gets(ndata, len);
+ if (boxcnt + i
+ <= imgd->splash_boxcount &&
+ ndata + len + 2 + boxcnt * 12
+ <= end) {
+ if (splash_geti(ndata, len + 2)
+ != 0x7ffd7fff ||
+ !memcmp(ndata + len + 2,
+ imgd->splash_boxes + i * 12,
+ 8)) {
+ memcpy(imgd->splash_boxes + i * 12,
+ ndata + len + 2,
+ boxcnt * 12);
+ up |= 1;
+ }
+ }
+ len += boxcnt * 12 + 2;
+ }
+ if (sboxcnt) {
+ i = splash_gets(ndata, len);
+ if ((sboxcnt + i <= imgd->splash_sboxcount) &&
+ (ndata + len + 2 + sboxcnt * 12 <= end)) {
+ if ((splash_geti(ndata, len + 2) != 0x7ffd7fff)
+ || !memcmp(ndata + len + 2,
+ imgd->splash_sboxes + i * 12,
+ 8)) {
+ memcpy(imgd->splash_sboxes + i * 12,
+ ndata + len + 2,
+ sboxcnt * 12);
+ up |= 2;
+ }
+ }
+ }
+ if (update)
+ *update = up;
+ }
+ return 0;
+}
+
+static int splash_getraw(unsigned char *start, unsigned char *end, int *update)
+{
+ unsigned char *ndata;
+ int version;
+ int splash_size;
+ int unit;
+ int width, height;
+ int silentsize;
+ int boxcnt;
+ int sboxcnt;
+ int palcnt;
+ int len;
+ const int *offsets;
+ struct vc_data *vc = NULL;
+ struct fb_info *info = NULL;
+ struct splash_data *sd;
+ struct splash_img_data *imgd;
+ struct splash_pic_data *pic;
+ struct splash_data *splash_found = NULL;
+ int unit_found = -1;
+ int oldpercent, oldsilent;
+
+ if (update)
+ *update = -1;
+
+ if (!update ||
+ start[7] < '2' ||
+ start[7] > '3' ||
+ splash_geti(start, 12) != (int)0xffffffff)
+ printk(KERN_INFO "bootsplash %s: looking for picture...\n",
+ SPLASH_VERSION);
+
+ oldpercent = -3;
+ oldsilent = -1;
+ for (ndata = start; ndata < end; ndata++) {
+ if (ndata[0] != 'B' ||
+ ndata[1] != 'O' ||
+ ndata[2] != 'O' ||
+ ndata[3] != 'T')
+ continue;
+ if (ndata[4] != 'S' ||
+ ndata[5] != 'P' ||
+ ndata[6] != 'L' ||
+ ndata[7] < '1' ||
+ ndata[7] > '3')
+ continue;
+
+ version = ndata[7] - '0';
+ offsets = splash_offsets[version - 1];
+ len = SPLASH_OFF_LEN;
+
+ unit = splash_getb(ndata, SPLASH_OFF_UNIT);
+ if (unit >= MAX_NR_CONSOLES)
+ continue;
+
+ if (unit)
+ vc_allocate(unit);
+
+ vc = vc_cons[unit].d;
+ if (!vc)
+ continue;
+
+ info = registered_fb[(int)con2fb_map[unit]];
+
+ splash_size = splash_geti(ndata, SPLASH_OFF_SIZE);
+
+ /*
+ * Update. Wonder what should happen here now
+ * since we can have multiple splash_data records
+ */
+ if (splash_size == (int)0xffffffff && version > 1) {
+ if (update_boxes(vc, offsets, ndata, len, end, update) < 0)
+ return -1;
+
+ return unit;
+ }
+
+ if (splash_size == 0) {
+ printk(KERN_INFO
+ "bootsplash: ...found, freeing memory.\n");
+ if (vc->vc_splash_data)
+ splash_free(vc, info);
+ return unit;
+ }
+ boxcnt = splash_gets(ndata, SPLASH_OFF_BOXCNT);
+ palcnt = 3 * splash_getb(ndata, SPLASH_OFF_PALCNT);
+ if (ndata + len + splash_size > end) {
+ printk(KERN_ERR
+ "bootsplash: ...found, but truncated!\n");
+ return -1;
+ }
+ silentsize = splash_geti(ndata, SPLASH_OFF_SSIZE);
+ if (silentsize)
+ printk(KERN_INFO
+ "bootsplash: silentjpeg size %d bytes\n",
+ silentsize);
+		if (silentsize >= splash_size) {
+			printk(KERN_ERR "bootsplash: "
+			       "silentjpeg bigger than splash size!\n");
+			return -1;
+		}
+ splash_size -= silentsize;
+ if (!splash_usesilent)
+ silentsize = 0;
+
+ sboxcnt = splash_gets(ndata, SPLASH_OFF_SBOXCNT);
+ if (vc->vc_splash_data) {
+ oldpercent = vc->vc_splash_data->splash_percent;/*@!@*/
+ oldsilent = vc->vc_splash_data->splash_dosilent;/*@!@*/
+ }
+ sd = kzalloc(sizeof(*sd), GFP_KERNEL);
+ if (!sd)
+ break;
+ imgd = vmalloc(sizeof(*imgd)
+ + splash_size + (version < 3 ? 2 * 12 : 0));
+ if (!imgd) {
+			kfree(sd);
+ break;
+ }
+ pic = kzalloc(sizeof(*pic), GFP_KERNEL);
+ if (!pic) {
+		if (!pic) {
+			kfree(sd);
+			vfree(imgd);
+ break;
+ }
+ memset(imgd, 0, sizeof(*imgd));
+ sd->imgd = imgd;
+ sd->pic = pic;
+ imgd->ref_cnt = 1;
+ pic->ref_cnt = 1;
+ jpeg_get_size(ndata + len + boxcnt * 12 + palcnt,
+ &imgd->splash_width, &imgd->splash_height);
+ if (splash_check_jpeg(ndata + len + boxcnt * 12 + palcnt,
+ imgd->splash_width,
+ imgd->splash_height)) {
+ ndata += len + splash_size - 1;
+			vfree(imgd);
+			kfree(pic);
+			kfree(sd);
+ continue;
+ }
+ if (silentsize) {
+ imgd->splash_silentjpeg = vmalloc(silentsize);
+ if (imgd->splash_silentjpeg) {
+ memcpy(imgd->splash_silentjpeg,
+ ndata + len + splash_size, silentsize);
+ imgd->splash_sboxes = imgd->splash_silentjpeg;
+ imgd->splash_silentjpeg += 12 * sboxcnt;
+ imgd->splash_sboxcount = sboxcnt;
+ }
+ }
+ imgd->splash_fg_color = splash_getb(ndata, SPLASH_OFF_FGCOL);
+ imgd->splash_color = splash_getb(ndata, SPLASH_OFF_COL);
+ imgd->splash_overpaintok = splash_getb(ndata, SPLASH_OFF_OVEROK);
+ imgd->splash_text_xo = splash_gets(ndata, SPLASH_OFF_XO);
+ imgd->splash_text_yo = splash_gets(ndata, SPLASH_OFF_YO);
+ imgd->splash_text_wi = splash_gets(ndata, SPLASH_OFF_WI);
+ imgd->splash_text_he = splash_gets(ndata, SPLASH_OFF_HE);
+ if (version == 1) {
+ imgd->splash_text_xo *= 8;
+ imgd->splash_text_wi *= 8;
+ imgd->splash_text_yo *= 16;
+ imgd->splash_text_he *= 16;
+ imgd->splash_color = (splash_default >> 8) & 0x0f;
+ imgd->splash_fg_color = (splash_default >> 4) & 0x0f;
+ }
+
+ /* fake penguin box for older formats */
+ if (version == 1)
+ boxcnt = splash_mkpenguin(sd, imgd->splash_text_xo + 10,
+ imgd->splash_text_yo + 10,
+ imgd->splash_text_wi - 20,
+ imgd->splash_text_he - 20,
+ 0xf0, 0xf0, 0xf0);
+ else if (version == 2)
+ boxcnt = splash_mkpenguin(sd,
+ splash_gets(ndata, 24),
+ splash_gets(ndata, 26),
+ splash_gets(ndata, 28),
+ splash_gets(ndata, 30),
+ splash_getb(ndata, 32),
+ splash_getb(ndata, 33),
+ splash_getb(ndata, 34));
+
+ memcpy((char *)imgd
+ + sizeof(*imgd) + (version < 3 ? boxcnt * 12 : 0),
+ ndata + len,
+ splash_size);
+ imgd->splash_boxcount = boxcnt;
+ imgd->splash_boxes = (unsigned char *)imgd + sizeof(*imgd);
+ imgd->splash_palette = imgd->splash_boxes + boxcnt * 12;
+ imgd->splash_jpeg = imgd->splash_palette + palcnt;
+
+ sd->splash_state = splash_getb(ndata, SPLASH_OFF_STATE);/*@!@*/
+ sd->splash_percent = oldpercent == -3 ?
+ splash_gets(ndata, SPLASH_OFF_PERCENT) :
+ oldpercent; /*@!@*/
+ sd->pic->splash_pic = NULL;
+ sd->pic->splash_pic_size = 0;
+
+ sd->splash_dosilent = imgd->splash_silentjpeg != 0 ?
+ (oldsilent == -1 ? 1 : oldsilent) :
+ 0; /* @!@ */
+
+ sd->splash_vc_text_wi = imgd->splash_text_wi;
+ sd->splash_vc_text_he = imgd->splash_text_he;
+
+ sd->next = vc->vc_splash_data;
+ vc->vc_splash_data = sd;
+
+ if (info) {
+ width = info->var.xres;
+ height = info->var.yres;
+ if (imgd->splash_width != width ||
+ imgd->splash_height != height) {
+ ndata += len + splash_size - 1;
+ continue;
+ }
+ }
+ printk(KERN_INFO
+ "bootsplash: ...found (%dx%d, %d bytes, v%d).\n",
+ imgd->splash_width, imgd->splash_height,
+ splash_size, version);
+ if (version == 1) {
+			printk(KERN_WARNING
+			       "bootsplash: Using deprecated v1 header. "
+			       "Updating your splash utility is recommended.\n");
+ printk(KERN_INFO
+ "bootsplash: Find the latest version at "
+ "http://www.bootsplash.org/\n");
+ }
+
+ splash_found = sd;
+ unit_found = unit;
+ }
+
+ if (splash_found) {
+ splash_pivot_current(vc, splash_found);
+ return unit_found;
+ } else {
+ vc = vc_cons[0].d;
+ if (vc) {
+ info = registered_fb[(int)con2fb_map[0]];
+ if (info) {
+ width = info->var.xres;
+ height = info->var.yres;
+ } else
+ width = height = 0;
+ if (!splash_look_for_jpeg(vc, width, height))
+ return -1;
+ return 0;
+ }
+ }
+
+ printk(KERN_ERR "bootsplash: ...no good signature found.\n");
+ return -1;
+}
+
+static void splash_update_redraw(struct vc_data *vc, struct fb_info *info)
+{
+ update_region(vc,
+ vc->vc_origin + vc->vc_size_row * vc->vc_top,
+ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
+ splash_clear_margins(vc, info, 0);
+}
+
+int splash_do_verbose(void)
+{
+ struct vc_data *vc;
+ struct fb_info *info;
+ int ret = 0;
+
+ SPLASH_DEBUG();
+ if (!oops_in_progress)
+ console_lock();
+
+ if (!splash_usesilent)
+ goto done;
+
+ vc = vc_cons[0].d;
+
+ if (!vc || !vc->vc_splash_data || !vc->vc_splash_data->splash_state)
+ goto done;
+ if (!vc->vc_splash_data->imgd->splash_silentjpeg)
+ goto done;
+
+ if (!vc->vc_splash_data->splash_dosilent)
+ goto done;
+ vc->vc_splash_data->splash_dosilent = 0;
+ if (fg_console != vc->vc_num)
+ goto done;
+
+ info = registered_fb[(int)con2fb_map[0]];
+
+ if (!info || !info->splash_data)
+ goto done;
+
+ splash_update_redraw(vc, info);
+ ret = 0;
+
+ done:
+ if (!oops_in_progress)
+ console_unlock();
+
+ return ret;
+}
+
+static void splash_verbose_callback(struct work_struct *ignored)
+{
+ splash_do_verbose();
+}
+
+static DECLARE_WORK(splash_work, splash_verbose_callback);
+
+int splash_verbose(void)
+{
+ if (!oops_in_progress)
+ schedule_work(&splash_work);
+ else
+ return splash_do_verbose();
+ return 0;
+}
+
+static void splash_off(struct vc_data *vc, struct fb_info *info)
+{
+	int cols = info->var.xres / vc->vc_font.width;
+	int rows = info->var.yres / vc->vc_font.height;
+
+	SPLASH_DEBUG();
+
+	info->splash_data = NULL;
+	if (rows != vc->vc_rows || cols != vc->vc_cols)
+		vc_resize(vc, cols, rows);
+}
+
+/* look for the splash with the matching size and set it as the current */
+static int splash_look_for_jpeg(struct vc_data *vc, int width, int height)
+{
+ struct splash_data *sd, *found = NULL;
+ int found_delta_x = INT_MAX, found_delta_y = INT_MAX;
+
+ for (sd = vc->vc_splash_data; sd; sd = sd->next) {
+ int delta_x = abs(sd->imgd->splash_width - width) * height;
+ int delta_y = abs(sd->imgd->splash_height - height) * width;
+ if (!found ||
+ (found_delta_x + found_delta_y > delta_x + delta_y)) {
+ found = sd;
+ found_delta_x = delta_x;
+ found_delta_y = delta_y;
+ }
+ }
+
+ if (found) {
+ SPLASH_DEBUG("bootsplash: "
+ "scalable image found (%dx%d scaled to %dx%d).",
+ found->imgd->splash_width,
+ found->imgd->splash_height,
+ width, height);
+
+ splash_pivot_current(vc, found);
+
+ /* textarea margins are constant independent from image size */
+ if (found->imgd->splash_height != height)
+ found->splash_vc_text_he = height
+ - (found->imgd->splash_height
+ - found->imgd->splash_text_he);
+ else
+ found->splash_vc_text_he = found->imgd->splash_text_he;
+ if (found->imgd->splash_width != width)
+ found->splash_vc_text_wi =
+ width
+ - (found->imgd->splash_width
+ - found->imgd->splash_text_wi);
+ else
+ found->splash_vc_text_wi = found->imgd->splash_text_wi;
+
+ if (found->imgd->splash_width != width
+ || found->imgd->splash_height != height) {
+ box_offsets(found->imgd->splash_boxes,
+ found->imgd->splash_boxcount,
+ width, height,
+ found->imgd->splash_width,
+ found->imgd->splash_height,
+ &found->splash_boxes_xoff,
+ &found->splash_boxes_yoff);
+ SPLASH_DEBUG("bootsplash: offsets for boxes: x=%d y=%d",
+ found->splash_boxes_xoff,
+ found->splash_boxes_yoff);
+
+ if (found->imgd->splash_sboxes) {
+ box_offsets(found->imgd->splash_sboxes,
+ found->imgd->splash_sboxcount,
+ width, height,
+ found->imgd->splash_width,
+ found->imgd->splash_height,
+ &found->splash_sboxes_xoff,
+ &found->splash_sboxes_yoff);
+ SPLASH_DEBUG("bootsplash: "
+ "offsets sboxes: x=%d y=%d",
+ found->splash_sboxes_xoff,
+ found->splash_sboxes_yoff);
+ }
+ } else {
+ found->splash_sboxes_xoff = 0;
+ found->splash_sboxes_yoff = 0;
+ }
+ return 0;
+ }
+ return -1;
+}
+
+static int splash_recolor(struct vc_data *vc, struct fb_info *info)
+{
+ int color;
+
+ SPLASH_DEBUG();
+ if (!vc->vc_splash_data)
+ return -1;
+ if (!vc->vc_splash_data->splash_state)
+ return 0;
+ color = vc->vc_splash_data->imgd->splash_color << 4 |
+ vc->vc_splash_data->imgd->splash_fg_color;
+ if (vc->vc_def_color != color)
+ con_remap_def_color(vc, color);
+ if (info && info->splash_data && fg_console == vc->vc_num)
+ splash_update_redraw(vc, info);
+ vc->vc_splash_data->color_set = 1;
+ return 0;
+}
+
+int splash_prepare(struct vc_data *vc, struct fb_info *info)
+{
+ int err;
+ int width, height, octpp, size, sbytes;
+ enum splash_color_format cf = SPLASH_DEPTH_UNKNOWN;
+ int pic_update = 0;
+ struct jpeg_decdata *decdata; /* private decoder data */
+
+ SPLASH_DEBUG("vc_num: %i", vc->vc_num);
+
+#if 0 /* Nouveau fb sets a different ops, so we can't use the condition */
+ if (info->fbops->fb_imageblit != cfb_imageblit) {
+ printk(KERN_ERR "bootsplash: "
+ "found, but framebuffer can't "
+ "handle it!\n");
+ return -1;
+ }
+#endif
+
+ if (!vc->vc_splash_data || !vc->vc_splash_data->splash_state) {
+ splash_off(vc, info);
+ return -1;
+ }
+
+ width = info->var.xres;
+ height = info->var.yres;
+ switch (info->var.bits_per_pixel) {
+ case 16:
+ if ((info->var.red.length +
+ info->var.green.length +
+ info->var.blue.length) == 15)
+ cf = SPLASH_DEPTH_15;
+ else
+ cf = SPLASH_DEPTH_16;
+ break;
+ case 24:
+ cf = SPLASH_DEPTH_24_PACKED;
+ break;
+ case 32:
+ cf = SPLASH_DEPTH_24;
+ break;
+ }
+ if (cf == SPLASH_DEPTH_UNKNOWN) {
+ printk(KERN_INFO "bootsplash: unsupported pixel format: %i\n",
+ info->var.bits_per_pixel);
+ splash_off(vc, info);
+ return -2;
+ }
+ octpp = splash_octpp(cf);
+
+ if (splash_look_for_jpeg(vc, width, height) < 0) {
+ printk(KERN_INFO "bootsplash: no matching splash %dx%d\n",
+ width, height);
+ splash_off(vc, info);
+ return -2;
+ }
+
+ sbytes = ((width + 15) & ~15) * octpp;
+ size = sbytes * ((height + 15) & ~15);
+
+	if (size != vc->vc_splash_data->pic->splash_pic_size) {
+		if (vc->vc_splash_data->pic->ref_cnt > 1) {
+			struct splash_pic_data *pic;
+
+			/* shared: drop our reference only */
+			vc->vc_splash_data->pic->ref_cnt--;
+			pic = kzalloc(sizeof(struct splash_pic_data),
+				      GFP_KERNEL);
+			if (!pic)
+				return -2;
+			vc->vc_splash_data->pic = pic;
+		} else {
+			vfree(vc->vc_splash_data->pic->splash_pic);
+		}
+		vc->vc_splash_data->pic->ref_cnt = 1;
+		vc->vc_splash_data->pic->splash_pic = NULL;
+		vc->vc_splash_data->pic->splash_pic_size = 0;
+	}
+ if (!vc->vc_splash_data->pic->splash_pic) {
+ vc->vc_splash_data->pic->splash_pic = vmalloc(size);
+ pic_update = 1;
+ }
+ if (!vc->vc_splash_data->pic->splash_pic) {
+ printk(KERN_INFO "bootsplash: not enough memory.\n");
+ splash_off(vc, info);
+ return -3;
+ }
+
+ decdata = vmalloc(sizeof(*decdata));
+ if (!decdata) {
+ printk(KERN_INFO "bootsplash: not enough memory.\n");
+ splash_off(vc, info);
+ return -3;
+ }
+
+ if (vc->vc_splash_data->imgd->splash_silentjpeg &&
+ vc->vc_splash_data->splash_dosilent) {
+ pic_update = 1;
+ err = jpeg_get(vc->vc_splash_data->imgd->splash_silentjpeg,
+ vc->vc_splash_data->pic->splash_pic,
+ width, height, cf, decdata);
+ if (err) {
+ printk(KERN_INFO "bootsplash: "
+ "error while decompressing silent picture: "
+ "%s (%d)\n",
+ jpg_errors[err - 1], err);
+ vc->vc_splash_data->splash_dosilent = 0;
+ } else {
+ if (vc->vc_splash_data->imgd->splash_sboxcount)
+ boxit(vc->vc_splash_data->pic->splash_pic,
+ sbytes,
+ vc->vc_splash_data->imgd->splash_sboxes,
+ vc->vc_splash_data->imgd->splash_sboxcount,
+ vc->vc_splash_data->splash_percent,
+ vc->vc_splash_data->splash_sboxes_xoff,
+ vc->vc_splash_data->splash_sboxes_yoff,
+ vc->vc_splash_data->splash_percent < 0 ?
+ 1 : 0,
+ cf);
+ splashcopy(info->screen_base,
+ vc->vc_splash_data->pic->splash_pic,
+ info->var.yres,
+ info->var.xres,
+ info->fix.line_length, sbytes,
+ octpp);
+ }
+ } else
+ vc->vc_splash_data->splash_dosilent = 0;
+
+ if (pic_update) {
+ err = jpeg_get(vc->vc_splash_data->imgd->splash_jpeg,
+ vc->vc_splash_data->pic->splash_pic,
+ width, height, cf, decdata);
+ if (err) {
+ printk(KERN_INFO "bootsplash: "
+ "error while decompressing picture: %s (%d) .\n",
+ jpg_errors[err - 1], err);
+ splash_off(vc, info);
+ return -4;
+ }
+ }
+
+ vfree(decdata);
+
+ vc->vc_splash_data->pic->splash_pic_size = size;
+ vc->vc_splash_data->pic->splash_pic_stride = sbytes;
+
+ if (vc->vc_splash_data->imgd->splash_boxcount)
+ boxit(vc->vc_splash_data->pic->splash_pic,
+ sbytes,
+ vc->vc_splash_data->imgd->splash_boxes,
+ vc->vc_splash_data->imgd->splash_boxcount,
+ vc->vc_splash_data->splash_percent,
+ vc->vc_splash_data->splash_boxes_xoff,
+ vc->vc_splash_data->splash_boxes_yoff,
+ 0,
+ cf);
+ if (vc->vc_splash_data->splash_state) {
+ int cols = vc->vc_splash_data->splash_vc_text_wi
+ / vc->vc_font.width;
+ int rows = vc->vc_splash_data->splash_vc_text_he
+ / vc->vc_font.height;
+
+ info->splash_data = vc->vc_splash_data;
+
+		info->splash_data->need_sync = 0;
+		/*
+		 * The XEN fb needs some sync after the direct modification of
+		 * the fb area; maybe other FBs would need a similar hack, but
+		 * so far I don't care.
+		 */
+ if (!strcmp(info->fix.id, "xen")) {
+ info->splash_data->need_sync = 1;
+ /* sync the whole splash once */
+ splash_sync_region(info, 0, 0,
+ info->var.xres, info->var.yres);
+ }
+
+ /* vc_resize also calls con_switch which resets yscroll */
+ if (rows != vc->vc_rows || cols != vc->vc_cols)
+ vc_resize(vc, cols, rows);
+ if (!vc->vc_splash_data->color_set)
+ splash_recolor(vc, NULL);
+ } else {
+ SPLASH_DEBUG("Splash Status is off\n");
+ splash_off(vc, info);
+ return -5;
+ }
+ return 0;
+}
+
+
+#ifdef CONFIG_PROC_FS
+
+#include <linux/proc_fs.h>
+
+static int splash_read_proc(char *buffer, char **start, off_t offset, int size,
+ int *eof, void *data);
+static int splash_write_proc(struct file *file, const char *buffer,
+ unsigned long count, void *data);
+static int splash_status(struct vc_data *vc);
+static int splash_proc_register(void);
+
+static struct proc_dir_entry *proc_splash;
+
+static int splash_status(struct vc_data *vc)
+{
+ struct fb_info *info;
+
+ printk(KERN_INFO "bootsplash: status on console %d changed to %s\n",
+ vc->vc_num,
+ vc->vc_splash_data &&
+ vc->vc_splash_data->splash_state ? "on" : "off");
+
+ info = registered_fb[(int) con2fb_map[vc->vc_num]];
+ if (!info)
+ return 0;
+
+ if (fg_console == vc->vc_num)
+ splash_prepare(vc, info);
+ if (vc->vc_splash_data && vc->vc_splash_data->splash_state)
+ splash_recolor(vc, info);
+ else {
+ splash_off(vc, info);
+ if (vc->vc_def_color != 0x07)
+ con_remap_def_color(vc, 0x07);
+ }
+
+ return 0;
+}
+
+int splash_copy_current_img(int unit_s, int unit_t)
+{
+ struct fb_info *info;
+ struct vc_data *vc_s;
+ struct vc_data *vc_t;
+ struct splash_data *sd_s;
+ struct splash_data *sd_t;
+ int size;
+
+ if (unit_s >= MAX_NR_CONSOLES || unit_t >= MAX_NR_CONSOLES)
+ return -1;
+
+ vc_s = vc_cons[unit_s].d;
+ if (!vc_s) {
+ printk(KERN_WARNING "bootsplash: "
+ "copy: source (%i) is invalid.\n", unit_s);
+ return -1;
+ }
+ sd_s = vc_s->vc_splash_data;
+ if (!sd_s || !sd_s->imgd) {
+ printk(KERN_INFO "bootsplash: "
+ "copy: source_vc (%i) doesn't have valid splash data.\n",
+ unit_s);
+ return -1;
+ }
+ vc_allocate(unit_t);
+ vc_t = vc_cons[unit_t].d;
+ if (!vc_t) {
+ printk(KERN_WARNING "bootsplash: copy: dest (%i) is invalid.\n",
+ unit_t);
+ return -1;
+ }
+ sd_t = kzalloc(sizeof(*sd_t), GFP_KERNEL);
+ if (!sd_t)
+ return -1;
+ vc_t->vc_splash_data = sd_t;
+
+ sd_t->imgd = sd_s->imgd;
+ sd_t->imgd->ref_cnt++;
+
+ /* now recreate all the rest */
+ sd_t->splash_state = sd_s->splash_state;
+ sd_t->splash_percent = sd_s->splash_percent;
+ sd_t->splash_dosilent = sd_s->splash_dosilent;
+ sd_t->splash_vc_text_wi = sd_s->imgd->splash_text_wi;
+ sd_t->splash_vc_text_he = sd_s->imgd->splash_text_he;
+
+ sd_t->splash_boxes_xoff = 0;
+ sd_t->splash_boxes_yoff = 0;
+ sd_t->splash_sboxes_xoff = 0;
+ sd_t->splash_sboxes_yoff = 0;
+
+ info = registered_fb[(int) con2fb_map[vc_t->vc_num]];
+ size = (((info->var.xres + 15) & ~15)
+ * ((info->var.bits_per_pixel + 1) >> 3))
+ * ((info->var.yres + 15) & ~15);
+ if (size != vc_s->vc_splash_data->pic->splash_pic_size) {
+		sd_t->pic = kzalloc(sizeof(struct splash_pic_data), GFP_KERNEL);
+		if (!sd_t->pic) {
+			/* unwind the partially set up splash data */
+			vc_t->vc_splash_data = NULL;
+			sd_t->imgd->ref_cnt--;
+			kfree(sd_t);
+			return -1;
+		}
+		sd_t->pic->ref_cnt = 1;
+ } else {
+ sd_t->pic = sd_s->pic;
+ sd_t->pic->ref_cnt++;
+ }
+
+ splash_status(vc_t);
+
+ return 0;
+}
+
+static int splash_read_proc(char *buffer, char **start, off_t offset, int size,
+ int *eof, void *data)
+{
+ int len;
+ int xres, yres;
+ struct vc_data *vc = vc_cons[0].d;
+ struct fb_info *info = registered_fb[(int)con2fb_map[0]];
+ int color = vc->vc_splash_data ?
+ vc->vc_splash_data->imgd->splash_color << 4 |
+ vc->vc_splash_data->imgd->splash_fg_color : splash_default >> 4;
+ int status = vc->vc_splash_data ?
+ vc->vc_splash_data->splash_state & 1 : 0;
+
+ if (info) {
+ xres = info->var.xres;
+ yres = info->var.yres;
+ } else
+ xres = yres = 0;
+
+ len = sprintf(buffer, "Splash screen v%s (0x%02x, %dx%d%s): %s\n",
+ SPLASH_VERSION, color, xres, yres,
+ (vc->vc_splash_data ?
+ vc->vc_splash_data->splash_dosilent : 0) ? ", silent" :
+ "",
+ status ? "on" : "off");
+ if (offset >= len)
+ return 0;
+
+ *start = buffer - offset;
+
+ return (size < len - offset ? size : len - offset);
+}
+
+void splash_set_percent(struct vc_data *vc, int pe)
+{
+ struct fb_info *info;
+ struct fbcon_ops *ops;
+ struct splash_data *vc_splash_data;
+ int oldpe;
+
+ SPLASH_DEBUG(" console: %d val: %d\n", vc->vc_num, pe);
+
+ if (pe < -2)
+ pe = 0;
+ if (pe > 65535)
+ pe = 65535;
+ pe += pe > 32767;
+
+ vc_splash_data = vc->vc_splash_data;
+ if (!vc_splash_data || vc_splash_data->splash_percent == pe)
+ return;
+
+ oldpe = vc_splash_data->splash_percent;
+ vc_splash_data->splash_percent = pe;
+ if (fg_console != vc->vc_num ||
+ !vc_splash_data->splash_state) {
+ return;
+ }
+ info = registered_fb[(int) con2fb_map[vc->vc_num]];
+ if (!info)
+ return;
+
+ ops = info->fbcon_par;
+ if (ops->blank_state)
+ return;
+ if (!vc_splash_data->imgd->splash_overpaintok
+ || pe == 65536
+ || pe < oldpe) {
+ if (splash_hasinter(vc_splash_data->imgd->splash_boxes,
+ vc_splash_data->imgd->splash_boxcount)) {
+ splash_status(vc);
+ } else
+ splash_prepare(vc, info);
+ } else {
+ struct splash_data *splash_data = info->splash_data;
+ enum splash_color_format cf = SPLASH_DEPTH_UNKNOWN;
+ switch (info->var.bits_per_pixel) {
+ case 16:
+ if ((info->var.red.length +
+ info->var.green.length +
+ info->var.blue.length) == 15)
+ cf = SPLASH_DEPTH_15;
+ else
+ cf = SPLASH_DEPTH_16;
+ break;
+ case 24:
+ cf = SPLASH_DEPTH_24_PACKED;
+ break;
+ case 32:
+ cf = SPLASH_DEPTH_24;
+ break;
+ }
+ if (cf == SPLASH_DEPTH_UNKNOWN)
+ return;
+ if (splash_data) {
+ if (splash_data->imgd->splash_silentjpeg
+ && splash_data->splash_dosilent) {
+ boxit(info->screen_base,
+ info->fix.line_length,
+ splash_data->imgd->splash_sboxes,
+ splash_data->imgd->splash_sboxcount,
+ splash_data->splash_percent,
+ splash_data->splash_sboxes_xoff,
+ splash_data->splash_sboxes_yoff,
+ 1,
+ cf);
+ /* FIXME: get a proper width/height */
+ splash_sync_region(info,
+ splash_data->splash_sboxes_xoff,
+ splash_data->splash_sboxes_yoff,
+ info->var.xres -
+ splash_data->splash_sboxes_xoff,
+ 8);
+ }
+ }
+ }
+}
+
+static const char *get_unit(const char *buffer, int *unit)
+{
+ *unit = -1;
+ if (buffer[0] >= '0' && buffer[0] <= '9') {
+ *unit = buffer[0] - '0';
+ buffer++;
+ if (buffer[0] >= '0' && buffer[0] <= '9') {
+ *unit = *unit * 10 + buffer[0] - '0';
+ buffer++;
+ }
+ if (*buffer == ' ')
+ buffer++;
+ }
+ return buffer;
+}
+
+static int splash_write_proc(struct file *file, const char *buffer,
+ unsigned long count, void *data)
+{
+ int new, unit;
+ unsigned long uval;
+ struct vc_data *vc;
+ struct splash_data *vc_splash_data;
+
+ SPLASH_DEBUG();
+
+ if (!buffer || !splash_default)
+ return count;
+
+ console_lock();
+ unit = 0;
+ if (buffer[0] == '@') {
+ buffer++;
+ buffer = get_unit(buffer, &unit);
+ if (unit < 0 || unit >= MAX_NR_CONSOLES || !vc_cons[unit].d) {
+ console_unlock();
+ return count;
+ }
+ }
+ SPLASH_DEBUG(" unit: %i", unit);
+ vc = vc_cons[unit].d;
+ vc_splash_data = vc->vc_splash_data;
+
+ if (!strncmp(buffer, "redraw", 6)) {
+ SPLASH_DEBUG(" redraw");
+ splash_status(vc);
+ console_unlock();
+ return count;
+ }
+
+ if (!strncmp(buffer, "show", 4) || !strncmp(buffer, "hide", 4)) {
+ long int pe;
+
+ SPLASH_DEBUG("show/hide");
+ if (buffer[4] == ' ' && buffer[5] == 'p')
+ pe = 0;
+ else if (buffer[4] == '\n')
+ pe = 65535;
+ else if (strict_strtol(buffer + 5, 0, &pe) == -EINVAL)
+ pe = 0;
+ if (pe < -2)
+ pe = 0;
+ if (pe > 65535)
+ pe = 65535;
+ if (*buffer == 'h')
+ pe = 65535 - pe;
+ splash_set_percent(vc, pe);
+ console_unlock();
+ return count;
+ }
+
+ if (!strncmp(buffer, "copy", 4)) {
+ buffer += 4;
+ if (buffer[0] == ' ')
+ buffer++;
+ buffer = get_unit(buffer, &unit);
+ if (unit < 0 || unit >= MAX_NR_CONSOLES) {
+ console_unlock();
+ return count;
+ }
+ buffer = get_unit(buffer, &new);
+ if (new < 0 || new >= MAX_NR_CONSOLES) {
+ console_unlock();
+ return count;
+ }
+ splash_copy_current_img(unit, new);
+ console_unlock();
+ return count;
+ }
+
+ if (!strncmp(buffer, "silent\n", 7)
+ || !strncmp(buffer, "verbose\n", 8)) {
+ SPLASH_DEBUG(" silent/verbose");
+
+ if (vc_splash_data &&
+ vc_splash_data->imgd->splash_silentjpeg) {
+ if (vc_splash_data->splash_dosilent !=
+ (buffer[0] == 's')) {
+ vc_splash_data->splash_dosilent =
+ buffer[0] == 's';
+ splash_status(vc);
+ }
+ }
+ console_unlock();
+ return count;
+ }
+
+ if (!strncmp(buffer, "freesilent\n", 11)) {
+ SPLASH_DEBUG(" freesilent");
+
+ if (vc_splash_data &&
+ vc_splash_data->imgd->splash_silentjpeg) {
+ struct splash_data *sd;
+ printk(KERN_INFO "bootsplash: freeing silent jpeg\n");
+ for (sd = vc_splash_data; sd; sd = sd->next) {
+ sd->imgd->splash_silentjpeg = 0;
+ vfree(sd->imgd->splash_sboxes);
+				sd->imgd->splash_sboxes = NULL;
+ sd->imgd->splash_sboxcount = 0;
+ }
+ if (vc_splash_data->splash_dosilent)
+ splash_status(vc);
+
+ vc->vc_splash_data->splash_dosilent = 0;
+ }
+ console_unlock();
+ return count;
+ }
+
+ if (!strncmp(buffer, "BOOTSPL", 7)) {
+ int up = -1;
+
+ SPLASH_DEBUG(" BOOTSPL");
+ unit = splash_getraw((unsigned char *)buffer,
+ (unsigned char *)buffer + count,
+ &up);
+ SPLASH_DEBUG(" unit: %i up: %i", unit, up);
+ if (unit >= 0) {
+ struct fb_info *info;
+
+ vc = vc_cons[unit].d;
+ info = registered_fb[(int) con2fb_map[vc->vc_num]];
+ if (!info) {
+ console_unlock();
+ return count;
+ }
+
+ if (up == -1) {
+ splash_status(vc);
+ } else {
+ struct splash_data *vc_splash_data
+ = vc->vc_splash_data;
+ struct splash_data *splash_data
+ = info->splash_data;
+ struct fbcon_ops *ops = info->fbcon_par;
+ enum splash_color_format cf = SPLASH_DEPTH_UNKNOWN;
+
+ switch (info->var.bits_per_pixel) {
+ case 16:
+ if ((info->var.red.length +
+ info->var.green.length +
+ info->var.blue.length) == 15)
+ cf = SPLASH_DEPTH_15;
+ else
+ cf = SPLASH_DEPTH_16;
+ break;
+ case 24:
+ cf = SPLASH_DEPTH_24_PACKED;
+ break;
+ case 32:
+ cf = SPLASH_DEPTH_24;
+ break;
+ }
+ if (cf == SPLASH_DEPTH_UNKNOWN)
+ up = 0;
+ if (ops->blank_state ||
+ !vc_splash_data ||
+ !splash_data)
+ up = 0;
+ if ((up & 2) != 0
+ && splash_data->imgd->splash_silentjpeg
+ && splash_data->splash_dosilent) {
+ boxit(info->screen_base,
+ info->fix.line_length,
+ splash_data->imgd->splash_sboxes,
+ splash_data->imgd->splash_sboxcount,
+ splash_data->splash_percent,
+ splash_data->splash_sboxes_xoff,
+ splash_data->splash_sboxes_yoff,
+ 1,
+ cf);
+ } else if ((up & 1) != 0) {
+ boxit(info->screen_base,
+ info->fix.line_length,
+ splash_data->imgd->splash_boxes,
+ splash_data->imgd->splash_boxcount,
+ splash_data->splash_percent,
+ splash_data->splash_boxes_xoff,
+ splash_data->splash_boxes_yoff,
+ 1,
+ cf);
+ }
+ }
+ }
+ console_unlock();
+ return count;
+ }
+
+ if (!vc_splash_data) {
+ console_unlock();
+ return count;
+ }
+
+ if (buffer[0] == 't') {
+ vc_splash_data->splash_state ^= 1;
+ SPLASH_DEBUG(" t");
+ splash_status(vc);
+ console_unlock();
+ return count;
+ }
+ if (strict_strtoul(buffer, 0, &uval) == -EINVAL)
+ uval = 1;
+ if (uval > 1) {
+ /* expert user */
+ vc_splash_data->imgd->splash_color = uval >> 8 & 0xff;
+ vc_splash_data->imgd->splash_fg_color = uval >> 4 & 0x0f;
+ }
+ if ((uval & 1) == vc_splash_data->splash_state)
+ splash_recolor(vc, NULL);
+ else {
+ vc_splash_data->splash_state = uval & 1;
+ splash_status(vc);
+ }
+ console_unlock();
+ return count;
+}
+
+static int splash_proc_register(void)
+{
+ proc_splash = create_proc_entry("splash", 0, 0);
+ if (proc_splash) {
+ proc_splash->read_proc = splash_read_proc;
+ proc_splash->write_proc = splash_write_proc;
+ return 0;
+ }
+ return 1;
+}
+
+#endif /* CONFIG_PROC_FS */
+
+#define INIT_CONSOLE 0
+
+void splash_init(void)
+{
+ static bool splash_not_initialized = true;
+ struct fb_info *info;
+ struct vc_data *vc;
+ int isramfs = 1;
+ int fd;
+ int len;
+ int max_len = 1024*1024*2;
+ char *mem;
+
+	if (!splash_not_initialized)
+ return;
+ vc = vc_cons[INIT_CONSOLE].d;
+ info = registered_fb[(int)con2fb_map[INIT_CONSOLE]];
+ if (!vc
+ || !info
+ || info->var.bits_per_pixel < 16) /* not supported */
+ return;
+#ifdef CONFIG_PROC_FS
+ splash_proc_register();
+#endif
+ splash_not_initialized = false;
+ if (vc->vc_splash_data)
+ return;
+ fd = sys_open("/bootsplash", O_RDONLY, 0);
+ if (fd < 0) {
+ isramfs = 0;
+ fd = sys_open("/initrd.image", O_RDONLY, 0);
+ }
+ if (fd < 0)
+ return;
+ len = (int)sys_lseek(fd, (off_t)0, 2);
+ if (len <= 0) {
+ sys_close(fd);
+ return;
+ }
+ /* Don't look for more than the last 2MB */
+ if (len > max_len) {
+ printk(KERN_INFO "bootsplash: "
+ "scanning last %dMB of initrd for signature\n",
+ max_len>>20);
+ sys_lseek(fd, (off_t)(len - max_len), 0);
+ len = max_len;
+ } else {
+ sys_lseek(fd, (off_t)0, 0);
+ }
+
+ mem = vmalloc(len);
+ if (mem) {
+ console_lock();
+ if ((int)sys_read(fd, mem, len) == len
+ && (splash_getraw((unsigned char *)mem,
+ (unsigned char *)mem + len, (int *)0)
+ == INIT_CONSOLE)
+ && vc->vc_splash_data)
+ vc->vc_splash_data->splash_state = splash_default & 1;
+ console_unlock();
+ vfree(mem);
+ }
+ sys_close(fd);
+ if (isramfs)
+ sys_unlink("/bootsplash");
+ return;
+}
+
+#define SPLASH_ALIGN 15
+
+static u32 *do_coefficients(u32 from, u32 to, u32 *shift)
+{
+ u32 *coefficients;
+ u32 left = to;
+ int n = 1;
+ u32 upper = 31;
+ int col_cnt = 0;
+ int row_cnt = 0;
+ int m;
+ u32 rnd = from >> 1;
+
+ if (from > to) {
+ left = to;
+ rnd = from >> 1;
+
+ while (upper > 0) {
+ if ((1 << upper) & from)
+ break;
+ upper--;
+ }
+ upper++;
+
+ *shift = 32 - 8 - 1 - upper;
+
+		coefficients = vmalloc(sizeof(u32) * ((from / to + 2) * from + 1));
+ if (!coefficients)
+ return NULL;
+
+ n = 1;
+ while (1) {
+ u32 sum = left;
+ col_cnt = 0;
+ m = n++;
+ while (sum < from) {
+ coefficients[n++] =
+ ((left << *shift) + rnd) / from;
+ col_cnt++;
+ left = to;
+ sum += left;
+ }
+ left = sum - from;
+ coefficients[n++] =
+ (((to - left) << *shift) + rnd) / from;
+ col_cnt++;
+ coefficients[m] = col_cnt;
+ row_cnt++;
+ if (!left) {
+ coefficients[0] = row_cnt;
+ return coefficients;
+ }
+ }
+ } else {
+ left = 0;
+ rnd = to >> 1;
+
+ while (upper > 0) {
+ if ((1 << upper) & to)
+ break;
+ upper--;
+ }
+ upper++;
+
+ *shift = 32 - 8 - 1 - upper;
+
+		coefficients = vmalloc(sizeof(u32) * (3 * from + 1));
+ if (!coefficients)
+ return NULL;
+
+ while (1) {
+ u32 diff;
+ u32 sum = left;
+ col_cnt = 0;
+ row_cnt++;
+ while (sum < to) {
+ col_cnt++;
+ sum += from;
+ }
+ left = sum - to;
+ diff = from - left;
+ if (!left) {
+ coefficients[n] = col_cnt;
+ coefficients[0] = row_cnt;
+ return coefficients;
+ }
+ coefficients[n++] = col_cnt - 1;
+ coefficients[n++] = ((diff << *shift) + rnd) / from;
+ coefficients[n++] = ((left << *shift) + rnd) / from;
+ }
+ }
+}
+
+
+struct pixel {
+ u32 red;
+ u32 green;
+ u32 blue;
+};
+
+#define put_pixel(pix, buf, cf) \
+ switch (cf) { \
+ case SPLASH_DEPTH_15: \
+ *(u16 *)(buf) = (u16)((pix).red << 10 | \
+ (pix).green << 5 | (pix).blue); \
+ (buf) += 2; \
+ break; \
+ case SPLASH_DEPTH_16: \
+ *(u16 *)(buf) = (u16)((pix).red << 11 | \
+ (pix).green << 5 | (pix).blue); \
+ (buf) += 2; \
+ break; \
+ case SPLASH_DEPTH_24_PACKED: \
+ *(u16 *)(buf) = (u16)((pix).red << 8 | (pix).green); \
+ buf += 2; \
+ *((buf)++) = (pix).blue; \
+ break; \
+ case SPLASH_DEPTH_24: \
+ *(u32 *)(buf) = (u32)((pix).red << 16 | \
+ (pix).green << 8 | (pix).blue); \
+ (buf) += 4; \
+ break; \
+ case SPLASH_DEPTH_UNKNOWN: \
+ break; \
+ }
+
+#define get_pixel(pix, buf, depth) \
+ switch (depth) { \
+ case SPLASH_DEPTH_15: \
+ (pix).red = ((*(u16 *)(buf)) >> 10) & 0x1f; \
+ (pix).green = ((*(u16 *)(buf)) >> 5) & 0x1f; \
+ (pix).blue = (*(u16 *)(buf)) & 0x1f; \
+ (buf) += 2; \
+ break; \
+ case SPLASH_DEPTH_16: \
+ (pix).red = ((*(u16 *)(buf)) >> 11) & 0x1f; \
+ (pix).green = ((*(u16 *)(buf)) >> 5) & 0x3f; \
+ (pix).blue = (*(u16 *)(buf)) & 0x1f; \
+ (buf) += 2; \
+ break; \
+ case SPLASH_DEPTH_24_PACKED: \
+ (pix).blue = *(((buf))++); \
+ (pix).green = *(((buf))++); \
+ (pix).red = *(((buf))++); \
+ break; \
+ case SPLASH_DEPTH_24: \
+ (pix).blue = *(((buf))++); \
+ (pix).green = *(((buf))++); \
+ (pix).red = *(((buf))++); \
+ (buf)++; \
+ break; \
+ case SPLASH_DEPTH_UNKNOWN: \
+ break; \
+ }
+
+static inline void
+scale_x_down(enum splash_color_format cf, int src_w,
+ unsigned char **src_p, u32 *x_coeff,
+ u32 x_shift, u32 y_coeff, struct pixel *row_buffer)
+{
+ u32 curr_x_coeff = 1;
+ struct pixel curr_pixel, tmp_pixel;
+ u32 x_array_size = x_coeff[0];
+ int x_column_num;
+ int i;
+ int l, m;
+ int k = 0;
+ u32 rnd = (1 << (x_shift - 1));
+
+ for (i = 0; i < src_w; ) {
+ curr_x_coeff = 1;
+ get_pixel(tmp_pixel, *src_p, cf);
+ i++;
+ for (l = 0; l < x_array_size; l++) {
+ x_column_num = x_coeff[curr_x_coeff++];
+ curr_pixel.red = 0;
+ curr_pixel.green = 0;
+ curr_pixel.blue = 0;
+ for (m = 0; m < x_column_num - 1; m++) {
+ curr_pixel.red += tmp_pixel.red
+ * x_coeff[curr_x_coeff];
+ curr_pixel.green += tmp_pixel.green
+ * x_coeff[curr_x_coeff];
+ curr_pixel.blue += tmp_pixel.blue
+ * x_coeff[curr_x_coeff];
+ curr_x_coeff++;
+ get_pixel(tmp_pixel, *src_p, cf);
+ i++;
+ }
+ curr_pixel.red += tmp_pixel.red * x_coeff[curr_x_coeff];
+ curr_pixel.green += tmp_pixel.green
+ * x_coeff[curr_x_coeff];
+ curr_pixel.blue += tmp_pixel.blue
+ * x_coeff[curr_x_coeff];
+ curr_x_coeff++;
+ curr_pixel.red = (curr_pixel.red + rnd) >> x_shift;
+ curr_pixel.green = (curr_pixel.green + rnd) >> x_shift;
+ curr_pixel.blue = (curr_pixel.blue + rnd) >> x_shift;
+ row_buffer[k].red += curr_pixel.red * y_coeff;
+ row_buffer[k].green += curr_pixel.green * y_coeff;
+ row_buffer[k].blue += curr_pixel.blue * y_coeff;
+ k++;
+ }
+ }
+}
+
+static inline void
+scale_x_up(enum splash_color_format cf, int src_w,
+ unsigned char **src_p, u32 *x_coeff,
+ u32 x_shift, u32 y_coeff, struct pixel *row_buffer)
+{
+ u32 curr_x_coeff = 1;
+ struct pixel curr_pixel, tmp_pixel;
+ u32 x_array_size = x_coeff[0];
+ int x_column_num;
+ int i;
+ int l, m;
+ int k = 0;
+ u32 rnd = (1 << (x_shift - 1));
+
+ for (i = 0; i < src_w;) {
+ curr_x_coeff = 1;
+ get_pixel(tmp_pixel, *src_p, cf);
+ i++;
+ for (l = 0; l < x_array_size - 1; l++) {
+ x_column_num = x_coeff[curr_x_coeff++];
+ for (m = 0; m < x_column_num; m++) {
+ row_buffer[k].red += tmp_pixel.red * y_coeff;
+ row_buffer[k].green += tmp_pixel.green * y_coeff;
+ row_buffer[k].blue += tmp_pixel.blue * y_coeff;
+ k++;
+ }
+ curr_pixel.red = tmp_pixel.red * x_coeff[curr_x_coeff];
+ curr_pixel.green = tmp_pixel.green
+ * x_coeff[curr_x_coeff];
+ curr_pixel.blue = tmp_pixel.blue * x_coeff[curr_x_coeff];
+ curr_x_coeff++;
+ get_pixel(tmp_pixel, *src_p, cf);
+ i++;
+ row_buffer[k].red += ((curr_pixel.red
+ + (tmp_pixel.red
+ * x_coeff[curr_x_coeff])
+ + rnd) >> x_shift) * y_coeff;
+ row_buffer[k].green += ((curr_pixel.green
+ + (tmp_pixel.green
+ * x_coeff[curr_x_coeff])
+ + rnd) >> x_shift) * y_coeff;
+ row_buffer[k].blue += ((curr_pixel.blue
+ + (tmp_pixel.blue
+ * x_coeff[curr_x_coeff])
+ + rnd) >> x_shift) * y_coeff;
+ k++;
+ curr_x_coeff++;
+ }
+ for (m = 0; m < x_coeff[curr_x_coeff]; m++) {
+ row_buffer[k].red += tmp_pixel.red * y_coeff;
+ row_buffer[k].green += tmp_pixel.green * y_coeff;
+ row_buffer[k].blue += tmp_pixel.blue * y_coeff;
+ k++;
+ }
+ }
+}
+
+static int scale_y_down(unsigned char *src, unsigned char *dst,
+ enum splash_color_format cf,
+ int src_w, int src_h, int dst_w, int dst_h)
+{
+ int octpp = splash_octpp(cf);
+ int src_x_bytes = octpp * ((src_w + SPLASH_ALIGN) & ~SPLASH_ALIGN);
+ int dst_x_bytes = octpp * ((dst_w + SPLASH_ALIGN) & ~SPLASH_ALIGN);
+ int j;
+ struct pixel *row_buffer;
+ u32 x_shift, y_shift;
+ u32 *x_coeff;
+ u32 *y_coeff;
+ u32 curr_y_coeff = 1;
+ unsigned char *src_p;
+ unsigned char *src_p_line = src;
+	unsigned char *dst_p_line;
+ int r, s;
+ int y_array_rows;
+ int y_column_num;
+ int k;
+ u32 rnd;
+ int xup;
+
+ row_buffer = vmalloc(sizeof(struct pixel)
+ * (dst_w + 1));
+ x_coeff = do_coefficients(src_w, dst_w, &x_shift);
+ y_coeff = do_coefficients(src_h, dst_h, &y_shift);
+ if (!row_buffer || !x_coeff || !y_coeff) {
+ vfree(row_buffer);
+ vfree(x_coeff);
+ vfree(y_coeff);
+ return -ENOMEM;
+ }
+ y_array_rows = y_coeff[0];
+ rnd = (1 << (y_shift - 1));
+ xup = (src_w <= dst_w) ? 1 : 0;
+
+ dst_p_line = dst;
+
+ for (j = 0; j < src_h;) {
+ curr_y_coeff = 1;
+ for (r = 0; r < y_array_rows; r++) {
+ y_column_num = y_coeff[curr_y_coeff++];
+ for (k = 0; k < dst_w + 1; k++) {
+ row_buffer[k].red = 0;
+ row_buffer[k].green = 0;
+ row_buffer[k].blue = 0;
+ }
+ src_p = src_p_line;
+ if (xup)
+ scale_x_up(cf, src_w, &src_p, x_coeff,
+ x_shift, y_coeff[curr_y_coeff],
+ row_buffer);
+ else
+ scale_x_down(cf, src_w, &src_p, x_coeff,
+ x_shift, y_coeff[curr_y_coeff],
+ row_buffer);
+ curr_y_coeff++;
+ for (s = 1; s < y_column_num; s++) {
+ src_p = src_p_line = src_p_line + src_x_bytes;
+ j++;
+ if (xup)
+ scale_x_up(cf, src_w, &src_p,
+ x_coeff, x_shift,
+ y_coeff[curr_y_coeff],
+ row_buffer);
+ else
+ scale_x_down(cf, src_w, &src_p,
+ x_coeff, x_shift,
+ y_coeff[curr_y_coeff],
+ row_buffer);
+ curr_y_coeff++;
+ }
+ for (k = 0; k < dst_w; k++) {
+ row_buffer[k].red = (row_buffer[k].red + rnd)
+ >> y_shift;
+ row_buffer[k].green = (row_buffer[k].green
+ + rnd)
+ >> y_shift;
+ row_buffer[k].blue = (row_buffer[k].blue + rnd)
+ >> y_shift;
+ put_pixel(row_buffer[k], dst, cf);
+ }
+ dst = dst_p_line = dst_p_line + dst_x_bytes;
+ }
+ src_p_line = src_p_line + src_x_bytes;
+ j++;
+ }
+ vfree(row_buffer);
+ vfree(x_coeff);
+ vfree(y_coeff);
+ return 0;
+}
+
+static int scale_y_up(unsigned char *src, unsigned char *dst,
+ enum splash_color_format cf,
+ int src_w, int src_h, int dst_w, int dst_h)
+{
+ int octpp = splash_octpp(cf);
+ int src_x_bytes = octpp * ((src_w + SPLASH_ALIGN) & ~SPLASH_ALIGN);
+ int dst_x_bytes = octpp * ((dst_w + SPLASH_ALIGN) & ~SPLASH_ALIGN);
+ int j;
+ u32 x_shift, y_shift;
+ u32 *x_coeff;
+ u32 *y_coeff;
+ struct pixel *row_buf_list[2];
+ struct pixel *row_buffer;
+ u32 curr_y_coeff = 1;
+ unsigned char *src_p;
+ unsigned char *src_p_line = src;
+	unsigned char *dst_p_line;
+ int r, s;
+ int y_array_rows;
+ int y_column_num;
+ int k;
+ u32 rnd;
+ int bi;
+ int xup;
+ int writes;
+
+ x_coeff = do_coefficients(src_w, dst_w, &x_shift);
+ y_coeff = do_coefficients(src_h, dst_h, &y_shift);
+ row_buf_list[0] = vmalloc(2 * sizeof(struct pixel)
+ * (dst_w + 1));
+ if (!row_buf_list[0] || !x_coeff || !y_coeff) {
+ vfree(row_buf_list[0]);
+ vfree(x_coeff);
+ vfree(y_coeff);
+ return -ENOMEM;
+ }
+ row_buf_list[1] = row_buf_list[0] + (dst_w + 1);
+
+ y_array_rows = y_coeff[0];
+ rnd = (1 << (y_shift - 1));
+ bi = 1;
+ xup = (src_w <= dst_w) ? 1 : 0;
+ writes = 0;
+
+ dst_p_line = dst;
+ src_p = src_p_line;
+
+ row_buffer = row_buf_list[0];
+
+ for (j = 0; j < src_h;) {
+ memset(row_buf_list[0], 0, (2 * sizeof(struct pixel)
+ * (dst_w + 1)));
+ curr_y_coeff = 1;
+ if (xup)
+ scale_x_up(cf, src_w, &src_p, x_coeff,
+ x_shift, 1, row_buffer);
+ else
+ scale_x_down(cf, src_w, &src_p, x_coeff, x_shift, 1,
+ row_buffer);
+ src_p = src_p_line = src_p_line + src_x_bytes;
+ j++;
+ for (r = 0; r < y_array_rows - 1; r++) {
+ struct pixel *old_row_buffer = row_buffer;
+ u32 prev_y_coeff_val;
+
+ y_column_num = y_coeff[curr_y_coeff];
+ for (s = 0; s < y_column_num; s++) {
+ for (k = 0; k < dst_w; k++)
+ put_pixel(row_buffer[k], dst, cf);
+ dst = dst_p_line = dst_p_line + dst_x_bytes;
+ writes++;
+ }
+ curr_y_coeff++;
+ row_buffer = row_buf_list[(bi++) % 2];
+ prev_y_coeff_val = y_coeff[curr_y_coeff++];
+ if (xup)
+ scale_x_up(cf, src_w, &src_p, x_coeff,
+ x_shift, 1, row_buffer);
+ else
+ scale_x_down(cf, src_w, &src_p, x_coeff,
+ x_shift, 1, row_buffer);
+ src_p = src_p_line = src_p_line + src_x_bytes;
+ j++;
+ for (k = 0; k < dst_w; k++) {
+ struct pixel pix;
+ pix.red = ((old_row_buffer[k].red
+ * prev_y_coeff_val)
+ + (row_buffer[k].red
+ * y_coeff[curr_y_coeff])
+ + rnd) >> y_shift;
+ pix.green = ((old_row_buffer[k].green
+ * prev_y_coeff_val)
+ + (row_buffer[k].green
+ * y_coeff[curr_y_coeff])
+ + rnd) >> y_shift;
+ pix.blue = ((old_row_buffer[k].blue
+ * prev_y_coeff_val)
+ + (row_buffer[k].blue
+ * y_coeff[curr_y_coeff])
+ + rnd) >> y_shift;
+ old_row_buffer[k].red = 0;
+ old_row_buffer[k].green = 0;
+ old_row_buffer[k].blue = 0;
+ put_pixel(pix, dst, cf);
+ }
+ dst = dst_p_line = dst_p_line + dst_x_bytes;
+ writes++;
+ curr_y_coeff++;
+ }
+ for (r = 0; r < y_coeff[curr_y_coeff]; r++) {
+ for (k = 0; k < dst_w; k++)
+ put_pixel(row_buffer[k], dst, cf);
+
+ dst = dst_p_line = dst_p_line + dst_x_bytes;
+ writes++;
+ }
+ }
+ vfree(row_buf_list[0]);
+ vfree(x_coeff);
+ vfree(y_coeff);
+
+ return 0;
+}
+
+static int jpeg_get(unsigned char *buf, unsigned char *pic,
+ int width, int height, enum splash_color_format cf,
+ struct jpeg_decdata *decdata)
+{
+ int my_width, my_height;
+ int err;
+ int octpp = splash_octpp(cf);
+
+ jpeg_get_size(buf, &my_width, &my_height);
+
+ if (my_height != height || my_width != width) {
+ int my_size = ((my_width + 15) & ~15)
+ * ((my_height + 15) & ~15) * octpp;
+ unsigned char *mem = vmalloc(my_size);
+ if (!mem)
+ return 17;
+ err = jpeg_decode(buf, mem, ((my_width + 15) & ~15),
+ ((my_height + 15) & ~15), cf, decdata);
+ if (err) {
+ vfree(mem);
+ return err;
+ }
+ printk(KERN_INFO
+ "bootsplash: scaling image from %dx%d to %dx%d\n",
+ my_width, my_height, width, height);
+ if (my_height <= height)
+ err = scale_y_up(mem, pic, cf, my_width, my_height,
+ ((width + 15) & ~15),
+ ((height + 15) & ~15));
+ else
+ err = scale_y_down(mem, pic, cf, my_width, my_height,
+ ((width + 15) & ~15),
+ ((height + 15) & ~15));
+ vfree(mem);
+ if (err < 0)
+ return 17;
+ } else {
+ err = jpeg_decode(buf, pic, ((width + 15) & ~15),
+ ((height + 15) & ~15), cf, decdata);
+ if (err)
+ return err;
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * linux/drivers/video/bootsplash/render.c - splash screen render functions.
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/fb.h>
+#include <linux/vt_kern.h>
+#include <linux/selection.h>
+#include <asm/irq.h>
+
+#include "../console/fbcon.h"
+#include <linux/bootsplash.h>
+
+#ifndef DEBUG
+# define SPLASH_DEBUG(fmt, args...)
+#else
+# define SPLASH_DEBUG(fmt, args...) \
+ printk(KERN_WARNING "%s: " fmt "\n", __func__, ##args)
+#endif
+
+/* fake a region sync */
+void splash_sync_region(struct fb_info *info, int x, int y,
+ int width, int height)
+{
+ struct splash_data *sd = info->splash_data;
+ if (sd && sd->need_sync) {
+ /* issue a fake copyarea (copy to the very same position)
+ * for marking the dirty region; this is required for Xen fb
+ * (bnc#739020)
+ */
+ struct fb_copyarea area;
+ area.sx = area.dx = x;
+ area.sy = area.dy = y;
+ area.width = width;
+ area.height = height;
+ info->fbops->fb_copyarea(info, &area);
+ }
+}
+
+void splash_putcs(struct vc_data *vc, struct fb_info *info,
+ const unsigned short *s, int count, int ypos, int xpos)
+{
+ struct splash_data *sd;
+ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
+ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ int fgshift = (vc->vc_hi_font_mask) ? 9 : 8;
+ union pt src;
+ union pt dst, splashsrc;
+ unsigned int d, x, y;
+ u32 dd, fgx, bgx;
+ u16 c = scr_readw(s);
+ int fg_color, bg_color, transparent;
+ int n;
+ int octpp = (info->var.bits_per_pixel + 1) >> 3;
+ int drawn_width;
+
+ if (!oops_in_progress
+ && (console_blanked || info->splash_data->splash_dosilent))
+ return;
+ sd = info->splash_data;
+
+ fg_color = attr_fgcol(fgshift, c);
+ bg_color = attr_bgcol(bgshift, c);
+ transparent = sd->imgd->splash_color == bg_color;
+ xpos = xpos * vc->vc_font.width + sd->imgd->splash_text_xo;
+ ypos = ypos * vc->vc_font.height + sd->imgd->splash_text_yo;
+ splashsrc.ub = (u8 *)(sd->pic->splash_pic
+ + ypos * sd->pic->splash_pic_stride
+ + xpos * octpp);
+ dst.ub = (u8 *)(info->screen_base
+ + ypos * info->fix.line_length
+ + xpos * octpp);
+ fgx = ((u32 *)info->pseudo_palette)[fg_color];
+ if (transparent && sd->imgd->splash_color == 15) {
+ if (fgx == 0xffea)
+ fgx = 0xfe4a;
+ else if (fgx == 0x57ea)
+ fgx = 0x0540;
+ else if (fgx == 0xffff)
+ fgx = 0x52aa;
+ }
+ bgx = ((u32 *)info->pseudo_palette)[bg_color];
+ d = 0;
+ drawn_width = 0;
+ while (count--) {
+ c = scr_readw(s++);
+ src.ub = vc->vc_font.data
+ + ((c & charmask)
+ * vc->vc_font.height
+ * ((vc->vc_font.width + 7) >> 3));
+ for (y = 0; y < vc->vc_font.height; y++) {
+ for (x = 0; x < vc->vc_font.width; ) {
+ if ((x & 7) == 0)
+ d = *src.ub++;
+ switch (octpp) {
+ case 2:
+ if (d & 0x80)
+ dd = fgx;
+ else
+ dd = (transparent ?
+ *splashsrc.us : bgx);
+ splashsrc.us += 1;
+ if (d & 0x40)
+ dd |= fgx << 16;
+ else
+ dd |= (transparent ? *splashsrc.us : bgx) << 16;
+ splashsrc.us += 1;
+ d <<= 2;
+ x += 2;
+ fb_writel(dd, dst.ul);
+ dst.ul += 1;
+ break;
+ case 3:
+ for (n = 0; n <= 16; n += 8) {
+ if (d & 0x80)
+ dd = (fgx >> n) & 0xff;
+ else
+						dd = (transparent ? *splashsrc.ub : ((bgx >> n) & 0xff));
+ splashsrc.ub += 1;
+ fb_writeb(dd, dst.ub);
+ dst.ub += 1;
+ }
+ d <<= 1;
+ x += 1;
+ break;
+ case 4:
+ if (d & 0x80)
+ dd = fgx;
+ else
+ dd = (transparent ? *splashsrc.ul : bgx);
+ splashsrc.ul += 1;
+ d <<= 1;
+ x += 1;
+ fb_writel(dd, dst.ul);
+ dst.ul += 1;
+ break;
+ }
+ }
+ dst.ub += info->fix.line_length
+ - vc->vc_font.width * octpp;
+ splashsrc.ub += sd->pic->splash_pic_stride
+ - vc->vc_font.width * octpp;
+ }
+ dst.ub -= info->fix.line_length * vc->vc_font.height
+ - vc->vc_font.width * octpp;
+ splashsrc.ub -= sd->pic->splash_pic_stride * vc->vc_font.height
+ - vc->vc_font.width * octpp;
+ drawn_width += vc->vc_font.width;
+ }
+ splash_sync_region(info, xpos, ypos, drawn_width, vc->vc_font.height);
+}
+
+static void splash_renderc(struct fb_info *info,
+ int fg_color, int bg_color,
+ u8 *src,
+ int ypos, int xpos,
+ int height, int width)
+{
+ struct splash_data *sd;
+ int transparent;
+ u32 dd, fgx, bgx;
+ union pt dst, splashsrc;
+ unsigned int d, x, y;
+ int n;
+ int octpp = (info->var.bits_per_pixel + 1) >> 3;
+
+ if (!oops_in_progress
+ && (console_blanked || info->splash_data->splash_dosilent))
+ return;
+
+ sd = info->splash_data;
+
+ transparent = sd->imgd->splash_color == bg_color;
+ splashsrc.ub = (u8 *)(sd->pic->splash_pic
+ + ypos * sd->pic->splash_pic_stride
+ + xpos * octpp);
+ dst.ub = (u8 *)(info->screen_base
+ + ypos * info->fix.line_length
+ + xpos * octpp);
+ fgx = ((u32 *)info->pseudo_palette)[fg_color];
+ if (transparent && (sd->imgd->splash_color == 15)) {
+ if (fgx == 0xffea)
+ fgx = 0xfe4a;
+ else if (fgx == 0x57ea)
+ fgx = 0x0540;
+ else if (fgx == 0xffff)
+ fgx = 0x52aa;
+ }
+ bgx = ((u32 *)info->pseudo_palette)[bg_color];
+ d = 0;
+ for (y = 0; y < height; y++) {
+ for (x = 0; x < width; ) {
+ if ((x & 7) == 0)
+ d = *src++;
+ switch (octpp) {
+ case 2:
+ if (d & 0x80)
+ dd = fgx;
+ else
+ dd = (transparent ? *splashsrc.us : bgx);
+ splashsrc.us += 1;
+ if (d & 0x40)
+ dd |= fgx << 16;
+ else
+ dd |= (transparent ? *splashsrc.us : bgx) << 16;
+ splashsrc.us += 1;
+ d <<= 2;
+ x += 2;
+ fb_writel(dd, dst.ul);
+ dst.ul += 1;
+ break;
+ case 3:
+ for (n = 0; n <= 16; n += 8) {
+ if (d & 0x80)
+ dd = (fgx >> n) & 0xff;
+ else
+					dd = (transparent ? *splashsrc.ub : ((bgx >> n) & 0xff));
+ splashsrc.ub += 1;
+ fb_writeb(dd, dst.ub);
+ dst.ub += 1;
+ }
+ d <<= 1;
+ x += 1;
+ break;
+ case 4:
+ if (d & 0x80)
+ dd = fgx;
+ else
+ dd = (transparent ? *splashsrc.ul : bgx);
+ splashsrc.ul += 1;
+ d <<= 1;
+ x += 1;
+ fb_writel(dd, dst.ul);
+ dst.ul += 1;
+ break;
+ }
+ }
+ dst.ub += info->fix.line_length - width * octpp;
+ splashsrc.ub += sd->pic->splash_pic_stride - width * octpp;
+ }
+ splash_sync_region(info, xpos, ypos, width, height);
+}
+
+void splashcopy(u8 *dst, u8 *src, int height, int width,
+ int dstbytes, int srcbytes, int octpp)
+{
+ int i;
+
+ width *= octpp;
+ while (height-- > 0) {
+ union pt p, q;
+ p.ul = (u32 *)dst;
+ q.ul = (u32 *)src;
+ for (i = 0; i < width / 4; i++)
+ fb_writel(*q.ul++, p.ul++);
+ if (width & 2)
+ fb_writew(*q.us++, p.us++);
+ if (width & 1)
+ fb_writeb(*q.ub, p.ub);
+ dst += dstbytes;
+ src += srcbytes;
+ }
+}
+
+static void splashset(u8 *dst, int height, int width,
+		      int dstbytes, u32 bgx, int octpp)
+{
+ int i;
+
+ width *= octpp;
+ if (octpp == 2)
+ bgx |= bgx << 16;
+ while (height-- > 0) {
+ union pt p;
+ p.ul = (u32 *)dst;
+ if (!(octpp & 1)) {
+ for (i = 0; i < width / 4; i++)
+ fb_writel(bgx, p.ul++);
+ if (width & 2)
+ fb_writew(bgx, p.us++);
+ if (width & 1)
+ fb_writeb(bgx, p.ub);
+ dst += dstbytes;
+ } else { /* slow! */
+ for (i = 0; i < width; i++)
+ fb_writeb((bgx >> ((i % 3) * 8)) & 0xff,
+ p.ub++);
+ }
+ }
+}
+
+static void splashfill(struct fb_info *info, int sy, int sx,
+		       int height, int width)
+{
+ int octpp = (info->var.bits_per_pixel + 1) >> 3;
+ struct splash_data *sd = info->splash_data;
+
+ splashcopy((u8 *)(info->screen_base
+ + sy * info->fix.line_length + sx * octpp),
+ (u8 *)(sd->pic->splash_pic
+ + sy * sd->pic->splash_pic_stride
+ + sx * octpp),
+ height, width, info->fix.line_length,
+ sd->pic->splash_pic_stride,
+ octpp);
+ splash_sync_region(info, sx, sy, width, height);
+}
+
+void splash_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ int sx, int height, int width)
+{
+ struct splash_data *sd;
+ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ int bg_color = attr_bgcol_ec(bgshift, vc, info);
+ int transparent;
+ int octpp = (info->var.bits_per_pixel + 1) >> 3;
+ u32 bgx;
+ u8 *dst;
+
+ if (!oops_in_progress
+ && (console_blanked || info->splash_data->splash_dosilent))
+ return;
+
+ sd = info->splash_data;
+
+ transparent = sd->imgd->splash_color == bg_color;
+
+ sy = sy * vc->vc_font.height + sd->imgd->splash_text_yo;
+ sx = sx * vc->vc_font.width + sd->imgd->splash_text_xo;
+ height *= vc->vc_font.height;
+ width *= vc->vc_font.width;
+ if (transparent) {
+ splashfill(info, sy, sx, height, width);
+ return;
+ }
+ dst = (u8 *)(info->screen_base
+ + sy * info->fix.line_length
+ + sx * octpp);
+ bgx = ((u32 *)info->pseudo_palette)[bg_color];
+ splashset(dst,
+ height, width,
+ info->fix.line_length,
+ bgx,
+		  octpp);
+ splash_sync_region(info, sx, sy, width, height);
+}
+
+void splash_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ int sx, int dy, int dx, int height, int width)
+{
+ struct splash_data *sd;
+ struct fb_copyarea area;
+
+ if (!oops_in_progress
+ && (console_blanked || info->splash_data->splash_dosilent))
+ return;
+
+ sd = info->splash_data;
+
+ area.sx = sx * vc->vc_font.width;
+ area.sy = sy * vc->vc_font.height;
+ area.dx = dx * vc->vc_font.width;
+ area.dy = dy * vc->vc_font.height;
+ area.sx += sd->imgd->splash_text_xo;
+ area.sy += sd->imgd->splash_text_yo;
+ area.dx += sd->imgd->splash_text_xo;
+ area.dy += sd->imgd->splash_text_yo;
+ area.height = height * vc->vc_font.height;
+ area.width = width * vc->vc_font.width;
+
+ info->fbops->fb_copyarea(info, &area);
+}
+
+void splash_clear_margins(struct vc_data *vc, struct fb_info *info,
+ int bottom_only)
+{
+ struct splash_data *sd;
+	unsigned int tw = vc->vc_cols * vc->vc_font.width;
+	unsigned int th = vc->vc_rows * vc->vc_font.height;
+ SPLASH_DEBUG();
+
+ if (!oops_in_progress
+ && (console_blanked || info->splash_data->splash_dosilent))
+ return;
+
+ sd = info->splash_data;
+
+ if (!bottom_only) {
+ /* top margin */
+ splashfill(info,
+ 0,
+ 0,
+ sd->imgd->splash_text_yo,
+ info->var.xres);
+ /* left margin */
+ splashfill(info,
+ sd->imgd->splash_text_yo,
+ 0,
+ th,
+ sd->imgd->splash_text_xo);
+ /* right margin */
+ splashfill(info,
+ sd->imgd->splash_text_yo,
+ sd->imgd->splash_text_xo + tw,
+ th,
+ info->var.xres - sd->imgd->splash_text_xo - tw);
+ }
+ splashfill(info,
+ sd->imgd->splash_text_yo + th,
+ 0,
+ info->var.yres - sd->imgd->splash_text_yo - th,
+ info->var.xres);
+}
+
+int splash_cursor(struct fb_info *info, struct fb_cursor *cursor)
+{
+ struct splash_data *sd;
+ int i;
+ unsigned int dsize, s_pitch;
+
+ if (info->state != FBINFO_STATE_RUNNING)
+ return 0;
+
+ sd = info->splash_data;
+
+ s_pitch = (cursor->image.width + 7) >> 3;
+ dsize = s_pitch * cursor->image.height;
+ if (cursor->enable) {
+ switch (cursor->rop) {
+ case ROP_XOR:
+ for (i = 0; i < dsize; i++)
+ info->fb_cursordata[i] = cursor->image.data[i]
+ ^ cursor->mask[i];
+ break;
+ case ROP_COPY:
+ default:
+ for (i = 0; i < dsize; i++)
+ info->fb_cursordata[i] = cursor->image.data[i]
+ & cursor->mask[i];
+ break;
+ }
+ } else if (info->fb_cursordata != cursor->image.data)
+ memcpy(info->fb_cursordata, cursor->image.data, dsize);
+ cursor->image.data = info->fb_cursordata;
+ splash_renderc(info, cursor->image.fg_color, cursor->image.bg_color,
+ (u8 *)info->fb_cursordata,
+ cursor->image.dy + sd->imgd->splash_text_yo,
+ cursor->image.dx + sd->imgd->splash_text_xo,
+ cursor->image.height,
+ cursor->image.width);
+ return 0;
+}
+
+void splash_bmove_redraw(struct vc_data *vc, struct fb_info *info,
+ int y, int sx, int dx, int width)
+{
+ struct splash_data *sd;
+ unsigned short *d = (unsigned short *) (vc->vc_origin
+ + vc->vc_size_row * y
+ + dx * 2);
+ unsigned short *s = d + (dx - sx);
+ unsigned short *start = d;
+ unsigned short *ls = d;
+ unsigned short *le = d + width;
+ unsigned short c;
+ int x = dx;
+ unsigned short attr = 1;
+
+ if (console_blanked || info->splash_data->splash_dosilent)
+ return;
+
+ sd = info->splash_data;
+
+ do {
+ c = scr_readw(d);
+ if (attr != (c & 0xff00)) {
+ attr = c & 0xff00;
+ if (d > start) {
+ splash_putcs(vc, info, start, d - start, y, x);
+ x += d - start;
+ start = d;
+ }
+ }
+ if (s >= ls && s < le && c == scr_readw(s)) {
+ if (d > start) {
+ splash_putcs(vc, info, start, d - start, y, x);
+ x += d - start + 1;
+ start = d + 1;
+ } else {
+ x++;
+ start++;
+ }
+ }
+ s++;
+ d++;
+ } while (d < le);
+ if (d > start)
+ splash_putcs(vc, info, start, d - start, y, x);
+}
+
+void splash_blank(struct vc_data *vc, struct fb_info *info, int blank)
+{
+ SPLASH_DEBUG();
+
+ if (blank) {
+ splashset((u8 *)info->screen_base,
+ info->var.yres, info->var.xres,
+ info->fix.line_length,
+ 0,
+ (info->var.bits_per_pixel + 1) >> 3);
+ splash_sync_region(info, 0, 0, info->var.xres, info->var.yres);
+ } else {
+ /* splash_prepare(vc, info); */ /* do we really need this? */
+ splash_clear_margins(vc, info, 0);
+ /* no longer needed, done in fbcon_blank */
+ /* update_screen(vc->vc_num); */
+ }
+}
#include <linux/crc32.h> /* For counting font checksums */
#include <asm/fb.h>
#include <asm/irq.h>
- #include <asm/system.h>
#include "fbcon.h"
+#include <linux/bootsplash.h>
#ifdef FBCONDEBUG
# define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
static struct display fb_display[MAX_NR_CONSOLES];
+#ifdef CONFIG_BOOTSPLASH
+signed char con2fb_map[MAX_NR_CONSOLES];
+#else
static signed char con2fb_map[MAX_NR_CONSOLES];
+#endif
static signed char con2fb_map_boot[MAX_NR_CONSOLES];
static int logo_lines;
for (i = first_fb_vc; i <= last_fb_vc; i++)
con2fb_map[i] = info_idx;
+ splash_init();
+
err = take_over_console(&fb_con, first_fb_vc, last_fb_vc,
fbcon_is_default);
new_cols /= vc->vc_font.width;
new_rows /= vc->vc_font.height;
+#ifdef CONFIG_BOOTSPLASH
+ if (vc->vc_splash_data && vc->vc_splash_data->splash_state) {
+ new_cols = vc->vc_splash_data->splash_vc_text_wi
+ / vc->vc_font.width;
+ new_rows = vc->vc_splash_data->splash_vc_text_he
+ / vc->vc_font.height;
+ logo = 0;
+ con_remap_def_color(vc,
+ (vc->vc_splash_data->imgd->splash_color
+ << 4) |
+ vc->vc_splash_data->imgd->splash_fg_color);
+ }
+#endif
+
+
/*
* We must always set the mode. The mode of the previous console
* driver could be in the same resolution but we are using different
fbcon_softback_note(vc, t, count);
if (logo_shown >= 0)
goto redraw_up;
+ if (SPLASH_DATA(info))
+ goto redraw_up;
switch (p->scrollmode) {
case SCROLL_MOVE:
fbcon_redraw_blit(vc, info, p, t, b - t - count,
count = vc->vc_rows;
if (logo_shown >= 0)
goto redraw_down;
+ if (SPLASH_DATA(info))
+ goto redraw_down;
switch (p->scrollmode) {
case SCROLL_MOVE:
fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
}
return;
}
+
+ if (SPLASH_DATA(info) && sy == dy && height == 1) {
+ /* must use slower redraw bmove to keep background pic intact */
+ splash_bmove_redraw(vc, info, sy, sx, dx, width);
+ return;
+ }
ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
height, width);
}
info = registered_fb[con2fb_map[vc->vc_num]];
ops = info->fbcon_par;
+#ifdef CONFIG_BOOTSPLASH
+ {
+ struct splash_data *prev_sd = vc->vc_splash_data;
+ splash_prepare(vc, info);
+ if (vc->vc_splash_data && vc->vc_splash_data->splash_state &&
+ vc->vc_splash_data != prev_sd) {
+ vc_resize(vc, vc->vc_splash_data->splash_vc_text_wi
+ / vc->vc_font.width,
+ vc->vc_splash_data->splash_vc_text_he
+ / vc->vc_font.height);
+ con_remap_def_color(vc,
+ vc->vc_splash_data->imgd->splash_color << 4
+ | vc->vc_splash_data->imgd->splash_fg_color);
+ }
+ }
+#endif
+
if (softback_top) {
if (softback_lines)
fbcon_set_origin(vc);
{
struct fb_event event;
+ if (SPLASH_DATA(info)) {
+ splash_blank(vc, info, blank);
+ return;
+ }
+
if (blank) {
unsigned short charmask = vc->vc_hi_font_mask ?
0x1ff : 0xff;
cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ if (SPLASH_DATA(info)) {
+ cols = TEXT_WIDTH_FROM_SPLASH_DATA(info);
+ rows = TEXT_HIGHT_FROM_SPLASH_DATA(info);
+ }
cols /= w;
rows /= h;
vc_resize(vc, cols, rows);
kfree(extent_op);
if (ret) {
- printk("run_delayed_extent_op returned %d\n", ret);
+ printk(KERN_DEBUG "btrfs: run_delayed_extent_op returned %d\n", ret);
return ret;
}
count++;
if (ret) {
- printk("run_one_delayed_ref returned %d\n", ret);
+ printk(KERN_DEBUG "btrfs: run_one_delayed_ref returned %d\n", ret);
return ret;
}
static void set_avail_alloc_bits(struct btrfs_fs_info *fs_info, u64 flags)
{
- u64 extra_flags = flags & BTRFS_BLOCK_GROUP_PROFILE_MASK;
-
- /* chunk -> extended profile */
- if (extra_flags == 0)
- extra_flags = BTRFS_AVAIL_ALLOC_BIT_SINGLE;
+ u64 extra_flags = chunk_to_extended(flags) &
+ BTRFS_EXTENDED_PROFILE_MASK;
if (flags & BTRFS_BLOCK_GROUP_DATA)
fs_info->avail_data_alloc_bits |= extra_flags;
}
/*
+ * returns target flags in extended format or 0 if restripe for this
+ * chunk_type is not in progress
+ */
+ static u64 get_restripe_target(struct btrfs_fs_info *fs_info, u64 flags)
+ {
+ struct btrfs_balance_control *bctl = fs_info->balance_ctl;
+ u64 target = 0;
+
+ BUG_ON(!mutex_is_locked(&fs_info->volume_mutex) &&
+ !spin_is_locked(&fs_info->balance_lock));
+
+ if (!bctl)
+ return 0;
+
+ if (flags & BTRFS_BLOCK_GROUP_DATA &&
+ bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) {
+ target = BTRFS_BLOCK_GROUP_DATA | bctl->data.target;
+ } else if (flags & BTRFS_BLOCK_GROUP_SYSTEM &&
+ bctl->sys.flags & BTRFS_BALANCE_ARGS_CONVERT) {
+ target = BTRFS_BLOCK_GROUP_SYSTEM | bctl->sys.target;
+ } else if (flags & BTRFS_BLOCK_GROUP_METADATA &&
+ bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) {
+ target = BTRFS_BLOCK_GROUP_METADATA | bctl->meta.target;
+ }
+
+ return target;
+ }
+
+ /*
* @flags: available profiles in extended format (see ctree.h)
*
* Returns reduced profile in chunk format. If profile changing is in
*/
u64 num_devices = root->fs_info->fs_devices->rw_devices +
root->fs_info->fs_devices->missing_devices;
+ u64 target;
- /* pick restriper's target profile if it's available */
+ /*
+ * see if restripe for this chunk_type is in progress, if so
+ * try to reduce to the target profile
+ */
spin_lock(&root->fs_info->balance_lock);
- if (root->fs_info->balance_ctl) {
- struct btrfs_balance_control *bctl = root->fs_info->balance_ctl;
- u64 tgt = 0;
-
- if ((flags & BTRFS_BLOCK_GROUP_DATA) &&
- (bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) &&
- (flags & bctl->data.target)) {
- tgt = BTRFS_BLOCK_GROUP_DATA | bctl->data.target;
- } else if ((flags & BTRFS_BLOCK_GROUP_SYSTEM) &&
- (bctl->sys.flags & BTRFS_BALANCE_ARGS_CONVERT) &&
- (flags & bctl->sys.target)) {
- tgt = BTRFS_BLOCK_GROUP_SYSTEM | bctl->sys.target;
- } else if ((flags & BTRFS_BLOCK_GROUP_METADATA) &&
- (bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) &&
- (flags & bctl->meta.target)) {
- tgt = BTRFS_BLOCK_GROUP_METADATA | bctl->meta.target;
- }
-
- if (tgt) {
+ target = get_restripe_target(root->fs_info, flags);
+ if (target) {
+ /* pick target profile only if it's already available */
+ if ((flags & target) & BTRFS_EXTENDED_PROFILE_MASK) {
spin_unlock(&root->fs_info->balance_lock);
- flags = tgt;
- goto out;
+ return extended_to_chunk(target);
}
}
spin_unlock(&root->fs_info->balance_lock);
flags &= ~BTRFS_BLOCK_GROUP_RAID0;
}
- out:
- /* extended -> chunk profile */
- flags &= ~BTRFS_AVAIL_ALLOC_BIT_SINGLE;
- return flags;
+ return extended_to_chunk(flags);
}
static u64 get_alloc_profile(struct btrfs_root *root, u64 flags)
}
data_sinfo->bytes_may_use += bytes;
trace_btrfs_space_reservation(root->fs_info, "space_info",
- (u64)(unsigned long)data_sinfo,
- bytes, 1);
+ data_sinfo->flags, bytes, 1);
spin_unlock(&data_sinfo->lock);
return 0;
spin_lock(&data_sinfo->lock);
data_sinfo->bytes_may_use -= bytes;
trace_btrfs_space_reservation(root->fs_info, "space_info",
- (u64)(unsigned long)data_sinfo,
- bytes, 0);
+ data_sinfo->flags, bytes, 0);
spin_unlock(&data_sinfo->lock);
}
return 1;
}
+ static u64 get_system_chunk_thresh(struct btrfs_root *root, u64 type)
+ {
+ u64 num_dev;
+
+ if (type & BTRFS_BLOCK_GROUP_RAID10 ||
+ type & BTRFS_BLOCK_GROUP_RAID0)
+ num_dev = root->fs_info->fs_devices->rw_devices;
+ else if (type & BTRFS_BLOCK_GROUP_RAID1)
+ num_dev = 2;
+ else
+ num_dev = 1; /* DUP or single */
+
+ /* metadata for updating devices and chunk tree */
+ return btrfs_calc_trans_metadata_size(root, num_dev + 1);
+ }
+
+ static void check_system_chunk(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root, u64 type)
+ {
+ struct btrfs_space_info *info;
+ u64 left;
+ u64 thresh;
+
+ info = __find_space_info(root->fs_info, BTRFS_BLOCK_GROUP_SYSTEM);
+ spin_lock(&info->lock);
+ left = info->total_bytes - info->bytes_used - info->bytes_pinned -
+ info->bytes_reserved - info->bytes_readonly;
+ spin_unlock(&info->lock);
+
+ thresh = get_system_chunk_thresh(root, type);
+ if (left < thresh && btrfs_test_opt(root, ENOSPC_DEBUG)) {
+ printk(KERN_INFO "left=%llu, need=%llu, flags=%llu\n",
+ left, thresh, type);
+ dump_space_info(info, 0, 0);
+ }
+
+ if (left < thresh) {
+ u64 flags;
+
+ flags = btrfs_get_alloc_profile(root->fs_info->chunk_root, 0);
+ btrfs_alloc_chunk(trans, root, flags);
+ }
+ }
+
static int do_chunk_alloc(struct btrfs_trans_handle *trans,
struct btrfs_root *extent_root, u64 alloc_bytes,
u64 flags, int force)
int wait_for_alloc = 0;
int ret = 0;
- BUG_ON(!profile_is_valid(flags, 0));
-
space_info = __find_space_info(extent_root->fs_info, flags);
if (!space_info) {
ret = update_space_info(extent_root->fs_info, flags,
force_metadata_allocation(fs_info);
}
+ /*
+ * Check if we have enough space in SYSTEM chunk because we may need
+ * to update devices.
+ */
+ check_system_chunk(trans, extent_root, flags);
+
ret = btrfs_alloc_chunk(trans, extent_root, flags);
if (ret < 0 && ret != -ENOSPC)
goto out;
!space_info->flush);
/* Must have been interrupted, return */
if (ret) {
- printk("%s returning -EINTR\n", __func__);
+ printk(KERN_DEBUG "btrfs: %s returning -EINTR\n", __func__);
return -EINTR;
}
if (used + orig_bytes <= space_info->total_bytes) {
space_info->bytes_may_use += orig_bytes;
trace_btrfs_space_reservation(root->fs_info,
- "space_info",
- (u64)(unsigned long)space_info,
- orig_bytes, 1);
+ "space_info", space_info->flags, orig_bytes, 1);
ret = 0;
} else {
/*
if (used + num_bytes < space_info->total_bytes + avail) {
space_info->bytes_may_use += orig_bytes;
trace_btrfs_space_reservation(root->fs_info,
- "space_info",
- (u64)(unsigned long)space_info,
- orig_bytes, 1);
+ "space_info", space_info->flags, orig_bytes, 1);
ret = 0;
} else {
wait_ordered = true;
spin_lock(&space_info->lock);
space_info->bytes_may_use -= num_bytes;
trace_btrfs_space_reservation(fs_info, "space_info",
- (u64)(unsigned long)space_info,
- num_bytes, 0);
+ space_info->flags, num_bytes, 0);
space_info->reservation_progress++;
spin_unlock(&space_info->lock);
}
num_bytes += div64_u64(data_used + meta_used, 50);
if (num_bytes * 3 > meta_used)
- num_bytes = div64_u64(meta_used, 3) * 2;
+ num_bytes = div64_u64(meta_used, 3);
return ALIGN(num_bytes, fs_info->extent_root->leafsize << 10);
}
block_rsv->reserved += num_bytes;
sinfo->bytes_may_use += num_bytes;
trace_btrfs_space_reservation(fs_info, "space_info",
- (u64)(unsigned long)sinfo, num_bytes, 1);
+ sinfo->flags, num_bytes, 1);
}
if (block_rsv->reserved >= block_rsv->size) {
num_bytes = block_rsv->reserved - block_rsv->size;
sinfo->bytes_may_use -= num_bytes;
trace_btrfs_space_reservation(fs_info, "space_info",
- (u64)(unsigned long)sinfo, num_bytes, 0);
+ sinfo->flags, num_bytes, 0);
sinfo->reservation_progress++;
block_rsv->reserved = block_rsv->size;
block_rsv->full = 1;
return;
trace_btrfs_space_reservation(root->fs_info, "transaction",
- (u64)(unsigned long)trans,
- trans->bytes_reserved, 0);
+ trans->transid, trans->bytes_reserved, 0);
btrfs_block_rsv_release(root, trans->block_rsv, trans->bytes_reserved);
trans->bytes_reserved = 0;
}
space_info->bytes_reserved += num_bytes;
if (reserve == RESERVE_ALLOC) {
trace_btrfs_space_reservation(cache->fs_info,
- "space_info",
- (u64)(unsigned long)space_info,
- num_bytes, 0);
+ "space_info", space_info->flags,
+ num_bytes, 0);
space_info->bytes_may_use -= num_bytes;
}
}
ret = btrfs_del_csums(trans, root, bytenr, num_bytes);
if (ret)
goto abort;
- } else {
- invalidate_mapping_pages(info->btree_inode->i_mapping,
- bytenr >> PAGE_CACHE_SHIFT,
- (bytenr + num_bytes - 1) >> PAGE_CACHE_SHIFT);
}
ret = update_block_group(trans, root, bytenr, num_bytes, 0);
return 0;
}
- static int get_block_group_index(struct btrfs_block_group_cache *cache)
+ static int __get_block_group_index(u64 flags)
{
int index;
- if (cache->flags & BTRFS_BLOCK_GROUP_RAID10)
+
+ if (flags & BTRFS_BLOCK_GROUP_RAID10)
index = 0;
- else if (cache->flags & BTRFS_BLOCK_GROUP_RAID1)
+ else if (flags & BTRFS_BLOCK_GROUP_RAID1)
index = 1;
- else if (cache->flags & BTRFS_BLOCK_GROUP_DUP)
+ else if (flags & BTRFS_BLOCK_GROUP_DUP)
index = 2;
- else if (cache->flags & BTRFS_BLOCK_GROUP_RAID0)
+ else if (flags & BTRFS_BLOCK_GROUP_RAID0)
index = 3;
else
index = 4;
+
return index;
}
+ static int get_block_group_index(struct btrfs_block_group_cache *cache)
+ {
+ return __get_block_group_index(cache->flags);
+ }
+
enum btrfs_loop_type {
- LOOP_FIND_IDEAL = 0,
- LOOP_CACHING_NOWAIT = 1,
- LOOP_CACHING_WAIT = 2,
- LOOP_ALLOC_CHUNK = 3,
- LOOP_NO_EMPTY_SIZE = 4,
+ LOOP_CACHING_NOWAIT = 0,
+ LOOP_CACHING_WAIT = 1,
+ LOOP_ALLOC_CHUNK = 2,
+ LOOP_NO_EMPTY_SIZE = 3,
};
/*
static noinline int find_free_extent(struct btrfs_trans_handle *trans,
struct btrfs_root *orig_root,
u64 num_bytes, u64 empty_size,
- u64 search_start, u64 search_end,
u64 hint_byte, struct btrfs_key *ins,
u64 data)
{
struct btrfs_free_cluster *last_ptr = NULL;
struct btrfs_block_group_cache *block_group = NULL;
struct btrfs_block_group_cache *used_block_group;
+ u64 search_start = 0;
int empty_cluster = 2 * 1024 * 1024;
int allowed_chunk_alloc = 0;
int done_chunk_alloc = 0;
bool failed_alloc = false;
bool use_cluster = true;
bool have_caching_bg = false;
- u64 ideal_cache_percent = 0;
- u64 ideal_cache_offset = 0;
WARN_ON(num_bytes < root->sectorsize);
btrfs_set_key_type(ins, BTRFS_EXTENT_ITEM_KEY);
empty_cluster = 0;
if (search_start == hint_byte) {
- ideal_cache:
block_group = btrfs_lookup_block_group(root->fs_info,
search_start);
used_block_group = block_group;
* picked out then we don't care that the block group is cached.
*/
if (block_group && block_group_bits(block_group, data) &&
- (block_group->cached != BTRFS_CACHE_NO ||
- search_start == ideal_cache_offset)) {
+ block_group->cached != BTRFS_CACHE_NO) {
down_read(&space_info->groups_sem);
if (list_empty(&block_group->list) ||
block_group->ro) {
have_block_group:
cached = block_group_cache_done(block_group);
if (unlikely(!cached)) {
- u64 free_percent;
-
found_uncached_bg = true;
ret = cache_block_group(block_group, trans,
- orig_root, 1);
- BUG_ON(ret < 0); /* -ENOMEM */
- if (block_group->cached == BTRFS_CACHE_FINISHED)
- goto alloc;
-
- free_percent = btrfs_block_group_used(&block_group->item);
- free_percent *= 100;
- free_percent = div64_u64(free_percent,
- block_group->key.offset);
- free_percent = 100 - free_percent;
- if (free_percent > ideal_cache_percent &&
- likely(!block_group->ro)) {
- ideal_cache_offset = block_group->key.objectid;
- ideal_cache_percent = free_percent;
- }
-
- /*
- * The caching workers are limited to 2 threads, so we
- * can queue as much work as we care to.
- */
- if (loop > LOOP_FIND_IDEAL) {
- ret = cache_block_group(block_group, trans,
- orig_root, 0);
- BUG_ON(ret); /* -ENOMEM */
- }
-
- /*
- * If loop is set for cached only, try the next block
- * group.
- */
- if (loop == LOOP_FIND_IDEAL)
- goto loop;
+ orig_root, 0);
+ BUG_ON(ret < 0);
+ ret = 0;
}
- alloc:
if (unlikely(block_group->ro))
goto loop;
}
checks:
search_start = stripe_align(root, offset);
- /* move on to the next group */
- if (search_start + num_bytes >= search_end) {
- btrfs_add_free_space(used_block_group, offset, num_bytes);
- goto loop;
- }
/* move on to the next group */
if (search_start + num_bytes >
if (!ins->objectid && ++index < BTRFS_NR_RAID_TYPES)
goto search;
- /* LOOP_FIND_IDEAL, only search caching/cached bg's, and don't wait for
- * for them to make caching progress. Also
- * determine the best possible bg to cache
+ /*
* LOOP_CACHING_NOWAIT, search partially cached block groups, kicking
* caching kthreads as we move along
* LOOP_CACHING_WAIT, search everything, and wait if our bg is caching
*/
if (!ins->objectid && loop < LOOP_NO_EMPTY_SIZE) {
index = 0;
- if (loop == LOOP_FIND_IDEAL && found_uncached_bg) {
- found_uncached_bg = false;
- loop++;
- if (!ideal_cache_percent)
- goto search;
-
- /*
- * 1 of the following 2 things have happened so far
- *
- * 1) We found an ideal block group for caching that
- * is mostly full and will cache quickly, so we might
- * as well wait for it.
- *
- * 2) We searched for cached only and we didn't find
- * anything, and we didn't start any caching kthreads
- * either, so chances are we will loop through and
- * start a couple caching kthreads, and then come back
- * around and just wait for them. This will be slower
- * because we will have 2 caching kthreads reading at
- * the same time when we could have just started one
- * and waited for it to get far enough to give us an
- * allocation, so go ahead and go to the wait caching
- * loop.
- */
- loop = LOOP_CACHING_WAIT;
- search_start = ideal_cache_offset;
- ideal_cache_percent = 0;
- goto ideal_cache;
- } else if (loop == LOOP_FIND_IDEAL) {
- /*
- * Didn't find a uncached bg, wait on anything we find
- * next.
- */
- loop = LOOP_CACHING_WAIT;
- goto search;
- }
-
loop++;
-
if (loop == LOOP_ALLOC_CHUNK) {
if (allowed_chunk_alloc) {
ret = do_chunk_alloc(trans, root, num_bytes +
struct btrfs_root *root,
u64 num_bytes, u64 min_alloc_size,
u64 empty_size, u64 hint_byte,
- u64 search_end, struct btrfs_key *ins,
- u64 data)
+ struct btrfs_key *ins, u64 data)
{
bool final_tried = false;
int ret;
- u64 search_start = 0;
data = btrfs_get_alloc_profile(root, data);
again:
WARN_ON(num_bytes < root->sectorsize);
ret = find_free_extent(trans, root, num_bytes, empty_size,
- search_start, search_end, hint_byte,
- ins, data);
+ hint_byte, ins, data);
if (ret == -ENOSPC) {
if (!final_tried) {
btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
btrfs_tree_lock(buf);
clean_tree_block(trans, root, buf);
+ clear_bit(EXTENT_BUFFER_STALE, &buf->bflags);
btrfs_set_lock_blocking(buf);
btrfs_set_buffer_uptodate(buf);
return ERR_CAST(block_rsv);
ret = btrfs_reserve_extent(trans, root, blocksize, blocksize,
- empty_size, hint, (u64)-1, &ins, 0);
+ empty_size, hint, &ins, 0);
if (ret) {
unuse_block_rsv(root->fs_info, block_rsv, blocksize);
return ERR_PTR(ret);
trans = btrfs_start_transaction(tree_root, 0);
if (IS_ERR(trans)) {
- kfree(wc);
- btrfs_free_path(path);
err = PTR_ERR(trans);
- goto out;
+ goto out_free;
}
if (block_rsv)
static u64 update_block_group_flags(struct btrfs_root *root, u64 flags)
{
u64 num_devices;
- u64 stripped = BTRFS_BLOCK_GROUP_RAID0 |
- BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10;
+ u64 stripped;
- if (root->fs_info->balance_ctl) {
- struct btrfs_balance_control *bctl = root->fs_info->balance_ctl;
- u64 tgt = 0;
-
- /* pick restriper's target profile and return */
- if (flags & BTRFS_BLOCK_GROUP_DATA &&
- bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) {
- tgt = BTRFS_BLOCK_GROUP_DATA | bctl->data.target;
- } else if (flags & BTRFS_BLOCK_GROUP_SYSTEM &&
- bctl->sys.flags & BTRFS_BALANCE_ARGS_CONVERT) {
- tgt = BTRFS_BLOCK_GROUP_SYSTEM | bctl->sys.target;
- } else if (flags & BTRFS_BLOCK_GROUP_METADATA &&
- bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) {
- tgt = BTRFS_BLOCK_GROUP_METADATA | bctl->meta.target;
- }
-
- if (tgt) {
- /* extended -> chunk profile */
- tgt &= ~BTRFS_AVAIL_ALLOC_BIT_SINGLE;
- return tgt;
- }
- }
+ /*
+ * if restripe for this chunk_type is on pick target profile and
+ * return, otherwise do the usual balance
+ */
+ stripped = get_restripe_target(root->fs_info, flags);
+ if (stripped)
+ return extended_to_chunk(stripped);
/*
* we add in the count of missing devices because we want
num_devices = root->fs_info->fs_devices->rw_devices +
root->fs_info->fs_devices->missing_devices;
+ stripped = BTRFS_BLOCK_GROUP_RAID0 |
+ BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10;
+
if (num_devices == 1) {
stripped |= BTRFS_BLOCK_GROUP_DUP;
stripped = flags & ~stripped;
if (flags & (BTRFS_BLOCK_GROUP_RAID1 |
BTRFS_BLOCK_GROUP_RAID10))
return stripped | BTRFS_BLOCK_GROUP_DUP;
- return flags;
} else {
/* they already had raid on here, just return */
if (flags & stripped)
if (flags & BTRFS_BLOCK_GROUP_DUP)
return stripped | BTRFS_BLOCK_GROUP_RAID1;
- /* turn single device chunks into raid0 */
- return stripped | BTRFS_BLOCK_GROUP_RAID0;
+ /* this is drive concat, leave it alone */
}
+
return flags;
}
u64 min_free;
u64 dev_min = 1;
u64 dev_nr = 0;
+ u64 target;
int index;
int full = 0;
int ret = 0;
/*
* ok we don't have enough space, but maybe we have free space on our
* devices to allocate new chunks for relocation, so loop through our
- * alloc devices and guess if we have enough space. However, if we
- * were marked as full, then we know there aren't enough chunks, and we
- * can just return.
+ * alloc devices and guess if we have enough space. if this block
+ * group is going to be restriped, run checks against the target
+ * profile instead of the current one.
*/
ret = -1;
- if (full)
- goto out;
/*
* index:
* 3: raid0
* 4: single
*/
- index = get_block_group_index(block_group);
+ target = get_restripe_target(root->fs_info, block_group->flags);
+ if (target) {
+ index = __get_block_group_index(extended_to_chunk(target));
+ } else {
+ /*
+ * this is just a balance, so if we were marked as full
+ * we know there is no space for a new chunk
+ */
+ if (full)
+ goto out;
+
+ index = get_block_group_index(block_group);
+ }
+
if (index == 0) {
dev_min = 4;
/* Divide by 2 */
static void clear_avail_alloc_bits(struct btrfs_fs_info *fs_info, u64 flags)
{
- u64 extra_flags = flags & BTRFS_BLOCK_GROUP_PROFILE_MASK;
-
- /* chunk -> extended profile */
- if (extra_flags == 0)
- extra_flags = BTRFS_AVAIL_ALLOC_BIT_SINGLE;
+ u64 extra_flags = chunk_to_extended(flags) &
+ BTRFS_EXTENDED_PROFILE_MASK;
if (flags & BTRFS_BLOCK_GROUP_DATA)
fs_info->avail_data_alloc_bits &= ~extra_flags;
};
struct nfs_cache_array {
- unsigned int size;
+ int size;
int eof_index;
u64 last_cookie;
struct nfs_cache_array_entry array[0];
struct nfs_cache_array *array;
int i;
- array = kmap_atomic(page, KM_USER0);
+ array = kmap_atomic(page);
for (i = 0; i < array->size; i++)
kfree(array->array[i].string.name);
- kunmap_atomic(array, KM_USER0);
+ kunmap_atomic(array);
}
/*
desc->dir_cookie = &dir_ctx->dir_cookie;
desc->decode = NFS_PROTO(inode)->decode_dirent;
desc->plus = NFS_USE_READDIRPLUS(inode);
+ if (filp->f_pos > 0 && !test_bit(NFS_INO_SEEN_GETATTR, &NFS_I(inode)->flags))
+ desc->plus = 0;
+ clear_bit(NFS_INO_SEEN_GETATTR, &NFS_I(inode)->flags);
nfs_block_sillyrename(dentry);
res = nfs_revalidate_mapping(inode, filp->f_mapping);
}
open_flags = nd->intent.open.flags;
+ attr.ia_valid = 0;
ctx = create_nfs_open_context(dentry, open_flags);
res = ERR_CAST(ctx);
if (nd->flags & LOOKUP_CREATE) {
attr.ia_mode = nd->intent.open.create_mode;
- attr.ia_valid = ATTR_MODE;
+ attr.ia_valid |= ATTR_MODE;
attr.ia_mode &= ~current_umask();
- } else {
+ } else
open_flags &= ~(O_EXCL | O_CREAT);
- attr.ia_valid = 0;
+
+ if (open_flags & O_TRUNC) {
+ attr.ia_valid |= ATTR_SIZE;
+ attr.ia_size = 0;
}
/* Open the file on the server */
struct inode *inode;
struct inode *dir;
struct nfs_open_context *ctx;
+ struct iattr attr;
int openflags, ret = 0;
if (nd->flags & LOOKUP_RCU)
/* We cannot do exclusive creation on a positive dentry */
if ((openflags & (O_CREAT|O_EXCL)) == (O_CREAT|O_EXCL))
goto no_open_dput;
- /* We can't create new files, or truncate existing ones here */
- openflags &= ~(O_CREAT|O_EXCL|O_TRUNC);
+ /* We can't create new files here */
+ openflags &= ~(O_CREAT|O_EXCL);
ctx = create_nfs_open_context(dentry, openflags);
ret = PTR_ERR(ctx);
if (IS_ERR(ctx))
goto out;
+
+ attr.ia_valid = 0;
+ if (openflags & O_TRUNC) {
+ attr.ia_valid |= ATTR_SIZE;
+ attr.ia_size = 0;
+ nfs_wb_all(inode);
+ }
+
/*
* Note: we're not holding inode->i_mutex and so may be racing with
* operations that change the directory. We therefore save the
* change attribute *before* we do the RPC call.
*/
- inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, NULL);
+ inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, &attr);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
switch (ret) {
if (!page)
return -ENOMEM;
- kaddr = kmap_atomic(page, KM_USER0);
+ kaddr = kmap_atomic(page);
memcpy(kaddr, symname, pathlen);
if (pathlen < PAGE_SIZE)
memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
- kunmap_atomic(kaddr, KM_USER0);
+ kunmap_atomic(kaddr);
error = NFS_PROTO(dir)->symlink(dir, dentry, page, pathlen, &attr);
if (error != 0) {
#include <linux/slab.h>
#include <linux/compat.h>
#include <linux/freezer.h>
+ #include <linux/crc32.h>
- #include <asm/system.h>
#include <asm/uaccess.h>
#include "nfs4_fs.h"
#include "fscache.h"
#include "dns_resolve.h"
#include "pnfs.h"
+ #include "netns.h"
#define NFSDBG_FACILITY NFSDBG_VFS
unlock_new_inode(inode);
} else
nfs_refresh_inode(inode, fattr);
- dprintk("NFS: nfs_fhget(%s/%Ld ct=%d)\n",
+ dprintk("NFS: nfs_fhget(%s/%Ld fh_crc=0x%08x ct=%d)\n",
inode->i_sb->s_id,
(long long)NFS_FILEID(inode),
+ nfs_display_fhandle_hash(fh),
atomic_read(&inode->i_count));
out:
goto out;
}
- #define NFS_VALID_ATTRS (ATTR_MODE|ATTR_UID|ATTR_GID|ATTR_SIZE|ATTR_ATIME|ATTR_ATIME_SET|ATTR_MTIME|ATTR_MTIME_SET|ATTR_FILE)
+ #define NFS_VALID_ATTRS (ATTR_MODE|ATTR_UID|ATTR_GID|ATTR_SIZE|ATTR_ATIME|ATTR_ATIME_SET|ATTR_MTIME|ATTR_MTIME_SET|ATTR_FILE|ATTR_OPEN)
int
nfs_setattr(struct dentry *dentry, struct iattr *attr)
/* Optimization: if the end result is no change, don't RPC */
attr->ia_valid &= NFS_VALID_ATTRS;
- if ((attr->ia_valid & ~ATTR_FILE) == 0)
+ if ((attr->ia_valid & ~(ATTR_FILE|ATTR_OPEN)) == 0)
return 0;
/* Write all dirty data */
struct inode *inode = dentry->d_inode;
int need_atime = NFS_I(inode)->cache_validity & NFS_INO_INVALID_ATIME;
int err;
+ struct dentry *p;
+ struct inode *pi;
+
+ rcu_read_lock();
+ p = dentry->d_parent;
+ pi = rcu_dereference(p)->d_inode;
+ if (pi && !test_bit(NFS_INO_SEEN_GETATTR, &NFS_I(pi)->flags))
+ set_bit(NFS_INO_SEEN_GETATTR, &NFS_I(pi)->flags);
+ rcu_read_unlock();
/* Flush out writes to the server in order to update c/mtime. */
if (S_ISREG(inode->i_mode)) {
return fh;
}
+ #ifdef NFS_DEBUG
+ /*
+ * _nfs_display_fhandle_hash - calculate the crc32 hash for the filehandle
+ * in the same way that wireshark does
+ *
+ * @fh: file handle
+ *
+ * For debugging only.
+ */
+ u32 _nfs_display_fhandle_hash(const struct nfs_fh *fh)
+ {
+ /* wireshark uses 32-bit AUTODIN crc and does a bitwise
+ * not on the result */
+ return ~crc32(0xFFFFFFFF, &fh->data[0], fh->size);
+ }
+
+ /*
+ * _nfs_display_fhandle - display an NFS file handle on the console
+ *
+ * @fh: file handle to display
+ * @caption: display caption
+ *
+ * For debugging only.
+ */
+ void _nfs_display_fhandle(const struct nfs_fh *fh, const char *caption)
+ {
+ unsigned short i;
+
+ if (fh == NULL || fh->size == 0) {
+ printk(KERN_DEFAULT "%s at %p is empty\n", caption, fh);
+ return;
+ }
+
+ printk(KERN_DEFAULT "%s at %p is %u bytes, crc: 0x%08x:\n",
+ caption, fh, fh->size, _nfs_display_fhandle_hash(fh));
+ for (i = 0; i < fh->size; i += 16) {
+ __be32 *pos = (__be32 *)&fh->data[i];
+
+ switch ((fh->size - i - 1) >> 2) {
+ case 0:
+ printk(KERN_DEFAULT " %08x\n",
+ be32_to_cpup(pos));
+ break;
+ case 1:
+ printk(KERN_DEFAULT " %08x %08x\n",
+ be32_to_cpup(pos), be32_to_cpup(pos + 1));
+ break;
+ case 2:
+ printk(KERN_DEFAULT " %08x %08x %08x\n",
+ be32_to_cpup(pos), be32_to_cpup(pos + 1),
+ be32_to_cpup(pos + 2));
+ break;
+ default:
+ printk(KERN_DEFAULT " %08x %08x %08x %08x\n",
+ be32_to_cpup(pos), be32_to_cpup(pos + 1),
+ be32_to_cpup(pos + 2), be32_to_cpup(pos + 3));
+ }
+ }
+ }
+ #endif
+
/**
* nfs_inode_attrs_need_update - check if the inode attributes need updating
* @inode - pointer to inode
unsigned long now = jiffies;
unsigned long save_cache_validity;
- dfprintk(VFS, "NFS: %s(%s/%ld ct=%d info=0x%x)\n",
+ dfprintk(VFS, "NFS: %s(%s/%ld fh_crc=0x%08x ct=%d info=0x%x)\n",
__func__, inode->i_sb->s_id, inode->i_ino,
+ nfs_display_fhandle_hash(NFS_FH(inode)),
atomic_read(&inode->i_count), fattr->valid);
if ((fattr->valid & NFS_ATTR_FATTR_FILEID) && nfsi->fileid != fattr->fileid)
/*
* Big trouble! The inode has become a different object.
*/
- printk(KERN_DEBUG "%s: inode %ld mode changed, %07o to %07o\n",
+ printk(KERN_DEBUG "NFS: %s: inode %ld mode changed, %07o to %07o\n",
__func__, inode->i_ino, inode->i_mode, fattr->mode);
out_err:
/*
INIT_LIST_HEAD(&nfsi->open_files);
INIT_LIST_HEAD(&nfsi->access_cache_entry_lru);
INIT_LIST_HEAD(&nfsi->access_cache_inode_lru);
- INIT_RADIX_TREE(&nfsi->nfs_page_tree, GFP_ATOMIC);
+ INIT_LIST_HEAD(&nfsi->commit_list);
nfsi->npages = 0;
nfsi->ncommit = 0;
atomic_set(&nfsi->silly_count, 1);
destroy_workqueue(wq);
}
+ int nfs_net_id;
+ EXPORT_SYMBOL_GPL(nfs_net_id);
+
+ static int nfs_net_init(struct net *net)
+ {
+ nfs_clients_init(net);
+ return nfs_dns_resolver_cache_init(net);
+ }
+
+ static void nfs_net_exit(struct net *net)
+ {
+ nfs_dns_resolver_cache_destroy(net);
+ nfs_cleanup_cb_ident_idr(net);
+ }
+
+ static struct pernet_operations nfs_net_ops = {
+ .init = nfs_net_init,
+ .exit = nfs_net_exit,
+ .id = &nfs_net_id,
+ .size = sizeof(struct nfs_net),
+ };
+
/*
* Initialize NFS
*/
err = nfs_idmap_init();
if (err < 0)
- goto out9;
+ goto out10;
err = nfs_dns_resolver_init();
if (err < 0)
+ goto out9;
+
+ err = register_pernet_subsys(&nfs_net_ops);
+ if (err < 0)
goto out8;
err = nfs_fscache_register();
goto out0;
#ifdef CONFIG_PROC_FS
- rpc_proc_register(&nfs_rpcstat);
+ rpc_proc_register(&init_net, &nfs_rpcstat);
#endif
if ((err = register_nfs_fs()) != 0)
goto out;
return 0;
out:
#ifdef CONFIG_PROC_FS
- rpc_proc_unregister("nfs");
+ rpc_proc_unregister(&init_net, "nfs");
#endif
nfs_destroy_directcache();
out0:
out6:
nfs_fscache_unregister();
out7:
- nfs_dns_resolver_destroy();
+ unregister_pernet_subsys(&nfs_net_ops);
out8:
- nfs_idmap_quit();
+ nfs_dns_resolver_destroy();
out9:
+ nfs_idmap_quit();
+ out10:
return err;
}
nfs_destroy_inodecache();
nfs_destroy_nfspagecache();
nfs_fscache_unregister();
+ unregister_pernet_subsys(&nfs_net_ops);
nfs_dns_resolver_destroy();
nfs_idmap_quit();
#ifdef CONFIG_PROC_FS
- rpc_proc_unregister("nfs");
+ rpc_proc_unregister(&init_net, "nfs");
#endif
- nfs_cleanup_cb_ident_idr();
unregister_nfs_fs();
nfs_fs_proc_exit();
nfsiod_stop();
* Heavily rewritten for 'one fs - one tree' dcache architecture. AV, Mar 2000
*/
- #include <linux/module.h>
+ #include <linux/export.h>
#include <linux/slab.h>
#include <linux/acct.h>
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
#include <linux/rculist_bl.h>
#include <linux/cleancache.h>
+ #include <linux/fsnotify.h>
#include "internal.h"
return NULL;
}
-/**
- * do_remount_sb - asks filesystem to change mount options.
- * @sb: superblock in question
- * @flags: numeric part of options
- * @data: the rest of options
- * @force: whether or not to force the change
- *
- * Alters the mount options of a mounted file system.
- */
-int do_remount_sb(struct super_block *sb, int flags, void *data, int force)
+#define REMOUNT_FORCE 1
+#define REMOUNT_SHRINK_DCACHE 2
+
+static int __do_remount_sb(struct super_block *sb, int flags, void *data, int rflags)
{
int retval;
int remount_ro;
if (flags & MS_RDONLY)
acct_auto_close(sb);
- shrink_dcache_sb(sb);
+ if (rflags & REMOUNT_SHRINK_DCACHE)
+ shrink_dcache_sb(sb);
sync_filesystem(sb);
remount_ro = (flags & MS_RDONLY) && !(sb->s_flags & MS_RDONLY);
/* If we are remounting RDONLY and current sb is read/write,
make sure there are no rw files opened */
if (remount_ro) {
- if (force) {
+ if (rflags & REMOUNT_FORCE) {
mark_files_ro(sb);
} else {
retval = sb_prepare_remount_readonly(sb);
if (sb->s_op->remount_fs) {
retval = sb->s_op->remount_fs(sb, &flags, data);
if (retval) {
- if (!force)
+ if (!(rflags & REMOUNT_FORCE))
goto cancel_readonly;
/* If forced remount, go ahead despite any errors */
WARN(1, "forced remount of a %s fs returned %i\n",
return retval;
}
+/**
+ * do_remount_sb - asks filesystem to change mount options.
+ * @sb: superblock in question
+ * @flags: numeric part of options
+ * @data: the rest of options
+ * @force: whether or not to force the change
+ *
+ * Alters the mount options of a mounted file system.
+ */
+int do_remount_sb(struct super_block *sb, int flags, void *data, int force)
+{
+ return __do_remount_sb(sb, flags, data,
+ REMOUNT_SHRINK_DCACHE|(force ? REMOUNT_FORCE : 0));
+}
+
static void do_emergency_remount(struct work_struct *work)
{
struct super_block *sb, *p = NULL;
}
s->s_flags |= MS_ACTIVE;
} else {
- do_remount_sb(s, flags, data, 0);
+ __do_remount_sb(s, flags, data, 0);
}
return dget(s->s_root);
}
CPU_KEEP(exit.data) \
MEM_KEEP(init.data) \
MEM_KEEP(exit.data) \
+ *(.data.unlikely) \
STRUCT_ALIGN(); \
*(__tracepoints) \
/* implement dynamic printk debug */ \
MEM_KEEP(exit.rodata) \
} \
\
+ EH_FRAME \
+ \
/* Built-in module parameters. */ \
__param : AT(ADDR(__param) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___param) = .; \
*(.init.setup) \
VMLINUX_SYMBOL(__setup_end) = .;
- #define INITCALLS \
- *(.initcallearly.init) \
- VMLINUX_SYMBOL(__early_initcall_end) = .; \
- *(.initcall0.init) \
- *(.initcall0s.init) \
- *(.initcall1.init) \
- *(.initcall1s.init) \
- *(.initcall2.init) \
- *(.initcall2s.init) \
- *(.initcall3.init) \
- *(.initcall3s.init) \
- *(.initcall4.init) \
- *(.initcall4s.init) \
- *(.initcall5.init) \
- *(.initcall5s.init) \
- *(.initcallrootfs.init) \
- *(.initcall6.init) \
- *(.initcall6s.init) \
- *(.initcall7.init) \
- *(.initcall7s.init)
+ #define INIT_CALLS_LEVEL(level) \
+ VMLINUX_SYMBOL(__initcall##level##_start) = .; \
+ *(.initcall##level##.init) \
+ *(.initcall##level##s.init)
#define INIT_CALLS \
VMLINUX_SYMBOL(__initcall_start) = .; \
- INITCALLS \
+ *(.initcallearly.init) \
+ INIT_CALLS_LEVEL(0) \
+ INIT_CALLS_LEVEL(1) \
+ INIT_CALLS_LEVEL(2) \
+ INIT_CALLS_LEVEL(3) \
+ INIT_CALLS_LEVEL(4) \
+ INIT_CALLS_LEVEL(5) \
+ INIT_CALLS_LEVEL(rootfs) \
+ INIT_CALLS_LEVEL(6) \
+ INIT_CALLS_LEVEL(7) \
VMLINUX_SYMBOL(__initcall_end) = .;
#define CON_INITCALL \
BSS(bss_align) \
. = ALIGN(stop_align); \
VMLINUX_SYMBOL(__bss_stop) = .;
+
+#ifdef CONFIG_STACK_UNWIND
+#define EH_FRAME \
+ /* Unwind data binary search table */ \
+ . = ALIGN(8); \
+ .eh_frame_hdr : AT(ADDR(.eh_frame_hdr) - LOAD_OFFSET) { \
+ VMLINUX_SYMBOL(__start_unwind_hdr) = .; \
+ *(.eh_frame_hdr) \
+ VMLINUX_SYMBOL(__end_unwind_hdr) = .; \
+ } \
+ /* Unwind data */ \
+ . = ALIGN(8); \
+ .eh_frame : AT(ADDR(.eh_frame) - LOAD_OFFSET) { \
+ VMLINUX_SYMBOL(__start_unwind) = .; \
+ *(.eh_frame) \
+ VMLINUX_SYMBOL(__end_unwind) = .; \
+ }
+#else
+#define EH_FRAME
+#endif
extern int __must_check driver_register(struct device_driver *drv);
extern void driver_unregister(struct device_driver *drv);
- extern struct device_driver *get_driver(struct device_driver *drv);
- extern void put_driver(struct device_driver *drv);
extern struct device_driver *driver_find(const char *name,
struct bus_type *bus);
extern int driver_probe_done(void);
extern void driver_remove_file(struct device_driver *driver,
const struct driver_attribute *attr);
- extern int __must_check driver_add_kobj(struct device_driver *drv,
- struct kobject *kobj,
- const char *fmt, ...);
-
extern int __must_check driver_for_each_device(struct device_driver *drv,
struct device *start,
void *data,
extern int __dev_printk(const char *level, const struct device *dev,
struct va_format *vaf);
extern __printf(3, 4)
+
+#if defined(KMSG_COMPONENT) && (defined(CONFIG_KMSG_IDS) || defined(__KMSG_CHECKER))
+/* dev_printk_hash for message documentation */
+#if defined(__KMSG_CHECKER) && defined(KMSG_COMPONENT)
+
+/* generate magic string for scripts/kmsg-doc to parse */
+#define dev_printk_hash(level, dev, format, arg...) \
+ __KMSG_DEV(level _FMT_ format _ARGS_ dev, ## arg _END_)
+
+#elif defined(CONFIG_KMSG_IDS) && defined(KMSG_COMPONENT)
+
+int printk_dev_hash(const char *, const char *, const char *, ...);
+#define dev_printk_hash(level, dev, format, arg...) \
+ printk_dev_hash(level "%s.%06x: ", dev_driver_string(dev), \
+ "%s: " format, dev_name(dev), ## arg)
+
+#endif
+
+#define dev_printk(level, dev, format, arg...) \
+ dev_printk_hash(level , dev, format, ## arg)
+#define dev_emerg(dev, format, arg...) \
+ dev_printk_hash(KERN_EMERG , dev , format , ## arg)
+#define dev_alert(dev, format, arg...) \
+ dev_printk_hash(KERN_ALERT , dev , format , ## arg)
+#define dev_crit(dev, format, arg...) \
+ dev_printk_hash(KERN_CRIT , dev , format , ## arg)
+#define dev_err(dev, format, arg...) \
+ dev_printk_hash(KERN_ERR , dev , format , ## arg)
+#define dev_warn(dev, format, arg...) \
+ dev_printk_hash(KERN_WARNING , dev , format , ## arg)
+#define dev_notice(dev, format, arg...) \
+ dev_printk_hash(KERN_NOTICE , dev , format , ## arg)
+#define _dev_info(dev, format, arg...) \
+ dev_printk_hash(KERN_INFO , dev , format , ## arg)
+#else
int dev_printk(const char *level, const struct device *dev,
const char *fmt, ...)
;
int dev_notice(const struct device *dev, const char *fmt, ...);
extern __printf(2, 3)
int _dev_info(const struct device *dev, const char *fmt, ...);
-
+#endif
#else
static inline int __dev_printk(const char *level, const struct device *dev,
#define dev_info(dev, fmt, arg...) _dev_info(dev, fmt, ##arg)
- #if defined(DEBUG)
- #define dev_dbg(dev, format, arg...) \
- dev_printk(KERN_DEBUG, dev, format, ##arg)
- #elif defined(CONFIG_DYNAMIC_DEBUG)
+ #if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...) \
do { \
dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
} while (0)
+ #elif defined(DEBUG)
+ #define dev_dbg(dev, format, arg...) \
+ dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...) \
({ \
* @__driver: driver name
* @__register: register function for this driver type
* @__unregister: unregister function for this driver type
+ * @...: Additional arguments to be passed to __register and __unregister.
*
* Use this macro to construct bus specific macros for registering
* drivers, and do not use it on its own.
*/
- #define module_driver(__driver, __register, __unregister) \
+ #define module_driver(__driver, __register, __unregister, ...) \
static int __init __driver##_init(void) \
{ \
- return __register(&(__driver)); \
+ return __register(&(__driver) , ##__VA_ARGS__); \
} \
module_init(__driver##_init); \
static void __exit __driver##_exit(void) \
{ \
- __unregister(&(__driver)); \
+ __unregister(&(__driver) , ##__VA_ARGS__); \
} \
module_exit(__driver##_exit);
#include <linux/fs.h>
#include <linux/init.h>
- #include <linux/device.h>
#include <linux/workqueue.h>
#include <linux/notifier.h>
#include <linux/list.h>
void *fbcon_par; /* fbcon use-only private area */
/* From here on everything is device dependent */
void *par;
+#ifdef CONFIG_BOOTSPLASH
+ struct splash_data *splash_data;
+ char fb_cursordata[64];
+#endif
/* we need the PCI or similar aperture base/size not
smem_start/size as smem_start may just be an object
allocated inside the aperture so may not actually overlap */
/* drivers/video/fbmem.c */
extern int register_framebuffer(struct fb_info *fb_info);
extern int unregister_framebuffer(struct fb_info *fb_info);
+ extern int unlink_framebuffer(struct fb_info *fb_info);
extern void remove_conflicting_framebuffers(struct apertures_struct *a,
const char *name, bool primary);
extern int fb_prepare_logo(struct fb_info *fb_info, int rotate);
#ifndef _LINUX_KERNEL_H
#define _LINUX_KERNEL_H
+ #include <linux/sysinfo.h>
+
/*
* 'kernel.h' contains some often-used function prototypes etc
*/
#include <linux/printk.h>
#include <linux/dynamic_debug.h>
#include <asm/byteorder.h>
- #include <asm/bug.h>
#define USHRT_MAX ((u16)(~0U))
#define SHRT_MAX ((s16)(USHRT_MAX>>1))
} \
)
+ /*
+ * Multiplies an integer by a fraction, while avoiding unnecessary
+ * overflow or loss of precision.
+ */
+ #define mult_frac(x, numer, denom)( \
+ { \
+ typeof(x) quot = (x) / (denom); \
+ typeof(x) rem = (x) % (denom); \
+ (quot * (numer)) + ((rem * (numer)) / (denom)); \
+ } \
+ )
+
+
#define _RET_IP_ (unsigned long)__builtin_return_address(0)
#define _THIS_IP_ ({ __label__ __here; __here: (unsigned long)&&__here; })
#define strict_strtoull kstrtoull
#define strict_strtoll kstrtoll
+ extern int num_to_str(char *buf, int size, unsigned long long num);
+
/* lib/printf utilities */
extern __printf(2, 3) int sprintf(char *buf, const char * fmt, ...);
char *kasprintf(gfp_t gfp, const char *fmt, ...);
extern char *kvasprintf(gfp_t gfp, const char *fmt, va_list args);
- extern int sscanf(const char *, const char *, ...)
- __attribute__ ((format (scanf, 2, 3)));
- extern int vsscanf(const char *, const char *, va_list)
- __attribute__ ((format (scanf, 2, 0)));
+ extern __scanf(2, 3)
+ int sscanf(const char *, const char *, ...);
+ extern __scanf(2, 0)
+ int vsscanf(const char *, const char *, va_list);
extern int get_option(char **str, int *pint);
extern char *get_options(const char *str, int nints, int *ints);
extern int panic_on_oops;
extern int panic_on_unrecovered_nmi;
extern int panic_on_io_nmi;
+extern int unsupported;
extern int sysctl_panic_on_stackoverflow;
extern const char *print_tainted(void);
extern void add_taint(unsigned flag);
+extern void add_nonfatal_taint(unsigned flag);
extern int test_taint(unsigned flag);
extern unsigned long get_taint(void);
extern int root_mountflags;
#define TAINT_FIRMWARE_WORKAROUND 11
#define TAINT_OOT_MODULE 12
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+/*
+ * Take the upper bits to hopefully allow them
+ * to stay the same for more than one release.
+ */
+#define TAINT_NO_SUPPORT 30
+#define TAINT_EXTERNAL_SUPPORT 31
+#endif
+
extern const char hex_asc[];
#define hex_asc_lo(x) hex_asc[((x) & 0x0f)]
#define hex_asc_hi(x) hex_asc[((x) & 0xf0) >> 4]
* Most likely, you want to use tracing_on/tracing_off.
*/
#ifdef CONFIG_RING_BUFFER
- void tracing_on(void);
- void tracing_off(void);
/* trace_off_permanent stops recording with no way to bring it back */
void tracing_off_permanent(void);
- int tracing_is_on(void);
#else
- static inline void tracing_on(void) { }
- static inline void tracing_off(void) { }
static inline void tracing_off_permanent(void) { }
- static inline int tracing_is_on(void) { return 0; }
#endif
enum ftrace_dump_mode {
};
#ifdef CONFIG_TRACING
+ void tracing_on(void);
+ void tracing_off(void);
+ int tracing_is_on(void);
+
extern void tracing_start(void);
extern void tracing_stop(void);
extern void ftrace_off_permanent(void);
static inline void tracing_stop(void) { }
static inline void ftrace_off_permanent(void) { }
static inline void trace_dump_stack(void) { }
+
+ static inline void tracing_on(void) { }
+ static inline void tracing_off(void) { }
+ static inline int tracing_is_on(void) { return 0; }
+
static inline int
trace_printk(const char *fmt, ...)
{
const typeof( ((type *)0)->member ) *__mptr = (ptr); \
(type *)( (char *)__mptr - offsetof(type,member) );})
- #ifdef __CHECKER__
- #define BUILD_BUG_ON_NOT_POWER_OF_2(n)
- #define BUILD_BUG_ON_ZERO(e) (0)
- #define BUILD_BUG_ON_NULL(e) ((void*)0)
- #define BUILD_BUG_ON(condition)
- #define BUILD_BUG() (0)
- #else /* __CHECKER__ */
-
- /* Force a compilation error if a constant expression is not a power of 2 */
- #define BUILD_BUG_ON_NOT_POWER_OF_2(n) \
- BUILD_BUG_ON((n) == 0 || (((n) & ((n) - 1)) != 0))
-
- /* Force a compilation error if condition is true, but also produce a
- result (of value 0 and type size_t), so the expression can be used
- e.g. in a structure initializer (or where-ever else comma expressions
- aren't permitted). */
- #define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
- #define BUILD_BUG_ON_NULL(e) ((void *)sizeof(struct { int:-!!(e); }))
-
- /**
- * BUILD_BUG_ON - break compile if a condition is true.
- * @condition: the condition which the compiler should know is false.
- *
- * If you have some code which relies on certain constants being equal, or
- * other compile-time-evaluated condition, you should use BUILD_BUG_ON to
- * detect if someone changes it.
- *
- * The implementation uses gcc's reluctance to create a negative array, but
- * gcc (as of 4.4) only emits that error for obvious cases (eg. not arguments
- * to inline functions). So as a fallback we use the optimizer; if it can't
- * prove the condition is false, it will cause a link error on the undefined
- * "__build_bug_on_failed". This error message can be harder to track down
- * though, hence the two different methods.
- */
- #ifndef __OPTIMIZE__
- #define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
- #else
- extern int __build_bug_on_failed;
- #define BUILD_BUG_ON(condition) \
- do { \
- ((void)sizeof(char[1 - 2*!!(condition)])); \
- if (condition) __build_bug_on_failed = 1; \
- } while(0)
- #endif
-
- /**
- * BUILD_BUG - break compile if used.
- *
- * If you have some code that you expect the compiler to eliminate at
- * build time, you should use BUILD_BUG to detect if it is
- * unexpectedly used.
- */
- #define BUILD_BUG() \
- do { \
- extern void __build_bug_failed(void) \
- __linktime_error("BUILD_BUG failed"); \
- __build_bug_failed(); \
- } while (0)
-
- #endif /* __CHECKER__ */
-
/* Trap pasters of __FUNCTION__ at compile-time */
#define __FUNCTION__ (__func__)
# define REBUILD_DUE_TO_FTRACE_MCOUNT_RECORD
#endif
- struct sysinfo;
extern int do_sysinfo(struct sysinfo *info);
#endif /* __KERNEL__ */
- #define SI_LOAD_SHIFT 16
- struct sysinfo {
- long uptime; /* Seconds since boot */
- unsigned long loads[3]; /* 1, 5, and 15 minute load averages */
- unsigned long totalram; /* Total usable main memory size */
- unsigned long freeram; /* Available memory size */
- unsigned long sharedram; /* Amount of shared memory */
- unsigned long bufferram; /* Memory used by buffers */
- unsigned long totalswap; /* Total swap space size */
- unsigned long freeswap; /* swap space still available */
- unsigned short procs; /* Number of current processes */
- unsigned short pad; /* explicit padding for m68k */
- unsigned long totalhigh; /* Total high memory size */
- unsigned long freehigh; /* Available high memory size */
- unsigned int mem_unit; /* Memory unit size in bytes */
- char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding: libc5 uses this.. */
- };
-
#endif
#ifdef __KERNEL__
#include <linux/gfp.h>
+ #include <linux/bug.h>
#include <linux/list.h>
#include <linux/mmzone.h>
#include <linux/rbtree.h>
#define VM_HUGEPAGE 0x01000000 /* MADV_HUGEPAGE marked this vma */
#endif
#define VM_INSERTPAGE 0x02000000 /* The vma has had "vm_insert_page()" done on it */
- #define VM_ALWAYSDUMP 0x04000000 /* Always include in core dumps */
+ #define VM_NODUMP 0x04000000 /* Do not include in the core dump */
#define VM_CAN_NONLINEAR 0x08000000 /* Has ->fault & does nonlinear pages */
#define VM_MIXEDMAP 0x10000000 /* Can contain "struct page" and pure PFN pages */
- #ifndef CONFIG_XEN
#define VM_SAO 0x20000000 /* Strong Access Ordering (powerpc) */
- #else
- #define VM_SAO 0
- #define VM_FOREIGN 0x20000000 /* Has pages belonging to another VM */
- #endif
#define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */
#define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */
*/
#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_RESERVED | VM_PFNMAP)
- #ifdef CONFIG_XEN
- struct vm_foreign_map {
- struct page **map;
- };
- #endif
-
/*
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
int (*access)(struct vm_area_struct *vma, unsigned long addr,
void *buf, int len, int write);
-
- #ifdef CONFIG_XEN
- /* Area-specific function for clearing the PTE at @ptep. Returns the
- * original value of @ptep. */
- pte_t (*zap_pte)(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep, int is_fullmm);
-
- /* called before close() to indicate no more pages should be mapped */
- void (*unmap)(struct vm_area_struct *area);
- #endif
-
#ifdef CONFIG_NUMA
/*
* set_policy() op must add a reference to any non-NULL @new mempolicy
int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
unsigned long size);
- unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
+ void zap_page_range(struct vm_area_struct *vma, unsigned long address,
unsigned long size, struct zap_details *);
- unsigned long unmap_vmas(struct mmu_gather *tlb,
+ void unmap_vmas(struct mmu_gather *tlb,
struct vm_area_struct *start_vma, unsigned long start_addr,
unsigned long end_addr, unsigned long *nr_accounted,
struct zap_details *);
extern void truncate_setsize(struct inode *inode, loff_t newsize);
extern int vmtruncate(struct inode *inode, loff_t offset);
extern int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end);
-
+ void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
int truncate_inode_page(struct address_space *mapping, struct page *page);
int generic_error_remove_page(struct address_space *mapping, struct page *page);
!vma_growsup(vma->vm_next, addr);
}
+ extern pid_t
+ vm_is_stack(struct task_struct *task, struct vm_area_struct *vma, int in_group);
+
extern unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len);
/*
* per-process(per-mm_struct) statistics.
*/
- static inline void set_mm_counter(struct mm_struct *mm, int member, long value)
- {
- atomic_long_set(&mm->rss_stat.count[member], value);
- }
-
- #if defined(SPLIT_RSS_COUNTING)
- unsigned long get_mm_counter(struct mm_struct *mm, int member);
- #else
static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
- return atomic_long_read(&mm->rss_stat.count[member]);
- }
+ long val = atomic_long_read(&mm->rss_stat.count[member]);
+
+ #ifdef SPLIT_RSS_COUNTING
+ /*
+ * The counter is updated asynchronously and may transiently go
+ * negative, which is never a value users expect to see.
+ */
+ if (val < 0)
+ val = 0;
#endif
+ return (unsigned long)val;
+ }
static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
{
}
#if defined(SPLIT_RSS_COUNTING)
- void sync_mm_rss(struct task_struct *task, struct mm_struct *mm);
+ void sync_mm_rss(struct mm_struct *mm);
#else
- static inline void sync_mm_rss(struct task_struct *task, struct mm_struct *mm)
+ static inline void sync_mm_rss(struct mm_struct *mm)
{
}
#endif
extern void free_area_init(unsigned long * zones_size);
extern void free_area_init_node(int nid, unsigned long * zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
+ extern void free_initmem(void);
+
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
/*
* With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an architecture may initialise its
extern unsigned long find_min_pfn_with_active_regions(void);
extern void free_bootmem_with_active_regions(int nid,
unsigned long max_low_pfn);
- int add_from_early_node_map(struct range *range, int az,
- int nr_range, int nid);
extern void sparse_memory_present_with_active_regions(int nid);
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
void task_dirty_inc(struct task_struct *tsk);
/* readahead.c */
+#ifndef CONFIG_KERNEL_DESKTOP
+#define VM_MAX_READAHEAD 512 /* kbytes */
+#else
#define VM_MAX_READAHEAD 128 /* kbytes */
+#endif
#define VM_MIN_READAHEAD 16 /* kbytes (includes current page) */
int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
enum mf_flags {
MF_COUNT_INCREASED = 1 << 0,
+ MF_ACTION_REQUIRED = 1 << 1,
};
- extern void memory_failure(unsigned long pfn, int trapno);
- extern int __memory_failure(unsigned long pfn, int trapno, int flags);
+ extern int memory_failure(unsigned long pfn, int trapno, int flags);
extern void memory_failure_queue(unsigned long pfn, int trapno, int flags);
extern int unpoison_memory(unsigned long pfn);
extern int sysctl_memory_failure_early_kill;
#include <linux/percpu.h>
#include <asm/module.h>
- #include <trace/events/module.h>
-
/* Not Yet Implemented */
#define MODULE_SUPPORTED_DEVICE(name)
/* Size of RO sections of the module (text+rodata) */
unsigned int init_ro_size, core_ro_size;
+ /* The handle returned from unwind_add_table. */
+ void *unwind_info;
+
/* Arch-specific module values */
struct mod_arch_specific arch;
bool is_module_address(unsigned long addr);
bool is_module_percpu_address(unsigned long addr);
bool is_module_text_address(unsigned long addr);
+const char *supported_printable(int taint);
static inline int within_module_core(unsigned long addr, struct module *mod)
{
/* Sometimes we know we already have a refcount, and it's easier not
to handle the error case (which only happens with rmmod --wait). */
- static inline void __module_get(struct module *module)
- {
- if (module) {
- preempt_disable();
- __this_cpu_inc(module->refptr->incs);
- trace_module_get(module, _THIS_IP_);
- preempt_enable();
- }
- }
-
- static inline int try_module_get(struct module *module)
- {
- int ret = 1;
-
- if (module) {
- preempt_disable();
- if (likely(module_is_live(module))) {
- __this_cpu_inc(module->refptr->incs);
- trace_module_get(module, _THIS_IP_);
- } else
- ret = 0;
-
- preempt_enable();
- }
- return ret;
- }
+ extern void __module_get(struct module *module);
+
+ /* This is the Right Way to get a module: if it fails, it's being removed,
+ * so pretend it's not there. */
+ extern bool try_module_get(struct module *module);
extern void module_put(struct module *module);
#ifdef __KERNEL__
+ /*
+ * Enable dprintk() debugging support for nfs client.
+ */
+ #ifdef CONFIG_NFS_DEBUG
+ # define NFS_DEBUG
+ #endif
+
#include <linux/in.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
*/
__be32 cookieverf[2];
- /*
- * This is the list of dirty unwritten pages.
- */
- struct radix_tree_root nfs_page_tree;
-
unsigned long npages;
unsigned long ncommit;
+ struct list_head commit_list;
/* Open contexts for shared mmap writes */
struct list_head open_files;
#define NFS_INO_PNFS_COMMIT (8) /* use pnfs code for commit */
#define NFS_INO_LAYOUTCOMMIT (9) /* layoutcommit required */
#define NFS_INO_LAYOUTCOMMITTING (10) /* layoutcommit inflight */
+#define NFS_INO_SEEN_GETATTR (11) /* flag to track if app is calling
+ * getattr in a directory during
+ * readdir
+ */
static inline struct nfs_inode *NFS_I(const struct inode *inode)
{
kfree(fh);
}
+ #ifdef NFS_DEBUG
+ extern u32 _nfs_display_fhandle_hash(const struct nfs_fh *fh);
+ static inline u32 nfs_display_fhandle_hash(const struct nfs_fh *fh)
+ {
+ return _nfs_display_fhandle_hash(fh);
+ }
+ extern void _nfs_display_fhandle(const struct nfs_fh *fh, const char *caption);
+ #define nfs_display_fhandle(fh, caption) \
+ do { \
+ if (unlikely(nfs_debug & NFSDBG_FACILITY)) \
+ _nfs_display_fhandle(fh, caption); \
+ } while (0)
+ #else
+ static inline u32 nfs_display_fhandle_hash(const struct nfs_fh *fh)
+ {
+ return 0;
+ }
+ static inline void nfs_display_fhandle(const struct nfs_fh *fh,
+ const char *caption)
+ {
+ }
+ #endif
+
/*
* linux/fs/nfs/nfsroot.c
*/
#ifdef __KERNEL__
# undef ifdebug
# ifdef NFS_DEBUG
# define ifdebug(fac) if (unlikely(nfs_debug & NFSDBG_##fac))
+ # define NFS_IFDEBUG(x) x
# else
# define ifdebug(fac) if (0)
+ # define NFS_IFDEBUG(x)
# endif
#endif /* __KERNEL */
int printk(const char *fmt, ...);
/*
+ * Special printk facility for scheduler use only, _DO_NOT_USE_ !
+ */
+ __printf(1, 2) __cold int printk_sched(const char *fmt, ...);
+
+ /*
* Please don't use printk_ratelimit(), because it shares ratelimiting state
* with all other unrelated printk_ratelimit() callsites. Instead use
* printk_ratelimited() or plain old __ratelimit().
{
return 0;
}
+ static inline __printf(1, 2) __cold
+ int printk_sched(const char *s, ...)
+ {
+ return 0;
+ }
static inline int printk_ratelimit(void)
{
return 0;
#define pr_fmt(fmt) fmt
#endif
+#if defined(__KMSG_CHECKER) && defined(KMSG_COMPONENT)
+
+/* generate magic string for scripts/kmsg-doc to parse */
+#define pr_printk_hash(level, format, ...) \
+ __KMSG_PRINT(level _FMT_ format _ARGS_ #__VA_ARGS__ _END_)
+
+#elif defined(CONFIG_KMSG_IDS) && defined(KMSG_COMPONENT)
+
+int printk_hash(const char *, const char *, ...);
+#define pr_printk_hash(level, format, ...) \
+ printk_hash(level KMSG_COMPONENT ".%06x" ": ", format, ##__VA_ARGS__)
+
+#else /* !defined(CONFIG_KMSG_IDS) */
+
+#define pr_printk_hash(level, format, ...) \
+ printk(level pr_fmt(format), ##__VA_ARGS__)
+
+#endif
+
#define pr_emerg(fmt, ...) \
- printk(KERN_EMERG pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_EMERG, fmt, ##__VA_ARGS__)
#define pr_alert(fmt, ...) \
- printk(KERN_ALERT pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_ALERT, fmt, ##__VA_ARGS__)
#define pr_crit(fmt, ...) \
- printk(KERN_CRIT pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_CRIT, fmt, ##__VA_ARGS__)
#define pr_err(fmt, ...) \
- printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_ERR, fmt, ##__VA_ARGS__)
#define pr_warning(fmt, ...) \
- printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_WARNING, fmt, ##__VA_ARGS__)
#define pr_warn pr_warning
#define pr_notice(fmt, ...) \
- printk(KERN_NOTICE pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_NOTICE, fmt, ##__VA_ARGS__)
#define pr_info(fmt, ...) \
- printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
+ pr_printk_hash(KERN_INFO, fmt, ##__VA_ARGS__)
#define pr_cont(fmt, ...) \
printk(KERN_CONT fmt, ##__VA_ARGS__)
/* pr_devel() should produce zero code unless DEBUG is defined */
#ifdef DEBUG
#endif
/* If you are writing a driver, please use dev_dbg instead */
- #if defined(DEBUG)
- #define pr_debug(fmt, ...) \
- printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
- #elif defined(CONFIG_DYNAMIC_DEBUG)
+ #if defined(CONFIG_DYNAMIC_DEBUG)
/* dynamic_pr_debug() uses pr_fmt() internally so we don't need it here */
#define pr_debug(fmt, ...) \
dynamic_pr_debug(fmt, ##__VA_ARGS__)
+ #elif defined(DEBUG)
+ #define pr_debug(fmt, ...) \
+ printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) \
no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
struct rpc_call_ops {
void (*rpc_call_prepare)(struct rpc_task *, void *);
void (*rpc_call_done)(struct rpc_task *, void *);
+ void (*rpc_count_stats)(struct rpc_task *, void *);
void (*rpc_release)(void *);
};
unsigned char nr; /* # tasks remaining for cookie */
unsigned short qlen; /* total # tasks waiting in queue */
struct rpc_timer timer_list;
- #ifdef RPC_DEBUG
+ #if defined(RPC_DEBUG) || defined(RPC_TRACEPOINTS)
const char * name;
#endif
};
struct rpc_task *);
void rpc_wake_up(struct rpc_wait_queue *);
struct rpc_task *rpc_wake_up_next(struct rpc_wait_queue *);
+ struct rpc_task *rpc_wake_up_first(struct rpc_wait_queue *,
+ bool (*)(struct rpc_task *, void *),
+ void *);
void rpc_wake_up_status(struct rpc_wait_queue *, int);
+void rpc_wake_up_softconn_status(struct rpc_wait_queue *, int);
int rpc_queue_empty(struct rpc_wait_queue *);
void rpc_delay(struct rpc_task *, unsigned long);
void * rpc_malloc(struct rpc_task *, size_t);
void rpciod_down(void);
int __rpc_wait_for_completion_task(struct rpc_task *task, int (*)(void *));
#ifdef RPC_DEBUG
- void rpc_show_tasks(void);
+ struct net;
+ void rpc_show_tasks(struct net *);
#endif
int rpc_init_mempool(void);
void rpc_destroy_mempool(void);
return (task->tk_priority + RPC_PRIORITY_LOW == prio);
}
- #ifdef RPC_DEBUG
- static inline const char * rpc_qname(struct rpc_wait_queue *q)
+ #if defined(RPC_DEBUG) || defined(RPC_TRACEPOINTS)
+ static inline const char * rpc_qname(const struct rpc_wait_queue *q)
{
return ((q && q->name) ? q->name : "unknown");
}
+
+ static inline void rpc_assign_waitqueue_name(struct rpc_wait_queue *q,
+ const char *name)
+ {
+ q->name = name;
+ }
+ #else
+ static inline void rpc_assign_waitqueue_name(struct rpc_wait_queue *q,
+ const char *name)
+ {
+ }
#endif
#endif /* _LINUX_SUNRPC_SCHED_H_ */
#ifndef _SCSI_SCSI_DEVICE_H
#define _SCSI_SCSI_DEVICE_H
- #include <linux/device.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <scsi/scsi.h>
#include <linux/atomic.h>
+ struct device;
struct request_queue;
struct scsi_cmnd;
struct scsi_lun;
unsigned use_10_for_ms:1; /* first try 10-byte mode sense/select */
unsigned skip_ms_page_8:1; /* do not use MODE SENSE page 0x08 */
unsigned skip_ms_page_3f:1; /* do not use MODE SENSE page 0x3f */
+ unsigned skip_vpd_pages:1; /* do not read VPD pages */
unsigned use_192_bytes_for_3f:1; /* ask for 192 bytes from page 0x3f */
unsigned no_start_on_add:1; /* do not issue start on add */
unsigned allow_restart:1; /* issue START_UNIT in error handler */
unsigned int single_lun:1; /* Indicates we should only
* allow I/O to one of the luns
* for the device at a time. */
- unsigned int pdt_1f_for_no_lun; /* PDT = 0x1f */
- /* means no lun present */
+ unsigned int pdt_1f_for_no_lun:1; /* PDT = 0x1f
+ * means no lun present. */
+ unsigned int no_report_luns:1; /* Don't use
+ * REPORT LUNS for scanning. */
/* commands actually active on LLD. protected by host lock. */
unsigned int target_busy;
/*
extern void __starget_for_each_device(struct scsi_target *, void *,
void (*fn)(struct scsi_device *,
void *));
+extern struct scsi_device *scsi_device_from_queue(struct request_queue *);
/* only exposed to implement shost_for_each_device */
extern struct scsi_device *__scsi_iterate_devices(struct Scsi_Host *,
#define SCSI_NETLINK_H
#include <linux/netlink.h>
-
+ #include <linux/types.h>
/*
* This file intended to be included by both kernel and user space
/* SCSI Transport Broadcast Groups */
/* leaving groups 0 and 1 unassigned */
#define SCSI_NL_GRP_FC_EVENTS (1<<2) /* Group 2 */
-#define SCSI_NL_GRP_CNT 3
+#define SCSI_NL_GRP_ML_EVENTS (1<<3) /* Group 3 */
+#define SCSI_NL_GRP_CNT 4
/* SCSI_TRANSPORT_MSG event message header */
/* scsi_nl_hdr->transport value */
#define SCSI_NL_TRANSPORT 0
#define SCSI_NL_TRANSPORT_FC 1
-#define SCSI_NL_MAX_TRANSPORTS 2
+#define SCSI_NL_TRANSPORT_ML 2
+#define SCSI_NL_MAX_TRANSPORTS 3
/* Transport-based scsi_nl_hdr->msgtype values are defined in each transport */
+config SUSE_KERNEL
+ def_bool y
+
+config ENTERPRISE_SUPPORT
+ bool "Enable enterprise support facility"
+ depends on SUSE_KERNEL
+ help
+ This feature enables the handling of the "supported" module flag.
+ This flag can be used to report unsupported module loads or even
+ refuse them entirely. It is useful for ensuring that the kernel
+ remains in a state that Novell Technical Services, or its
+ technical partners, are prepared to support.
+
+ Modules in the list of supported modules will be marked supported
+ on build. The default enforcement mode is to report, but not
+ deny, loading of unsupported modules.
+
+ If you aren't building a kernel for an enterprise distribution,
+ say n.
+
+config SPLIT_PACKAGE
+ bool "Split the kernel package into multiple RPMs"
+ depends on SUSE_KERNEL && MODULES
+ help
+ This is an option used by the kernel packaging infrastructure
+ to split kernel modules into different packages. It isn't used
+ by the kernel itself, but allows the packager to make
+ decisions on a per-config basis.
+
+ If you aren't packaging a kernel for distribution, it's safe to
+ say n.
+
+config KERNEL_DESKTOP
+ bool "Kernel to suit desktop workloads"
+ help
+ This is an option used to tune kernel parameters to better suit
+ desktop workloads.
+
config ARCH
string
option env="ARCH"
This option enables preemptible-RCU code that is common between
the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
- config RCU_TRACE
- bool "Enable tracing for RCU"
- help
- This option provides tracing in RCU which presents stats
- in debugfs for debugging RCU implementation.
-
- Say Y here if you want to enable RCU tracing
- Say N if you are unsure.
-
config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value"
range 2 64 if 64BIT
menuconfig CGROUPS
boolean "Control Group support"
depends on EVENTFD
+ default !KERNEL_DESKTOP
help
This option adds support for grouping sets of processes together, for
use with process control subsystems such as Cpusets, CFS, memory
menuconfig CGROUP_SCHED
bool "Group CPU scheduler"
- default n
+ default !KERNEL_DESKTOP
help
This feature lets CPU scheduler recognize task groups and control CPU
bandwidth allocation to such task groups. It uses cgroups to group
#include <linux/rmap.h>
#include <linux/mempolicy.h>
#include <linux/key.h>
+#include <linux/unwind.h>
#include <linux/buffer_head.h>
#include <linux/page_cgroup.h>
#include <linux/debug_locks.h>
extern void sbus_init(void);
extern void prio_tree_init(void);
extern void radix_tree_init(void);
- extern void free_initmem(void);
#ifndef CONFIG_DEBUG_RODATA
static inline void mark_rodata_ro(void) { }
#endif
* at least once to get things moving:
*/
init_idle_bootup_task(current);
- preempt_enable_no_resched();
- schedule();
-
+ schedule_preempt_disabled();
/* Call into cpu_idle with preempt disabled */
- preempt_disable();
cpu_idle();
}
void __init parse_early_options(char *cmdline)
{
- parse_args("early options", cmdline, NULL, 0, do_early_param);
+ parse_args("early options", cmdline, NULL, 0, 0, 0, do_early_param);
}
/* Arch code calls this early on, or if not, just before other parsing. */
static void __init mm_init(void)
{
/*
- * page_cgroup requires countinous pages as memmap
- * and it's bigger than MAX_ORDER unless SPARSEMEM.
+ * page_cgroup requires contiguous pages,
+ * bigger than MAX_ORDER unless SPARSEMEM.
*/
page_cgroup_init_flatmem();
mem_init();
* Need to run as early as possible, to initialize the
* lockdep hash:
*/
+ unwind_init();
lockdep_init();
smp_setup_processor_id();
debug_objects_early_init();
mm_init_owner(&init_mm, &init_task);
mm_init_cpumask(&init_mm);
setup_command_line(command_line);
+ unwind_setup();
setup_nr_cpu_ids();
setup_per_cpu_areas();
smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
parse_early_param();
parse_args("Booting kernel", static_command_line, __start___param,
__stop___param - __start___param,
- &unknown_bootoption);
+ 0, 0, &unknown_bootoption);
jump_label_init();
}
- extern initcall_t __initcall_start[], __initcall_end[], __early_initcall_end[];
+ extern initcall_t __initcall_start[];
+ extern initcall_t __initcall0_start[];
+ extern initcall_t __initcall1_start[];
+ extern initcall_t __initcall2_start[];
+ extern initcall_t __initcall3_start[];
+ extern initcall_t __initcall4_start[];
+ extern initcall_t __initcall5_start[];
+ extern initcall_t __initcall6_start[];
+ extern initcall_t __initcall7_start[];
+ extern initcall_t __initcall_end[];
+
+ static initcall_t *initcall_levels[] __initdata = {
+ __initcall0_start,
+ __initcall1_start,
+ __initcall2_start,
+ __initcall3_start,
+ __initcall4_start,
+ __initcall5_start,
+ __initcall6_start,
+ __initcall7_start,
+ __initcall_end,
+ };
+
+ static char *initcall_level_names[] __initdata = {
+ "early parameters",
+ "core parameters",
+ "postcore parameters",
+ "arch parameters",
+ "subsys parameters",
+ "fs parameters",
+ "device parameters",
+ "late parameters",
+ };
+
+ static int __init ignore_unknown_bootoption(char *param, char *val)
+ {
+ return 0;
+ }
- static void __init do_initcalls(void)
+ static void __init do_initcall_level(int level)
{
+ extern const struct kernel_param __start___param[], __stop___param[];
initcall_t *fn;
- for (fn = __early_initcall_end; fn < __initcall_end; fn++)
+ strcpy(static_command_line, saved_command_line);
+ parse_args(initcall_level_names[level],
+ static_command_line, __start___param,
+ __stop___param - __start___param,
+ level, level,
+ ignore_unknown_bootoption);
+
+ for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++)
do_one_initcall(*fn);
}
+ static void __init do_initcalls(void)
+ {
+ int level;
+
+ for (level = 0; level < ARRAY_SIZE(initcall_levels) - 1; level++)
+ do_initcall_level(level);
+ }
+
/*
* Ok, the machine is now initialized. None of the devices
* have been touched yet, but the CPU subsystem is up and
{
initcall_t *fn;
- for (fn = __initcall_start; fn < __early_initcall_end; fn++)
+ for (fn = __initcall_start; fn < __initcall0_start; fn++)
do_one_initcall(*fn);
}
choice
prompt "Preemption Model"
+ default PREEMPT if KERNEL_DESKTOP
default PREEMPT_NONE
config PREEMPT_NONE
config PREEMPT
bool "Preemptible Kernel (Low-Latency Desktop)"
- depends on !XEN
select PREEMPT_COUNT
+ select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
help
This option reduces the latency of the kernel by making
all kernel code (that is not executing in a critical section)
obj-$(CONFIG_FREEZER) += freezer.o
obj-$(CONFIG_PROFILING) += profile.o
- obj-$(CONFIG_SYSCTL_SYSCALL_CHECK) += sysctl_check.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += time/
obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
+obj-$(CONFIG_STACK_UNWIND) += unwind.o
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
{
return sprintf(buf, "%zu\n", crash_get_memory_size());
}
- #ifndef CONFIG_XEN
static ssize_t kexec_crash_size_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
return ret < 0 ? ret : count;
}
KERNEL_ATTR_RW(kexec_crash_size);
- #else
- KERNEL_ATTR_RO(kexec_crash_size);
- #endif
static ssize_t vmcoreinfo_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
struct kobject *kernel_kobj;
EXPORT_SYMBOL_GPL(kernel_kobj);
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+const char *supported_printable(int taint)
+{
+ int mask = TAINT_PROPRIETARY_MODULE|TAINT_NO_SUPPORT;
+ if ((taint & mask) == mask)
+ return "No, Proprietary and Unsupported modules are loaded";
+ else if (taint & TAINT_PROPRIETARY_MODULE)
+ return "No, Proprietary modules are loaded";
+ else if (taint & TAINT_NO_SUPPORT)
+ return "No, Unsupported modules are loaded";
+ else if (taint & TAINT_EXTERNAL_SUPPORT)
+ return "Yes, External";
+ else
+ return "Yes";
+}
+
+static ssize_t supported_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%s\n", supported_printable(get_taint()));
+}
+KERNEL_ATTR_RO(supported);
+#endif
+
static struct attribute * kernel_attrs[] = {
&fscaps_attr.attr,
#if defined(CONFIG_HOTPLUG)
&kexec_crash_size_attr.attr,
&vmcoreinfo_attr.attr,
#endif
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+ &supported_attr.attr,
+#endif
NULL
};
#include <linux/device.h>
#include <linux/string.h>
#include <linux/mutex.h>
+#include <linux/unwind.h>
#include <linux/rculist.h>
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
/* If this is set, the section belongs in the init part of the module */
#define INIT_OFFSET_MASK (1UL << (BITS_PER_LONG-1))
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+/* Allow unsupported modules switch. */
+#ifdef UNSUPPORTED_MODULES
+int unsupported = UNSUPPORTED_MODULES;
+#else
+int unsupported = 2; /* don't warn when loading unsupported modules. */
+#endif
+
+static int __init unsupported_setup(char *str)
+{
+ get_option(&str, &unsupported);
+ return 1;
+}
+__setup("unsupported=", unsupported_setup);
+#endif
+
/*
* Mutex protects:
* 1) List of modules (also safely readable with preempt_disable),
/* Block module loading/unloading? */
int modules_disabled = 0;
+ core_param(nomodule, modules_disabled, bint, 0);
/* Waiting for a module to finish initializing? */
static DECLARE_WAIT_QUEUE_HEAD(module_wq);
struct _ddebug *debug;
unsigned int num_debug;
struct {
- unsigned int sym, str, mod, vers, info, pcpu;
+ unsigned int sym, str, mod, vers, info, pcpu, unwind;
} index;
};
#endif /* CONFIG_SMP */
+static unsigned int find_unwind(struct load_info *info)
+{
+ int section = 0;
+#ifdef ARCH_UNWIND_SECTION_NAME
+ section = find_sec(info, ARCH_UNWIND_SECTION_NAME);
+ if (section)
+ info->sechdrs[section].sh_flags |= SHF_ALLOC;
+#endif
+ return section;
+}
+
+static void add_unwind_table(struct module *mod, struct load_info *info)
+{
+ int index = info->index.unwind;
+
+ /* Size of section 0 is 0, so this is ok if there is no unwind info. */
+ mod->unwind_info = unwind_add_table(mod,
+ (void *)info->sechdrs[index].sh_addr,
+ info->sechdrs[index].sh_size);
+}
+
#define MODINFO_ATTR(field) \
static void setup_modinfo_##field(struct module *mod, const char *s) \
{ \
static struct module_attribute modinfo_refcnt =
__ATTR(refcnt, 0444, show_refcnt, NULL);
+ void __module_get(struct module *module)
+ {
+ if (module) {
+ preempt_disable();
+ __this_cpu_inc(module->refptr->incs);
+ trace_module_get(module, _RET_IP_);
+ preempt_enable();
+ }
+ }
+ EXPORT_SYMBOL(__module_get);
+
+ bool try_module_get(struct module *module)
+ {
+ bool ret = true;
+
+ if (module) {
+ preempt_disable();
+
+ if (likely(module_is_live(module))) {
+ __this_cpu_inc(module->refptr->incs);
+ trace_module_get(module, _RET_IP_);
+ } else
+ ret = false;
+
+ preempt_enable();
+ }
+ return ret;
+ }
+ EXPORT_SYMBOL(try_module_get);
+
void module_put(struct module *module)
{
if (module) {
buf[l++] = 'F';
if (mod->taints & (1 << TAINT_CRAP))
buf[l++] = 'C';
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+ if (mod->taints & (1 << TAINT_NO_SUPPORT))
+ buf[l++] = 'N';
+ if (mod->taints & (1 << TAINT_EXTERNAL_SUPPORT))
+ buf[l++] = 'X';
+#endif
/*
* TAINT_FORCED_RMMOD: could be added.
* TAINT_UNSAFE_SMP, TAINT_MACHINE_CHECK, TAINT_BAD_PAGE don't
static struct module_attribute modinfo_taint =
__ATTR(taint, 0444, show_taint, NULL);
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+static void setup_modinfo_supported(struct module *mod, const char *s)
+{
+ if (!s) {
+ mod->taints |= (1 << TAINT_NO_SUPPORT);
+ return;
+ }
+
+ if (strcmp(s, "external") == 0)
+ mod->taints |= (1 << TAINT_EXTERNAL_SUPPORT);
+ else if (strcmp(s, "yes"))
+ mod->taints |= (1 << TAINT_NO_SUPPORT);
+}
+
+static ssize_t show_modinfo_supported(struct module_attribute *mattr,
+ struct module_kobject *mk, char *buffer)
+{
+ return sprintf(buffer, "%s\n", supported_printable(mk->mod->taints));
+}
+
+static struct module_attribute modinfo_supported = {
+ .attr = { .name = "supported", .mode = 0444 },
+ .show = show_modinfo_supported,
+ .setup = setup_modinfo_supported,
+};
+#endif
+
static struct module_attribute *modinfo_attrs[] = {
&module_uevent,
&modinfo_version,
&modinfo_coresize,
&modinfo_initsize,
&modinfo_taint,
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+ &modinfo_supported,
+#endif
#ifdef CONFIG_MODULE_UNLOAD
&modinfo_refcnt,
#endif
add_sect_attrs(mod, info);
add_notes_attrs(mod, info);
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+ /* We don't use add_taint() here because it also disables lockdep. */
+ if (mod->taints & (1 << TAINT_EXTERNAL_SUPPORT))
+ add_nonfatal_taint(TAINT_EXTERNAL_SUPPORT);
+ else if (mod->taints == (1 << TAINT_NO_SUPPORT)) {
+ if (unsupported == 0) {
+ printk(KERN_WARNING "%s: module not supported by "
+ "Novell, refusing to load. To override, echo "
+ "1 > /proc/sys/kernel/unsupported\n", mod->name);
+ err = -ENOEXEC;
+ goto out_remove_attrs;
+ }
+ add_nonfatal_taint(TAINT_NO_SUPPORT);
+ if (unsupported == 1) {
+ printk(KERN_WARNING "%s: module is not supported by "
+ "Novell. Novell Technical Services may decline "
+ "your support request if it involves a kernel "
+ "fault.\n", mod->name);
+ }
+ }
+#endif
+
kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD);
return 0;
+out_remove_attrs:
+ remove_notes_attrs(mod);
+ remove_sect_attrs(mod);
+ del_usage_links(mod);
+ module_remove_modinfo_attrs(mod);
out_unreg_param:
module_param_sysfs_remove(mod);
out_unreg_holders:
/* Remove dynamic debug info */
ddebug_remove_module(mod->name);
+ unwind_remove_table(mod->unwind_info, 0);
+
/* Arch-specific cleanup. */
module_arch_cleanup(mod);
return -ENOEXEC;
/* Suck in entire file: we'll want most of it. */
- /* vmalloc barfs on "unusual" numbers. Check here */
- if (len > 64 * 1024 * 1024 || (hdr = vmalloc(len)) == NULL)
+ if ((hdr = vmalloc(len)) == NULL)
return -ENOMEM;
if (copy_from_user(hdr, umod, len) != 0) {
info->index.pcpu = find_pcpusec(info);
+ info->index.unwind = find_unwind(info);
+
/* Check module struct version now, before we try to use module. */
if (!check_modstruct_version(info->sechdrs, info->index.vers, mod))
return ERR_PTR(-ENOEXEC);
mutex_unlock(&module_mutex);
/* Module is ready to execute: parsing args may do that. */
- err = parse_args(mod->name, mod->args, mod->kp, mod->num_kp, NULL);
+ err = parse_args(mod->name, mod->args, mod->kp, mod->num_kp,
+ -32768, 32767, NULL);
if (err < 0)
goto unlink;
if (err < 0)
goto unlink;
+ /* Initialize unwind table */
+ add_unwind_table(mod, &info);
+
/* Get rid of temporary copy. */
free_copy(&info);
/* Drop initial reference. */
module_put(mod);
trim_init_extable(mod);
+ unwind_remove_table(mod->unwind_info, 1);
#ifdef CONFIG_KALLSYMS
mod->num_symtab = mod->core_num_syms;
mod->symtab = mod->core_symtab;
if (last_unloaded_module[0])
printk(" [last unloaded: %s]", last_unloaded_module);
printk("\n");
+#ifdef CONFIG_ENTERPRISE_SUPPORT
+ printk("Supported: %s\n", supported_printable(get_taint()));
+#endif
}
#ifdef CONFIG_MODVERSIONS
#include <linux/cpu.h>
#include <linux/notifier.h>
#include <linux/rculist.h>
+#include <linux/jhash.h>
+#include <linux/device.h>
#include <asm/uaccess.h>
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/printk.h>
+
/*
* Architectures can override it:
*/
return do_syslog(type, buf, len, SYSLOG_FROM_CALL);
}
-#ifdef CONFIG_KGDB_KDB
+#if defined(CONFIG_KGDB_KDB) || defined(CONFIG_DEBUG_KERNEL)
/* kdb dmesg command needs access to the syslog buffer. do_syslog()
* uses locks so it cannot be used during debugging. Just tell kdb
* where the start and end of the physical and logical logs are. This
static void _call_console_drivers(unsigned start,
unsigned end, int msg_log_level)
{
+ trace_console(&LOG_BUF(0), start, end, log_buf_len);
+
if ((msg_log_level < console_loglevel || ignore_loglevel) &&
console_drivers && start != end) {
if ((start & LOG_BUF_MASK) > (end & LOG_BUF_MASK)) {
return console_locked;
}
+ /*
+ * Delayed printk facility, for scheduler-internal messages:
+ */
+ #define PRINTK_BUF_SIZE 512
+
+ #define PRINTK_PENDING_WAKEUP 0x01
+ #define PRINTK_PENDING_SCHED 0x02
+
static DEFINE_PER_CPU(int, printk_pending);
+ static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
void printk_tick(void)
{
if (__this_cpu_read(printk_pending)) {
- __this_cpu_write(printk_pending, 0);
- wake_up_interruptible(&log_wait);
+ int pending = __this_cpu_xchg(printk_pending, 0);
+ if (pending & PRINTK_PENDING_SCHED) {
+ char *buf = __get_cpu_var(printk_sched_buf);
+ printk(KERN_WARNING "[sched_delayed] %s", buf);
+ }
+ if (pending & PRINTK_PENDING_WAKEUP)
+ wake_up_interruptible(&log_wait);
}
}
void wake_up_klogd(void)
{
if (waitqueue_active(&log_wait))
- this_cpu_write(printk_pending, 1);
+ this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
}
/**
#if defined CONFIG_PRINTK
+ int printk_sched(const char *fmt, ...)
+ {
+ unsigned long flags;
+ va_list args;
+ char *buf;
+ int r;
+
+ local_irq_save(flags);
+ buf = __get_cpu_var(printk_sched_buf);
+
+ va_start(args, fmt);
+ r = vsnprintf(buf, PRINTK_BUF_SIZE, fmt, args);
+ va_end(args);
+
+ __this_cpu_or(printk_pending, PRINTK_PENDING_SCHED);
+ local_irq_restore(flags);
+
+ return r;
+ }
+
/*
* printk rate limiting, lifted from the networking subsystem.
*
rcu_read_unlock();
}
#endif
+
+#if defined CONFIG_PRINTK && defined CONFIG_KMSG_IDS
+
+/**
+ * printk_hash - print a kernel message include a hash over the message
+ * @prefix: message prefix including the ".%06x" for the hash
+ * @fmt: format string
+ */
+asmlinkage int printk_hash(const char *prefix, const char *fmt, ...)
+{
+ va_list args;
+ int r;
+
+ r = printk(prefix, jhash(fmt, strlen(fmt), 0) & 0xffffff);
+ va_start(args, fmt);
+ r += vprintk(fmt, args);
+ va_end(args);
+
+ return r;
+}
+EXPORT_SYMBOL(printk_hash);
+
+/**
+ * printk_dev_hash - print a kernel message including a hash over the message
+ * @prefix: message prefix including the ".%06x" for the hash
+ * @driver_name: name of the driver this message is about
+ * @fmt: format string
+ */
+asmlinkage int printk_dev_hash(const char *prefix, const char *driver_name,
+ const char *fmt, ...)
+{
+ va_list args;
+ int r;
+
+ r = printk(prefix, driver_name, jhash(fmt, strlen(fmt), 0) & 0xffffff);
+ va_start(args, fmt);
+ r += vprintk(fmt, args);
+ va_end(args);
+
+ return r;
+}
+EXPORT_SYMBOL(printk_dev_hash);
+#endif
#include <linux/swap.h>
#include <linux/slab.h>
#include <linux/sysctl.h>
+ #include <linux/bitmap.h>
#include <linux/signal.h>
#include <linux/printk.h>
#include <linux/proc_fs.h>
#include <linux/oom.h>
#include <linux/kmod.h>
#include <linux/capability.h>
+ #include <linux/binfmts.h>
#include <asm/uaccess.h>
#include <asm/processor.h>
#include <asm/stacktrace.h>
#include <asm/io.h>
#endif
+ #ifdef CONFIG_SPARC
+ #include <asm/setup.h>
+ #endif
#ifdef CONFIG_BSD_PROCESS_ACCT
#include <linux/acct.h>
#endif
#include <linux/inotify.h>
#endif
#ifdef CONFIG_SPARC
- #include <asm/system.h>
#endif
#ifdef CONFIG_SPARC64
#endif
- static struct ctl_table root_table[];
- static struct ctl_table_root sysctl_table_root;
- static struct ctl_table_header root_table_header = {
- {{.count = 1,
- .ctl_table = root_table,
- .ctl_entry = LIST_HEAD_INIT(sysctl_table_root.default_set.list),}},
- .root = &sysctl_table_root,
- .set = &sysctl_table_root.default_set,
- };
- static struct ctl_table_root sysctl_table_root = {
- .root_list = LIST_HEAD_INIT(sysctl_table_root.root_list),
- .default_set.list = LIST_HEAD_INIT(root_table_header.ctl_entry),
- };
-
static struct ctl_table kern_table[];
static struct ctl_table vm_table[];
static struct ctl_table fs_table[];
/* The default sysctl tables: */
- static struct ctl_table root_table[] = {
+ static struct ctl_table sysctl_base_table[] = {
{
.procname = "kernel",
.mode = 0555,
.extra1 = &pid_max_min,
.extra2 = &pid_max_max,
},
+#if defined(CONFIG_MODULES) && defined(CONFIG_ENTERPRISE_SUPPORT)
+ {
+ .procname = "unsupported",
+ .data = &unsupported,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+#endif
{
.procname = "panic_on_oops",
.data = &panic_on_oops,
.proc_handler = proc_dointvec,
},
#endif
+ {
+ .procname = "suid_dumpable",
+ .data = &suid_dumpable,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#if defined(CONFIG_S390) && defined(CONFIG_SMP)
{
.procname = "spin_retry",
.proc_handler = proc_dointvec,
},
#endif
- #if defined(CONFIG_ACPI_SLEEP) && defined(CONFIG_X86) && !defined(CONFIG_ACPI_PV_SLEEP)
+ #if defined(CONFIG_ACPI_SLEEP) && defined(CONFIG_X86)
{
.procname = "acpi_video_flags",
.data = &acpi_realmode_flags,
{ }
};
- static DEFINE_SPINLOCK(sysctl_lock);
-
- /* called under sysctl_lock */
- static int use_table(struct ctl_table_header *p)
+ int __init sysctl_init(void)
{
- if (unlikely(p->unregistering))
- return 0;
- p->used++;
- return 1;
- }
-
- /* called under sysctl_lock */
- static void unuse_table(struct ctl_table_header *p)
- {
- if (!--p->used)
- if (unlikely(p->unregistering))
- complete(p->unregistering);
- }
-
- /* called under sysctl_lock, will reacquire if has to wait */
- static void start_unregistering(struct ctl_table_header *p)
- {
- /*
- * if p->used is 0, nobody will ever touch that entry again;
- * we'll eliminate all paths to it before dropping sysctl_lock
- */
- if (unlikely(p->used)) {
- struct completion wait;
- init_completion(&wait);
- p->unregistering = &wait;
- spin_unlock(&sysctl_lock);
- wait_for_completion(&wait);
- spin_lock(&sysctl_lock);
- } else {
- /* anything non-NULL; we'll never dereference it */
- p->unregistering = ERR_PTR(-EINVAL);
- }
- /*
- * do not remove from the list until nobody holds it; walking the
- * list in do_sysctl() relies on that.
- */
- list_del_init(&p->ctl_entry);
- }
-
- void sysctl_head_get(struct ctl_table_header *head)
- {
- spin_lock(&sysctl_lock);
- head->count++;
- spin_unlock(&sysctl_lock);
- }
-
- void sysctl_head_put(struct ctl_table_header *head)
- {
- spin_lock(&sysctl_lock);
- if (!--head->count)
- kfree_rcu(head, rcu);
- spin_unlock(&sysctl_lock);
- }
-
- struct ctl_table_header *sysctl_head_grab(struct ctl_table_header *head)
- {
- if (!head)
- BUG();
- spin_lock(&sysctl_lock);
- if (!use_table(head))
- head = ERR_PTR(-ENOENT);
- spin_unlock(&sysctl_lock);
- return head;
- }
-
- void sysctl_head_finish(struct ctl_table_header *head)
- {
- if (!head)
- return;
- spin_lock(&sysctl_lock);
- unuse_table(head);
- spin_unlock(&sysctl_lock);
- }
-
- static struct ctl_table_set *
- lookup_header_set(struct ctl_table_root *root, struct nsproxy *namespaces)
- {
- struct ctl_table_set *set = &root->default_set;
- if (root->lookup)
- set = root->lookup(root, namespaces);
- return set;
- }
-
- static struct list_head *
- lookup_header_list(struct ctl_table_root *root, struct nsproxy *namespaces)
- {
- struct ctl_table_set *set = lookup_header_set(root, namespaces);
- return &set->list;
- }
-
- struct ctl_table_header *__sysctl_head_next(struct nsproxy *namespaces,
- struct ctl_table_header *prev)
- {
- struct ctl_table_root *root;
- struct list_head *header_list;
- struct ctl_table_header *head;
- struct list_head *tmp;
-
- spin_lock(&sysctl_lock);
- if (prev) {
- head = prev;
- tmp = &prev->ctl_entry;
- unuse_table(prev);
- goto next;
- }
- tmp = &root_table_header.ctl_entry;
- for (;;) {
- head = list_entry(tmp, struct ctl_table_header, ctl_entry);
-
- if (!use_table(head))
- goto next;
- spin_unlock(&sysctl_lock);
- return head;
- next:
- root = head->root;
- tmp = tmp->next;
- header_list = lookup_header_list(root, namespaces);
- if (tmp != header_list)
- continue;
-
- do {
- root = list_entry(root->root_list.next,
- struct ctl_table_root, root_list);
- if (root == &sysctl_table_root)
- goto out;
- header_list = lookup_header_list(root, namespaces);
- } while (list_empty(header_list));
- tmp = header_list->next;
- }
- out:
- spin_unlock(&sysctl_lock);
- return NULL;
- }
-
- struct ctl_table_header *sysctl_head_next(struct ctl_table_header *prev)
- {
- return __sysctl_head_next(current->nsproxy, prev);
- }
-
- void register_sysctl_root(struct ctl_table_root *root)
- {
- spin_lock(&sysctl_lock);
- list_add_tail(&root->root_list, &sysctl_table_root.root_list);
- spin_unlock(&sysctl_lock);
- }
-
- /*
- * sysctl_perm does NOT grant the superuser all rights automatically, because
- * some sysctl variables are readonly even to root.
- */
-
- static int test_perm(int mode, int op)
- {
- if (!current_euid())
- mode >>= 6;
- else if (in_egroup_p(0))
- mode >>= 3;
- if ((op & ~mode & (MAY_READ|MAY_WRITE|MAY_EXEC)) == 0)
- return 0;
- return -EACCES;
- }
-
- int sysctl_perm(struct ctl_table_root *root, struct ctl_table *table, int op)
- {
- int mode;
-
- if (root->permissions)
- mode = root->permissions(root, current->nsproxy, table);
- else
- mode = table->mode;
-
- return test_perm(mode, op);
- }
-
- static void sysctl_set_parent(struct ctl_table *parent, struct ctl_table *table)
- {
- for (; table->procname; table++) {
- table->parent = parent;
- if (table->child)
- sysctl_set_parent(table, table->child);
- }
- }
-
- static __init int sysctl_init(void)
- {
- sysctl_set_parent(NULL, root_table);
- #ifdef CONFIG_SYSCTL_SYSCALL_CHECK
- sysctl_check_table(current->nsproxy, root_table);
- #endif
+ register_sysctl_table(sysctl_base_table);
return 0;
}
- core_initcall(sysctl_init);
-
- static struct ctl_table *is_branch_in(struct ctl_table *branch,
- struct ctl_table *table)
- {
- struct ctl_table *p;
- const char *s = branch->procname;
-
- /* branch should have named subdirectory as its first element */
- if (!s || !branch->child)
- return NULL;
-
- /* ... and nothing else */
- if (branch[1].procname)
- return NULL;
-
- /* table should contain subdirectory with the same name */
- for (p = table; p->procname; p++) {
- if (!p->child)
- continue;
- if (p->procname && strcmp(p->procname, s) == 0)
- return p;
- }
- return NULL;
- }
-
- /* see if attaching q to p would be an improvement */
- static void try_attach(struct ctl_table_header *p, struct ctl_table_header *q)
- {
- struct ctl_table *to = p->ctl_table, *by = q->ctl_table;
- struct ctl_table *next;
- int is_better = 0;
- int not_in_parent = !p->attached_by;
-
- while ((next = is_branch_in(by, to)) != NULL) {
- if (by == q->attached_by)
- is_better = 1;
- if (to == p->attached_by)
- not_in_parent = 1;
- by = by->child;
- to = next->child;
- }
-
- if (is_better && not_in_parent) {
- q->attached_by = by;
- q->attached_to = to;
- q->parent = p;
- }
- }
-
- /**
- * __register_sysctl_paths - register a sysctl hierarchy
- * @root: List of sysctl headers to register on
- * @namespaces: Data to compute which lists of sysctl entries are visible
- * @path: The path to the directory the sysctl table is in.
- * @table: the top-level table structure
- *
- * Register a sysctl table hierarchy. @table should be a filled in ctl_table
- * array. A completely 0 filled entry terminates the table.
- *
- * The members of the &struct ctl_table structure are used as follows:
- *
- * procname - the name of the sysctl file under /proc/sys. Set to %NULL to not
- * enter a sysctl file
- *
- * data - a pointer to data for use by proc_handler
- *
- * maxlen - the maximum size in bytes of the data
- *
- * mode - the file permissions for the /proc/sys file, and for sysctl(2)
- *
- * child - a pointer to the child sysctl table if this entry is a directory, or
- * %NULL.
- *
- * proc_handler - the text handler routine (described below)
- *
- * de - for internal use by the sysctl routines
- *
- * extra1, extra2 - extra pointers usable by the proc handler routines
- *
- * Leaf nodes in the sysctl tree will be represented by a single file
- * under /proc; non-leaf nodes will be represented by directories.
- *
- * sysctl(2) can automatically manage read and write requests through
- * the sysctl table. The data and maxlen fields of the ctl_table
- * struct enable minimal validation of the values being written to be
- * performed, and the mode field allows minimal authentication.
- *
- * There must be a proc_handler routine for any terminal nodes
- * mirrored under /proc/sys (non-terminals are handled by a built-in
- * directory handler). Several default handlers are available to
- * cover common cases -
- *
- * proc_dostring(), proc_dointvec(), proc_dointvec_jiffies(),
- * proc_dointvec_userhz_jiffies(), proc_dointvec_minmax(),
- * proc_doulongvec_ms_jiffies_minmax(), proc_doulongvec_minmax()
- *
- * It is the handler's job to read the input buffer from user memory
- * and process it. The handler should return 0 on success.
- *
- * This routine returns %NULL on a failure to register, and a pointer
- * to the table header on success.
- */
- struct ctl_table_header *__register_sysctl_paths(
- struct ctl_table_root *root,
- struct nsproxy *namespaces,
- const struct ctl_path *path, struct ctl_table *table)
- {
- struct ctl_table_header *header;
- struct ctl_table *new, **prevp;
- unsigned int n, npath;
- struct ctl_table_set *set;
-
- /* Count the path components */
- for (npath = 0; path[npath].procname; ++npath)
- ;
-
- /*
- * For each path component, allocate a 2-element ctl_table array.
- * The first array element will be filled with the sysctl entry
- * for this, the second will be the sentinel (procname == 0).
- *
- * We allocate everything in one go so that we don't have to
- * worry about freeing additional memory in unregister_sysctl_table.
- */
- header = kzalloc(sizeof(struct ctl_table_header) +
- (2 * npath * sizeof(struct ctl_table)), GFP_KERNEL);
- if (!header)
- return NULL;
-
- new = (struct ctl_table *) (header + 1);
-
- /* Now connect the dots */
- prevp = &header->ctl_table;
- for (n = 0; n < npath; ++n, ++path) {
- /* Copy the procname */
- new->procname = path->procname;
- new->mode = 0555;
-
- *prevp = new;
- prevp = &new->child;
-
- new += 2;
- }
- *prevp = table;
- header->ctl_table_arg = table;
-
- INIT_LIST_HEAD(&header->ctl_entry);
- header->used = 0;
- header->unregistering = NULL;
- header->root = root;
- sysctl_set_parent(NULL, header->ctl_table);
- header->count = 1;
- #ifdef CONFIG_SYSCTL_SYSCALL_CHECK
- if (sysctl_check_table(namespaces, header->ctl_table)) {
- kfree(header);
- return NULL;
- }
- #endif
- spin_lock(&sysctl_lock);
- header->set = lookup_header_set(root, namespaces);
- header->attached_by = header->ctl_table;
- header->attached_to = root_table;
- header->parent = &root_table_header;
- for (set = header->set; set; set = set->parent) {
- struct ctl_table_header *p;
- list_for_each_entry(p, &set->list, ctl_entry) {
- if (p->unregistering)
- continue;
- try_attach(p, header);
- }
- }
- header->parent->count++;
- list_add_tail(&header->ctl_entry, &header->set->list);
- spin_unlock(&sysctl_lock);
-
- return header;
- }
-
- /**
- * register_sysctl_table_path - register a sysctl table hierarchy
- * @path: The path to the directory the sysctl table is in.
- * @table: the top-level table structure
- *
- * Register a sysctl table hierarchy. @table should be a filled in ctl_table
- * array. A completely 0 filled entry terminates the table.
- *
- * See __register_sysctl_paths for more details.
- */
- struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
- struct ctl_table *table)
- {
- return __register_sysctl_paths(&sysctl_table_root, current->nsproxy,
- path, table);
- }
-
- /**
- * register_sysctl_table - register a sysctl table hierarchy
- * @table: the top-level table structure
- *
- * Register a sysctl table hierarchy. @table should be a filled in ctl_table
- * array. A completely 0 filled entry terminates the table.
- *
- * See register_sysctl_paths for more details.
- */
- struct ctl_table_header *register_sysctl_table(struct ctl_table *table)
- {
- static const struct ctl_path null_path[] = { {} };
-
- return register_sysctl_paths(null_path, table);
- }
-
- /**
- * unregister_sysctl_table - unregister a sysctl table hierarchy
- * @header: the header returned from register_sysctl_table
- *
- * Unregisters the sysctl table and all children. proc entries may not
- * actually be removed until they are no longer used by anyone.
- */
- void unregister_sysctl_table(struct ctl_table_header * header)
- {
- might_sleep();
-
- if (header == NULL)
- return;
-
- spin_lock(&sysctl_lock);
- start_unregistering(header);
- if (!--header->parent->count) {
- WARN_ON(1);
- kfree_rcu(header->parent, rcu);
- }
- if (!--header->count)
- kfree_rcu(header, rcu);
- spin_unlock(&sysctl_lock);
- }
-
- int sysctl_is_seen(struct ctl_table_header *p)
- {
- struct ctl_table_set *set = p->set;
- int res;
- spin_lock(&sysctl_lock);
- if (p->unregistering)
- res = 0;
- else if (!set->is_seen)
- res = 1;
- else
- res = set->is_seen(set);
- spin_unlock(&sysctl_lock);
- return res;
- }
-
- void setup_sysctl_set(struct ctl_table_set *p,
- struct ctl_table_set *parent,
- int (*is_seen)(struct ctl_table_set *))
- {
- INIT_LIST_HEAD(&p->list);
- p->parent = parent ? parent : &sysctl_table_root.default_set;
- p->is_seen = is_seen;
- }
-
- #else /* !CONFIG_SYSCTL */
- struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
- {
- return NULL;
- }
-
- struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
- struct ctl_table *table)
- {
- return NULL;
- }
-
- void unregister_sysctl_table(struct ctl_table_header * table)
- {
- }
-
- void setup_sysctl_set(struct ctl_table_set *p,
- struct ctl_table_set *parent,
- int (*is_seen)(struct ctl_table_set *))
- {
- }
-
- void sysctl_head_put(struct ctl_table_header *head)
- {
- }
-
#endif /* CONFIG_SYSCTL */
/*
}
}
- while (val_a <= val_b)
- set_bit(val_a++, tmp_bitmap);
-
+ bitmap_set(tmp_bitmap, val_a, val_b - val_a + 1);
first = 0;
proc_skip_char(&kbuf, &left, '\n');
}
if (*ppos)
bitmap_or(bitmap, bitmap, tmp_bitmap, bitmap_len);
else
- memcpy(bitmap, tmp_bitmap,
- BITS_TO_LONGS(bitmap_len) * sizeof(unsigned long));
+ bitmap_copy(bitmap, tmp_bitmap, bitmap_len);
}
kfree(tmp_bitmap);
*lenp -= left;
EXPORT_SYMBOL(proc_dostring);
EXPORT_SYMBOL(proc_doulongvec_minmax);
EXPORT_SYMBOL(proc_doulongvec_ms_jiffies_minmax);
- EXPORT_SYMBOL(register_sysctl_table);
- EXPORT_SYMBOL(register_sysctl_paths);
- EXPORT_SYMBOL(unregister_sysctl_table);
{ CTL_INT, KERN_COMPAT_LOG, "compat-log" },
{ CTL_INT, KERN_MAX_LOCK_DEPTH, "max_lock_depth" },
{ CTL_INT, KERN_PANIC_ON_NMI, "panic_on_unrecovered_nmi" },
+ { CTL_INT, KERN_SETUID_DUMPABLE, "suid_dumpable" },
{}
};
};
- #ifdef CONFIG_XEN
- #include <xen/sysctl.h>
- static const struct bin_table bin_xen_table[] = {
- { CTL_INT, CTL_XEN_INDEPENDENT_WALLCLOCK, "independent_wallclock" },
- { CTL_ULONG, CTL_XEN_PERMITTED_CLOCK_JITTER, "permitted_clock_jitter" },
- {}
- };
- #endif
-
static const struct bin_table bin_s390dbf_table[] = {
{ CTL_INT, 5678 /* CTL_S390DBF_STOPPABLE */, "debug_stoppable" },
{ CTL_INT, 5679 /* CTL_S390DBF_ACTIVE */, "debug_active" },
{ CTL_DIR, CTL_BUS, "bus", bin_bus_table },
{ CTL_DIR, CTL_ABI, "abi" },
/* CTL_CPU not used */
- #ifdef CONFIG_XEN
- { CTL_DIR, CTL_XEN, "xen", bin_xen_table },
- #endif
/* CTL_ARLAN "arlan" no longer used */
{ CTL_DIR, CTL_S390DBF, "s390dbf", bin_s390dbf_table },
{ CTL_DIR, CTL_SUNRPC, "sunrpc", bin_sunrpc_table },
hard and soft lockups.
Softlockups are bugs that cause the kernel to loop in kernel
- mode for more than 60 seconds, without giving other tasks a
+ mode for more than 20 seconds, without giving other tasks a
chance to run. The current stack trace is displayed upon
detection and the system will stay locked up.
Hardlockups are bugs that cause the CPU to loop in kernel mode
- for more than 60 seconds, without letting other interrupts have a
+ for more than 10 seconds, without letting other interrupts have a
chance to run. The current stack trace is displayed upon detection
and the system will stay locked up.
The overhead should be minimal. A periodic hrtimer runs to
- generate interrupts and kick the watchdog task every 10-12 seconds.
- An NMI is generated every 60 seconds or so to check for hardlockups.
+ generate interrupts and kick the watchdog task every 4 seconds.
+ An NMI is generated every 10 seconds or so to check for hardlockups.
+
+ The frequency of hrtimer and NMI events and the soft and hard lockup
+ thresholds can be controlled through the sysctl watchdog_thresh.
config HARDLOCKUP_DETECTOR
def_bool LOCKUP_DETECTOR && PERF_EVENTS && HAVE_PERF_EVENTS_NMI && \
- !ARCH_HAS_NMI_WATCHDOG
+ !HAVE_NMI_WATCHDOG
config BOOTPARAM_HARDLOCKUP_PANIC
bool "Panic (Reboot) On Hard Lockups"
help
Say Y here to enable the kernel to panic on "hard lockups",
which are bugs that cause the kernel to loop in kernel
- mode with interrupts disabled for more than 60 seconds.
+ mode with interrupts disabled for more than 10 seconds (configurable
+ using the watchdog_thresh sysctl).
Say N if unsure.
help
Say Y here to enable the kernel to panic on "soft lockups",
which are bugs that cause the kernel to loop in kernel
- mode for more than 60 seconds, without giving other tasks a
- chance to run.
+ mode for more than 20 seconds (configurable using the watchdog_thresh
+ sysctl), without giving other tasks a chance to run.
The panic can be used in combination with panic_timeout,
to cause the system to reboot automatically after a
config DEBUG_SPINLOCK
bool "Spinlock and rw-lock debugging: basic checks"
depends on DEBUG_KERNEL
+ select UNINLINE_SPIN_UNLOCK
help
Say Y here and build SMP to catch missing spinlock initialization
and certain other kinds of spinlock errors commonly made. This is
larger and slower, but it gives very useful debugging information
in case of kernel bugs. (precise oopses/stacktraces/warnings)
+config UNWIND_INFO
+ bool "Compile the kernel with frame unwind information"
+ depends on !IA64 && !PARISC && !ARM
+ depends on !MODULES || !(MIPS || PPC || SUPERH || V850)
+ help
+ If you say Y here the resulting kernel image will be slightly larger
+ but not slower, and it will give very useful debugging information.
+ If you don't debug the kernel, you can say N, but we may not be able
+ to solve problems without frame unwind information or frame pointers.
+
+config STACK_UNWIND
+ bool "Stack unwind support"
+ depends on UNWIND_INFO
+ depends on X86
+ help
+ This enables more precise stack traces, omitting all unrelated
+ occurrences of pointers into kernel code from the dump.
+
config BOOT_PRINTK_DELAY
bool "Delay each boot printk message by N milliseconds"
depends on DEBUG_KERNEL && PRINTK && GENERIC_CALIBRATE_DELAY
Say Y if you want to enable such checks.
+ config RCU_CPU_STALL_INFO
+ bool "Print additional diagnostics on RCU CPU stall"
+ depends on (TREE_RCU || TREE_PREEMPT_RCU) && DEBUG_KERNEL
+ default n
+ help
+ For each stalled CPU that is aware of the current RCU grace
+ period, print out additional per-CPU diagnostic information
+ regarding scheduling-clock ticks, idle state, and,
+ for RCU_FAST_NO_HZ kernels, idle-entry state.
+
+ Say N if you are unsure.
+
+ Say Y if you want to enable such diagnostics.
+
+ config RCU_TRACE
+ bool "Enable tracing for RCU"
+ depends on DEBUG_KERNEL
+ help
+ This option provides tracing in RCU which presents stats
+ in debugfs for debugging RCU implementation.
+
+ Say Y here if you want to enable RCU tracing.
+ Say N if you are unsure.
+
config KPROBES_SANITY_TEST
bool "Kprobes sanity tests"
depends on DEBUG_KERNEL
depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
depends on !X86_64
select STACKTRACE
- select FRAME_POINTER if !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND
+ select FRAME_POINTER if !PPC && !S390 && !MICROBLAZE && !X86 && !ARM_UNWIND
+ select UNWIND_INFO if X86 && !FRAME_POINTER
help
Provide stacktrace filter for fault-injection capabilities
depends on DEBUG_KERNEL
depends on STACKTRACE_SUPPORT
depends on PROC_FS
- select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND
+ select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !X86 && !ARM_UNWIND
+ select UNWIND_INFO if X86 && !FRAME_POINTER
select KALLSYMS
select KALLSYMS_ALL
select STACKTRACE
Enable this option if you want to use the LatencyTOP tool
to find out which userspace is blocking on what kernel operations.
- config SYSCTL_SYSCALL_CHECK
- bool "Sysctl checks"
- depends on SYSCTL
- ---help---
- sys_sysctl uses binary paths that have been found challenging
- to properly maintain and use. This enables checks that help
- you to keep things correct.
-
source mm/Kconfig.debug
source kernel/trace/Kconfig
int i;
int bad = 0;
- #ifdef CONFIG_XEN
- if (PageForeign(page)) {
- PageForeignDestructor(page, order);
- return false;
- }
- #endif
-
trace_mm_page_free(page, order);
kmemcheck_free_shadow(page, order);
unsigned long flags;
int wasMlocked = __TestClearPageMlocked(page);
- #ifdef CONFIG_XEN
- WARN_ON(PageForeign(page) && wasMlocked);
- #endif
if (!free_pages_prepare(page, order))
return;
}
/*
- * Spill all the per-cpu pages from all CPUs back into the buddy allocator
+ * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
+ *
+ * Note that this code is protected against sending an IPI to an offline
+ * CPU but does not guarantee sending an IPI to newly hotplugged CPUs:
+ * on_each_cpu_mask() blocks hotplug and won't talk to offlined CPUs but
+ * nothing keeps CPUs from showing up after we populated the cpumask and
+ * before the call to on_each_cpu_mask().
*/
void drain_all_pages(void)
{
- on_each_cpu(drain_local_pages, NULL, 1);
+ int cpu;
+ struct per_cpu_pageset *pcp;
+ struct zone *zone;
+
+ /*
+ * Allocate in the BSS so we won't require allocation in
+ * direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
+ */
+ static cpumask_t cpus_with_pcps;
+
+ /*
+ * We don't care about racing with CPU hotplug events,
+ * as the offline notification will cause the notified
+ * CPU to drain its pcps, and on_each_cpu_mask()
+ * disables preemption as part of its processing.
+ */
+ for_each_online_cpu(cpu) {
+ bool has_pcps = false;
+ for_each_populated_zone(zone) {
+ pcp = per_cpu_ptr(zone->pageset, cpu);
+ if (pcp->pcp.count) {
+ has_pcps = true;
+ break;
+ }
+ }
+ if (has_pcps)
+ cpumask_set_cpu(cpu, &cpus_with_pcps);
+ else
+ cpumask_clear_cpu(cpu, &cpus_with_pcps);
+ }
+ on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
}
#ifdef CONFIG_HIBERNATION
int migratetype;
int wasMlocked = __TestClearPageMlocked(page);
- #ifdef CONFIG_XEN
- WARN_ON(PageForeign(page) && wasMlocked);
- #endif
if (!free_pages_prepare(page, 0))
return;
va_end(args);
}
- pr_warn("%s: page allocation failure: order:%d, mode:0x%x\n",
+ if (!(gfp_mask & __GFP_WAIT)) {
+ pr_info("The following is only a harmless informational message.\n");
+ pr_info("Unless you get a _continuous_flood_ of these messages it means\n");
+ pr_info("everything is working fine. Allocations from irqs cannot be\n");
+ pr_info("perfectly reliable and the kernel is designed to handle that.\n");
+ }
+ pr_info("%s: page allocation failure. order:%d, mode:0x%x\n",
current->comm, order, gfp_mask);
dump_stack();
goto out;
}
/* Exhausted what can be done so it's blamo time */
- out_of_memory(zonelist, gfp_mask, order, nodemask);
+ out_of_memory(zonelist, gfp_mask, order, nodemask, false);
out:
clear_zonelist_oom(zonelist, gfp_mask);
if (!order)
return NULL;
- if (compaction_deferred(preferred_zone)) {
+ if (compaction_deferred(preferred_zone, order)) {
*deferred_compaction = true;
return NULL;
}
if (page) {
preferred_zone->compact_considered = 0;
preferred_zone->compact_defer_shift = 0;
+ if (order >= preferred_zone->compact_order_failed)
+ preferred_zone->compact_order_failed = order + 1;
count_vm_event(COMPACTSUCCESS);
return page;
}
* defer if the failure was a sync compaction failure.
*/
if (sync_migration)
- defer_compaction(preferred_zone);
+ defer_compaction(preferred_zone, order);
cond_resched();
}
if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
if (oom_killer_disabled)
goto nopage;
+ /* Coredumps can quickly deplete all memory reserves */
+ if ((current->flags & PF_DUMPCORE) &&
+ !(gfp_mask & __GFP_NOFAIL))
+ goto nopage;
page = __alloc_pages_may_oom(gfp_mask, order,
zonelist, high_zoneidx,
nodemask, preferred_zone,
{
enum zone_type high_zoneidx = gfp_zone(gfp_mask);
struct zone *preferred_zone;
- struct page *page;
+ struct page *page = NULL;
int migratetype = allocflags_to_migratetype(gfp_mask);
+ unsigned int cpuset_mems_cookie;
gfp_mask &= gfp_allowed_mask;
if (unlikely(!zonelist->_zonerefs->zone))
return NULL;
- get_mems_allowed();
+ retry_cpuset:
+ cpuset_mems_cookie = get_mems_allowed();
+
/* The preferred zone is used for statistics later */
first_zones_zonelist(zonelist, high_zoneidx,
nodemask ? : &cpuset_current_mems_allowed,
&preferred_zone);
- if (!preferred_zone) {
- put_mems_allowed();
- return NULL;
- }
+ if (!preferred_zone)
+ goto out;
/* First allocation attempt */
page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
page = __alloc_pages_slowpath(gfp_mask, order,
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
- put_mems_allowed();
trace_mm_page_alloc(page, order, gfp_mask, migratetype);
+
+ out:
+ /*
+ * When updating a task's mems_allowed, it is possible to race with
+ * parallel threads in such a way that an allocation can fail while
+ * the mask is being updated. If a page allocation is about to fail,
+ * check if the cpuset changed during allocation and if so, retry.
+ */
+ if (unlikely(!put_mems_allowed(cpuset_mems_cookie) && !page))
+ goto retry_cpuset;
+
return page;
}
EXPORT_SYMBOL(__alloc_pages_nodemask);
bool skip_free_areas_node(unsigned int flags, int nid)
{
bool ret = false;
+ unsigned int cpuset_mems_cookie;
if (!(flags & SHOW_MEM_FILTER_NODES))
goto out;
- get_mems_allowed();
- ret = !node_isset(nid, cpuset_current_mems_allowed);
- put_mems_allowed();
+ do {
+ cpuset_mems_cookie = get_mems_allowed();
+ ret = !node_isset(nid, cpuset_current_mems_allowed);
+ } while (!put_mems_allowed(cpuset_mems_cookie));
out:
return ret;
}
}
}
- int __init add_from_early_node_map(struct range *range, int az,
- int nr_range, int nid)
- {
- unsigned long start_pfn, end_pfn;
- int i;
-
- /* need to go over early_node_map to find out good range for node */
- for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL)
- nr_range = add_range(range, az, nr_range, start_pfn, end_pfn);
- return nr_range;
- }
-
/**
* sparse_memory_present_with_active_regions - Call memory_present for each active range
* @nid: The node to call memory_present for. If MAX_NUMNODES, all nodes will be used.
* memory. When they don't, some nodes will have more kernelcore than
* others
*/
- static void __init find_zone_movable_pfns_for_nodes(unsigned long *movable_pfn)
+ static void __init find_zone_movable_pfns_for_nodes(void)
{
int i, nid;
unsigned long usable_startpfn;
/* Find the PFNs that ZONE_MOVABLE begins at in each node */
memset(zone_movable_pfn, 0, sizeof(zone_movable_pfn));
- find_zone_movable_pfns_for_nodes(zone_movable_pfn);
+ find_zone_movable_pfns_for_nodes();
/* Print out the zone ranges */
printk("Zone PFN ranges:\n");
int cpu = (unsigned long)hcpu;
if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+ lru_add_drain_cpu(cpu);
drain_pages(cpu);
/*
spin_unlock_irqrestore(&zone->lock, flags);
}
- #ifdef CONFIG_XEN
- for_each_populated_zone(zone) {
- unsigned int cpu;
-
- for_each_online_cpu(cpu) {
- unsigned long high;
-
- high = percpu_pagelist_fraction
- ? zone->present_pages / percpu_pagelist_fraction
- : 5 * zone_batchsize(zone);
- setup_pagelist_highmark(
- per_cpu_ptr(zone->pageset, cpu), high);
- }
- }
- #endif
-
/* update totalreserve_pages */
calculate_totalreserve_pages();
}
}
/**
- * truncate_inode_pages - truncate range of pages specified by start & end byte offsets
+ * truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
* @mapping: mapping to truncate
* @lstart: offset from which to truncate
* @lend: offset to which to truncate
index++;
}
cleancache_invalidate_inode(mapping);
+ /*
+ * Cycle the tree_lock to make sure all __delete_from_page_cache()
+ * calls run from page reclaim have finished as well (this handles the
+ * case when page reclaim took the last page from our range).
+ */
+ spin_lock_irq(&mapping->tree_lock);
+ spin_unlock_irq(&mapping->tree_lock);
}
EXPORT_SYMBOL(truncate_inode_pages_range);
return 0;
}
+
+ /**
+ * truncate_pagecache_range - unmap and remove pagecache that is hole-punched
+ * @inode: inode
+ * @lstart: offset of beginning of hole
+ * @lend: offset of last byte of hole
+ *
+ * This function should typically be called before the filesystem
+ * releases resources associated with the freed range (e.g. deallocates
+ * blocks). This way, pagecache will always stay logically coherent
+ * with on-disk format, and the filesystem would not have to deal with
+ * situations such as writepage being called for a page that has already
+ * had its underlying blocks deallocated.
+ */
+ void truncate_pagecache_range(struct inode *inode, loff_t lstart, loff_t lend)
+ {
+ struct address_space *mapping = inode->i_mapping;
+ loff_t unmap_start = round_up(lstart, PAGE_SIZE);
+ loff_t unmap_end = round_down(1 + lend, PAGE_SIZE) - 1;
+ /*
+ * This rounding is currently just for example: unmap_mapping_range
+ * expands its hole outwards, whereas we want it to contract the hole
+ * inwards. However, existing callers of truncate_pagecache_range are
+ * doing their own page rounding first; and truncate_inode_pages_range
+ * currently BUGs if lend is not pagealigned-1 (it handles partial
+ * page at start of hole, but not partial page at end of hole). Note
+ * unmap_mapping_range allows holelen 0 for all, and we allow lend -1.
+ */
+
+ /*
+ * Unlike in truncate_pagecache, unmap_mapping_range is called only
+ * once (before truncating pagecache), and without "even_cows" flag:
+ * hole-punching should not remove private COWed pages from the hole.
+ */
+ if ((u64)unmap_end > (u64)unmap_start)
+ unmap_mapping_range(mapping, unmap_start,
+ 1 + unmap_end - unmap_start, 0);
+ truncate_inode_pages_range(mapping, lstart, lend);
+ }
+ EXPORT_SYMBOL(truncate_pagecache_range);
If unsure, say `N'.
+ config NF_CONNTRACK_TIMEOUT
+ bool 'Connection tracking timeout'
+ depends on NETFILTER_ADVANCED
+ help
+ This option enables support for connection tracking timeout
+ extension. This allows you to attach timeout policies to flows
+ via the CT target.
+
+ If unsure, say `N'.
+
config NF_CONNTRACK_TIMESTAMP
bool 'Connection tracking timestamping'
depends on NETFILTER_ADVANCED
To compile it as a module, choose M here. If unsure, say N.
+config NF_CONNTRACK_SLP
+ tristate "SLP protocol support"
+ depends on NF_CONNTRACK
+ depends on NETFILTER_ADVANCED
+ help
+ SLP queries are sometimes sent as broadcast messages from an
+ unprivileged port and responded to with unicast messages to the
+ same port. This makes them hard to firewall properly because connection
+ tracking doesn't deal with broadcasts. This helper tracks locally
+ originating broadcast SLP queries and the corresponding
+ responses. It relies on correct IP address configuration, specifically
+ netmask and broadcast address.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
config NF_CT_NETLINK
tristate 'Connection tracking netlink interface'
select NETFILTER_NETLINK
help
This option enables support for a netlink-based userspace interface
+ config NF_CT_NETLINK_TIMEOUT
+ tristate 'Connection tracking timeout tuning via Netlink'
+ select NETFILTER_NETLINK
+ depends on NETFILTER_ADVANCED
+ help
+ This option enables support for connection tracking timeout
+ fine-grain tuning. This allows you to attach specific timeout
+ policies to flows, instead of using the global timeout policy.
+
+ If unsure, say `N'.
+
endif # NF_CONNTRACK
# transparent proxy support
For more information on the LEDs available on your system, see
Documentation/leds/leds-class.txt
+ config NETFILTER_XT_TARGET_LOG
+ tristate "LOG target support"
+ default m if NETFILTER_ADVANCED=n
+ help
+ This option adds a `LOG' target, which allows you to create rules in
+ any iptables table which records the packet header to the syslog.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
config NETFILTER_XT_TARGET_MARK
tristate '"MARK" target support'
depends on NETFILTER_ADVANCED
netfilter-objs := core.o nf_log.o nf_queue.o nf_sockopt.o
nf_conntrack-y := nf_conntrack_core.o nf_conntrack_standalone.o nf_conntrack_expect.o nf_conntrack_helper.o nf_conntrack_proto.o nf_conntrack_l3proto_generic.o nf_conntrack_proto_generic.o nf_conntrack_proto_tcp.o nf_conntrack_proto_udp.o nf_conntrack_extend.o nf_conntrack_acct.o
+ nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMEOUT) += nf_conntrack_timeout.o
nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMESTAMP) += nf_conntrack_timestamp.o
nf_conntrack-$(CONFIG_NF_CONNTRACK_EVENTS) += nf_conntrack_ecache.o
# netlink interface for nf_conntrack
obj-$(CONFIG_NF_CT_NETLINK) += nf_conntrack_netlink.o
+ obj-$(CONFIG_NF_CT_NETLINK_TIMEOUT) += nfnetlink_cttimeout.o
# connection tracking helpers
nf_conntrack_h323-objs := nf_conntrack_h323_main.o nf_conntrack_h323_asn1.o
obj-$(CONFIG_NF_CONNTRACK_SANE) += nf_conntrack_sane.o
obj-$(CONFIG_NF_CONNTRACK_SIP) += nf_conntrack_sip.o
obj-$(CONFIG_NF_CONNTRACK_TFTP) += nf_conntrack_tftp.o
+obj-$(CONFIG_NF_CONNTRACK_SLP) += nf_conntrack_slp.o
# transparent proxy support
obj-$(CONFIG_NETFILTER_TPROXY) += nf_tproxy_core.o
obj-$(CONFIG_NETFILTER_XT_TARGET_DSCP) += xt_DSCP.o
obj-$(CONFIG_NETFILTER_XT_TARGET_HL) += xt_HL.o
obj-$(CONFIG_NETFILTER_XT_TARGET_LED) += xt_LED.o
+ obj-$(CONFIG_NETFILTER_XT_TARGET_LOG) += xt_LOG.o
obj-$(CONFIG_NETFILTER_XT_TARGET_NFLOG) += xt_NFLOG.o
obj-$(CONFIG_NETFILTER_XT_TARGET_NFQUEUE) += xt_NFQUEUE.o
obj-$(CONFIG_NETFILTER_XT_TARGET_NOTRACK) += xt_NOTRACK.o
#define RPCDBG_FACILITY RPCDBG_SCHED
#endif
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/sunrpc.h>
+
/*
* RPC slabs and memory pools
*/
queue->qlen = 0;
setup_timer(&queue->timer_list.timer, __rpc_queue_timer_fn, (unsigned long)queue);
INIT_LIST_HEAD(&queue->timer_list.list);
- #ifdef RPC_DEBUG
- queue->name = qname;
- #endif
+ rpc_assign_waitqueue_name(queue, qname);
}
void rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const char *qname)
static void rpc_set_active(struct rpc_task *task)
{
+ trace_rpc_task_begin(task->tk_client, task, NULL);
+
rpc_task_set_debuginfo(task);
set_bit(RPC_TASK_ACTIVE, &task->tk_runstate);
}
unsigned long flags;
int ret;
+ trace_rpc_task_complete(task->tk_client, task, NULL);
+
spin_lock_irqsave(&wq->lock, flags);
clear_bit(RPC_TASK_ACTIVE, &task->tk_runstate);
ret = atomic_dec_and_test(&task->tk_count);
dprintk("RPC: %5u sleep_on(queue \"%s\" time %lu)\n",
task->tk_pid, rpc_qname(q), jiffies);
+ trace_rpc_task_sleep(task->tk_client, task, q);
+
__rpc_add_wait_queue(q, task, queue_priority);
BUG_ON(task->tk_callback != NULL);
return;
}
+ trace_rpc_task_wakeup(task->tk_client, task, queue);
+
__rpc_remove_wait_queue(queue, task);
rpc_make_runnable(task);
/*
* Wake up the next task on a priority queue.
*/
- static struct rpc_task * __rpc_wake_up_next_priority(struct rpc_wait_queue *queue)
+ static struct rpc_task *__rpc_find_next_queued_priority(struct rpc_wait_queue *queue)
{
struct list_head *q;
struct rpc_task *task;
new_owner:
rpc_set_waitqueue_owner(queue, task->tk_owner);
out:
return task;
}
+ static struct rpc_task *__rpc_find_next_queued(struct rpc_wait_queue *queue)
+ {
+ if (RPC_IS_PRIORITY(queue))
+ return __rpc_find_next_queued_priority(queue);
+ if (!list_empty(&queue->tasks[0]))
+ return list_first_entry(&queue->tasks[0], struct rpc_task, u.tk_wait.list);
+ return NULL;
+ }
+
/*
- * Wake up the next task on the wait queue.
+ * Wake up the first task on the wait queue.
*/
- struct rpc_task * rpc_wake_up_next(struct rpc_wait_queue *queue)
+ struct rpc_task *rpc_wake_up_first(struct rpc_wait_queue *queue,
+ bool (*func)(struct rpc_task *, void *), void *data)
{
struct rpc_task *task = NULL;
- dprintk("RPC: wake_up_next(%p \"%s\")\n",
+ dprintk("RPC: wake_up_first(%p \"%s\")\n",
queue, rpc_qname(queue));
spin_lock_bh(&queue->lock);
- if (RPC_IS_PRIORITY(queue))
- task = __rpc_wake_up_next_priority(queue);
- else {
- task_for_first(task, &queue->tasks[0])
+ task = __rpc_find_next_queued(queue);
+ if (task != NULL) {
+ if (func(task, data))
rpc_wake_up_task_queue_locked(queue, task);
+ else
+ task = NULL;
}
spin_unlock_bh(&queue->lock);
return task;
}
+ EXPORT_SYMBOL_GPL(rpc_wake_up_first);
+
+ static bool rpc_wake_up_next_func(struct rpc_task *task, void *data)
+ {
+ return true;
+ }
+
+ /*
+ * Wake up the next task on the wait queue.
+ */
+ struct rpc_task *rpc_wake_up_next(struct rpc_wait_queue *queue)
+ {
+ return rpc_wake_up_first(queue, rpc_wake_up_next_func, NULL);
+ }
EXPORT_SYMBOL_GPL(rpc_wake_up_next);
/**
*/
void rpc_wake_up(struct rpc_wait_queue *queue)
{
- struct rpc_task *task, *next;
struct list_head *head;
spin_lock_bh(&queue->lock);
head = &queue->tasks[queue->maxpriority];
for (;;) {
- list_for_each_entry_safe(task, next, head, u.tk_wait.list)
+ while (!list_empty(head)) {
+ struct rpc_task *task;
+ task = list_first_entry(head,
+ struct rpc_task,
+ u.tk_wait.list);
rpc_wake_up_task_queue_locked(queue, task);
+ }
if (head == &queue->tasks[0])
break;
head--;
*/
void rpc_wake_up_status(struct rpc_wait_queue *queue, int status)
{
- struct rpc_task *task, *next;
struct list_head *head;
spin_lock_bh(&queue->lock);
head = &queue->tasks[queue->maxpriority];
for (;;) {
- list_for_each_entry_safe(task, next, head, u.tk_wait.list) {
+ while (!list_empty(head)) {
+ struct rpc_task *task;
+ task = list_first_entry(head,
+ struct rpc_task,
+ u.tk_wait.list);
task->tk_status = status;
rpc_wake_up_task_queue_locked(queue, task);
}
}
EXPORT_SYMBOL_GPL(rpc_wake_up_status);
+/**
+ * rpc_wake_up_softconn_status - wake up all SOFTCONN rpc_tasks and set their
+ * status value.
+ * @queue: rpc_wait_queue on which the tasks are sleeping
+ * @status: status value to set
+ *
+ * Grabs queue->lock
+ */
+void rpc_wake_up_softconn_status(struct rpc_wait_queue *queue, int status)
+{
+ struct rpc_task *task, *next;
+ struct list_head *head;
+
+ spin_lock_bh(&queue->lock);
+ head = &queue->tasks[queue->maxpriority];
+ for (;;) {
+ list_for_each_entry_safe(task, next, head, u.tk_wait.list)
+ if (RPC_IS_SOFTCONN(task)) {
+ task->tk_status = status;
+ rpc_wake_up_task_queue_locked(queue, task);
+ }
+ if (head == &queue->tasks[0])
+ break;
+ head--;
+ }
+ spin_unlock_bh(&queue->lock);
+}
+EXPORT_SYMBOL_GPL(rpc_wake_up_softconn_status);
+
static void __rpc_queue_timer_fn(unsigned long ptr)
{
struct rpc_wait_queue *queue = (struct rpc_wait_queue *)ptr;
if (do_action == NULL)
break;
}
+ trace_rpc_task_run_action(task->tk_client, task, task->tk_action);
do_action(task);
/*
/*
* xprtsock tunables
*/
- unsigned int xprt_udp_slot_table_entries = RPC_DEF_SLOT_TABLE;
- unsigned int xprt_tcp_slot_table_entries = RPC_MIN_SLOT_TABLE;
- unsigned int xprt_max_tcp_slot_table_entries = RPC_MAX_SLOT_TABLE;
+ static unsigned int xprt_udp_slot_table_entries = RPC_DEF_SLOT_TABLE;
+ static unsigned int xprt_tcp_slot_table_entries = RPC_MIN_SLOT_TABLE;
+ static unsigned int xprt_max_tcp_slot_table_entries = RPC_MAX_SLOT_TABLE;
- unsigned int xprt_min_resvport = RPC_DEF_MIN_RESVPORT;
- unsigned int xprt_max_resvport = RPC_DEF_MAX_RESVPORT;
+ static unsigned int xprt_min_resvport = RPC_DEF_MIN_RESVPORT;
+ static unsigned int xprt_max_resvport = RPC_DEF_MAX_RESVPORT;
#define XS_TCP_LINGER_TO (15U * HZ)
static unsigned int xs_tcp_fin_timeout __read_mostly = XS_TCP_LINGER_TO;
case -ECONNREFUSED:
case -ECONNRESET:
case -ENETUNREACH:
- /* retry with existing socket, after a delay */
+ /* Retry with existing socket after a delay, except
+ * for SOFTCONN tasks which fail. */
+ xprt_clear_connecting(xprt);
+ rpc_wake_up_softconn_status(&xprt->pending, status);
+ return;
case 0:
case -EINPROGRESS:
case -EALREADY:
idle_time = (long)(jiffies - xprt->last_used) / HZ;
seq_printf(seq, "\txprt:\tlocal %lu %lu %lu %ld %lu %lu %lu "
- "%llu %llu\n",
+ "%llu %llu %lu %llu %llu\n",
xprt->stat.bind_count,
xprt->stat.connect_count,
xprt->stat.connect_time,
xprt->stat.recvs,
xprt->stat.bad_xids,
xprt->stat.req_u,
- xprt->stat.bklog_u);
+ xprt->stat.bklog_u,
+ xprt->stat.max_slots,
+ xprt->stat.sending_u,
+ xprt->stat.pending_u);
}
/**
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
- seq_printf(seq, "\txprt:\tudp %u %lu %lu %lu %lu %Lu %Lu\n",
+ seq_printf(seq, "\txprt:\tudp %u %lu %lu %lu %lu %llu %llu "
+ "%lu %llu %llu\n",
transport->srcport,
xprt->stat.bind_count,
xprt->stat.sends,
xprt->stat.recvs,
xprt->stat.bad_xids,
xprt->stat.req_u,
- xprt->stat.bklog_u);
+ xprt->stat.bklog_u,
+ xprt->stat.max_slots,
+ xprt->stat.sending_u,
+ xprt->stat.pending_u);
}
/**
if (xprt_connected(xprt))
idle_time = (long)(jiffies - xprt->last_used) / HZ;
- seq_printf(seq, "\txprt:\ttcp %u %lu %lu %lu %ld %lu %lu %lu %Lu %Lu\n",
+ seq_printf(seq, "\txprt:\ttcp %u %lu %lu %lu %ld %lu %lu %lu "
+ "%llu %llu %lu %llu %llu\n",
transport->srcport,
xprt->stat.bind_count,
xprt->stat.connect_count,
xprt->stat.recvs,
xprt->stat.bad_xids,
xprt->stat.req_u,
- xprt->stat.bklog_u);
+ xprt->stat.bklog_u,
+ xprt->stat.max_slots,
+ xprt->stat.sending_u,
+ xprt->stat.pending_u);
}
/*
static struct rpc_xprt_ops bc_tcp_ops = {
.reserve_xprt = xprt_reserve_xprt,
.release_xprt = xprt_release_xprt,
+ .rpcbind = xs_local_rpcbind,
.buf_alloc = bc_malloc,
.buf_free = bc_free,
.send_request = bc_send_request,
warning-1 += -Wold-style-definition
warning-1 += $(call cc-option, -Wmissing-include-dirs)
warning-1 += $(call cc-option, -Wunused-but-set-variable)
+ warning-1 += $(call cc-disable-warning, missing-field-initializers)
warning-2 := -Waggregate-return
warning-2 += -Wcast-align
warning-2 += -Wnested-externs
warning-2 += -Wshadow
warning-2 += $(call cc-option, -Wlogical-op)
+ warning-2 += $(call cc-option, -Wmissing-field-initializers)
warning-3 := -Wbad-function-cast
warning-3 += -Wcast-qual
$(warning kbuild: Makefile.build is included improperly)
endif
- ifeq ($(CONFIG_XEN),y)
- Makefile.xen := $(if $(KBUILD_EXTMOD),$(KBUILD_EXTMOD),$(objtree)/scripts)/Makefile.xen
- $(Makefile.xen): $(srctree)/scripts/Makefile.xen.awk $(srctree)/scripts/Makefile.build
- @echo ' Updating $@'
- $(if $(shell echo a | $(AWK) '{ print gensub(/a/, "AA", "g"); }'),\
- ,$(error 'Your awk program does not define gensub. Use gawk or another awk with gensub'))
- @$(AWK) -f $< $(filter-out $<,$^) >$@
-
- xen-src-single-used-m := $(patsubst $(srctree)/%,%,$(wildcard $(addprefix $(srctree)/,$(single-used-m:.o=-xen.c))))
- xen-single-used-m := $(xen-src-single-used-m:-xen.c=.o)
- single-used-m := $(filter-out $(xen-single-used-m),$(single-used-m))
-
- -include $(Makefile.xen)
- endif
-
# ===========================================================================
ifneq ($(strip $(lib-y) $(lib-m) $(lib-n) $(lib-)),)
$(CPP) -D__GENKSYMS__ $(c_flags) $< | \
$(GENKSYMS) $(if $(1), -T $(2)) -a $(ARCH) \
$(if $(KBUILD_PRESERVE),-p) \
+ $(if $(KBUILD_OVERRIDE),-o) \
-r $(firstword $(wildcard $(2:.symtypes=.symref) /dev/null))
quiet_cmd_cc_symtypes_c = SYM $(quiet_modtag) $@
# Built-in and composite module parts
$(obj)/%.o: $(src)/%.c $(recordmcount_source) FORCE
$(call cmd,force_checksrc)
+ $(call cmd,force_check_kmsg)
$(call if_changed_rule,cc_o_c)
# Single-part modules are special since we need to mark them in $(MODVERDIR)
$(single-used-m): $(obj)/%.o: $(src)/%.c $(recordmcount_source) FORCE
$(call cmd,force_checksrc)
+ $(call cmd,force_check_kmsg)
$(call if_changed_rule,cc_o_c)
@{ echo $(@:.o=.ko); echo $@; } > $(MODVERDIR)/$(@F:.o=.mod)
targets += $(multi-used-y) $(multi-used-m)
+# kmsg check tool
+ifneq ($(KBUILD_KMSG_CHECK),0)
+ ifeq ($(KBUILD_KMSG_CHECK),2)
+ kmsg_cmd := print
+ quiet_cmd_force_check_kmsg = KMSG_PRINT $<
+ $(shell [ -d $(objtree)/man ] || mkdir -p $(objtree)/man)
+ else
+ kmsg_cmd := check
+ quiet_cmd_force_check_kmsg = KMSG_CHECK $<
+ endif
+ cmd_force_check_kmsg = $(KMSG_CHECK) $(kmsg_cmd) $(CC) $(c_flags) $< ;
+endif
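The block above wires the check in only when KBUILD_KMSG_CHECK is set on the command line: the value 2 selects print mode (the Makefile pre-creates $(objtree)/man for its output), any other non-zero value selects check mode. Assuming KMSG_CHECK points at the checking tool, a build might opt in like this (illustrative invocation, not part of the patch):

```
make KBUILD_KMSG_CHECK=1    # run the tool in check mode on each compiled file
make KBUILD_KMSG_CHECK=2    # run it in print mode, output under $(objtree)/man
```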
# Descending
# ---------------------------------------------------------------------------
#define ALL_INIT_DATA_SECTIONS \
".init.setup$", ".init.rodata$", \
- ".devinit.rodata$", ".cpuinit.rodata$", ".meminit.rodata$" \
+ ".devinit.rodata$", ".cpuinit.rodata$", ".meminit.rodata$", \
".init.data$", ".devinit.data$", ".cpuinit.data$", ".meminit.data$"
#define ALL_EXIT_DATA_SECTIONS \
".exit.data$", ".devexit.data$", ".cpuexit.data$", ".memexit.data$"
}
}
+void *supported_file;
+unsigned long supported_size;
+
+static const char *supported(struct module *mod)
+{
+	unsigned long pos = 0;
+	const char *basename;
+	char *line;
+
+	/* strip directory components from the module name */
+	if ((basename = strrchr(mod->name, '/')))
+		basename++;
+	else
+		basename = mod->name;
+
+	/* As a first cut, do a simple linear scan. */
+	while ((line = get_next_line(&pos, supported_file,
+				     supported_size))) {
+		const char *how = "yes";
+		char *l;
+
+		/* optional whitespace-separated type-of-support flag */
+		for (l = line; *l != '\0'; l++) {
+			if (*l == ' ' || *l == '\t') {
+				*l = '\0';
+				how = l + 1;
+				break;
+			}
+		}
+
+		/* skip directory components of the list entry */
+		if ((l = strrchr(line, '/')))
+			line = l + 1;
+		/* strip .ko extension */
+		l = line + strlen(line);
+		if (l - line > 3 && !strcmp(l - 3, ".ko"))
+			*(l - 3) = '\0';
+
+		if (!strcmp(basename, line))
+			return how;
+	}
+	return NULL;
+}
+
static void read_symbols(char *modname)
{
const char *symname;
buf_printf(b, "\nMODULE_INFO(staging, \"Y\");\n");
}
+static void add_supported_flag(struct buffer *b, struct module *mod)
+{
+ const char *how = supported(mod);
+ if (how)
+ buf_printf(b, "\nMODULE_INFO(supported, \"%s\");\n", how);
+}
+
/**
* Record CRCs for unresolved symbols
**/
fclose(file);
}
+static void read_supported(const char *fname)
+{
+ supported_file = grab_file(fname, &supported_size);
+	/* A missing file is not an error; modules then get no supported tag. */
+}
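Given the parsing above, the file passed via the new -N option holds one module per line — an optional path plus the module name, with or without the .ko suffix — optionally followed by whitespace and a type-of-support flag; entries without a flag default to "yes". A hypothetical list might look like:

```
drivers/net/dummy.ko
drivers/net/e1000/e1000.ko yes
kernel/extra/foo.ko external
```

Modules that appear on no line simply get no "supported" tag in their modinfo.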
+
/* parse Module.symvers file. line format:
* 0x12345678<tab>symbol<tab>module[[<tab>export]<tab>something]
**/
struct buffer buf = { };
char *kernel_read = NULL, *module_read = NULL;
char *dump_write = NULL;
+ const char *supported = NULL;
int opt;
int err;
struct ext_sym_list *extsym_iter;
struct ext_sym_list *extsym_start = NULL;
- while ((opt = getopt(argc, argv, "i:I:e:cmsSo:awM:K:")) != -1) {
+ while ((opt = getopt(argc, argv, "i:I:e:cmsSo:awM:K:N:")) != -1) {
switch (opt) {
case 'i':
kernel_read = optarg;
case 'w':
warn_unresolved = 1;
break;
+ case 'N':
+ supported = optarg;
+ break;
default:
exit(1);
}
}
+ if (supported)
+ read_supported(supported);
if (kernel_read)
read_dump(kernel_read, 1);
if (module_read)
add_header(&buf, mod);
add_intree_flag(&buf, !external_module);
add_staging_flag(&buf, mod->name);
+ add_supported_flag(&buf, mod);
err |= add_versions(&buf, mod);
add_depends(&buf, mod, modules);
add_moddevtable(&buf, mod);