Compare commits


46 Commits

Author SHA1 Message Date
wufayuan bc85f26e39 Replaced the crawler collector and merged it into the system, greatly improving crawl speed
2 years ago
p3t2ja9zs d0f631cd44 Add LICENCE
2 years ago
wufayuan ddcf395145 Replaced the crawler collector and merged it into the system, greatly improving crawl speed
2 years ago
wufayuan 13bcb4a915 Rewrote the crawler collector: dropped selenium, crawl the mobile CNKI site, multi-threaded crawling
2 years ago
wufayuan 60f93c0f0e Rewrote the crawler: dropped selenium, crawl the mobile CNKI site, multi-threaded crawling
2 years ago
wufayuan ad427ef9bc Last correct test
2 years ago
wufayuan 3c186535e9 Basic functionality initially complete. Optimizations: stopped writing to the database row by row, task results are written in one batch after crawling finishes; optimized the distributed cluster; optimized system configuration so changes only need to be made in settings.ini; optimized the task-distribution module so that when too many tasks arrive the extra tasks wait, and waiting tasks only start once a distributed node or the server crawler becomes free
2 years ago
wufayuan f4aedd9cfd Basic functionality initially complete. Optimizations: stopped writing to the database row by row, task results are written in one batch after crawling finishes; optimized the distributed cluster; optimized system configuration so changes only need to be made in settings.ini; optimized the task-distribution module so that when too many tasks arrive the extra tasks wait, and waiting tasks only start once a distributed node or the server crawler becomes free
2 years ago
wufayuan 27899262f5 Basic functionality initially complete. Optimizations: stopped writing to the database row by row, task results are written in one batch after crawling finishes; optimized the distributed cluster; optimized system configuration so changes only need to be made in settings.ini; optimized the task-distribution module so that when too many tasks arrive the extra tasks wait, and waiting tasks only start once a distributed node or the server crawler becomes free
2 years ago
wufayuan ff11f3bfc1 Basic functionality initially complete
2 years ago
wufayuan 06e1f4c565 Rewrote the connect communication program and the server communication system, completely rewrote the terminal node cluster, substantially optimized the whole system, made the cluster multi-process, added a polling interval, minor tweaks
2 years ago
wufayuan 58c2162918 Rewrote the connect communication program and the server communication system, completely rewrote the terminal node cluster, substantially optimized the whole system, made the cluster multi-process, added a polling interval
2 years ago
wufayuan f7c0dd043d Rewrote the connect communication program and the server communication system, completely rewrote the terminal node cluster, substantially optimized the whole system
2 years ago
wufayuan ae894c1fc0 Rewrote the connect communication program and the server communication system, completely rewrote the terminal node cluster, substantially optimized the whole system
2 years ago
wufayuan 4fbc6cc294 Rewrote the connect communication program and the server communication system; usability should now be greatly improved and it runs normally
2 years ago
wufayuan 2f4fa14b2b Rewrote the connect communication program and the server communication system; usability should now be greatly improved and it runs normally
2 years ago
wufayuan b681c1b92d Might run now
2 years ago
wufayuan 592d6f9941 Fairly complete code
2 years ago
wufayuan 846d44206e Fairly complete code
2 years ago
wufayuan 39cbef6fe5 Fairly complete code
2 years ago
wufayuan 5ffd4fd363 Fairly complete code
2 years ago
wufayuan 1757411834 Fairly complete code
2 years ago
wufayuan da9136e30d Fairly complete
2 years ago
wufayuan 89f513ef95 Re-uploaded the crawler server program structure diagram
3 years ago
wufayuan 9bd7cbcc9d Re-uploaded the crawler server program structure diagram
3 years ago
wufayuan 1df39c735e Re-uploaded the crawler server program structure diagram
3 years ago
wufayuan df874efba1 Improve readme
3 years ago
wufayuan db0776ae56 Improve readme
3 years ago
wufayuan a25f843862 Improve readme
3 years ago
wufayuan 5702c8e9f5 Improve readme
3 years ago
wufayuan f3588e82ac Improve readme
3 years ago
wufayuan 656ead319e Improve readme
3 years ago
wufayuan b1a90b646c Refactored the whole project to better fit the "multiple cooperating subsystems" model, and implemented the crawl task system, its distribution, and the combination of remote and local results. The subsystems use polling and spawn an execution thread as soon as a task is accepted, so the system can truly serve multiple users at the same time. Also improved the cookie mechanism, including user authentication and identification, improved extracting and combining data from the database, improved the coordination between subsystems, and produced initial client code, among other things
3 years ago
wufayuan a1a73aa412 Initial implementation of the server handing crawl requests to multiple clients, with clients returning crawl results into a global external variable
3 years ago
wufayuan 64a607e50b Tidied up the project structure
3 years ago
wufayuan e69f4ea071 Initial version of the server's "current user info table" and the external variable that stores it, fixed up database handling, and added a cookie authentication mechanism (initially verified); also a first implementation of crawl task distribution, not yet verified because the client file needs to be rewritten.
3 years ago
wufayuan 3d8e40bb5e Further improved handling of concurrent user requests such as login and registration
3 years ago
wufayuan 0c45592b8f Further improved writing crawl results to the database
3 years ago
wufayuan 2dbf99feda Further improved writing crawl results to the database
3 years ago
wufayuan 9501253095 Improved writing crawl results to the database
3 years ago
wufayuan 36e16a99ba Improved writing crawl results to the database
3 years ago
wufayuan d551d46612 Implemented communication from the web server to the crawler server for login and registration requests, and enriched the communication types between the two
3 years ago
wufayuan d85e78127c Implemented communication from the web server to the crawler server for login and registration requests, and enriched the communication types between the two
3 years ago
wufayuan eb4dec2e7b Implemented communication from the web server to the crawler server for login and registration requests, and enriched the communication types between the two
3 years ago
wufayuan 1a9e10313a Implemented communication from the web server to the crawler server for login and registration requests, and enriched the communication types between the two
3 years ago
wufayuan 888089ca40 Initial version of the UI server, with basic communication to the crawler server
3 years ago

.gitignore vendored

@ -1 +1,2 @@
!/dcs/tests/zhiwang.py
!/dcs/tools/cookie.py

@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="DataSourceManagerImpl" format="xml" multifile-model="true">
<data-source source="LOCAL" name="test@localhost" uuid="d6ef9694-a0d9-4fdd-9c8b-f3b09aba32ab">
<driver-ref>mysql.8</driver-ref>
<synchronize>true</synchronize>
<jdbc-driver>com.mysql.cj.jdbc.Driver</jdbc-driver>
<jdbc-url>jdbc:mysql://localhost:3306/test</jdbc-url>
<working-dir>$ProjectFileDir$</working-dir>
</data-source>
</component>
</project>

@ -2,7 +2,7 @@
<module type="PYTHON_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$" />
<orderEntry type="inheritedJdk" />
<orderEntry type="jdk" jdkName="Python 3.10 (DWSpider)" jdkType="Python SDK" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>

@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="JavaScriptLibraryMappings">
<includedPredefinedLibrary name="Node.js Core" />
</component>
</project>

@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.10 (DWSpider)" project-jdk-type="Python SDK" />
</project>

@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="SqlDialectMappings">
<file url="file://$PROJECT_DIR$/dcs/tools/database.py" dialect="MySQL" />
</component>
</project>

@ -1,2 +1,72 @@
# dcs
# Distributed Crawler System
## Download & Installation
### Crawler
#### Install selenium
```bash
pip3 install selenium
```
#### Install MySQL and pymysql and configure them
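A minimal sketch of this step (assuming pip is used for the Python driver; the MySQL server itself is installed separately, and its connection details go into the `[database]` section of `conf/settings.ini`):
```bash
# install the MySQL driver used by dcs/tools/database.py
pip3 install pymysql
```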
#### Download the Edge WebDriver
https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
![img](https://img-blog.csdnimg.cn/20201014171452760.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3RrMTAyMw==,size_16,color_FFFFFF,t_70)
Browser --> Settings --> About Microsoft Edge --> version information. The driver version must match this (the browser icon must also match; it is the one with the green accent).
![img](https://img-blog.csdnimg.cn/20201014171642418.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3RrMTAyMw==,size_16,color_FFFFFF,t_70)
Place the downloaded WebDriver executable in the dcs/bin directory.
You can test it with the following script:
```python
from time import sleep
from selenium import webdriver
driverfile_path = r'G:\Users\god\PycharmProjects\dcs\bin\msedgedriver.exe'
driver = webdriver.Edge(executable_path=driverfile_path)
driver.get(r'https://www.baidu.com/')
sleep(5)
driver.close()
```
Adjust the path above to match your own setup.
## Running
Run main.py with python3 to start the five service threads (server, spider, user_process, requester, communicate); the distributed crawler server then starts running and monitoring.
Run login.js with node to start the web server, which accepts browser requests, communicates with the crawler server, and returns the results to the browser.
Then run client.py to start a client and request crawl tasks; the server receives, distributes, executes, and combines the tasks and finally returns the results to the client.
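A minimal sketch of the startup sequence described above (paths assume each script is launched from the directory it lives in):
```bash
# start the crawler server: spawns the server, spider, user_process, requester and communicate threads
python3 main.py

# start the web server that relays browser requests to the crawler server
node login.js

# start a client node that requests crawl tasks
python3 client.py
```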
## Screenshots
![image-20220421204241089](https://code.educoder.net/repo/p3t2ja9zs/dcs/raw/branch/master/docs/pictures/server_start.png)
![image-20220421204341598](https://code.educoder.net/repo/p3t2ja9zs/dcs/raw/branch/master/docs/pictures/server_running.png)
![image-20220421204402347](https://code.educoder.net/repo/p3t2ja9zs/dcs/raw/branch/master/docs/pictures/client_result.png)
## Project Structure Diagram
![image-20220421204402357](https://code.educoder.net/repo/p3t2ja9zs/dcs/raw/branch/master/docs/pictures/CRAWL_SERVER.jpg)
## Server Run Log
> https://code.educoder.net/attachments/entries/get_file?download_url=https://code.educoder.net/api/p3t2ja9zs/dcs/raw?filepath=dcs/dcs.log&ref=master
## Changelog
### V1.0
Basic framework in place; the core "P2P"-like mechanism is implemented.

Binary file not shown.

@ -0,0 +1,85 @@
from collections import deque
from typing import Any
class CUI:
def __init__(self, user_name, login_time, login_state, state, cookie, address):
self.user_name = user_name
self.login_time = login_time
self.login_state = login_state
self.state = state
self.cookie = cookie
self.address = address
self.crawl_result = deque()
class global_var:
"""需要定义全局变量的放在这里"""
connection = None
free_spiders = []
current_user_info: list[CUI] = [CUI('god', None, None, None, 'god', None)]
requester = None
server = None
spider = None
up = None
communicator = None
server_socket = None
configs = None
test = None
def get_free_addresses() -> tuple[Any, ...]:
fs = []
for i in global_var.current_user_info:
if i.state == 'free':
fs.append(i.address)
return tuple(fs)
def exists(cookie):
for i in global_var.current_user_info:
if i.cookie == cookie:
return True
return False
def add_user(user_name, login_time, login_state, state, cookie, address=None):
global_var.current_user_info.append(CUI(user_name, login_time, login_state, state, cookie, address))
def set_state_client(cookie, state=None, address=None):
if address:
for i in global_var.current_user_info:
if i.address == address:
i.state = state
break
return
for i in global_var.current_user_info:
if i.cookie == cookie:
i.state = state
break
def set_crawl_result(cookie, result):
for i in global_var.current_user_info:
if i.cookie == cookie:
i.crawl_result.append(result)
break
def get_crawl_result(cookie):
for i in global_var.current_user_info:
if i.cookie == cookie:
return i.crawl_result
def get_by_cookie(cookie):
for i in global_var.current_user_info:
if i.cookie == cookie:
return i
return None
def delete_user(cookie):
i = get_by_cookie(cookie)
global_var.current_user_info.remove(i)

@ -1,5 +1,15 @@
[server]
ip = 192.168.43.241
port = 7777
daemon = True
buffer_size = 8 * 1024 * 1024
[crawler]
max_count_of_crawlers = 10
[database]
ip = 192.168.43.65
user = root
password = 427318Aa
database = test

@ -0,0 +1,120 @@
import json
import multiprocessing
import socket
import struct
from json import JSONDecoder
from dcs.tests.fastcrawler import *
from dcs.tools import message_process as mp
from dcs.tools.message_process import parse_request, generate_response
def crawl_zhiwang(word, pages_start, pages_end):
logger.info(f'[CRAWLER] crawling pages {pages_start}-{pages_end} of keyword {word}...')
logger.info(f'[CRAWLER] local crawler is starting...')
res = {}  # stores this node's crawl results
fast_crawler = Fast_crawler()
while pages_start < pages_end:
papers = fast_crawler.crawl(word, pages_start)
for paper in papers:
write2res(paper, res)
pages_start += 1
return res
def write2res(paper: Paper, res):
for author in paper.authors:
if author.name:
res.update(
{len(res): {'name': author.name, 'college': author.college, 'major': author.major,
'title': paper.title}})
def send_request(socket2server, req):
socket2server.sendall(mp.generate_request(req))
responseJson = JSONDecoder().decode(
mp.read_bytes(socket2server, struct.unpack('!Q', socket2server.recv(8))[0]).decode(
"utf-8"))
return responseJson
def crawl(request_map) -> dict:
result_map = crawl_zhiwang(request_map['word'], request_map['pages_start'], request_map['pages_end'])
# sleep(10)
# result_map = {0: {'name': 'remote', 'college': 'remote', 'major': 'remote', 'title': 'remote'},
# 1: {'name': 'remote1', 'college': 'remote1', 'major': 'remote', 'title': 'remote'}}
return result_map
class Client(multiprocessing.Process):
def __init__(self, server_ip, server_port, local_ip, local_port):
super(Client, self).__init__()
self.server_ip = server_ip
self.server_port = server_port
self.local_ip = local_ip
self.local_port = local_port
self.client_name = f'client_{self.local_port}'
self.client_password = f'client_{self.local_port}'
def run(self) -> None:
ssocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
ssocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ssocket.bind((self.local_ip, self.local_port))  # the ip must not be ''!
ssocket.listen()
socket_to_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
socket_to_server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
socket_to_server.connect((self.server_ip, self.server_port))
request = {'action': 'register', 'user': self.client_name, 'password': self.client_password}
logger.info(f'[RESPONSE] {send_request(socket_to_server, request)}')
request = {'action': 'login', 'user': self.client_name, 'password': self.client_password, 'address': (self.local_ip, self.local_port)}
response = send_request(socket_to_server, request)
logger.info(f'[RESPONSE] {response}')
cookie = response['cookie']
request = {'action': 'report_free', 'cookie': cookie}
logger.info(f'[RESPONSE] {send_request(socket_to_server, request)}')
while True:
try:
client_socket, _ = ssocket.accept()
request_map = parse_request(client_socket)
if request_map['type'] == 'request':
logger.info("[REQUEST] receiving help request: " + json.dumps(request_map, ensure_ascii=False))
try:
response_map = crawl(request_map)
except Exception as e:
logger.error(f'[Error] {e.__class__.__name__}: {str(e)}')
try:
response_map = crawl(request_map)
except Exception as e:
logger.error(f'[Error] {e.__class__.__name__}: {str(e)}')
# crawling failed
response_map = {'0': {'name': None, 'college': None, 'major': None, 'title': None}, 'failed_task': request_map, 'success': False}
response_map.update({'cookie': request_map['cookie']})
client_socket.sendall(generate_response(response_map))
logger.info(f'[RESPONSE] sending client result {response_map}...')
request = {'action': 'report_free', 'cookie': cookie}
logger.info(f'[RESPONSE] {send_request(socket_to_server, request)}')
elif request_map['type'] == 'end':
logger.info(f"[REQUEST] end")
logger.debug(f"communication end from {client_socket.getpeername()}!")
request = {'action': 'end'}
socket_to_server.sendall(mp.generate_request(request))
break
except Exception as e:
logger.error(f'[Error] {e.__class__.__name__}: {str(e)}')
if __name__ == '__main__':
client = Client('127.0.0.1', 7777, '127.0.0.1', 9998)
client.start()
client.join()

@ -0,0 +1,25 @@
# distributed node cluster launcher
from loguru import logger
from dcs.clients.client import Client
start = 9000
ip = '192.168.43.241'
port = 7777
local_ip = ip
local_port = None
# number of distributed nodes to start
count = 3
if __name__ == '__main__':
clients = []
socket_to_servers = []
for i in range(start, start + count):
client = Client(ip, port, local_ip, i)
client.daemon = True
clients.append(client)
[c.start() for c in clients]
logger.info('[CLIENTS] starting all client nodes...')
[c.join() for c in clients]

@ -0,0 +1,32 @@
import socket
import threading
from time import sleep
from loguru import logger
from dcs.tools.message_process import generate_response
class Communicator(threading.Thread):
def __init__(self):
super(Communicator, self).__init__()
self.responser_list: list[tuple[str, socket.socket, dict]] = []
self.info_list: list[tuple[tuple, dict]] = []
def add_response(self, response_type: str, client_socket: socket.socket, response_map: dict):
response_map.update({'type': response_type})
self.responser_list.append((response_type, client_socket, response_map))
def add_info(self, info_type: str, address: tuple, info_map: dict):
info_map.update({'type': info_type})
self.info_list.append((address, info_map))
def run(self) -> None:
while True:
for responser in self.responser_list:
response_type, client_socket, response_map = responser[0], responser[1], responser[2]
logger.info(f'[COMMUNICATE] sending response to {client_socket.getpeername()}: {response_map}')
client_socket.sendall(generate_response(response_map))
self.responser_list.remove(responser)
sleep(1)

File diff suppressed because one or more lines are too long

@ -1,21 +1,49 @@
# -*- coding: UTF-8 -*-
from dcs.tests.server import Server
from dcs.server import Server
from configparser import ConfigParser
from loguru import logger
from dcs.tools.database import create_user_info
from dcs.requester import Requester
from dcs.spider import Spider
from conf.config import global_var
from dcs.user_process import UP
from dcs.communicate import Communicator
logger.info('[SERVER] starting the servers...')
logger.add('./dcs.log', rotation='10 MB', enqueue=True, backtrace=True, diagnose=True)
logger.info('reading config args...')
logger.info('[SERVER] reading config args...')
configFile = '../conf/settings.ini'
con = ConfigParser()
con.read(configFile, encoding='utf-8')
global_var.configs = con
items = con.items('server')
items = dict(items)
create_user_info()
global_var.server = Server(str(items['ip']), int(items['port']), eval(items['buffer_size']))
global_var.server.daemon = items['daemon']
global_var.server.start()
global_var.requester = Requester()
global_var.requester.daemon = True
global_var.requester.start()
global_var.spider = Spider()
global_var.spider.daemon = True
global_var.spider.start()
global_var.up = UP()
global_var.up.daemon = True
global_var.up.start()
global_var.communicator = Communicator()
global_var.communicator.daemon = True
global_var.communicator.start()
logger.info('starting the server...')
server = Server(int(items['port']))
server.daemon = items['daemon']
server.start()
server.join()
global_var.server.join()
global_var.requester.join()
global_var.spider.join()
global_var.up.join()
global_var.communicator.join()
logger.warning('Overing...')

@ -0,0 +1,81 @@
import socket
import struct
import threading
from collections import deque
from json import JSONEncoder, JSONDecoder
from time import sleep
from loguru import logger
from conf.config import set_crawl_result
from dcs.tests.spider_task import Spider_partial_task
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def generate_request(request) -> 'bytes':
request_bytes = JSONEncoder().encode(request).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes
class Requester(threading.Thread):
def __init__(self):
super().__init__()
self.daemon = True
self.reqs = []
pass
def is_remote_task_complete(self, client_address, request_map):
pass
def get(self, client_address, task: Spider_partial_task):
# logger.info(f'[REQUESTER] sending crawl request to {str(client_address)}')
req = Req(client_address, task)
self.reqs.append(req)
req.start()
class Req(threading.Thread):
def __init__(self, client_address, task: Spider_partial_task):
super(Req, self).__init__()
self.client_address = client_address
self.task: Spider_partial_task = task
self.request_map = task.request_map
self.responseJson = None
def run(self) -> None:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_client:
socket_to_client.connect(tuple(self.client_address))
self.request_map.update({'type': 'request'})
logger.info(f'[REQUESTER] sending request {self.request_map} to client {self.client_address}...')
socket_to_client.sendall(generate_request(self.request_map))
self.responseJson = JSONDecoder().decode(
read_bytes(socket_to_client, struct.unpack('!Q', socket_to_client.recv(8))[0]).decode(
"utf-8"))
cookie = self.responseJson['cookie']
del self.responseJson['cookie']
logger.info(f'[REMOTE] receiving remote task result {self.responseJson} from {self.client_address}, saving...')
set_crawl_result(cookie, self.responseJson)
self.task.pages_start = self.task.pages_end # finished
self.task.thread = None
if __name__ == '__main__':
address = ('127.0.0.1', 7798)
address1 = ('127.0.0.1', 7799)
my_request = {'request': 'I am asking you...'}
my_request1 = {'request1': 'I am asking you...'}
res = deque()
requester = Requester()
requester.start()
sleep(2)
print(res)

@ -0,0 +1,29 @@
import threading
import socket
from loguru import logger
from dcs.tests.requestHandler import RequestHandler
from conf.config import global_var
class Server(threading.Thread):  # listening and handling are separated so multiple clients can be served concurrently
def __init__(self, ip: 'str', port: 'int', buffer_size: 'int'):
super().__init__()
self.port: 'int' = port
self.buffer_size = buffer_size
self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.server_socket.bind((ip, port))
global_var.server_socket = self.server_socket
self.client_sockets: list[socket.socket] = []
def run(self) -> None:
self.server_socket.listen()
while True:
client_socket, _ = self.server_socket.accept()
logger.info(f'[SERVER] connected to client {client_socket.getpeername()}')
self.client_sockets.append(client_socket)
r = RequestHandler(client_socket)
r.start()
# self.server_socket.close()

@ -0,0 +1,29 @@
import socket
import threading
from time import sleep
from loguru import logger
from conf.config import global_var
from dcs.tests.spider_task import Spider_task
class Spider(threading.Thread):
def __init__(self):
super(Spider, self).__init__()
self.tasks: list[Spider_task] = []
self.daemon = True
self.max_count_of_crawlers = int(dict(global_var.configs.items('crawler'))['max_count_of_crawlers'])
self.crawlers = 0
def add_task(self, request_map: dict, client_socket: socket.socket):
self.tasks.append(Spider_task(client_socket, request_map))
def run(self) -> None:
while True:
for task in self.tasks:
logger.info(f'[REQUEST HANDLER] processing spider request...')
task.start()
self.tasks.remove(task)
sleep(1)

@ -1,149 +0,0 @@
# -*- coding: UTF-8 -*-
import struct
from threading import Thread
import socket
from json import JSONEncoder, JSONDecoder
import sys
# ------------------------------config--------------------------------------------
if len(sys.argv) < 2:
ip = "127.0.0.1" # server的ip
else:
ip = sys.argv[1]
port = 7777 # server的port
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
"""
Read size bytes from the socket.
:param s: the socket
:param size: number of bytes to read
:return: the bytes read; if the socket closes, the returned data may be shorter than size
"""
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def generate_request(request) -> 'bytes':
"""
Build a request from the given dict.
A request consists of an 8-byte header length followed by the header data.
:param request: dict
:return: bytes of the request
"""
request_bytes = JSONEncoder().encode(request).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes
class Client(Thread):
def __init__(self, ip: str, port: int) -> None:
"""
:param ip: server IP
:param port: server port
"""
super().__init__()
self.ip = ip
self.port = port
def test(self):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.connect((self.ip, self.port))
request = dict()
request['action'] = 'test'
full_request = generate_request(request)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson['test']
def translate(self, word: str):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.connect((self.ip, self.port))
request = dict()
request['action'] = 'translate'
request['word'] = word
full_request = generate_request(request)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson['translate']
def crawling(self, word: str):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.connect((self.ip, self.port))
request = dict()
request['action'] = 'crawl zhiwang'
request['word'] = word
full_request = generate_request(request)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson['crawl zhiwang']
def report_status(self, status: str):
# status: free or busy
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.connect((self.ip, self.port))
request = dict()
request['action'] = 'report_' + status
request['spider_info'] = (ip, port)
full_request = generate_request(request)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson['report_'+status]
def end(self):
"""
End the communication.
:return:
"""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.connect((self.ip, self.port))
request = dict()
request['action'] = 'end'
full_request = generate_request(request)
socket_to_server.sendall(full_request)
print("end communication!")
def run(self) -> None:
print(self.report_status('free'))
print(self.crawling(input("word:")))
self.report_status('free')
self.end()
download_task = Client(ip, port)
download_task.daemon = True
download_task.start()
download_task.join()

@ -1,16 +0,0 @@
class global_var:
"""需要定义全局变量的放在这里,最好定义一个初始值"""
free_spiders = []
# 对于每个全局变量都需要定义get_value和set_value接口
def add_free_spider(spider_info):
global_var.free_spiders.append(spider_info)
def get_free_spiders():
return global_var.free_spiders
def delete_spider_by_id(spider_info):
global_var.free_spiders.remove(spider_info)

@ -1,89 +0,0 @@
from hashlib import *
import pymysql
# get a database connection object
def mysql_conn():
conn = pymysql.connect(host='127.0.0.1', user='root', passwd='111111', db='qqq')
return conn
def register():
try:
# get a database connection object
conn = mysql_conn()
# get a cursor for database operations
cur = conn.cursor()
# build the query SQL
select_sql = f'select password from sh_users where username = "{u_name}"'
# execute the SQL
cur.execute(select_sql)
# fetch the result with fetchone() and check it
res = cur.fetchone()
# if res is None no record was found and the name can be registered; otherwise registration fails
if res is not None:
print('用户名已存在,注册失败', res)
else:
print('该用户名可以使用')
# register -> insert the row and commit manually
insert_sql = 'insert into sh_users (username, password) values (%s,%s)'
insert_params = [u_name, sha_pwd]
cur.execute(insert_sql, insert_params)
conn.commit()
print('注册成功', u_name)
# close the connection
cur.close()
conn.close()
except Exception as e:
print(e)
def login():
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'select password from sh_users where username = "{u_name}"'
cur.execute(select_sql)
res = cur.fetchone()
if res is None:
# login: no password was found for this user name
print('用户名错误,登录失败')
else:
# res has a value, so the user name is correct; check the password
m_pwd = res[0]
print(m_pwd, '===========================')
if m_pwd == sha_pwd:
print('登录成功', u_name)
else:
print('密码错误,登录失败')
# close the connection
cur.close()
conn.close()
except Exception as e:
print(e)
def cancel():
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'delete from sh_users where username = "{u_name}"'
cur.execute(select_sql)
cur.close()
conn.close()
except Exception as e:
print(e)
if __name__ == '__main__':
u_name = input('请输入用户名')
u_pwd = input('请输入密码')
# sha1 hash
s1 = sha1()
s1.update(u_pwd.encode())
sha_pwd = s1.hexdigest()
print(sha_pwd)
# register()
login()

@ -0,0 +1,165 @@
import threading
from collections import deque
import bs4
# paper class definition
import requests
from loguru import logger
class Paper:
def __init__(self, title, authors):
self.title = title
self.authors = authors
def __str__(self):
s = f'title: {self.title}\n'
for i in self.authors:
s += f'author: {i}\n'
return s
# author class definition
class Author:
def __init__(self, name, college, major):
self.name = name
self.college = college
self.major = major
def __str__(self):
return f'{self.name}, {self.college}, {self.major}'
class Fast_crawler:
def __init__(self):
self.url = 'https://wap.cnki.net/touch/web/Article/Search'
self.headers = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36 Edg/103.0.1264.49'}
self.cookies = 'Ecp_ClientId=1220704081600243712; Ecp_loginuserbk=wh0302; knsLeftGroupSelectItem=1;2;; Ecp_ClientIp=202.197.9.22; SID_sug=126002; _pk_ses=*; Ecp_IpLoginFail=220708116.162.2.165; _pk_id=6f3fe3b8-fcc4-4111-ad5f-c2f8aba3a0e3.1656893782.2.1657209667.1657209661.; ASP.NET_SessionId=uppv4o3sgpf45j1lmsc4ogin; SID_kns8=015123157; dblang=ch'
def get_search_html(self, word, index):
params = {
"searchtype": "0",
"dbtype": "",
"pageindex": index,
"pagesize": "10",
"ispart": "False",
"theme_kw": "",
"title_kw": "",
"full_kw": "",
"author_kw": "",
"depart_kw": "",
"key_kw": "",
"abstract_kw": "",
"source_kw": "",
"teacher_md": "",
"catalog_md": "",
"depart_md": "",
"refer_md": "",
"name_meet": "",
"collect_meet": "",
"keyword": word,
"remark": "",
"fieldtype": "101",
"sorttype": "0",
"articletype": "-1",
"yeartype": "0",
"yearinterval": "",
"screentype": "0",
"isscreen": "",
"subject_sc": "",
"research_sc": "",
"depart_sc": "",
"sponsor_sc": "",
"author_sc": "",
"teacher_sc": "",
"subjectcode_sc": "",
"researchcode_sc": "",
"departcode_sc": "",
"sponsorcode_sc": "",
"authorcode_sc": "",
"teachercode_sc": "",
"starttime_sc": "",
"endtime_sc": "",
"timestate_sc": ""
}
self.headers.update({'X-Requested-With': 'XMLHttpRequest'})
res = requests.post(self.url, data=params, headers=self.headers)
if res.status_code == 200:
soup = bs4.BeautifulSoup(res.text, 'html.parser')
return soup
def get_html_by_link(self, link):
# logger.debug(link)
res = requests.get('https:' + link, headers=self.headers)
if res.status_code == 200:
soup = bs4.BeautifulSoup(res.text, 'html.parser')
return soup
@staticmethod
def get_paper_links(soup: bs4.BeautifulSoup):
links = soup.find_all('a', class_='c-company-top-link')
links_list = []
for i in links:
if i == 'javascript:void(0);':
continue
links_list.append(i.attrs['href'])
return links_list
def parse_paper_html(self, soup: bs4.BeautifulSoup, res):
title = soup.find('div', class_='c-card__title2').text.strip()
authors = soup.find('div', class_='c-card__author')
authors_links = [i.attrs['href'] for i in authors.find_all('a')]
authors_links_pcd = [i for i in authors_links if i != 'javascript:void(0);']
if len(authors_links_pcd) == 0:
logger.warning('[PARSE] paper parser can not find authors info!')
res.append(Paper(title, [Author(None, None, None)]))
return
authors_list = []
for i in authors_links_pcd:
author_result = self.parse_author_html(self.get_html_by_link(i))
if author_result:
authors_list.append(author_result)
res.append(Paper(title, authors_list))
@staticmethod
def parse_author_html(soup: bs4.BeautifulSoup):
try:
name = soup.find('div', class_='zz-middle-name-text').text.strip()
college = soup.find('div', class_='zz-middle-company').text.strip()
major = soup.find('div', class_='zz-info-chart').find('span').text.strip()
except AttributeError:
# logger.warning(f'[PARSE] author parser can not find personal info')
# name = soup.find('div', class_='c-nav__item c-nav__title').text.strip()
# college = None
# major = None
# this is institution info, available on the author detail page; not crawled for now
return None
return Author(name, college, major)
def crawl(self, word, index):
sh = self.get_search_html(word, index)
pl = self.get_paper_links(sh)
res: deque[Paper] = deque()
threads = []
for p in pl:
if p == 'javascript:void(0);':
continue
p = self.get_html_by_link(p)
t = threading.Thread(target=self.parse_paper_html, args=(p, res,))
threads.append(t)
# break
[t.start() for t in threads]
[t.join() for t in threads]
return res
if __name__ == '__main__':
crawler = Fast_crawler()
data = crawler.crawl('computer', 1)
for r in data:
print(r)

@ -1,77 +1,45 @@
import socket
import threading
import json
import struct
import dcs.tests.config
import threading
from loguru import logger
from dcs.tests.spider import Spider
from dcs.tools.message_process import parse_request, check
from conf.config import global_var
class RequestHandler(threading.Thread):
def __init__(self, file_server: 'FileServer', client_socket: 'socket.socket', request_map: 'dict'):
def __init__(self, client_socket: 'socket.socket'):
super().__init__()
self.file_server = file_server
self.client_socket = client_socket
self.request_map = request_map
self.daemon = True
pass
def run(self) -> None:
try:
if self.request_map['action'] == 'test':
logger.info(f"[REQUEST] test")
response = {
'test': 'hello TEST'
}
response_binary = json.JSONEncoder().encode(response).encode("utf-8")
response_binary_len = len(response_binary)
response_binary_len_binary = struct.pack("!Q", response_binary_len)
response_binary = response_binary_len_binary + response_binary
self.client_socket.sendall(response_binary)
logger.info(f"[RESPONSE] test: {response['test']}, header size: {response_binary_len}")
elif self.request_map['action'] == 'translate':
logger.info(f"[REQUEST] translate")
spider = Spider(self.request_map['word'])
response = {
'translate': spider.run()
}
response_binary = json.JSONEncoder().encode(response).encode("utf-8")
response_binary_len = len(response_binary)
response_binary_len_binary = struct.pack("!Q", response_binary_len)
response_binary = response_binary_len_binary + response_binary
self.client_socket.sendall(response_binary)
logger.info(f"[RESPONSE] translate: {response['translate']}, header size: {response_binary_len}")
elif self.request_map['action'] == 'crawl zhiwang':
logger.info(f"[REQUEST] crawl zhiwang")
spider = Spider(self.request_map['word'])
spider.run()
response = {
'crawl zhiwang': 'success' # TODO
}
response_binary = json.JSONEncoder().encode(response).encode("utf-8")
response_binary_len = len(response_binary)
response_binary_len_binary = struct.pack("!Q", response_binary_len)
response_binary = response_binary_len_binary + response_binary
self.client_socket.sendall(response_binary)
logger.info(
f"[RESPONSE] crawl zhiwang: {response['crawl zhiwang']}, header size: {response_binary_len}")
elif self.request_map['action'] == 'report_free':
logger.info(f"[REQUEST] report free")
if self.request_map['spider_info'] not in dcs.tests.config.get_free_spiders():
dcs.tests.config.add_free_spider(self.request_map['spider_info'])
response = {
'report_free': 'success marked ' + str(self.request_map['spider_info'])
}
response_binary = json.JSONEncoder().encode(response).encode("utf-8")
response_binary_len = len(response_binary)
response_binary_len_binary = struct.pack("!Q", response_binary_len)
while True:
try:
request_map = parse_request(self.client_socket)
except struct.error:
break
except Exception as e:
logger.error(f'[Error] {e.__class__.__name__}: {str(e)}')
break
response_binary = response_binary_len_binary + response_binary
self.client_socket.sendall(response_binary)
logger.info(
f"[RESPONSE] report free: {response['report_free']}, header size: {response_binary_len}")
finally:
self.client_socket.close()
if request_map['action'] == 'end':
logger.info(f"[REQUEST] end: communication over from {self.client_socket.getpeername()}!")
break
elif request_map['action'] == 'start':
logger.info(f"[REQUEST] start: communication begin from {self.client_socket.getpeername()}!")
elif request_map['action'] == 'crawl zhiwang':
chk_res = check(request_map)
if chk_res is None:
if request_map['cookie'] != 'god':
logger.warning("[ERROR] user info error!")
break
global_var.spider.add_task(request_map, self.client_socket)
elif request_map['action'] in ['report_free', 'login', 'register']:
global_var.up.add_request(request_map, self.client_socket)
else:
logger.error(f"no action {request_map['action']}!")
global_var.communicator.add_response('error', self.client_socket,
{request_map['action']: f"no action {request_map['action']}!"})
except Exception as e:
logger.error(f'[Error] {e.__class__.__name__}: {str(e)}')

@ -1,41 +0,0 @@
import threading
import socket
import json
import struct
from dcs.tests.requestHandler import RequestHandler
from loguru import logger
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
class Server(threading.Thread):
def __init__(self, port: 'int'):
super().__init__()
self.port: 'int' = port
self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.server_socket.bind(('', port))
self.buffer_size = 8 * 1024 * 1024
def run(self) -> None:
self.server_socket.listen()
while True:
client_socket, _ = self.server_socket.accept()
request_header_size = struct.unpack("!Q", read_bytes(client_socket, 8))[0]
request_map = json.JSONDecoder().decode(read_bytes(client_socket, request_header_size).decode("utf-8"))
# the end request must be handled in the main thread, otherwise shutdown would not respond promptly
if request_map['action'] == 'end':
logger.info(f"[REQUEST] end")
logger.warning("communication over!")
break
r = RequestHandler(self, client_socket, request_map)
r.start()
self.server_socket.close()

@ -1,65 +0,0 @@
import threading
import dcs.tests.config
from msedge.selenium_tools import Edge
from msedge.selenium_tools import EdgeOptions
from dcs.tests.zhiwang import *
from loguru import logger
def translate(word):
url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule'
data = {'i': word,
'from': 'AUTO',
'to': 'AUTO',
'smartresult': 'dict',
'client': 'fanyideskweb',
'doctype': 'json',
'version': '2.1',
'keyfrom': 'fanyi.web',
'action': 'FY_BY_REALTIME',
'typoResult': 'false'}
r = requests.post(url, data)
answer = r.json()
result = answer['translateResult'][0][0]['tgt']
return result
def crawl_zhiwang(word, pages_start=1, pages_end=2):
edge_options = EdgeOptions()
edge_options.use_chromium = True
edge_options.add_argument('headless')
driver = Edge(options=edge_options, executable_path=r'G:\Users\god\PycharmProjects\dcs\bin\msedgedriver.exe')
soup = driver_open(driver, word)  # search for word
papers = []  # stores the crawled papers
# crawl the first page
if pages_start == 1:
spider(driver, soup, papers)
pages_start += 1
for pn in range(pages_start, pages_end):
content = change_page(driver, pn)
spider(driver, content, papers)
driver.close()
# TODO write to the database
class Spider(threading.Thread):
def __init__(self, word: str, pages_start=1, pages_end=1):
super().__init__()
self.word = word
self.daemon = True
self.pages_start = pages_start
self.pages_end = pages_end
pass
def distribute_spiders(self):
free_spiders = dcs.tests.config.get_free_spiders()
for sp in free_spiders:
pass
print(self.pages_start, sp)
# TODO publish the task
def run(self) -> None:
logger.info('crawling...')
self.distribute_spiders()
crawl_zhiwang(word=self.word, pages_start=self.pages_start, pages_end=self.pages_end)

@ -0,0 +1,178 @@
import random
import socket
from time import sleep
from typing import Optional
from conf.config import global_var, get_free_addresses, get_crawl_result, get_by_cookie, set_state_client
from dcs.tests.fastcrawler import *
from dcs.tools.database import get_last_crawl_id, create_crawl_result_table
from dcs.tools.database import write_results2database
def write2results(paper: Paper, results: list):
for author in paper.authors:
if author.name:
results.append((author.name, author.college, author.major, paper.title))
class Crawler(threading.Thread):
def __init__(self, partial_task: 'Spider_partial_task', last_crawl_id, results):
super(Crawler, self).__init__()
self.partial_task = partial_task
self.last_crawl_id = last_crawl_id
self.results = results
def crawl_zhiwang(self, user_name=None):
logger.info(f'[CRAWLER] local crawler is starting...')
table_name = f'{user_name}_crawl_result'
create_crawl_result_table(table_name=table_name)
self.partial_task.crawl_id = self.last_crawl_id + 1
fast_crawler = Fast_crawler()
while self.partial_task.pages_start < self.partial_task.pages_end:
papers = fast_crawler.crawl(self.partial_task.word, self.partial_task.pages_start)
for paper in papers:
write2results(paper, results=self.results)
self.partial_task.pages_start += 1
def test_simulation(self, user_name):
table_name = f'{user_name}_crawl_result'
create_crawl_result_table(table_name=table_name)
last_crawl_id = get_last_crawl_id(table_name=table_name)
self.partial_task.crawl_id = last_crawl_id + 1
# simulated crawl
logger.debug('simulation crawling...')
paper = Paper('test', [Author('test', 'test', 'test')])
write2results(paper, results=self.results)
write2results(paper, results=self.results)
write2results(paper, results=self.results)
# over
sleep(10)
self.partial_task.pages_start = self.partial_task.pages_end
def run(self) -> None:
try:
self.crawl_zhiwang(user_name=self.partial_task.cui.user_name)
# self.test_simulation(user_name=self.partial_task.cui.user_name)
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
finally:
logger.info(f'[TASK] local partial crawl task finished: {str(self.partial_task)}')
self.partial_task.thread = None
class Spider_partial_task:
def __init__(self, full_task: 'Spider_task', request_map: dict):
self.full_task: Spider_task = full_task
self.request_map = request_map
self.thread: 'threading.Thread|None' = None
self.word = self.request_map['word']
self.pages_start = self.request_map['pages_start']
self.pages_end = self.request_map['pages_end']
self.cui = get_by_cookie(self.request_map['cookie'])
self.task_type: Optional[str] = None
self.crawl_id = None
def is_partial_task_crawl_completely(self):
finished = (self.pages_start == self.pages_end)
if finished:
if self.task_type == 'local':
global_var.spider.crawlers -= 1
return finished
def __str__(self):
return f'{self.full_task.client_socket.getpeername(), self.request_map}'
class Spider_task(threading.Thread):
def __init__(self, client_socket: socket.socket, request_map: dict):
super().__init__()
self.free_remote_nodes = None
self.table_name = f'{Spider_partial_task(self, request_map).cui.user_name}_crawl_result'
self.last_crawl_id = get_last_crawl_id(table_name=self.table_name)
self.client_socket = client_socket
self.request_map = request_map
self.partial_tasks: list[Spider_partial_task] = []
self.const_page = 1
self.results = []
def distribute_task(self):
# distribute tasks: const_page pages per partial task
# [pages_start, pages_end), e.g. [1,3) means pages 1 and 2
logger.info(f'[TASK] distributing task: {self.client_socket.getpeername(), self.request_map}')
pages_start = self.request_map['pages_start']
pages_end = self.request_map['pages_end']
while pages_start < pages_end:
tmp = self.request_map.copy()
tmp['pages_start'] = pages_start
if pages_start + self.const_page < pages_end:
pages_start += self.const_page
else:
pages_start = pages_end
tmp['pages_end'] = pages_start
self.partial_tasks.append(Spider_partial_task(self, tmp))
def is_all_task_crawled(self):
for task in self.partial_tasks:
if not task.is_partial_task_crawl_completely():
return False
return True
def compose_result(self):
logger.info('[COMPOSE] composing task...')
remote_result = get_crawl_result(self.request_map['cookie'])
for result_map in list(remote_result):
create_crawl_result_table(table_name=self.table_name)
for _, data in result_map.items():
write2results(Paper(data['title'], [Author(data['name'], data['college'], data['major'])]), self.results)
logger.info(f'[RESULT] {self.results}')
logger.info(f'[DATABASE] writing crawl results to database...')
write_results2database(self.results, self.table_name, self.last_crawl_id)
result = {'crawl_id': self.last_crawl_id+1, 'table_name': self.table_name} # , 'data': self.results}
global_var.communicator.add_response('response', self.client_socket, result)
def run(self) -> None:
global_var.communicator.add_response('crawling state', self.client_socket,
{'crawling state': 'starting, please wait...'})
self.distribute_task()
while True:
self.free_remote_nodes = list(get_free_addresses())
random.shuffle(self.free_remote_nodes)
logger.info(f'[REMOTE] free nodes: {self.free_remote_nodes}')
for task in self.partial_tasks:
if task.is_partial_task_crawl_completely():
continue
else:
current_task_thread = task.thread
if current_task_thread is None:
for f_node in self.free_remote_nodes:
address = f_node
logger.info(f'[TASK] generating remote task {task.request_map}')
task.thread = global_var.requester
task.task_type = 'remote'
global_var.requester.get(address, task)
set_state_client('busy', address=f_node)
sleep(1)
self.free_remote_nodes.remove(f_node)
break
else:
logger.warning(f'[TASK] generate remote task failed, no free remote nodes! spider task {task.request_map} is at state waiting...')
logger.info(f'[TASK] generating local task {task.request_map}')
if global_var.spider.crawlers >= global_var.spider.max_count_of_crawlers:
logger.warning(f'[TASK] generate local task failed, crawlers exceed! spider task {task.request_map} is at state waiting...')
break
else:
_crawler = Crawler(task, self.last_crawl_id, self.results)
task.thread = _crawler
task.task_type = 'local'
_crawler.start()
global_var.spider.crawlers += 1
if self.is_all_task_crawled():
break
sleep(5)  # poll every 5 seconds
self.compose_result()

@ -0,0 +1,61 @@
import socket
import threading
import conf.config as config
import dcs.tools.database as database
from loguru import logger
from conf.config import global_var
class Urh(threading.Thread):
def __init__(self, request_map: dict, client_socket: 'socket.socket'):
super().__init__()
self.request_map: dict = request_map
self.client_socket = client_socket
def report_state(self, state):
logger.info(f"[REQUEST] report free")
config.set_state_client(self.request_map['cookie'], state)
response = {
'report_free': 'success marked ' + str(self.request_map['cookie'])
}
global_var.communicator.add_response('report_free', self.client_socket, response)
logger.info(f"[RESPONSE] report free: {response['report_free']}")
def login(self, user, password, address):
logger.info(f"[REQUEST] login")
database.mysql_conn()
response = database.login(user, password, address)
response = {
'cookie': response
}
global_var.communicator.add_response('login', self.client_socket, response)
logger.info(f"[RESPONSE] login: {response['cookie']}")
def register(self, user, password):
logger.info(f"[REQUEST] register")
database.mysql_conn()
response = database.register(user, password)
response = {
'register': response
}
global_var.communicator.add_response('register', self.client_socket, response)
logger.info(f"[RESPONSE] register: {response['register']}")
def get_task_process(self):
pass
def run(self) -> None:
if self.request_map['action'] == 'report_free':
self.report_state('free')
elif self.request_map['action'] == 'login':
if self.request_map.__contains__('address'):
address = self.request_map['address']
else:
address = None
self.login(self.request_map['user'], self.request_map['password'], address)
elif self.request_map['action'] == 'register':
self.register(self.request_map['user'], self.request_map['password'])
elif self.request_map['action'] == 'get task process':
pass
else:
self.client_socket.close()

@ -1,21 +1,25 @@
'''
CNKI paper data crawler
'''
# CNKI paper data crawler
from bs4 import BeautifulSoup
import time
import requests
# paper class definition
import requests
from bs4 import BeautifulSoup
from loguru import logger
from selenium.webdriver.common.by import By
# paper class definition
class Paper:
def __init__(self, title, authors):
self.title = title
self.authors = authors
def __str__(self):
s = f'title: {self.title}\n'
for i in self.authors:
s += f'author: {i}\n'
return s
# author class definition
class Author:
@ -24,18 +28,21 @@ class Author:
self.college = college
self.major = major
def __str__(self):
return f'{self.name}, {self.college}, {self.major}'
# open the CNKI home page and search for the keyword
def driver_open(driver, key_word):
url = "https://www.cnki.net/"
driver.get(url)
time.sleep(2)
# time.sleep(1)
driver.find_element(by=By.CSS_SELECTOR, value='#txt_SearchText').send_keys(key_word)
time.sleep(2)
# time.sleep(2)
# click the search button
driver.find_element(by=By.CSS_SELECTOR, value=
'body > div.wrapper.section1 > div.searchmain > div > div.input-box > input.search-btn').click()
time.sleep(5)
driver.find_element(by=By.CSS_SELECTOR,
value='body > div.wrapper.section1 > div.searchmain > div > div.input-box > input.search-btn').click()
time.sleep(1.5)  # must wait here
content = driver.page_source.encode('utf-8')
# driver.close()
soup = BeautifulSoup(content, 'lxml')
@ -43,12 +50,16 @@ def driver_open(driver, key_word):
def spider(driver, soup, papers):
logger.info("[CRAWLER] crawling a soup...")
tbody = soup.find_all('tbody')
try:
tbody = BeautifulSoup(str(tbody[0]), 'lxml')
except:return
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
return
tr = tbody.find_all('tr')
for item in tr:
logger.info("[CRAWLER] crawling an item...")
tr_bf = BeautifulSoup(str(item), 'lxml')
td_name = tr_bf.find_all('td', class_='name')
@ -56,7 +67,7 @@ def spider(driver, soup, papers):
a_name = td_name_bf.find_all('a')
# get_text() returns all text inside the tag, including text from its child tags
title = a_name[0].get_text().strip()
print("title : " + title)
# print("title : " + title)
td_author = tr_bf.find_all('td', class_='author')
td_author_bf = BeautifulSoup(str(td_author), 'lxml')
@ -66,21 +77,23 @@ def spider(driver, soup, papers):
for author in a_author:
skey, code = get_skey_code(author)  # get the skey and code of the author detail page url
name = author.get_text().strip()  # get the author's name
print('name : ' + name)
# print('name : ' + name)
college, major = get_author_info(skey, code)  # get university and major from the author detail page; major is an array
au = Author(name, college, major)  # create an Author object
authors.append(au)
print('\n')
# print('\n')
# print('\n')
paper = Paper(title, authors)
papers.append(paper)
time.sleep(1)  # sleep 1s after each spider call
# break  # TODO: this is to shorten time of crawling
# time.sleep(1)  # sleep 1s after each spider call
# pn is the number of the page to crawl
def change_page(driver, pn):
driver.find_element(by=By.CSS_SELECTOR, value='#page' + str(pn)).click()
time.sleep(5)
time.sleep(1)
content = driver.page_source.encode('utf-8')
soup = BeautifulSoup(content, 'lxml')
return soup
@ -112,10 +125,10 @@ def get_author_info(skey, code):
college = h3[0].get_text().strip()
major = h3[1].get_text().strip()
# major = major.split(';')[0: -1]
print('college:' + college)
print('major: ' + major)
# print('college:' + college)
# print('major: ' + major)
return college, major
print("无详细信息")
# print("无详细信息")
return None, None

@ -0,0 +1,18 @@
from hashlib import *
class Cookie:
def __init__(self, user_name: str, login_time: str, login_state: str, cookie=None):
self.user_name = user_name
self.login_time = login_time
self.login_state = login_state
self.cookie = cookie
def generate_cookie(self):
s1 = sha1()
s1.update(str(self.user_name+self.login_time+self.login_state).encode())
self.cookie = s1.hexdigest()
return self.cookie
def __str__(self):
return self.cookie

@ -0,0 +1,236 @@
import pymysql
from loguru import logger
import dcs.tools.cookie as cookie
from conf import config
from conf.config import global_var as var
# get a database connection object
def mysql_conn():
database = dict(var.configs.items('database'))
try:
# logger.debug('connecting to database...')
conn = pymysql.connect(host=database['ip'], user=database['user'], passwd=database['password'],
db=database['database'])
return conn
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def register(u_name, u_pwd):
# s1 = sha1()
# s1.update(u_pwd.encode())
# sha_pwd = s1.hexdigest()
sha_pwd = u_pwd
try:
# get a database connection object
conn = mysql_conn()
# get a cursor for database operations
cur = conn.cursor()
# build the query SQL
select_sql = f'select user_password from user_info where user_name = "{u_name}"'
# execute the SQL
cur.execute(select_sql)
# fetch the result with fetchone() and check it
res = cur.fetchone()
# if res is None no record was found and the name can be registered; otherwise registration fails
if res is not None:
info = '用户名已存在,注册失败'
else:
# register -> insert the row and commit manually
insert_sql = 'insert into user_info (user_name, user_password, create_time, login_state) values (%s,%s,now(),false)'
insert_params = [u_name, sha_pwd]
cur.execute(insert_sql, insert_params)
conn.commit()
info = '注册成功'
# close the connection
cur.close()
conn.close()
return info
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def get_now():
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'select now()'
cur.execute(select_sql)
res = cur.fetchone()
# close the connection
cur.close()
conn.close()
return res[0]
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def login(u_name, u_pwd, address):
# s1 = sha1()
# s1.update(u_pwd.encode())
# sha_pwd = s1.hexdigest()
sha_pwd = u_pwd
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'select user_password from user_info where user_name = "{u_name}"'
cur.execute(select_sql)
res = cur.fetchone()
if res is None:
# login: no password was found for this user name
info = '用户名错误,登录失败'
else:
# res has a value, so the user name is correct; check the password
m_pwd = res[0]
if m_pwd == sha_pwd:
# info = '用户' + u_name + '登录成功'
time = str(get_now())
info = cookie.Cookie(u_name, time, 'true').generate_cookie()
config.add_user(u_name, time, 'true', 'busy', info, address)
else:
info = '密码错误,登录失败'
# close the connection
cur.close()
conn.close()
return info
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def cancel(u_name):
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'delete from user_info where user_name = "{u_name}"'
cur.execute(select_sql)
cur.close()
conn.close()
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def get_last_crawl_id(table_name: str) -> int:
"""
:param table_name: the crawl-result table of the target user
:return: the sequence id of that user's most recent crawl
"""
try:
conn = mysql_conn()
cur = conn.cursor()
get_id_sql = f'SELECT crawl_id from {table_name} where time = (SELECT max(time) FROM {table_name})'
cur.execute(get_id_sql)
last_crawl_id_res = cur.fetchone()
if last_crawl_id_res is None:
last_crawl_id_res = [0]
last_crawl_id = int(last_crawl_id_res[0])
cur.close()
conn.close()
return last_crawl_id
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
return 0
def drop_table(table_name: str):
try:
conn = mysql_conn()
cur = conn.cursor()
drop_sql = f'drop table if exists {table_name}'
cur.execute(drop_sql)
conn.commit()
cur.close()
conn.close()
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def get_crawl_result_by_crawl_id(table_name: str, crawl_id: int):
try:
conn = mysql_conn()
cur = conn.cursor()
select_sql = f'select id, name, college, major, paper from {table_name} where crawl_id = {crawl_id}'
cur.execute(select_sql)
result = cur.fetchall()
cur.close()
conn.close()
return result
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
def create_table(create_sql: str):
try:
conn = mysql_conn()
cur = conn.cursor()
cur.execute(create_sql)
conn.commit()
cur.close()
conn.close()
except Exception as e:
logger.warning(f'[DATABASE] {str(e)}')
def create_crawl_result_table(table_name: str):
create_sql = f'create table if not exists {table_name} (' \
f'id int primary key not null auto_increment,' \
f'crawl_id int not null,' \
f'time timestamp not null,' \
f'name varchar(100),' \
f'college varchar(200),' \
f'major varchar(200),' \
f'paper varchar(200)' \
f')'
create_table(create_sql)
def create_user_info(table_name: str = 'user_info'):
create_sql = f'create table if not exists {table_name} (' \
f'id int primary key not null auto_increment,' \
f'create_time timestamp not null default now(),' \
f'user_name varchar(100),' \
f'user_password varchar(200),' \
f'login_state boolean default false' \
f')'
create_table(create_sql)
def write_results2database(res: list, table_name: str, last_crawl_id: int):
try:
logger.info(f'[DATABASE] writing {last_crawl_id+1}st crawl results to table {table_name} in database...')
conn = mysql_conn()
cur = conn.cursor()
insert_sql = f"insert into {table_name} (name,college,major,paper,crawl_id,time) values (%s,%s,%s,%s,{last_crawl_id + 1},now())"
cur.executemany(insert_sql, res)
conn.commit()
cur.close()
conn.close()
info = '插入成功'
logger.info(f'[DATABASE] writing successful of {last_crawl_id + 1}st crawl results!')
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
info = '插入失败'
return info
if __name__ == '__main__':
get_now()
# create_crawl_result_table('table_name')
# print(write_result2database(['name', 'college', 'major', 'paper'], "table_name", last_crawl_id=0))
pass
'''
u_name = input('请输入用户名')
u_pwd = input('请输入密码')
# sha1 hash
s1 = sha1()
s1.update(u_pwd.encode())
sha_pwd = s1.hexdigest()
print(sha_pwd)
# register()
login()
'''

@ -0,0 +1,56 @@
import socket
import json
import struct
from json import JSONEncoder
from loguru import logger
from conf.config import exists
def parse_request(client_socket: socket.socket):
data = read_bytes(client_socket, 8)
request_header_size = struct.unpack("!Q", data)[0]
data = read_bytes(client_socket, request_header_size)
request_map = json.JSONDecoder().decode(data.decode("utf-8"))
return request_map
def generate_response(response):
response_binary = json.JSONEncoder().encode(response).encode("utf-8")
response_binary_len = len(response_binary)
response_binary_len_binary = struct.pack("!Q", response_binary_len)
response_binary = response_binary_len_binary + response_binary
return response_binary
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def check(cookie):
try:
if exists(cookie['cookie']):
return cookie
return None
except Exception as e:
logger.error(f'[ERROR] {str(e)}')
return None
def generate_request(request) -> 'bytes':
"""
Build a request from the given dict.
A request consists of an 8-byte header length followed by the header data.
:param request: dict
:return: bytes of the request
"""
request_bytes = JSONEncoder().encode(request).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes

@ -0,0 +1,24 @@
import socket
import threading
from time import sleep
from dcs.tests.user_request_handler import Urh
from loguru import logger
class UP(threading.Thread):
def __init__(self):
super(UP, self).__init__()
self.requests: list[tuple[socket.socket, dict]] = []
def add_request(self, request_map: dict, client_socket: socket.socket):
self.requests.append((client_socket, request_map))
def run(self) -> None:
while True:
for request in self.requests:
logger.info(f'[REQUEST HANDLER] processing user request...')
urh = Urh(request[1], request[0])
urh.start()
self.requests.remove(request)
sleep(1)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

@ -1,6 +1,6 @@
loguru~=0.6.0
requests~=2.27.1
pandas~=1.3.4
bs4~=0.0.1
beautifulsoup4~=4.10.0
selenium~=4.1.3
selenium~=3.141.0
PyMySQL~=1.0.2

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

@ -0,0 +1,121 @@
# -*- coding: utf-8 -*-
"""
Created on Tue May 24 15:49:30 2022
@author: Jation
"""
import pandas as pd
import numpy as np
import pymysql
from HTMLTable import HTMLTable
import argparse
b = []
a={"1": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "2": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "3": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "4": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "5": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "6": {"name": "test", "college": "test", "major": "test", "paper": "test"}, "type": "response"}
for i in a:
#print(a[i])
if(i == 'type'):
continue
d = []
for c in a[i].values():
d.append(c)
b.append(d)
# d.append(c)
#b.append(d)
#print(a)
#print(b)
def shu(a):
table = HTMLTable(caption='输出结果')
# header row
table.append_header_rows((
('name', 'college', 'major', 'paper'),
))
# merged cells
data_1 = a
# data rows
table.append_data_rows((
data_1
))
# caption style
table.caption.set_style({
'font-size': '30px',
'font-weight': 'bold',
'margin': '10px',
})
# table style, i.e. the <table> tag style
table.set_style({
'border-collapse': 'collapse',
'word-break': 'normal',
'white-space': 'normal',
'font-size': '14px',
})
# style applied to every cell, <td> or <th>
table.set_cell_style({
'border-color': '#000',
'border-width': '1px',
'border-style': 'solid',
'padding': '5px',
})
# header row style
table.set_header_row_style({
'color': '#fff',
'background-color': '#48a6fb',
'font-size': '18px',
})
# override the header cell font style
table.set_header_cell_style({
'padding': '15px',
})
# shrink the secondary header font size
# walk the data rows; mark the background red if the growth is negative
html = table.to_html()
f = open('C:/Users/Jation/Desktop/应用开发/dcs/ui/comment.html','w',encoding = 'utf-8-sig')
f.write(html)
# 1. connect to the database
conn = pymysql.connect(
host='10.129.16.173',
user='root',
password='427318Aa',
db='test',
charset='utf8',
# autocommit=True, # whether inserts are committed automatically; equivalent to calling conn.commit()
)
# python needs a cursor object to send SQL statements to the database and execute them
# 2. create a cursor object
cur = conn.cursor()
# 3. run the reads/writes against the database
parser = argparse.ArgumentParser('Automanager')
parser.add_argument('--id', type = str, required = True)
args = parser.parse_args()
id = args.id
# 4. database query
sqli = "select name,college,major,paper from "+ id+"_crawl_result;"
print(sqli)
result = cur.execute(sqli) # execute() returns the row count, not the result set
print(result)
'''print(cur.fetchone()) # 1) fetch the next row of the result set
print(cur.fetchone())
print(cur.fetchone())
print(cur.fetchmany(4))''' # 2) fetch the given number of rows
info = cur.fetchall() # 3) fetch all remaining rows
print(info)
# move the cursor pointer
path = "C:/Users/Jation/Desktop/应用开发/dcs/ui/comment.csv"
f = open(path,'w')
f.truncate()
shu(info)
# close the cursor
cur.close()
# close the connection
conn.close()
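Since the table name above is built by concatenating the --id argument into the SQL string, a defensive variant (not part of the original script) could validate the id before interpolation; a minimal sketch:

# Sketch: validate the user id before building the table name (identifiers cannot be
# bound as query parameters, so the value has to be checked instead).
import re

def safe_table(user_id: str) -> str:
    if not re.fullmatch(r'\w+', user_id):
        raise ValueError(f'invalid id: {user_id!r}')
    return f'{user_id}_crawl_result'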

File diff suppressed because one or more lines are too long

@ -0,0 +1,108 @@
import json
import socket
import struct
import argparse
from json import JSONEncoder, JSONDecoder
def parse_request(client_socket: socket.socket):
request_header_size = struct.unpack("!Q", read_bytes(client_socket, 8))[0]
request_map = json.JSONDecoder().decode(read_bytes(client_socket, request_header_size).decode("utf-8"))
return request_map
def generate_request(request_info) -> 'bytes':
"""
根据传入的dict生成请求
请求包含 8字节头长度+头数据
:param request_info: dict
:return: bytes 请求数据
"""
request_bytes = JSONEncoder().encode(request_info).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
"""
从socket读取size个字节
:param s:套接字
:param size:要读取的大小
:return:读取的字节数在遇到套接字关闭的情况下返回的数据的长度可能小于 size
"""
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def send_request(request_info, socket_to_server):
full_request = generate_request(request_info)
socket_to_server.sendall(full_request)
if request_info['action'] == 'end' or request_info['action'] == 'start':
return
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson
def receive_response(server_socket):
request_map = parse_request(server_socket)
print("receiving response:\n" + json.dumps(request_map, ensure_ascii=False))
with open('result.json', 'w', encoding='utf-8') as f:
json.dump(request_map, f, ensure_ascii=False, indent=4)
if __name__ == '__main__':
# 使用方法 python .\connect.py --ip 127.0.0.1 --port 7777
# crawling --word computer --cookie 95f94e1ab71bdf96b85fef6e8f746c58eeb5f9fa --pages_start 1 --pages_end 10
parser = argparse.ArgumentParser('connect-manager')
parser.add_argument('--ip', type=str, required=True)
parser.add_argument('--port', type=str, required=True)
subparsers = parser.add_subparsers(help='provide actions including crawling, login, register',
dest='action') # 创建子解析器
parser_crawling = subparsers.add_parser('crawling')
parser_crawling.add_argument('--word', type=str, required=True)
parser_crawling.add_argument('--pages_end', type=int, required=True)
parser_crawling.add_argument('--pages_start', type=int, required=True)
parser_crawling.add_argument('--cookie', type=str, required=True)
parser_login = subparsers.add_parser('login')
parser_login.add_argument('--user', type=str, required=True)
parser_login.add_argument('--password', type=str, required=True)
parser_register = subparsers.add_parser('register')
parser_register.add_argument('--user', type=str, required=True)
parser_register.add_argument('--password', type=str, required=True)
args = parser.parse_args()
local_port = 9010
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.connect((args.ip, int(args.port)))
request = {'action': 'start'}
send_request(request, server_socket)
if args.action == 'crawling':
request = {'action': 'crawl zhiwang', 'word': args.word, 'pages_start': args.pages_start,
'pages_end': args.pages_end, 'cookie': args.cookie}
elif args.action == 'login' or args.action == 'register':
request = {'action': args.action, 'user': args.user, 'password': args.password}
response = send_request(request, server_socket)
print(response['cookie'])
if args.action == 'crawling':
receive_response(server_socket)
request = {'action': 'end'}
send_request(request, server_socket)
server_socket.close()
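For reference, the framed messages this client sends for a crawl, in the order produced by the code above; the search word and cookie value are placeholders (the cookie comes from a prior login):

# Message sequence sent by this client for a crawl (values illustrative):
messages = [
    {'action': 'start'},                                   # no response expected
    {'action': 'crawl zhiwang', 'word': 'computer',
     'pages_start': 1, 'pages_end': 10,
     'cookie': '<cookie from login>'},                     # server replies; the cookie is printed
    {'action': 'end'},                                     # sent after receive_response() has written result.json
]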

@ -0,0 +1,198 @@
import json
import socket
import struct
import argparse
from json import JSONEncoder, JSONDecoder
import pymysql
from HTMLTable import HTMLTable
import pandas as pd
import numpy as np
import json
def parse_request(client_socket: socket.socket):
request_header_size = struct.unpack("!Q", read_bytes(client_socket, 8))[0]
request_map = json.JSONDecoder().decode(read_bytes(client_socket, request_header_size).decode("utf-8"))
return request_map
def generate_request(request_info) -> 'bytes':
"""
根据传入的dict生成请求
请求包含 8字节头长度+头数据
:param request_info: dict
:return: bytes 请求数据
"""
request_bytes = JSONEncoder().encode(request_info).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
"""
从socket读取size个字节
:param s:套接字
:param size:要读取的大小
:return:读取的字节数在遇到套接字关闭的情况下返回的数据的长度可能小于 size
"""
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def shu(a):
table = HTMLTable(caption='输出结果')
# 表头行
table.append_header_rows((
('name', 'college', 'major', 'paper'),
))
# 合并单元格
data_1 = a
# 数据行
table.append_data_rows((
data_1
))
# 标题样式
table.caption.set_style({
'font-size': '30px',
'font-weight': 'bold',
'margin': '10px',
})
# 表格样式,即<table>标签样式
table.set_style({
'border-collapse': 'collapse',
'word-break': 'normal',
'white-space': 'normal',
'font-size': '14px',
})
# 统一设置所有单元格样式,<td>或<th>
table.set_cell_style({
'border-color': '#000',
'border-width': '1px',
'border-style': 'solid',
'padding': '5px',
})
# 表头样式
table.set_header_row_style({
'color': '#fff',
'background-color': '#48a6fb',
'font-size': '18px',
})
# 覆盖表头单元格字体样式
table.set_header_cell_style({
'padding': '15px',
})
# 调小次表头字体大小
table[1].set_cell_style({
'padding': '8px',
'font-size': '15px',
})
# 遍历数据行,如果增长量为负,标红背景颜色
html = table.to_html()
f = open('C:/Users/Jation/Desktop/应用开发/dcs/ui/tmmps.html','w',encoding = 'utf-8-sig')
f.write(html)
def send_request(ip, port, request_info):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
socket_to_server.bind(('', 9005))
socket_to_server.connect((ip, int(port)))
full_request = generate_request(request_info)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson
def receive_response():
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('', 9005))
server_socket.listen()
while True:
client_socket, _ = server_socket.accept()
request_map = parse_request(client_socket)
if request_map['type'] == 'response':
print("receiving response:\n" + json.dumps(request_map, ensure_ascii=False))
a = json.dumps(request_map, ensure_ascii=False)
a = json.loads(a)
b = []
c =''
d =''
for i in a:
#print(a[i])
if(i == 'type'):
continue
if(i == 'crawl_id'):
c = a[i]
c = str(c)
if(i == 'table_name'):
d = a[i]
'''for c in a[i].values():
d.append(c)'''
b.append(d)
continue
sqli = "select name,college,major,paper from "+d+" where crawl_id = " +c+";"
result = cur.execute(sqli)
info = cur.fetchall()
shu(info)
break
conn = pymysql.connect(
host='10.129.16.173',
user='root',
password='427318Aa',
db='test',
charset='utf8',
# autocommit=True, # 如果插入数据,, 是否自动提交? 和conn.commit()功能一致。
)
cur = conn.cursor()
if __name__ == '__main__':
# 使用方法 python .\connect.py --ip 127.0.0.1 --port 7777
# crawling --word computer --cookie 95f94e1ab71bdf96b85fef6e8f746c58eeb5f9fa --pages_start 1 --pages_end 10
parser = argparse.ArgumentParser('connect-manager')
parser.add_argument('--ip', type=str, required=True)
parser.add_argument('--port', type=str, required=True)
subparsers = parser.add_subparsers(help='provide actions including crawling, login, register',
dest='action') # 创建子解析器
parser_crawling = subparsers.add_parser('crawling')
parser_crawling.add_argument('--word', type=str, required=True)
parser_crawling.add_argument('--pages_end', type=int, required=True)
parser_crawling.add_argument('--pages_start', type=int, required=True)
parser_crawling.add_argument('--cookie', type=str, required=True)
parser_login = subparsers.add_parser('login')
parser_login.add_argument('--user', type=str, required=True)
parser_login.add_argument('--password', type=str, required=True)
parser_register = subparsers.add_parser('register')
parser_register.add_argument('--user', type=str, required=True)
parser_register.add_argument('--password', type=str, required=True)
args = parser.parse_args()
request = dict()
if args.action == 'crawling':
request = {'action': 'crawl zhiwang', 'word': args.word, 'pages_start': args.pages_start,
'pages_end': args.pages_end, 'cookie': args.cookie}
elif args.action == 'login' or args.action == 'register':
request = {'action': args.action, 'user': args.user, 'password': args.password}
response = send_request(args.ip, args.port, request)
if args.action == 'crawling':
receive_response()
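Based on the handling in receive_response above, the notification the server pushes back to port 9005 carries the crawl id and the per-user table to query; an illustrative example of that message (field values are placeholders):

# Hypothetical example of the pushed result notification this client parses.
response_example = {
    'type': 'response',
    'crawl_id': 3,                          # used in the WHERE clause of the query
    'table_name': 'wufayuan_crawl_result',  # per-user result table in MySQL
}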

@ -0,0 +1,106 @@
import json
import socket
import struct
import argparse
from json import JSONEncoder, JSONDecoder
def parse_request(client_socket: socket.socket):
request_header_size = struct.unpack("!Q", read_bytes(client_socket, 8))[0]
request_map = json.JSONDecoder().decode(read_bytes(client_socket, request_header_size).decode("utf-8"))
return request_map
def generate_request(request_info) -> 'bytes':
"""
根据传入的dict生成请求
请求包含 8字节头长度+头数据
:param request_info: dict
:return: bytes 请求数据
"""
request_bytes = JSONEncoder().encode(request_info).encode("utf-8")
return struct.pack("!Q", len(request_bytes)) + request_bytes
def read_bytes(s: 'socket.socket', size: 'int') -> 'bytes':
"""
从socket读取size个字节
:param s:套接字
:param size:要读取的大小
:return:读取的字节数在遇到套接字关闭的情况下返回的数据的长度可能小于 size
"""
data = ''.encode('utf-8')
while len(data) < size:
rsp_data = s.recv(size - len(data))
data += rsp_data
if len(rsp_data) == 0:
break
return data
def send_request(ip, port, request_info):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) as socket_to_server:
socket_to_server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
socket_to_server.bind(('', 9005))
socket_to_server.connect((ip, int(port)))
full_request = generate_request(request_info)
socket_to_server.sendall(full_request)
responseJson = JSONDecoder().decode(
read_bytes(socket_to_server, struct.unpack('!Q', socket_to_server.recv(8))[0]).decode(
"utf-8"))
return responseJson
def receive_response():
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('', 9005))
server_socket.listen()
while True:
client_socket, _ = server_socket.accept()
request_map = parse_request(client_socket)
if request_map['type'] == 'response':
print("receiving response:\n" + json.dumps(request_map, ensure_ascii=False))
break
if __name__ == '__main__':
# 使用方法 python .\connect.py --ip 127.0.0.1 --port 7777
# crawling --word computer --cookie 95f94e1ab71bdf96b85fef6e8f746c58eeb5f9fa --pages_start 1 --pages_end 10
parser = argparse.ArgumentParser('connect-manager')
parser.add_argument('--ip', type=str, required=True)
parser.add_argument('--port', type=str, required=True)
subparsers = parser.add_subparsers(help='provide actions including crawling, login, register',
dest='action') # 创建子解析器
parser_crawling = subparsers.add_parser('crawling')
parser_crawling.add_argument('--word', type=str, required=True)
parser_crawling.add_argument('--pages_end', type=int, required=True)
parser_crawling.add_argument('--pages_start', type=int, required=True)
parser_crawling.add_argument('--cookie', type=str, required=True)
parser_login = subparsers.add_parser('login')
parser_login.add_argument('--user', type=str, required=True)
parser_login.add_argument('--password', type=str, required=True)
parser_register = subparsers.add_parser('register')
parser_register.add_argument('--user', type=str, required=True)
parser_register.add_argument('--password', type=str, required=True)
args = parser.parse_args()
request = dict()
if args.action == 'crawling':
request = {'action': 'crawl zhiwang', 'word': args.word, 'pages_start': args.pages_start,
'pages_end': args.pages_end, 'cookie': args.cookie}
elif args.action == 'login' or args.action == 'register':
request = {'action': args.action, 'user': args.user, 'password': args.password}
response = send_request(args.ip, args.port, request)
print(response['cookie'])
if args.action == 'crawling':
receive_response()

@ -0,0 +1,183 @@
*{
padding: 0;
margin:0;
box-sizing: border-box;
font-family: 'Poppins',sans-serif;
}
/* 设置整个表单参数 (父盒子)*/
section {
position: relative;
min-height: 100vh;
background-image:url(1.jpg);
display: flex;
justify-content: center;
align-items: center;
padding: 20px;
}
section .container {
position: relative;
width: 550px;
height: 350px;
background: rgb(17, 168, 168);
box-shadow: 0 15px 50px rgba(0, 0, 0, 0.1);
overflow: hidden;
}
section .container .user{
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
display: flex;
}
/* 更改图片 (左侧)*/
section .container .imgBx{
position: relative;
width: 50%;
height: 100%;
/* background: #fff; */
transition: .5s;
}
section .container .user .imgBx img{
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
object-fit: cover;
}
/* 右侧表单盒子 */
section .container .user .formBx {
position: relative;
width: 50%;
height: 100%;
background: #fff;
display: flex;
justify-content: center;
align-items: center;
padding: 40px;
transition: .5s;
}
/* h2 */
section .container .user .formBx form h2{
font-size: 18px;
font-weight: 600;
text-transform: uppercase;/*大小*/
letter-spacing: 2px;/* 间距*/
text-align: center;
width: 100%;
margin-bottom: 10px;
color: #555;
}
/* 表单文字属性 */
section .container .user .formBx form input{
position: relative;
width: 100%;
padding: 10px;
background: #f5f5f5;
color: #333;
border: none;
outline:none;
box-shadow:none;
margin: 8px 0;
font-size: 14px;
letter-spacing:1px;
font-weight: 300;
}
/* 为登录设置样式 */
section .container .user .formBx form input[type="submit"]{
max-width: 100px;
background: #677eff;
color:#fff;
cursor:pointer;
font-size: 14px;
font-weight: 500;
letter-spacing: 1px;
transition: .5s;
}
/* 没有账号时 */
section .container .user .formBx form .signup{
position: relative;
margin-top: 20px;
font-size: 12px;
letter-spacing: 1px;
color: #555;
text-transform: uppercase;
font-weight: 300;
}
section .container .user .formBx form .signup a{
font-weight: 600;
text-decoration: none;
color: #677eff;
}
section .container .singupBx {
pointer-events: none;
}
section .container.active .singupBx {
pointer-events: initial;
}
section .container .singupBx .formBx {
left: 100%;
}
section .container.active .singupBx .formBx {
left: 0;
}
section .container .singupBx .imgBx {
left: -100%;
}
section .container.active .singupBx .imgBx {
left: 0;
}
section .container .singinBx .formBx {
left: 0;
}
section .container.active .singinBx .formBx {
left: 100%;
}
section .container .singinBx .imgBx {
left: 0;
}
section .container.active .singinBx .imgBx {
left: 100%;
}
@media (max-width: 991px) {
section .container {
max-width: 400px;
}
section .container .imgBx {
display: none;
}
section .container .user .formBx {
width: 100%;
}
}

@ -0,0 +1,54 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>登录</title>
<link rel="stylesheet" href="default.css">
<link rel="shortcut icon" href="img/favicon.ico">
</head>
<body>
<section>
<!-- 登录 -->
<div class="container">
<div class="user singinBx">
<div class="imgBx"><img src="2.jpg" alt=""></div>
<div class="formBx">
<form action="/login">
<h2><font size="10">登录</font></h2>
<input type="text" name="name" placeholder="username">
<input type="password" name="pwd" placeholder="password">
<input type="submit" name="" value="登录" onclick="alart(登录成功)">
<p class="signup">没有账号?<a href="#" onclick="topggleForm();">注册</a></p>
</form>
</div>
</div>
<!-- 注册 -->
<div class="user singupBx">
<div class="formBx">
<form action="/register">
<h2><font size="10">注册</font></h2>
<label><input class="text" type="text" placeholder="username" name="name" /></label>
<label><input class="text" type="password" placeholder="password" name="pwd" /></label>
<input type="submit" value="提交" onclick="注册成功">
<p class="signup">已有账号?<a href="#" onclick="topggleForm();">登录</a></p>
</form>
</div>
<div class="imgBx"><img src="3.jpg" alt=""></div>
</div>
</div>
</section>
<script type="text/javascript">
function topggleForm(){
var container = document.querySelector('.container');
container.classList.toggle('active');
}
</script>
</body>
</html>

@ -0,0 +1,380 @@
var express = require('express')
var path = require("path");
var mysql = require('mysql')
//var alert = require('alert')
//var router = express.Router()
var app = express()
const {get} = require('http')
const multer = require('multer')
var childProcess = require('child_process')
const bodyParser = require('body-parser')
const fs = require('fs')
//const document = require('document')
var jsdom = require("jsdom");
const { NULL } = require('mysql/lib/protocol/constants/types');
var JSDOM = jsdom.JSDOM;
var document = new JSDOM().window.document;
var execSync = require("child_process").execSync;
var sys = require('sys');
//var exec = require('child_process').exec
//const cp = require('child_process');
var session = require("express-session");
var FileStore = require('session-file-store')(session);
var identityKey = 'skey';
app.use(
session({
name: identityKey,
secret: "jhh",
store: new FileStore(), //加密存储
resave: false, //客户端并行请求是否覆盖
saveUninitialized: true, //初始化session存储
cookie: {
maxAge: 1000*60*10 // 这一条 是控制 sessionID 的过期时间的!!!
},
})
);
app.use(express.json())
app.use(express.urlencoded({ extended: true }))
app.use(express.static('./'))
/**
* 配置MySql
*/
var connection = mysql.createConnection({
host : '192.168.43.65',
user : 'root',
password : '427318Aa',
database : 'test',
port:'3306'
});
connection.connect();
app.use('/public', express.static('public')); // 设置静态文件的中间件
app.use(bodyParser.urlencoded({ extended: false })); // 判断请求体是不是json不是的话把请求体转化为对象
app.use(multer({ dest: 'tmp/' }).array('file'));//multer中间件用于上传表单型数据基本属性为dest会添加file对象到request对象中包含对象表单上传的文件信息
app.get('/',function (req,res) {
res.sendfile(__dirname + "/login.html" );
})
/*app.get('/',function(req,res){
res.sendFile(path.join(__dirname,"/login.html"))
//_dirname:当前文件的路径path.join():合并路径
})
/**
* 实现登录验证功能
*/
var ppcookie = ''
var ppname = ''
var pppwd = ''
app.get('/login', function (req, res) {
var response = {
"name":req.query.name,
"password":req.query.pwd,
};
/*var selectSQL = "select * from UserInfoTest where User_Name = '" + name + "' and User_Password = '" + password + "'";*/
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name + "' and pwd = '" + req.query.pwd + "'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result == '') {
console.log("帐号密码错误");
res.end("The account does not exist or the password is wrong!");
}
else {
console.log(result);
console.log("OK"+'123');
ppname = req.query.name
pppwd = req.query.pwd
// res.redirect("/public/" + "ok1.html")
// dummy = childProcess.spawn('python' ,['./tmp.py'] ,{cwd: path.resolve(__dirname, './')})
const ls = childProcess.spawn('python3' ,['./connect.py', '--ip','192.168.43.241', '--port','7777','login','--user',req.query.name,'--password',req.query.pwd],{cwd: path.resolve(__dirname, './')})
ls.stdout.on('data', function (data){
//console.log('sdjfksjdfklajklfdjalkjfklda')
//req.session.cookie = data.toString().trim();
var sess = req.session;
sess.regenerate(function(err){ //添加session信息
req.session.name = data.toString().trim();
req.session.user = req.query.name;
req.session.pwd = req.query.pwd;
})
var a = data.toString()
a = a.trim()
ppcookie = a
console.log(ppcookie);
var start = new Date();
setTimeout(function(){
console.log(req.session.name);
console.log(req.session.user);
console.log(req.session.pwd);
res.redirect("/public/" + "ok1.html")
},2000)
// console.log(a[]);
})
//execute('python tmp.py')
// execute('python connect.py --ip 10.129.16.173 --port 7777 login --user wufayuan --password 113818');
/* const ls = childProcess.spawn('python3' ,['connect.py', '--ip','192.168.43.241', '--port','7777','login','--user','wufayuan','--password','113818'],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', function(data){
sys.print(data);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});*/
}
});
console.log(response);
})
/**connection.query(selectSQL,function (err,rs) {
if (err) throw err;
console.log(rs);
console.log('OK');
res.sendfile(__dirname + "/public/" + "ok.html" );
})
})**/
app.get('/register.html',function (req,res) {
res.sendfile(__dirname+"/lndex.html");
})
/**
* 实现注册功能
*/
app.get('/register',function (req,res) {
var name=req.query.name;
var pwd = req.query.pwd;
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name+"'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result.length) {
res.send("The account exist!");
}
else {
var user = { uname: name, pwd: pwd ,finame:NULL,email:NULL,phone:NULL};
connection.query('insert into user set ?', user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('login.html');
const ls = childProcess.spawn('python3' ,['./connect.py', '--ip','192.168.43.241', '--port','7777','register','--user',name,'--password',pwd],{cwd: path.resolve(__dirname, './')
})
})
}
})
})
app.get('/ok1.html',function (req,res) {
res.redirect("/public/"+"ok1.html");
})
/*var server=app.listen(3300,function () {
console.log("start");
})*/
//const express = require('express');
const timeout = require('connect-timeout');
const { createProxyMiddleware } = require('http-proxy-middleware');
// HOST 指目标地址 PORT 服务端口
const HOST = 'http://192.168.43.64:7777', PORT = '3300';
// 超时时间
const TIME_OUT = 1000 * 1e3;
// 设置端口
app.set('port', PORT);
// 设置超时 返回超时响应
app.use(timeout(TIME_OUT));
app.use((req, res, next) => {
if (!req.timedout) next();
});
// 设置静态资源路径
app.use('/', express.static('static'));
// 反向代理(这里把需要进行反代的路径配置到这里即可)
// eg:将/api 代理到 ${HOST}/api
// app.use(createProxyMiddleware('/api', { target: HOST }));
// 自定义代理规则
app.use(createProxyMiddleware('/api', {
target: HOST, // target host
changeOrigin: true, // needed for virtual hosted sites
ws: true, // proxy websockets
pathRewrite: {
'^/api': '', // rewrite path
}
}));
// 监听端口
app.listen(app.get('port'), () => {
console.log(`server running ${PORT }`);
});
function execute(cmd) { //调用cmd命令
execSync(cmd, { cwd: './' }, function (error, stdout, stderr) {
if (error) {
console.error(error);
}
else {
console.log("executing success!")
}
})
}
app.get('/check', function (req, res) {
if(!!req.session.user){
var logo=req.query.logo;
console.log(logo);
// console.log(ppcookie);
a = ppcookie
console.log(a);
//const ls = childProcess.spawn('python3' ,['./connect.py','--word',logo,'--cookie',a])
const ls = childProcess.spawn('python3' ,['./tmp.py','--ip','192.168.43.241','--port','7777','crawling','--word',logo,'--pages_start',1,'--pages_end',3,'--cookie',req.session.name])
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/tmmps.html")
})}
else{
res.send('未登录')
}
/*exec('python connect.py --ip 10.129.16.173 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 3 --cookie '+a, {
// timeout: 0, // 超时时间
cwd: path.resolve(__dirname, './'), // 可以改变当前的执行路径
}, function (err, stdout, stderr) {
res.redirect("/tmmps.html")
return
// 执行结果
})*/
//execute('python connect.py --ip 192.168.43.64 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 5 --cookie '+a);
//execute('python connect.py --ip 192.168.43.65 --port 7777 crawling --word computer --cookie b07f9e6461343a07635438925b0b93f9e0f9f084 --pages_start 1 --pages_end 3');
})
app.post('/cook', function (req, res) {
console.log(req.session.user);
res.redirect('/public/ok2.html');
})
app.post('/cook2', function (req, res) {
req.session.destroy(function(err) {
res.redirect('/login.html');
})
ppname = '0'
pppwd = '0'
})
app.get('/check1',function (req, res) {
console.log(req.session.user)
if(!!req.session.user){
const ls = childProcess.spawn('python3' ,['./ceshi03.py','--id',req.session.user],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/comment.html")
return;
})}
else{
res.send('未登录')
}
})
app.get('/std',function (req, res) {
if(!!req.session.user){
console.log(req.session.user);
var finame=req.query.finame;
var email = req.query.email;
var phone = req.query.phone;
var selectSQL = "select uname,pwd from user where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(selectSQL, function (err, result) {
console.log(req.session.user);
res.redirect('/public/ok1.html');
var user = {finame: finame,email:email, phone:phone};
sql = "update user set ? where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
})
})}
else{
res.send('未登录')
}
})
app.post('/std1',function (req, res) {
res.redirect('/public/ok3.html')
})
app.post('/std3',function (req, res) {
console.log(req.session.user);
var delSql = "DELETE FROM user_info where user_name = '" + req.session.user + "'";
connection.query(delSql,function (err, result) {
if(err){
console.log('[DELETE ERROR] - ',err.message);
return;
}
});
var delSql1 = "DELETE FROM user where uname = '" + req.session.user + "'";
connection.query(delSql1,function (err, result) {
if(err){
console.log('[DELETE ERROR] - ',err.message);
return;
//res.redirect('/login.html')
}
});
res.redirect('/login.html')
})
app.get('/std2',function (req, res) {
var pwd1=req.query.pwd1;
var pwd2=req.query.pwd2;
var pwd3=req.query.pwd3;
if(pwd3 != pwd2){
console.log("error")
res.send("两次输入的密码不一样");
}
if(pwd3 == pwd2){
var selectSQL = "select pwd from user where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(selectSQL, function (err, result) {
if (req.session.pwd != pwd1) {
res.send("当前密码输入错误");
console.log("error")
}
if(req.session.pwd == pwd1){
var user = {pwd:pwd2};
sql = "update user set ? where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('/public/ok1.html');
pppwd = pwd2
})
var user1 = {user_password:pwd2}
q = "update user_info set ? where user_name = '" + req.session.user + "' and user_password = '" + req.session.pwd + "'"
connection.query(q, user1, function (err, rs) {
// if (err) throw err;
console.log('ok');
//res.redirect('/public/ok1.html');
//pppwd = pwd2
})
}
})
}
})

@ -0,0 +1,372 @@
var express = require('express')
var path = require("path");
var mysql = require('mysql')
//var alert = require('alert')
//var router = express.Router()
var app = express()
const {get} = require('http')
const multer = require('multer')
var childProcess = require('child_process')
const bodyParser = require('body-parser')
const fs = require('fs')
//const document = require('document')
var jsdom = require("jsdom");
const { NULL } = require('mysql/lib/protocol/constants/types');
var JSDOM = jsdom.JSDOM;
var document = new JSDOM().window.document;
var execSync = require("child_process").execSync;
var sys = require('sys');
//var exec = require('child_process').exec
//const cp = require('child_process');
var session = require("express-session");
var FileStore = require('session-file-store')(session);
var identityKey = 'skey';
app.use(
session({
name: identityKey,
secret: "jhh",
store: new FileStore(), //加密存储
resave: false, //客户端并行请求是否覆盖
saveUninitialized: true, //初始化session存储
cookie: {
maxAge: 1000*60*10 // 这一条 是控制 sessionID 的过期时间的!!!
},
})
);
app.use(express.json())
app.use(express.urlencoded({ extended: true }))
app.use(express.static('./'))
/**
* 配置MySql
*/
var connection = mysql.createConnection({
host : '192.168.43.64',
user : 'root',
password : '427318Aa',
database : 'test',
port:'3306'
});
connection.connect();
app.use('/public', express.static('public')); // 设置静态文件的中间件
app.use(bodyParser.urlencoded({ extended: false })); // 判断请求体是不是json不是的话把请求体转化为对象
app.use(multer({ dest: 'tmp/' }).array('file'));//multer中间件用于上传表单型数据基本属性为dest会添加file对象到request对象中包含对象表单上传的文件信息
app.get('/',function (req,res) {
res.sendfile(__dirname + "/login.html" );
})
/*app.get('/',function(req,res){
res.sendFile(path.join(__dirname,"/login.html"))
//_dirname:当前文件的路径path.join():合并路径
})
/**
* 实现登录验证功能
*/
var ppcookie = ''
var ppname = ''
var pppwd = ''
app.get('/login', function (req, res) {
var response = {
"name":req.query.name,
"password":req.query.pwd,
};
/*var selectSQL = "select * from UserInfoTest where User_Name = '" + name + "' and User_Password = '" + password + "'";*/
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name + "' and pwd = '" + req.query.pwd + "'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result == '') {
console.log("帐号密码错误");
res.end("The account does not exist or the password is wrong!");
}
else {
console.log(result);
console.log("OK"+'123');
ppname = req.query.name
pppwd = req.query.pwd
// res.redirect("/public/" + "ok1.html")
// dummy = childProcess.spawn('python' ,['./tmp.py'] ,{cwd: path.resolve(__dirname, './')})
const ls = childProcess.spawn('python3' ,['./connect.py', '--ip','192.168.43.241', '--port','7777','login','--user',req.query.name,'--password',req.query.pwd],{cwd: path.resolve(__dirname, './')})
ls.stdout.on('data', function (data){
//console.log('sdjfksjdfklajklfdjalkjfklda')
//req.session.cookie = data.toString().trim();
var sess = req.session;
sess.regenerate(function(err){ //添加session信息
req.session.name = data.toString().trim();
req.session.user = req.query.name;
req.session.pwd = req.query.pwd;
})
var a = data.toString()
a = a.trim()
ppcookie = a
console.log(ppcookie);
var start = new Date();
setTimeout(function(){
console.log(req.session.name);
console.log(req.session.user);
console.log(req.session.pwd);
res.redirect("/public/" + "ok1.html")
},2000)
// console.log(a[]);
})
//execute('python tmp.py')
// execute('python connect.py --ip 10.129.16.173 --port 7777 login --user wufayuan --password 113818');
/* const ls = childProcess.spawn('python3' ,['connect.py', '--ip','192.168.43.241', '--port','7777','login','--user','wufayuan','--password','113818'],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', function(data){
sys.print(data);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});*/
}
});
console.log(response);
})
/**connection.query(selectSQL,function (err,rs) {
if (err) throw err;
console.log(rs);
console.log('OK');
res.sendfile(__dirname + "/public/" + "ok.html" );
})
})**/
app.get('/register.html',function (req,res) {
res.sendfile(__dirname+"/lndex.html");
})
/**
* 实现注册功能
*/
app.get('/register',function (req,res) {
var name=req.query.name;
var pwd = req.query.pwd;
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name+"'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result.length) {
res.send("The account exist!");
}
else {
var user = { uname: name, pwd: pwd ,finame:NULL,email:NULL,phone:NULL};
connection.query('insert into user set ?', user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('login.html');
const ls = childProcess.spawn('python3' ,['./connect.py', '--ip','192.168.43.241', '--port','7777','register','--user',name,'--password',pwd],{cwd: path.resolve(__dirname, './')
})
})
}
})
})
app.get('/ok1.html',function (req,res) {
res.redirect("/public/"+"ok1.html");
})
var server=app.listen(3300,function () {
console.log("start");
})
//const express = require('express');
/*const timeout = require('connect-timeout');
const { createProxyMiddleware } = require('http-proxy-middleware');
// HOST 指目标地址 PORT 服务端口
const HOST = 'http://192.168.43.64:7777', PORT = '3300';
// 超时时间
const TIME_OUT = 1000 * 1e3;
// 设置端口
app.set('port', PORT);
// 设置超时 返回超时响应
app.use(timeout(TIME_OUT));
app.use((req, res, next) => {
if (!req.timedout) next();
});
// 设置静态资源路径
app.use('/', express.static('static'));
// 反向代理(这里把需要进行反代的路径配置到这里即可)
// eg:将/api 代理到 ${HOST}/api
// app.use(createProxyMiddleware('/api', { target: HOST }));
// 自定义代理规则
app.use(createProxyMiddleware('/api', {
target: HOST, // target host
changeOrigin: true, // needed for virtual hosted sites
ws: true, // proxy websockets
pathRewrite: {
'^/api': '', // rewrite path
}
}));
// 监听端口
app.listen(app.get('port'), () => {
console.log(`server running ${PORT }`);
});*/
function execute(cmd) { //调用cmd命令
execSync(cmd, { cwd: './' }, function (error, stdout, stderr) {
if (error) {
console.error(error);
}
else {
console.log("executing success!")
}
})
}
app.get('/check', function (req, res) {
if(!!req.session.user){
var logo=req.query.logo;
console.log(logo);
// console.log(ppcookie);
a = ppcookie
console.log(a);
//const ls = childProcess.spawn('python3' ,['./connect.py','--word',logo,'--cookie',a])
const ls = childProcess.spawn('python3' ,['./tmp.py','--ip','192.168.43.241','--port','7777','crawling','--word',logo,'--pages_start',1,'--pages_end',3,'--cookie',req.session.name])
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/tmmps.html")
})}
else{
res.send('未登录')
}
/*exec('python connect.py --ip 10.129.16.173 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 3 --cookie '+a, {
// timeout: 0, // 超时时间
cwd: path.resolve(__dirname, './'), // 可以改变当前的执行路径
}, function (err, stdout, stderr) {
res.redirect("/tmmps.html")
return
// 执行结果
})*/
//execute('python connect.py --ip 192.168.43.64 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 5 --cookie '+a);
//execute('python connect.py --ip 192.168.43.65 --port 7777 crawling --word computer --cookie b07f9e6461343a07635438925b0b93f9e0f9f084 --pages_start 1 --pages_end 3');
})
app.post('/cook', function (req, res) {
console.log(req.session.user);
res.redirect('/public/ok2.html');
})
app.post('/cook2', function (req, res) {
req.session.destroy(function(err) {
res.redirect('/login.html');
})
ppname = '0'
pppwd = '0'
})
app.get('/check1',function (req, res) {
console.log(req.session.user)
if(!!req.session.user){
const ls = childProcess.spawn('python3' ,['./ceshi03.py','--id',req.session.user],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/comment.html")
return;
})}
else{
res.send('未登录')
}
})
app.get('/std',function (req, res) {
if(!!req.session.user){
console.log(req.session.user);
var finame=req.query.finame;
var email = req.query.email;
var phone = req.query.phone;
var selectSQL = "select uname,pwd from user where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(selectSQL, function (err, result) {
console.log(req.session.user);
res.redirect('/public/ok1.html');
var user = {finame: finame,email:email, phone:phone};
sql = "update user set ? where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
})
})}
else{
res.send('未登录')
}
})
app.post('/std1',function (req, res) {
res.redirect('/public/ok3.html')
})
app.post('/std3',function (req, res) {
var delSql = "DELETE FROM user_info where user_name = '" + ppname + "'";
connection.query(delSql,function (err, result) {
if(err){
console.log('[DELETE ERROR] - ',err.message);
return;
}
});
res.redirect('/public/ok3.html')
})
app.get('/std2',function (req, res) {
var pwd1=req.query.pwd1;
var pwd2=req.query.pwd2;
var pwd3=req.query.pwd3;
if(pwd3 != pwd2){
console.log("error")
res.send("两次输入的密码不一样");
}
if(pwd3 == pwd2){
var selectSQL = "select pwd from user where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(selectSQL, function (err, result) {
if (req.session.pwd != pwd1) {
res.send("当前密码输入错误");
console.log("error")
}
if(req.session.pwd == pwd1){
var user = {pwd:pwd2};
sql = "update user set ? where uname = '" + req.session.user + "' and pwd = '" + req.session.pwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('/public/ok1.html');
pppwd = pwd2
})
var user1 = {user_password:pwd2}
q = "update user_info set ? where user_name = '" + req.session.user + "' and user_password = '" + req.session.pwd + "'"
connection.query(q, user1, function (err, rs) {
// if (err) throw err;
console.log('ok');
//res.redirect('/public/ok1.html');
//pppwd = pwd2
})
}
})
}
})

@ -0,0 +1,349 @@
var express = require('express')
var path = require("path");
var mysql = require('mysql')
var alert = require('alert')
var router = express.Router()
var app = express()
const {get} = require('http')
const multer = require('multer')
var childProcess = require('child_process')
const bodyParser = require('body-parser')
const fs = require('fs')
app.use(express.static('./'))
//const document = require('document')
var jsdom = require("jsdom");
const { NULL } = require('mysql/lib/protocol/constants/types');
var JSDOM = jsdom.JSDOM;
var document = new JSDOM().window.document;
var execSync = require("child_process").execSync;
var sys = require('sys');
var exec = require('child_process').exec
const cp = require('child_process');
/**
* 配置MySql
*/
var connection = mysql.createConnection({
host : '10.129.16.173',
user : 'root',
password : '427318Aa',
database : 'test',
port:'3306'
});
connection.connect();
app.use('/public', express.static('public')); // 设置静态文件的中间件
app.use(bodyParser.urlencoded({ extended: false })); // 判断请求体是不是json不是的话把请求体转化为对象
app.use(multer({ dest: 'tmp/' }).array('file'));//multer中间件用于上传表单型数据基本属性为dest会添加file对象到request对象中包含对象表单上传的文件信息
app.get('/',function (req,res) {
res.sendfile(__dirname + "/login.html" );
})
/*app.get('/',function(req,res){
res.sendFile(path.join(__dirname,"/login.html"))
//_dirname:当前文件的路径path.join():合并路径
})
/**
* 实现登录验证功能
*/
var ppcookie = ''
var ppname = ''
var pppwd = ''
app.get('/login', function (req, res) {
var response = {
"name":req.query.name,
"password":req.query.pwd,
};
/*var selectSQL = "select * from UserInfoTest where User_Name = '" + name + "' and User_Password = '" + password + "'";*/
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name + "' and pwd = '" + req.query.pwd + "'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result == '') {
console.log("帐号密码错误");
res.end("The account does not exist or the password is wrong!");
}
else {
console.log(result);
console.log("OK"+'123');
ppname = req.query.name
pppwd = req.query.pwd
res.redirect("/public/" + "ok1.html");//重定向到网页
// dummy = childProcess.spawn('python' ,['./tmp.py'] ,{cwd: path.resolve(__dirname, './')})
const ls = childProcess.spawn('python3' ,['./connect2.py', '--ip','10.129.16.173', '--port','7777','login','--user',req.query.name,'--password',req.query.pwd],{cwd: path.resolve(__dirname, './')})
ls.stdout.on('data', function (data){
//console.log('sdjfksjdfklajklfdjalkjfklda')
var a = data.toString()
a = a.trim()
ppcookie = a
console.log(ppcookie);
// console.log(a[]);
})
//execute('python tmp.py')
// execute('python connect.py --ip 10.129.16.173 --port 7777 login --user wufayuan --password 113818');
/* const ls = childProcess.spawn('python3' ,['connect.py', '--ip','192.168.43.241', '--port','7777','login','--user','wufayuan','--password','113818'],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', function(data){
sys.print(data);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});*/
}
});
console.log(response);
})
/**connection.query(selectSQL,function (err,rs) {
if (err) throw err;
console.log(rs);
console.log('OK');
res.sendfile(__dirname + "/public/" + "ok.html" );
})
})**/
app.get('/register.html',function (req,res) {
res.sendfile(__dirname+"/lndex.html");
})
/**
* 实现注册功能
*/
app.get('/register',function (req,res) {
var name=req.query.name;
var pwd = req.query.pwd;
var selectSQL = "select uname,pwd from user where uname = '" + req.query.name+"'";
connection.query(selectSQL, function (err, result) {
if (err) {
console.log('[login ERROR] - ', err.message);
return;
}
if (result.length) {
res.send("The account exist!");
}
else {
var user = { uname: name, pwd: pwd ,finame:NULL,email:NULL,phone:NULL};
connection.query('insert into user set ?', user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('login.html');
const ls = childProcess.spawn('python3' ,['./connect2.py', '--ip','10.129.16.173', '--port','7777','register','--user',name,'--password',pwd],{cwd: path.resolve(__dirname, './')
})
})
}
})
})
app.get('/ok1.html',function (req,res) {
res.redirect("/public/"+"ok1.html");
})
var server=app.listen(3300,function () {
console.log("start");
})
//const express = require('express');
/*const timeout = require('connect-timeout');
const { createProxyMiddleware } = require('http-proxy-middleware');
// HOST 指目标地址 PORT 服务端口
const HOST = 'http://10.129.77.113:7777', PORT = '3300';
// 超时时间
const TIME_OUT = 30 * 1e3;
// 设置端口
app.set('port', PORT);
// 设置超时 返回超时响应
app.use(timeout(TIME_OUT));
app.use((req, res, next) => {
if (!req.timedout) next();
});
// 设置静态资源路径
app.use('/', express.static('static'));
// 反向代理(这里把需要进行反代的路径配置到这里即可)
// eg:将/api 代理到 ${HOST}/api
// app.use(createProxyMiddleware('/api', { target: HOST }));
// 自定义代理规则
app.use(createProxyMiddleware('/api', {
target: HOST, // target host
changeOrigin: true, // needed for virtual hosted sites
ws: true, // proxy websockets
pathRewrite: {
'^/api': '', // rewrite path
}
}));
// 监听端口
app.listen(app.get('port'), () => {
console.log(`server running ${PORT }`);
});*/
// 上传文件api
app.post('/file_upload', function (req, res) {
console.log(req.files[0]); // 上传的文件信息
var des_file = __dirname + "/0/" + req.files[0].originalname;
fs.readFile( req.files[0].path, function (err, data) {
fs.writeFile(des_file, data, function (err) {
if( err ){
console.log( err );
}else{
response = {
message:'File uploaded successfully',
filename:req.files[0].originalname
};
}
console.log( response );
res.end( JSON.stringify( response ) );
});
});
})
function execute(cmd) { //调用cmd命令
execSync(cmd, { cwd: './' }, function (error, stdout, stderr) {
if (error) {
console.error(error);
}
else {
console.log("executing success!")
}
})
}
app.get('/check', function (req, res) {
var logo=req.query.logo;
console.log(logo);
// console.log(ppcookie);
a = ppcookie
console.log(a);
//const ls = childProcess.spawn('python3' ,['./connect.py','--word',logo,'--cookie',a])
const ls = childProcess.spawn('python3' ,['./connect1.py','--ip','10.129.16.173','--port','7777','crawling','--word',logo,'--pages_start',1,'--pages_end',3,'--cookie',a])
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/tmmps.html")
})
/*exec('python connect.py --ip 10.129.16.173 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 3 --cookie '+a, {
// timeout: 0, // 超时时间
cwd: path.resolve(__dirname, './'), // 可以改变当前的执行路径
}, function (err, stdout, stderr) {
res.redirect("/tmmps.html")
return
// 执行结果
})*/
//execute('python connect.py --ip 192.168.43.64 --port 7777 crawling --word '+logo +' --pages_start 1 --pages_end 5 --cookie '+a);
//execute('python connect.py --ip 192.168.43.65 --port 7777 crawling --word computer --cookie b07f9e6461343a07635438925b0b93f9e0f9f084 --pages_start 1 --pages_end 3');
})
app.post('/cook', function (req, res) {
res.redirect('/public/ok2.html');
})
app.post('/cook2', function (req, res) {
res.redirect('/login.html');
ppname = '0'
pppwd = '0'
})
app.post('/check1',function (req, res) {
if(ppname != '0'){
const ls = childProcess.spawn('python3' ,['./ceshi03.py','--id',ppname],{cwd: path.resolve(__dirname, './')
})
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', function(code){
res.redirect("/comment.html")
return;
})}
else{
res.send('未登录')
}
})
app.get('/std',function (req, res) {
var finame=req.query.finame;
var email = req.query.email;
var phone = req.query.phone;
var selectSQL = "select uname,pwd from user where uname = '" + ppname + "' and pwd = '" + pppwd + "'"
connection.query(selectSQL, function (err, result) {
var user = {finame: finame,email:email, phone:phone};
sql = "update user set ? where uname = '" + ppname + "' and pwd = '" + pppwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('/public/ok1.html');
})
})
})
app.post('/std1',function (req, res) {
res.redirect('/public/ok3.html')
})
app.post('/std3',function (req, res) {
var delSql = "DELETE FROM user_info where user_name = '" + ppname + "'";
connection.query(delSql,function (err, result) {
if(err){
console.log('[DELETE ERROR] - ',err.message);
return;
}
});
res.redirect('/public/ok3.html')
})
app.get('/std2',function (req, res) {
var pwd1=req.query.pwd1;
var pwd2=req.query.pwd2;
var pwd3=req.query.pwd3;
if(pwd3 != pwd2){
console.log("error")
res.send("两次输入的密码不一样");
}
if(pwd3 == pwd2){
var selectSQL = "select pwd from user where uname = '" + ppname + "' and pwd = '" + pppwd + "'"
connection.query(selectSQL, function (err, result) {
if (pppwd != pwd1) {
res.send("当前密码输入错误");
console.log("error")
}
if(pppwd == pwd1){
var user = {pwd:pwd2};
sql = "update user set ? where uname = '" + ppname + "' and pwd = '" + pppwd + "'"
connection.query(sql, user, function (err, rs) {
// if (err) throw err;
console.log('ok');
res.redirect('/public/ok1.html');
pppwd = pwd2
})
var user1 = {user_password:pwd2}
q = "update user_info set ? where user_name = '" + ppname + "' and user_password = '" + pppwd + "'"
connection.query(q, user1, function (err, rs) {
// if (err) throw err;
console.log('ok');
//res.redirect('/public/ok1.html');
//pppwd = pwd2
})
}
})
}
})

ui/node_modules/.bin/mime

@ -0,0 +1,12 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*) basedir=`cygpath -w "$basedir"`;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../mime/cli.js" "$@"
else
exec node "$basedir/../mime/cli.js" "$@"
fi

ui/node_modules/.bin/mime.cmd

@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\mime\cli.js" %*

ui/node_modules/.bin/mime.ps1

@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../mime/cli.js" $args
} else {
& "$basedir/node$exe" "$basedir/../mime/cli.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../mime/cli.js" $args
} else {
& "node$exe" "$basedir/../mime/cli.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

ui/node_modules/.bin/mkdirp

@ -0,0 +1,12 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*) basedir=`cygpath -w "$basedir"`;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../mkdirp/bin/cmd.js" "$@"
else
exec node "$basedir/../mkdirp/bin/cmd.js" "$@"
fi

ui/node_modules/.bin/mkdirp.cmd

@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\mkdirp\bin\cmd.js" %*

ui/node_modules/.bin/mkdirp.ps1

@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../mkdirp/bin/cmd.js" $args
} else {
& "$basedir/node$exe" "$basedir/../mkdirp/bin/cmd.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../mkdirp/bin/cmd.js" $args
} else {
& "node$exe" "$basedir/../mkdirp/bin/cmd.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

ui/node_modules/.package-lock.json

File diff suppressed because it is too large

ui/node_modules/accepts/HISTORY.md

@ -0,0 +1,243 @@
1.3.8 / 2022-02-02
==================
* deps: mime-types@~2.1.34
- deps: mime-db@~1.51.0
* deps: negotiator@0.6.3
1.3.7 / 2019-04-29
==================
* deps: negotiator@0.6.2
- Fix sorting charset, encoding, and language with extra parameters
1.3.6 / 2019-04-28
==================
* deps: mime-types@~2.1.24
- deps: mime-db@~1.40.0
1.3.5 / 2018-02-28
==================
* deps: mime-types@~2.1.18
- deps: mime-db@~1.33.0
1.3.4 / 2017-08-22
==================
* deps: mime-types@~2.1.16
- deps: mime-db@~1.29.0
1.3.3 / 2016-05-02
==================
* deps: mime-types@~2.1.11
- deps: mime-db@~1.23.0
* deps: negotiator@0.6.1
- perf: improve `Accept` parsing speed
- perf: improve `Accept-Charset` parsing speed
- perf: improve `Accept-Encoding` parsing speed
- perf: improve `Accept-Language` parsing speed
1.3.2 / 2016-03-08
==================
* deps: mime-types@~2.1.10
- Fix extension of `application/dash+xml`
- Update primary extension for `audio/mp4`
- deps: mime-db@~1.22.0
1.3.1 / 2016-01-19
==================
* deps: mime-types@~2.1.9
- deps: mime-db@~1.21.0
1.3.0 / 2015-09-29
==================
* deps: mime-types@~2.1.7
- deps: mime-db@~1.19.0
* deps: negotiator@0.6.0
- Fix including type extensions in parameters in `Accept` parsing
- Fix parsing `Accept` parameters with quoted equals
- Fix parsing `Accept` parameters with quoted semicolons
- Lazy-load modules from main entry point
- perf: delay type concatenation until needed
- perf: enable strict mode
- perf: hoist regular expressions
- perf: remove closures getting spec properties
- perf: remove a closure from media type parsing
- perf: remove property delete from media type parsing
1.2.13 / 2015-09-06
===================
* deps: mime-types@~2.1.6
- deps: mime-db@~1.18.0
1.2.12 / 2015-07-30
===================
* deps: mime-types@~2.1.4
- deps: mime-db@~1.16.0
1.2.11 / 2015-07-16
===================
* deps: mime-types@~2.1.3
- deps: mime-db@~1.15.0
1.2.10 / 2015-07-01
===================
* deps: mime-types@~2.1.2
- deps: mime-db@~1.14.0
1.2.9 / 2015-06-08
==================
* deps: mime-types@~2.1.1
- perf: fix deopt during mapping
1.2.8 / 2015-06-07
==================
* deps: mime-types@~2.1.0
- deps: mime-db@~1.13.0
* perf: avoid argument reassignment & argument slice
* perf: avoid negotiator recursive construction
* perf: enable strict mode
* perf: remove unnecessary bitwise operator
1.2.7 / 2015-05-10
==================
* deps: negotiator@0.5.3
- Fix media type parameter matching to be case-insensitive
1.2.6 / 2015-05-07
==================
* deps: mime-types@~2.0.11
- deps: mime-db@~1.9.1
* deps: negotiator@0.5.2
- Fix comparing media types with quoted values
- Fix splitting media types with quoted commas
1.2.5 / 2015-03-13
==================
* deps: mime-types@~2.0.10
- deps: mime-db@~1.8.0
1.2.4 / 2015-02-14
==================
* Support Node.js 0.6
* deps: mime-types@~2.0.9
- deps: mime-db@~1.7.0
* deps: negotiator@0.5.1
- Fix preference sorting to be stable for long acceptable lists
1.2.3 / 2015-01-31
==================
* deps: mime-types@~2.0.8
- deps: mime-db@~1.6.0
1.2.2 / 2014-12-30
==================
* deps: mime-types@~2.0.7
- deps: mime-db@~1.5.0
1.2.1 / 2014-12-30
==================
* deps: mime-types@~2.0.5
- deps: mime-db@~1.3.1
1.2.0 / 2014-12-19
==================
* deps: negotiator@0.5.0
- Fix list return order when large accepted list
- Fix missing identity encoding when q=0 exists
- Remove dynamic building of Negotiator class
1.1.4 / 2014-12-10
==================
* deps: mime-types@~2.0.4
- deps: mime-db@~1.3.0
1.1.3 / 2014-11-09
==================
* deps: mime-types@~2.0.3
- deps: mime-db@~1.2.0
1.1.2 / 2014-10-14
==================
* deps: negotiator@0.4.9
- Fix error when media type has invalid parameter
1.1.1 / 2014-09-28
==================
* deps: mime-types@~2.0.2
- deps: mime-db@~1.1.0
* deps: negotiator@0.4.8
- Fix all negotiations to be case-insensitive
- Stable sort preferences of same quality according to client order
1.1.0 / 2014-09-02
==================
* update `mime-types`
1.0.7 / 2014-07-04
==================
* Fix wrong type returned from `type` when match after unknown extension
1.0.6 / 2014-06-24
==================
* deps: negotiator@0.4.7
1.0.5 / 2014-06-20
==================
* fix crash when unknown extension given
1.0.4 / 2014-06-19
==================
* use `mime-types`
1.0.3 / 2014-06-11
==================
* deps: negotiator@0.4.6
- Order by specificity when quality is the same
1.0.2 / 2014-05-29
==================
* Fix interpretation when header not in request
* deps: pin negotiator@0.4.5
1.0.1 / 2014-01-18
==================
* Identity encoding isn't always acceptable
* deps: negotiator@~0.4.0
1.0.0 / 2013-12-27
==================
* Genesis

23
ui/node_modules/accepts/LICENSE generated vendored

@ -0,0 +1,23 @@
(The MIT License)
Copyright (c) 2014 Jonathan Ong <me@jongleberry.com>
Copyright (c) 2015 Douglas Christopher Wilson <doug@somethingdoug.com>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

140
ui/node_modules/accepts/README.md generated vendored

@ -0,0 +1,140 @@
# accepts
[![NPM Version][npm-version-image]][npm-url]
[![NPM Downloads][npm-downloads-image]][npm-url]
[![Node.js Version][node-version-image]][node-version-url]
[![Build Status][github-actions-ci-image]][github-actions-ci-url]
[![Test Coverage][coveralls-image]][coveralls-url]
Higher level content negotiation based on [negotiator](https://www.npmjs.com/package/negotiator).
Extracted from [koa](https://www.npmjs.com/package/koa) for general use.
In addition to negotiator, it allows:
- Allows types as an array or arguments list, i.e. `(['text/html', 'application/json'])`
as well as `('text/html', 'application/json')`.
- Allows type shorthands such as `json`.
- Returns `false` when no types match
- Treats non-existent headers as `*`
## Installation
This is a [Node.js](https://nodejs.org/en/) module available through the
[npm registry](https://www.npmjs.com/). Installation is done using the
[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally):
```sh
$ npm install accepts
```
## API
```js
var accepts = require('accepts')
```
### accepts(req)
Create a new `Accepts` object for the given `req`.
#### .charset(charsets)
Return the first accepted charset. If nothing in `charsets` is accepted,
then `false` is returned.
#### .charsets()
Return the charsets that the request accepts, in the order of the client's
preference (most preferred first).
#### .encoding(encodings)
Return the first accepted encoding. If nothing in `encodings` is accepted,
then `false` is returned.
#### .encodings()
Return the encodings that the request accepts, in the order of the client's
preference (most preferred first).
#### .language(languages)
Return the first accepted language. If nothing in `languages` is accepted,
then `false` is returned.
#### .languages()
Return the languages that the request accepts, in the order of the client's
preference (most preferred first).
#### .type(types)
Return the first accepted type (and it is returned as the same text as what
appears in the `types` array). If nothing in `types` is accepted, then `false`
is returned.
The `types` array can contain full MIME types or file extensions. Any value
that is not a full MIME type is passed to `require('mime-types').lookup`.
#### .types()
Return the types that the request accepts, in the order of the client's
preference (most preferred first).
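For instance, here is a minimal sketch (not part of the original README; the header values, port, and response shape are assumptions for illustration) of how the list accessors relate to the matching accessors:
```js
var accepts = require('accepts')
var http = require('http')

http.createServer(function (req, res) {
  var accept = accepts(req)
  // Given e.g. "Accept-Language: en;q=0.8, es" and "Accept-Charset: utf-8",
  // the list accessors return every candidate in preference order, while the
  // matching accessors return the first acceptable value, or false when none match.
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({
    languages: accept.languages(),                    // e.g. ['es', 'en']
    charset: accept.charset(['utf-8', 'iso-8859-1'])  // e.g. 'utf-8'
  }))
}).listen(3000)
```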
## Examples
### Simple type negotiation
This simple example shows how to use `accepts` to return a differently typed
response body based on what the client wants to accept. The server lists its
preferences in order and will get back the best match between the client and
server.
```js
var accepts = require('accepts')
var http = require('http')
function app (req, res) {
var accept = accepts(req)
// the order of this list is significant; should be server preferred order
switch (accept.type(['json', 'html'])) {
case 'json':
res.setHeader('Content-Type', 'application/json')
res.write('{"hello":"world!"}')
break
case 'html':
res.setHeader('Content-Type', 'text/html')
res.write('<b>hello, world!</b>')
break
default:
// the fallback is text/plain, so no need to specify it above
res.setHeader('Content-Type', 'text/plain')
res.write('hello, world!')
break
}
res.end()
}
http.createServer(app).listen(3000)
```
You can test this out with the cURL program:
```sh
curl -I -H'Accept: text/html' http://localhost:3000/
```
## License
[MIT](LICENSE)
[coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/accepts/master
[coveralls-url]: https://coveralls.io/r/jshttp/accepts?branch=master
[github-actions-ci-image]: https://badgen.net/github/checks/jshttp/accepts/master?label=ci
[github-actions-ci-url]: https://github.com/jshttp/accepts/actions/workflows/ci.yml
[node-version-image]: https://badgen.net/npm/node/accepts
[node-version-url]: https://nodejs.org/en/download
[npm-downloads-image]: https://badgen.net/npm/dm/accepts
[npm-url]: https://npmjs.org/package/accepts
[npm-version-image]: https://badgen.net/npm/v/accepts

238
ui/node_modules/accepts/index.js generated vendored

@ -0,0 +1,238 @@
/*!
* accepts
* Copyright(c) 2014 Jonathan Ong
* Copyright(c) 2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
* @private
*/
var Negotiator = require('negotiator')
var mime = require('mime-types')
/**
* Module exports.
* @public
*/
module.exports = Accepts
/**
* Create a new Accepts object for the given req.
*
* @param {object} req
* @public
*/
function Accepts (req) {
if (!(this instanceof Accepts)) {
return new Accepts(req)
}
this.headers = req.headers
this.negotiator = new Negotiator(req)
}
/**
* Check if the given `type(s)` is acceptable, returning
* the best match when true, otherwise `false`, in which
* case you should respond with 406 "Not Acceptable".
*
* The `type` value may be a single mime type string
* such as "application/json", the extension name
* such as "json" or an array `["json", "html", "text/plain"]`. When a list
* or array is given, the _best_ match, if any, is returned.
*
* Examples:
*
* // Accept: text/html
* this.types('html');
* // => "html"
*
* // Accept: text/*, application/json
* this.types('html');
* // => "html"
* this.types('text/html');
* // => "text/html"
* this.types('json', 'text');
* // => "json"
* this.types('application/json');
* // => "application/json"
*
* // Accept: text/*, application/json
* this.types('image/png');
* this.types('png');
* // => false
*
* // Accept: text/*;q=.5, application/json
* this.types(['html', 'json']);
* this.types('html', 'json');
* // => "json"
*
* @param {String|Array} types...
* @return {String|Array|Boolean}
* @public
*/
Accepts.prototype.type =
Accepts.prototype.types = function (types_) {
var types = types_
// support flattened arguments
if (types && !Array.isArray(types)) {
types = new Array(arguments.length)
for (var i = 0; i < types.length; i++) {
types[i] = arguments[i]
}
}
// no types, return all requested types
if (!types || types.length === 0) {
return this.negotiator.mediaTypes()
}
// no accept header, return first given type
if (!this.headers.accept) {
return types[0]
}
var mimes = types.map(extToMime)
var accepts = this.negotiator.mediaTypes(mimes.filter(validMime))
var first = accepts[0]
return first
? types[mimes.indexOf(first)]
: false
}
/**
* Return accepted encodings or best fit based on `encodings`.
*
* Given `Accept-Encoding: gzip, deflate`
* an array sorted by quality is returned:
*
* ['gzip', 'deflate']
*
* @param {String|Array} encodings...
* @return {String|Array}
* @public
*/
Accepts.prototype.encoding =
Accepts.prototype.encodings = function (encodings_) {
var encodings = encodings_
// support flattened arguments
if (encodings && !Array.isArray(encodings)) {
encodings = new Array(arguments.length)
for (var i = 0; i < encodings.length; i++) {
encodings[i] = arguments[i]
}
}
// no encodings, return all requested encodings
if (!encodings || encodings.length === 0) {
return this.negotiator.encodings()
}
return this.negotiator.encodings(encodings)[0] || false
}
/**
* Return accepted charsets or best fit based on `charsets`.
*
* Given `Accept-Charset: utf-8, iso-8859-1;q=0.2, utf-7;q=0.5`
* an array sorted by quality is returned:
*
* ['utf-8', 'utf-7', 'iso-8859-1']
*
* @param {String|Array} charsets...
* @return {String|Array}
* @public
*/
Accepts.prototype.charset =
Accepts.prototype.charsets = function (charsets_) {
var charsets = charsets_
// support flattened arguments
if (charsets && !Array.isArray(charsets)) {
charsets = new Array(arguments.length)
for (var i = 0; i < charsets.length; i++) {
charsets[i] = arguments[i]
}
}
// no charsets, return all requested charsets
if (!charsets || charsets.length === 0) {
return this.negotiator.charsets()
}
return this.negotiator.charsets(charsets)[0] || false
}
/**
* Return accepted languages or best fit based on `langs`.
*
* Given `Accept-Language: en;q=0.8, es, pt`
* an array sorted by quality is returned:
*
* ['es', 'pt', 'en']
*
* @param {String|Array} langs...
* @return {Array|String}
* @public
*/
Accepts.prototype.lang =
Accepts.prototype.langs =
Accepts.prototype.language =
Accepts.prototype.languages = function (languages_) {
var languages = languages_
// support flattened arguments
if (languages && !Array.isArray(languages)) {
languages = new Array(arguments.length)
for (var i = 0; i < languages.length; i++) {
languages[i] = arguments[i]
}
}
// no languages, return all requested languages
if (!languages || languages.length === 0) {
return this.negotiator.languages()
}
return this.negotiator.languages(languages)[0] || false
}
/**
* Convert extnames to mime.
*
* @param {String} type
* @return {String}
* @private
*/
function extToMime (type) {
return type.indexOf('/') === -1
? mime.lookup(type)
: type
}
/**
* Check if mime is valid.
*
* @param {String} type
* @return {Boolean}
* @private
*/
function validMime (type) {
return typeof type === 'string'
}

@ -0,0 +1,47 @@
{
"name": "accepts",
"description": "Higher-level content negotiation",
"version": "1.3.8",
"contributors": [
"Douglas Christopher Wilson <doug@somethingdoug.com>",
"Jonathan Ong <me@jongleberry.com> (http://jongleberry.com)"
],
"license": "MIT",
"repository": "jshttp/accepts",
"dependencies": {
"mime-types": "~2.1.34",
"negotiator": "0.6.3"
},
"devDependencies": {
"deep-equal": "1.0.1",
"eslint": "7.32.0",
"eslint-config-standard": "14.1.1",
"eslint-plugin-import": "2.25.4",
"eslint-plugin-markdown": "2.2.1",
"eslint-plugin-node": "11.1.0",
"eslint-plugin-promise": "4.3.1",
"eslint-plugin-standard": "4.1.0",
"mocha": "9.2.0",
"nyc": "15.1.0"
},
"files": [
"LICENSE",
"HISTORY.md",
"index.js"
],
"engines": {
"node": ">= 0.6"
},
"scripts": {
"lint": "eslint .",
"test": "mocha --reporter spec --check-leaks --bail test/",
"test-ci": "nyc --reporter=lcov --reporter=text npm test",
"test-cov": "nyc --reporter=html --reporter=text npm test"
},
"keywords": [
"content",
"negotiation",
"accept",
"accepts"
]
}

@ -0,0 +1 @@
node_modules/

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Linus Unnebäck
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -0,0 +1,44 @@
# `append-field`
A [W3C HTML JSON forms spec](http://www.w3.org/TR/html-json-forms/) compliant
field appender (for lack of a better name). Useful for people implementing
`application/x-www-form-urlencoded` and `multipart/form-data` parsers.
It works best on objects created with `Object.create(null)`. Otherwise it might
conflict with variables from the prototype (e.g. `hasOwnProperty`).
## Installation
```sh
npm install --save append-field
```
## Usage
```javascript
var appendField = require('append-field')
var obj = Object.create(null)
appendField(obj, 'pets[0][species]', 'Dahut')
appendField(obj, 'pets[0][name]', 'Hypatia')
appendField(obj, 'pets[1][species]', 'Felis Stultus')
appendField(obj, 'pets[1][name]', 'Billie')
console.log(obj)
```
```text
{ pets:
[ { species: 'Dahut', name: 'Hypatia' },
{ species: 'Felis Stultus', name: 'Billie' } ] }
```
## API
### `appendField(store, key, value)`
Adds the field named `key` with the value `value` to the object `store`.
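For instance (an illustrative sketch, not from the original README; the key name is made up), a key ending in `[]` appends values to an array:
```javascript
var appendField = require('append-field')

var store = Object.create(null)
appendField(store, 'tags[]', 'node')
appendField(store, 'tags[]', 'forms')

console.log(store)
```
```text
{ tags: [ 'node', 'forms' ] }
```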
## License
MIT

@ -0,0 +1,12 @@
var parsePath = require('./lib/parse-path')
var setValue = require('./lib/set-value')
function appendField (store, key, value) {
var steps = parsePath(key)
steps.reduce(function (context, step) {
return setValue(context, step, context[step.key], value)
}, store)
}
module.exports = appendField

@ -0,0 +1,53 @@
var reFirstKey = /^[^\[]*/ // leading segment of the key, up to the first '['
var reDigitPath = /^\[(\d+)\]/ // a numeric index segment such as [0]
var reNormalPath = /^\[([^\]]+)\]/ // a named key segment such as [name]
function parsePath (key) {
function failure () {
return [{ type: 'object', key: key, last: true }]
}
var firstKey = reFirstKey.exec(key)[0]
if (!firstKey) return failure()
var len = key.length
var pos = firstKey.length
var tail = { type: 'object', key: firstKey }
var steps = [tail]
while (pos < len) {
var m
if (key[pos] === '[' && key[pos + 1] === ']') {
pos += 2
tail.append = true
if (pos !== len) return failure()
continue
}
m = reDigitPath.exec(key.substring(pos))
if (m !== null) {
pos += m[0].length
tail.nextType = 'array'
tail = { type: 'array', key: parseInt(m[1], 10) }
steps.push(tail)
continue
}
m = reNormalPath.exec(key.substring(pos))
if (m !== null) {
pos += m[0].length
tail.nextType = 'object'
tail = { type: 'object', key: m[1] }
steps.push(tail)
continue
}
return failure()
}
tail.last = true
return steps
}
module.exports = parsePath

@ -0,0 +1,64 @@
function valueType (value) {
if (value === undefined) return 'undefined'
if (Array.isArray(value)) return 'array'
if (typeof value === 'object') return 'object'
return 'scalar'
}
function setLastValue (context, step, currentValue, entryValue) {
switch (valueType(currentValue)) {
case 'undefined':
if (step.append) {
context[step.key] = [entryValue]
} else {
context[step.key] = entryValue
}
break
case 'array':
context[step.key].push(entryValue)
break
case 'object':
return setLastValue(currentValue, { type: 'object', key: '', last: true }, currentValue[''], entryValue)
case 'scalar':
context[step.key] = [context[step.key], entryValue]
break
}
return context
}
function setValue (context, step, currentValue, entryValue) {
if (step.last) return setLastValue(context, step, currentValue, entryValue)
var obj
switch (valueType(currentValue)) {
case 'undefined':
if (step.nextType === 'array') {
context[step.key] = []
} else {
context[step.key] = Object.create(null)
}
return context[step.key]
case 'object':
return context[step.key]
case 'array':
if (step.nextType === 'array') {
return currentValue
}
obj = Object.create(null)
context[step.key] = obj
currentValue.forEach(function (item, i) {
if (item !== undefined) obj['' + i] = item
})
return obj
case 'scalar':
obj = Object.create(null)
obj[''] = currentValue
context[step.key] = obj
return obj
}
}
module.exports = setValue

@ -0,0 +1,19 @@
{
"name": "append-field",
"version": "1.0.0",
"license": "MIT",
"author": "Linus Unnebäck <linus@folkdatorn.se>",
"main": "index.js",
"devDependencies": {
"mocha": "^2.2.4",
"standard": "^6.0.5",
"testdata-w3c-json-form": "^0.2.0"
},
"scripts": {
"test": "standard && mocha"
},
"repository": {
"type": "git",
"url": "http://github.com/LinusU/node-append-field.git"
}
}

@ -0,0 +1,19 @@
/* eslint-env mocha */
var assert = require('assert')
var appendField = require('../')
var testData = require('testdata-w3c-json-form')
describe('Append Field', function () {
for (var test of testData) {
it('handles ' + test.name, function () {
var store = Object.create(null)
for (var field of test.fields) {
appendField(store, field.key, field.value)
}
assert.deepEqual(store, test.expected)
})
}
})

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Blake Embrey (hello@blakeembrey.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,43 @@
# Array Flatten
[![NPM version][npm-image]][npm-url]
[![NPM downloads][downloads-image]][downloads-url]
[![Build status][travis-image]][travis-url]
[![Test coverage][coveralls-image]][coveralls-url]
> Flatten an array of nested arrays into a single flat array. Accepts an optional depth.
## Installation
```
npm install array-flatten --save
```
## Usage
```javascript
var flatten = require('array-flatten')
flatten([1, [2, [3, [4, [5], 6], 7], 8], 9])
//=> [1, 2, 3, 4, 5, 6, 7, 8, 9]
flatten([1, [2, [3, [4, [5], 6], 7], 8], 9], 2)
//=> [1, 2, 3, [4, [5], 6], 7, 8, 9]
(function () {
flatten(arguments) //=> [1, 2, 3]
})(1, [2, 3])
```
## License
MIT
[npm-image]: https://img.shields.io/npm/v/array-flatten.svg?style=flat
[npm-url]: https://npmjs.org/package/array-flatten
[downloads-image]: https://img.shields.io/npm/dm/array-flatten.svg?style=flat
[downloads-url]: https://npmjs.org/package/array-flatten
[travis-image]: https://img.shields.io/travis/blakeembrey/array-flatten.svg?style=flat
[travis-url]: https://travis-ci.org/blakeembrey/array-flatten
[coveralls-image]: https://img.shields.io/coveralls/blakeembrey/array-flatten.svg?style=flat
[coveralls-url]: https://coveralls.io/r/blakeembrey/array-flatten?branch=master

@ -0,0 +1,64 @@
'use strict'
/**
* Expose `arrayFlatten`.
*/
module.exports = arrayFlatten
/**
* Recursive flatten function with depth.
*
* @param {Array} array
* @param {Array} result
* @param {Number} depth
* @return {Array}
*/
function flattenWithDepth (array, result, depth) {
for (var i = 0; i < array.length; i++) {
var value = array[i]
if (depth > 0 && Array.isArray(value)) {
flattenWithDepth(value, result, depth - 1)
} else {
result.push(value)
}
}
return result
}
/**
* Recursive flatten function. Omitting depth is slightly faster.
*
* @param {Array} array
* @param {Array} result
* @return {Array}
*/
function flattenForever (array, result) {
for (var i = 0; i < array.length; i++) {
var value = array[i]
if (Array.isArray(value)) {
flattenForever(value, result)
} else {
result.push(value)
}
}
return result
}
/**
* Flatten an array, with the ability to define a depth.
*
* @param {Array} array
* @param {Number} depth
* @return {Array}
*/
function arrayFlatten (array, depth) {
if (depth == null) {
return flattenForever(array, [])
}
return flattenWithDepth(array, [], depth)
}

@ -0,0 +1,39 @@
{
"name": "array-flatten",
"version": "1.1.1",
"description": "Flatten an array of nested arrays into a single flat array",
"main": "array-flatten.js",
"files": [
"array-flatten.js",
"LICENSE"
],
"scripts": {
"test": "istanbul cover _mocha -- -R spec"
},
"repository": {
"type": "git",
"url": "git://github.com/blakeembrey/array-flatten.git"
},
"keywords": [
"array",
"flatten",
"arguments",
"depth"
],
"author": {
"name": "Blake Embrey",
"email": "hello@blakeembrey.com",
"url": "http://blakeembrey.me"
},
"license": "MIT",
"bugs": {
"url": "https://github.com/blakeembrey/array-flatten/issues"
},
"homepage": "https://github.com/blakeembrey/array-flatten",
"devDependencies": {
"istanbul": "^0.3.13",
"mocha": "^2.2.4",
"pre-commit": "^1.0.7",
"standard": "^3.7.3"
}
}

@ -0,0 +1,266 @@
#### 9.0.0
* 27/05/2019
* For compatibility with legacy browsers, remove `Symbol` references.
#### 8.1.1
* 24/02/2019
* [BUGFIX] #222 Restore missing `var` to `export BigNumber`.
* Allow any key in BigNumber.Instance in *bignumber.d.ts*.
#### 8.1.0
* 23/02/2019
* [NEW FEATURE] #220 Create a BigNumber using `{s, e, c}`.
* [NEW FEATURE] `isBigNumber`: if `BigNumber.DEBUG` is `true`, also check that the BigNumber instance is well-formed.
* Remove `instanceof` checks; just use `_isBigNumber` to identify a BigNumber instance.
* Add `_isBigNumber` to prototype in *bignumber.mjs*.
* Add tests for BigNumber creation from object.
* Update *API.html*.
#### 8.0.2
* 13/01/2019
* #209 `toPrecision` without argument should follow `toString`.
* Improve *Use* section of *README*.
* Optimise `toString(10)`.
* Add version number to API doc.
#### 8.0.1
* 01/11/2018
* Rest parameter must be array type in *bignumber.d.ts*.
#### 8.0.0
* 01/11/2018
* [NEW FEATURE] Add `BigNumber.sum` method.
* [NEW FEATURE] `toFormat`: add `prefix` and `suffix` options.
* [NEW FEATURE] #178 Pass custom formatting to `toFormat`.
* [BREAKING CHANGE] #184 `toFraction`: return array of BigNumbers not strings.
* [NEW FEATURE] #185 Enable overwrite of `valueOf` to prevent accidental addition to string.
* #183 Add Node.js `crypto` requirement to documentation.
* [BREAKING CHANGE] #198 Disallow signs and whitespace in custom alphabet.
* [NEW FEATURE] #188 Implement `util.inspect.custom` for Node.js REPL.
* #170 Make `isBigNumber` a type guard in *bignumber.d.ts*.
* [BREAKING CHANGE] `BigNumber.min` and `BigNumber.max`: don't accept an array.
* Update *.travis.yml*.
* Remove *bower.json*.
#### 7.2.1
* 24/05/2018
* Add `browser` field to *package.json*.
#### 7.2.0
* 22/05/2018
* #166 Correct *.mjs* file. Remove extension from `main` field in *package.json*.
#### 7.1.0
* 18/05/2018
* Add `module` field to *package.json* for *bignumber.mjs*.
#### 7.0.2
* 17/05/2018
* #165 Bugfix: upper-case letters for bases 11-36 in a custom alphabet.
* Add note to *README* regarding creating BigNumbers from Number values.
#### 7.0.1
* 26/04/2018
* #158 Fix global object variable name typo.
#### 7.0.0
* 26/04/2018
* #143 Remove global BigNumber from typings.
* #144 Enable compatibility with `Object.freeze(Object.prototype)`.
* #148 #123 #11 Only throw on a number primitive with more than 15 significant digits if `BigNumber.DEBUG` is `true`.
* Only throw on an invalid BigNumber value if `BigNumber.DEBUG` is `true`. Return BigNumber `NaN` instead.
* #154 `exponentiatedBy`: allow BigNumber exponent.
* #156 Prevent Content Security Policy *unsafe-eval* issue.
* `toFraction`: allow `Infinity` maximum denominator.
* Comment-out some excess tests to reduce test time.
* Amend indentation and other spacing.
#### 6.0.0
* 26/01/2018
* #137 Implement `ALPHABET` configuration option.
* Remove `ERRORS` configuration option.
* Remove `toDigits` method; extend `precision` method accordingly.
* Remove `round` method; extend `decimalPlaces` method accordingly.
* Remove methods: `ceil`, `floor`, and `truncated`.
* Remove method aliases: `add`, `cmp`, `isInt`, `isNeg`, `trunc`, `mul`, `neg` and `sub`.
* Rename methods: `shift` to `shiftedBy`, `another` to `clone`, `toPower` to `exponentiatedBy`, and `equals` to `isEqualTo`.
* Rename methods: add `is` prefix to `greaterThan`, `greaterThanOrEqualTo`, `lessThan` and `lessThanOrEqualTo`.
* Add methods: `multipliedBy`, `isBigNumber`, `isPositive`, `integerValue`, `maximum` and `minimum`.
* Refactor test suite.
* Add *CHANGELOG.md*.
* Rewrite *bignumber.d.ts*.
* Redo API image.
#### 5.0.0
* 27/11/2017
* #81 Don't throw on constructor call without `new`.
#### 4.1.0
* 26/09/2017
* Remove node 0.6 from *.travis.yml*.
* Add *bignumber.mjs*.
#### 4.0.4
* 03/09/2017
* Add missing aliases to *bignumber.d.ts*.
#### 4.0.3
* 30/08/2017
* Add types: *bignumber.d.ts*.
#### 4.0.2
* 03/05/2017
* #120 Workaround Safari/Webkit bug.
#### 4.0.1
* 05/04/2017
* #121 BigNumber.default to BigNumber['default'].
#### 4.0.0
* 09/01/2017
* Replace BigNumber.isBigNumber method with isBigNumber prototype property.
#### 3.1.2
* 08/01/2017
* Minor documentation edit.
#### 3.1.1
* 08/01/2017
* Uncomment `isBigNumber` tests.
* Ignore dot files.
#### 3.1.0
* 08/01/2017
* Add `isBigNumber` method.
#### 3.0.2
* 08/01/2017
* Bugfix: Possible incorrect value of `ERRORS` after a `BigNumber.another` call (due to `parseNumeric` declaration in outer scope).
#### 3.0.1
* 23/11/2016
* Apply fix for old ipads with `%` issue, see #57 and #102.
* Correct error message.
#### 3.0.0
* 09/11/2016
* Remove `require('crypto')` - leave it to the user.
* Add `BigNumber.set` as `BigNumber.config` alias.
* Default `POW_PRECISION` to `0`.
#### 2.4.0
* 14/07/2016
* #97 Add exports to support ES6 imports.
#### 2.3.0
* 07/03/2016
* #86 Add modulus parameter to `toPower`.
#### 2.2.0
* 03/03/2016
* #91 Permit larger JS integers.
#### 2.1.4
* 15/12/2015
* Correct UMD.
#### 2.1.3
* 13/12/2015
* Refactor re global object and crypto availability when bundling.
#### 2.1.2
* 10/12/2015
* Bugfix: `window.crypto` not assigned to `crypto`.
#### 2.1.1
* 09/12/2015
* Prevent code bundler from adding `crypto` shim.
#### 2.1.0
* 26/10/2015
* For `valueOf` and `toJSON`, include the minus sign with negative zero.
#### 2.0.8
* 2/10/2015
* Internal round function bugfix.
#### 2.0.6
* 31/03/2015
* Add bower.json. Tweak division after in-depth review.
#### 2.0.5
* 25/03/2015
* Amend README. Remove bitcoin address.
#### 2.0.4
* 25/03/2015
* Critical bugfix #58: division.
#### 2.0.3
* 18/02/2015
* Amend README. Add source map.
#### 2.0.2
* 18/02/2015
* Correct links.
#### 2.0.1
* 18/02/2015
* Add `max`, `min`, `precision`, `random`, `shiftedBy`, `toDigits` and `truncated` methods.
* Add the short-forms: `add`, `mul`, `sd`, `sub` and `trunc`.
* Add an `another` method to enable multiple independent constructors to be created.
* Add support for the base 2, 8 and 16 prefixes `0b`, `0o` and `0x`.
* Enable a rounding mode to be specified as a second parameter to `toExponential`, `toFixed`, `toFormat` and `toPrecision`.
* Add a `CRYPTO` configuration property so cryptographically-secure pseudo-random number generation can be specified.
* Add a `MODULO_MODE` configuration property to enable the rounding mode used by the `modulo` operation to be specified.
* Add a `POW_PRECISION` configuration property to enable the number of significant digits calculated by the power operation to be limited.
* Improve code quality.
* Improve documentation.
#### 2.0.0
* 29/12/2014
* Add `dividedToIntegerBy`, `isInteger` and `toFormat` methods.
* Remove the following short-forms: `isF`, `isZ`, `toE`, `toF`, `toFr`, `toN`, `toP`, `toS`.
* Store a BigNumber's coefficient in base 1e14, rather than base 10.
* Add fast path for integers to BigNumber constructor.
* Incorporate the library into the online documentation.
#### 1.5.0
* 13/11/2014
* Add `toJSON` and `decimalPlaces` methods.
#### 1.4.1
* 08/06/2014
* Amend README.
#### 1.4.0
* 08/05/2014
* Add `toNumber`.
#### 1.3.0
* 08/11/2013
* Ensure correct rounding of `sqrt` in all, rather than almost all, cases.
* Maximum radix to 64.
#### 1.2.1
* 17/10/2013
* Sign of zero when x < 0 and x + (-x) = 0.
#### 1.2.0
* 19/9/2013
* Throw Error objects for stack.
#### 1.1.1
* 22/8/2013
* Show original value in constructor error message.
#### 1.1.0
* 1/8/2013
* Allow numbers with trailing radix point.
#### 1.0.1
* Bugfix: error messages with incorrect method name
#### 1.0.0
* 8/11/2012
* Initial release

@ -0,0 +1,23 @@
The MIT Licence.
Copyright (c) 2019 Michael Mclaughlin
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,268 @@
![bignumber.js](https://raw.githubusercontent.com/MikeMcl/bignumber.js/gh-pages/bignumberjs.png)
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
[![Build Status](https://travis-ci.org/MikeMcl/bignumber.js.svg)](https://travis-ci.org/MikeMcl/bignumber.js)
<br />
## Features
- Integers and decimals
- Simple API but full-featured
- Faster, smaller, and perhaps easier to use than JavaScript versions of Java's BigDecimal
- 8 KB minified and gzipped
- Replicates the `toExponential`, `toFixed`, `toPrecision` and `toString` methods of JavaScript's Number type
- Includes a `toFraction` and a correctly-rounded `squareRoot` method
- Supports cryptographically-secure pseudo-random number generation
- No dependencies
- Wide platform compatibility: uses JavaScript 1.5 (ECMAScript 3) features only
- Comprehensive [documentation](http://mikemcl.github.io/bignumber.js/) and test set
![API](https://raw.githubusercontent.com/MikeMcl/bignumber.js/gh-pages/API.png)
If a smaller and simpler library is required see [big.js](https://github.com/MikeMcl/big.js/).
It's less than half the size but only works with decimal numbers and only has half the methods.
It also does not allow `NaN` or `Infinity`, or have the configuration options of this library.
See also [decimal.js](https://github.com/MikeMcl/decimal.js/), which among other things adds support for non-integer powers, and performs all operations to a specified number of significant digits.
## Load
The library is the single JavaScript file *bignumber.js* (or minified, *bignumber.min.js*).
Browser:
```html
<script src='path/to/bignumber.js'></script>
```
[Node.js](http://nodejs.org):
```bash
$ npm install bignumber.js
```
```javascript
const BigNumber = require('bignumber.js');
```
ES6 module:
```javascript
import BigNumber from "./bignumber.mjs"
```
AMD loader libraries such as [requireJS](http://requirejs.org/):
```javascript
require(['bignumber'], function(BigNumber) {
// Use BigNumber here in local scope. No global BigNumber.
});
```
## Use
The library exports a single constructor function, [`BigNumber`](http://mikemcl.github.io/bignumber.js/#bignumber), which accepts a value of type Number, String or BigNumber:
```javascript
let x = new BigNumber(123.4567);
let y = BigNumber('123456.7e-3');
let z = new BigNumber(x);
x.isEqualTo(y) && y.isEqualTo(z) && x.isEqualTo(z); // true
```
To get the string value of a BigNumber use [`toString()`](http://mikemcl.github.io/bignumber.js/#toS) or [`toFixed()`](http://mikemcl.github.io/bignumber.js/#toFix). Using `toFixed()` prevents exponential notation being returned, no matter how large or small the value.
```javascript
let x = new BigNumber('1111222233334444555566');
x.toString(); // "1.111222233334444555566e+21"
x.toFixed(); // "1111222233334444555566"
```
If the limited precision of Number values is not well understood, it is recommended to create BigNumbers from String values rather than Number values to avoid a potential loss of precision.
*In all further examples below, `let`, semicolons and `toString` calls are not shown. If a commented-out value is in quotes it means `toString` has been called on the preceding expression.*
```javascript
// Precision loss from using numeric literals with more than 15 significant digits.
new BigNumber(1.0000000000000001) // '1'
new BigNumber(88259496234518.57) // '88259496234518.56'
new BigNumber(99999999999999999999) // '100000000000000000000'
// Precision loss from using numeric literals outside the range of Number values.
new BigNumber(2e+308) // 'Infinity'
new BigNumber(1e-324) // '0'
// Precision loss from the unexpected result of arithmetic with Number values.
new BigNumber(0.7 + 0.1) // '0.7999999999999999'
```
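By contrast (an illustrative addition, not part of the original README), passing the same values as Strings preserves every digit:
```javascript
// No precision loss when the values are given as Strings.
new BigNumber('1.0000000000000001')    // '1.0000000000000001'
new BigNumber('88259496234518.57')     // '88259496234518.57'
new BigNumber('99999999999999999999')  // '99999999999999999999'
```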
When creating a BigNumber from a Number, note that a BigNumber is created from a Number's decimal `toString()` value not from its underlying binary value. If the latter is required, then pass the Number's `toString(2)` value and specify base 2.
```javascript
new BigNumber(Number.MAX_VALUE.toString(2), 2)
```
BigNumbers can be created from values in bases from 2 to 36. See [`ALPHABET`](http://mikemcl.github.io/bignumber.js/#alphabet) to extend this range.
```javascript
a = new BigNumber(1011, 2) // "11"
b = new BigNumber('zz.9', 36) // "1295.25"
c = a.plus(b) // "1306.25"
```
Performance is better if base 10 is NOT specified for decimal values. Only specify base 10 when it is desired that the number of decimal places of the input value be limited to the current [`DECIMAL_PLACES`](http://mikemcl.github.io/bignumber.js/#decimal-places) setting.
A BigNumber is immutable in the sense that it is not changed by its methods.
```javascript
0.3 - 0.1 // 0.19999999999999998
x = new BigNumber(0.3)
x.minus(0.1) // "0.2"
x // "0.3"
```
The methods that return a BigNumber can be chained.
```javascript
x.dividedBy(y).plus(z).times(9)
x.times('1.23456780123456789e+9').plus(9876.5432321).dividedBy('4444562598.111772').integerValue()
```
Some of the longer method names have a shorter alias.
```javascript
x.squareRoot().dividedBy(y).exponentiatedBy(3).isEqualTo(x.sqrt().div(y).pow(3)) // true
x.modulo(y).multipliedBy(z).eq(x.mod(y).times(z)) // true
```
As with JavaScript's Number type, there are [`toExponential`](http://mikemcl.github.io/bignumber.js/#toE), [`toFixed`](http://mikemcl.github.io/bignumber.js/#toFix) and [`toPrecision`](http://mikemcl.github.io/bignumber.js/#toP) methods.
```javascript
x = new BigNumber(255.5)
x.toExponential(5) // "2.55500e+2"
x.toFixed(5) // "255.50000"
x.toPrecision(5) // "255.50"
x.toNumber() // 255.5
```
A base can be specified for [`toString`](http://mikemcl.github.io/bignumber.js/#toS). Performance is better if base 10 is NOT specified, i.e. use `toString()` not `toString(10)`. Only specify base 10 when it is desired that the number of decimal places be limited to the current [`DECIMAL_PLACES`](http://mikemcl.github.io/bignumber.js/#decimal-places) setting.
```javascript
x.toString(16) // "ff.8"
```
There is a [`toFormat`](http://mikemcl.github.io/bignumber.js/#toFor) method which may be useful for internationalisation.
```javascript
y = new BigNumber('1234567.898765')
y.toFormat(2) // "1,234,567.90"
```
The maximum number of decimal places of the result of an operation involving division (i.e. a division, square root, base conversion or negative power operation) is set using the `set` or `config` method of the `BigNumber` constructor.
The other arithmetic operations always give the exact result.
```javascript
BigNumber.set({ DECIMAL_PLACES: 10, ROUNDING_MODE: 4 })
x = new BigNumber(2)
y = new BigNumber(3)
z = x.dividedBy(y) // "0.6666666667"
z.squareRoot() // "0.8164965809"
z.exponentiatedBy(-3) // "3.3749999995"
z.toString(2) // "0.1010101011"
z.multipliedBy(z) // "0.44444444448888888889"
z.multipliedBy(z).decimalPlaces(10) // "0.4444444445"
```
There is a [`toFraction`](http://mikemcl.github.io/bignumber.js/#toFr) method with an optional *maximum denominator* argument
```javascript
y = new BigNumber(355)
pi = y.dividedBy(113) // "3.1415929204"
pi.toFraction() // [ "7853982301", "2500000000" ]
pi.toFraction(1000) // [ "355", "113" ]
```
and [`isNaN`](http://mikemcl.github.io/bignumber.js/#isNaN) and [`isFinite`](http://mikemcl.github.io/bignumber.js/#isF) methods, as `NaN` and `Infinity` are valid `BigNumber` values.
```javascript
x = new BigNumber(NaN) // "NaN"
y = new BigNumber(Infinity) // "Infinity"
x.isNaN() && !y.isNaN() && !x.isFinite() && !y.isFinite() // true
```
The value of a BigNumber is stored in a decimal floating point format in terms of a coefficient, exponent and sign.
```javascript
x = new BigNumber(-123.456);
x.c // [ 123, 45600000000000 ] coefficient (i.e. significand)
x.e // 2 exponent
x.s // -1 sign
```
For advanced usage, multiple BigNumber constructors can be created, each with their own independent configuration.
```javascript
// Set DECIMAL_PLACES for the original BigNumber constructor
BigNumber.set({ DECIMAL_PLACES: 10 })
// Create another BigNumber constructor, optionally passing in a configuration object
BN = BigNumber.clone({ DECIMAL_PLACES: 5 })
x = new BigNumber(1)
y = new BN(1)
x.div(3) // '0.3333333333'
y.div(3) // '0.33333'
```
For further information see the [API](http://mikemcl.github.io/bignumber.js/) reference in the *doc* directory.
## Test
The *test/modules* directory contains the test scripts for each method.
The tests can be run with Node.js or a browser. For Node.js use
$ npm test
or
$ node test/test
To test a single method, use, for example
$ node test/methods/toFraction
For the browser, open *test/test.html*.
## Build
For Node, if [uglify-js](https://github.com/mishoo/UglifyJS2) is installed
npm install uglify-js -g
then
npm run build
will create *bignumber.min.js*.
A source map will also be created in the root directory.
## Feedback
Open an issue, or email
Michael
<a href="mailto:M8ch88l@gmail.com">M8ch88l@gmail.com</a>
## Licence
The MIT Licence.
See [LICENCE](https://github.com/MikeMcl/bignumber.js/blob/master/LICENCE).

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -0,0 +1,40 @@
{
"name": "bignumber.js",
"description": "A library for arbitrary-precision decimal and non-decimal arithmetic",
"version": "9.0.0",
"keywords": [
"arbitrary",
"precision",
"arithmetic",
"big",
"number",
"decimal",
"float",
"biginteger",
"bigdecimal",
"bignumber",
"bigint",
"bignum"
],
"repository": {
"type": "git",
"url": "https://github.com/MikeMcl/bignumber.js.git"
},
"main": "bignumber",
"module": "bignumber.mjs",
"browser": "bignumber.js",
"types": "bignumber.d.ts",
"author": {
"name": "Michael Mclaughlin",
"email": "M8ch88l@gmail.com"
},
"engines": {
"node": "*"
},
"license": "MIT",
"scripts": {
"test": "node test/test",
"build": "uglifyjs bignumber.js --source-map -c -m -o bignumber.min.js"
},
"dependencies": {}
}

@ -0,0 +1,651 @@
1.20.0 / 2022-04-02
===================
* Fix error message for json parse whitespace in `strict`
* Fix internal error when inflated body exceeds limit
* Prevent loss of async hooks context
* Prevent hanging when request already read
* deps: depd@2.0.0
- Replace internal `eval` usage with `Function` constructor
- Use instance methods on `process` to check for listeners
* deps: http-errors@2.0.0
- deps: depd@2.0.0
- deps: statuses@2.0.1
* deps: on-finished@2.4.1
* deps: qs@6.10.3
* deps: raw-body@2.5.1
- deps: http-errors@2.0.0
1.19.2 / 2022-02-15
===================
* deps: bytes@3.1.2
* deps: qs@6.9.7
* Fix handling of `__proto__` keys
* deps: raw-body@2.4.3
- deps: bytes@3.1.2
1.19.1 / 2021-12-10
===================
* deps: bytes@3.1.1
* deps: http-errors@1.8.1
- deps: inherits@2.0.4
- deps: toidentifier@1.0.1
- deps: setprototypeof@1.2.0
* deps: qs@6.9.6
* deps: raw-body@2.4.2
- deps: bytes@3.1.1
- deps: http-errors@1.8.1
* deps: safe-buffer@5.2.1
* deps: type-is@~1.6.18
1.19.0 / 2019-04-25
===================
* deps: bytes@3.1.0
- Add petabyte (`pb`) support
* deps: http-errors@1.7.2
- Set constructor name when possible
- deps: setprototypeof@1.1.1
- deps: statuses@'>= 1.5.0 < 2'
* deps: iconv-lite@0.4.24
- Added encoding MIK
* deps: qs@6.7.0
- Fix parsing array brackets after index
* deps: raw-body@2.4.0
- deps: bytes@3.1.0
- deps: http-errors@1.7.2
- deps: iconv-lite@0.4.24
* deps: type-is@~1.6.17
- deps: mime-types@~2.1.24
- perf: prevent internal `throw` on invalid type
1.18.3 / 2018-05-14
===================
* Fix stack trace for strict json parse error
* deps: depd@~1.1.2
- perf: remove argument reassignment
* deps: http-errors@~1.6.3
- deps: depd@~1.1.2
- deps: setprototypeof@1.1.0
- deps: statuses@'>= 1.3.1 < 2'
* deps: iconv-lite@0.4.23
- Fix loading encoding with year appended
- Fix deprecation warnings on Node.js 10+
* deps: qs@6.5.2
* deps: raw-body@2.3.3
- deps: http-errors@1.6.3
- deps: iconv-lite@0.4.23
* deps: type-is@~1.6.16
- deps: mime-types@~2.1.18
1.18.2 / 2017-09-22
===================
* deps: debug@2.6.9
* perf: remove argument reassignment
1.18.1 / 2017-09-12
===================
* deps: content-type@~1.0.4
- perf: remove argument reassignment
- perf: skip parameter parsing when no parameters
* deps: iconv-lite@0.4.19
- Fix ISO-8859-1 regression
- Update Windows-1255
* deps: qs@6.5.1
- Fix parsing & compacting very deep objects
* deps: raw-body@2.3.2
- deps: iconv-lite@0.4.19
1.18.0 / 2017-09-08
===================
* Fix JSON strict violation error to match native parse error
* Include the `body` property on verify errors
* Include the `type` property on all generated errors
* Use `http-errors` to set status code on errors
* deps: bytes@3.0.0
* deps: debug@2.6.8
* deps: depd@~1.1.1
- Remove unnecessary `Buffer` loading
* deps: http-errors@~1.6.2
- deps: depd@1.1.1
* deps: iconv-lite@0.4.18
- Add support for React Native
- Add a warning if not loaded as utf-8
- Fix CESU-8 decoding in Node.js 8
- Improve speed of ISO-8859-1 encoding
* deps: qs@6.5.0
* deps: raw-body@2.3.1
- Use `http-errors` for standard emitted errors
- deps: bytes@3.0.0
- deps: iconv-lite@0.4.18
- perf: skip buffer decoding on overage chunk
* perf: prevent internal `throw` when missing charset
1.17.2 / 2017-05-17
===================
* deps: debug@2.6.7
- Fix `DEBUG_MAX_ARRAY_LENGTH`
- deps: ms@2.0.0
* deps: type-is@~1.6.15
- deps: mime-types@~2.1.15
1.17.1 / 2017-03-06
===================
* deps: qs@6.4.0
- Fix regression parsing keys starting with `[`
1.17.0 / 2017-03-01
===================
* deps: http-errors@~1.6.1
- Make `message` property enumerable for `HttpError`s
- deps: setprototypeof@1.0.3
* deps: qs@6.3.1
- Fix compacting nested arrays
1.16.1 / 2017-02-10
===================
* deps: debug@2.6.1
- Fix deprecation messages in WebStorm and other editors
- Undeprecate `DEBUG_FD` set to `1` or `2`
1.16.0 / 2017-01-17
===================
* deps: debug@2.6.0
- Allow colors in workers
- Deprecated `DEBUG_FD` environment variable
- Fix error when running under React Native
- Use same color for same namespace
- deps: ms@0.7.2
* deps: http-errors@~1.5.1
- deps: inherits@2.0.3
- deps: setprototypeof@1.0.2
- deps: statuses@'>= 1.3.1 < 2'
* deps: iconv-lite@0.4.15
- Added encoding MS-31J
- Added encoding MS-932
- Added encoding MS-936
- Added encoding MS-949
- Added encoding MS-950
- Fix GBK/GB18030 handling of Euro character
* deps: qs@6.2.1
- Fix array parsing from skipping empty values
* deps: raw-body@~2.2.0
- deps: iconv-lite@0.4.15
* deps: type-is@~1.6.14
- deps: mime-types@~2.1.13
1.15.2 / 2016-06-19
===================
* deps: bytes@2.4.0
* deps: content-type@~1.0.2
- perf: enable strict mode
* deps: http-errors@~1.5.0
- Use `setprototypeof` module to replace `__proto__` setting
- deps: statuses@'>= 1.3.0 < 2'
- perf: enable strict mode
* deps: qs@6.2.0
* deps: raw-body@~2.1.7
- deps: bytes@2.4.0
- perf: remove double-cleanup on happy path
* deps: type-is@~1.6.13
- deps: mime-types@~2.1.11
1.15.1 / 2016-05-05
===================
* deps: bytes@2.3.0
- Drop partial bytes on all parsed units
- Fix parsing byte string that looks like hex
* deps: raw-body@~2.1.6
- deps: bytes@2.3.0
* deps: type-is@~1.6.12
- deps: mime-types@~2.1.10
1.15.0 / 2016-02-10
===================
* deps: http-errors@~1.4.0
- Add `HttpError` export, for `err instanceof createError.HttpError`
- deps: inherits@2.0.1
- deps: statuses@'>= 1.2.1 < 2'
* deps: qs@6.1.0
* deps: type-is@~1.6.11
- deps: mime-types@~2.1.9
1.14.2 / 2015-12-16
===================
* deps: bytes@2.2.0
* deps: iconv-lite@0.4.13
* deps: qs@5.2.0
* deps: raw-body@~2.1.5
- deps: bytes@2.2.0
- deps: iconv-lite@0.4.13
* deps: type-is@~1.6.10
- deps: mime-types@~2.1.8
1.14.1 / 2015-09-27
===================
* Fix issue where invalid charset results in 400 when `verify` used
* deps: iconv-lite@0.4.12
- Fix CESU-8 decoding in Node.js 4.x
* deps: raw-body@~2.1.4
- Fix masking critical errors from `iconv-lite`
- deps: iconv-lite@0.4.12
* deps: type-is@~1.6.9
- deps: mime-types@~2.1.7
1.14.0 / 2015-09-16
===================
* Fix JSON strict parse error to match syntax errors
* Provide static `require` analysis in `urlencoded` parser
* deps: depd@~1.1.0
- Support web browser loading
* deps: qs@5.1.0
* deps: raw-body@~2.1.3
- Fix sync callback when attaching data listener causes sync read
* deps: type-is@~1.6.8
- Fix type error when given invalid type to match against
- deps: mime-types@~2.1.6
1.13.3 / 2015-07-31
===================
* deps: type-is@~1.6.6
- deps: mime-types@~2.1.4
1.13.2 / 2015-07-05
===================
* deps: iconv-lite@0.4.11
* deps: qs@4.0.0
- Fix dropping parameters like `hasOwnProperty`
- Fix user-visible incompatibilities from 3.1.0
- Fix various parsing edge cases
* deps: raw-body@~2.1.2
- Fix error stack traces to skip `makeError`
- deps: iconv-lite@0.4.11
* deps: type-is@~1.6.4
- deps: mime-types@~2.1.2
- perf: enable strict mode
- perf: remove argument reassignment
1.13.1 / 2015-06-16
===================
* deps: qs@2.4.2
- Downgraded from 3.1.0 because of user-visible incompatibilities
1.13.0 / 2015-06-14
===================
* Add `statusCode` property on `Error`s, in addition to `status`
* Change `type` default to `application/json` for JSON parser
* Change `type` default to `application/x-www-form-urlencoded` for urlencoded parser
* Provide static `require` analysis
* Use the `http-errors` module to generate errors
* deps: bytes@2.1.0
- Slight optimizations
* deps: iconv-lite@0.4.10
- The encoding UTF-16 without BOM now defaults to UTF-16LE when detection fails
- Leading BOM is now removed when decoding
* deps: on-finished@~2.3.0
- Add defined behavior for HTTP `CONNECT` requests
- Add defined behavior for HTTP `Upgrade` requests
- deps: ee-first@1.1.1
* deps: qs@3.1.0
- Fix dropping parameters like `hasOwnProperty`
- Fix various parsing edge cases
- Parsed object now has `null` prototype
* deps: raw-body@~2.1.1
- Use `unpipe` module for unpiping requests
- deps: iconv-lite@0.4.10
* deps: type-is@~1.6.3
- deps: mime-types@~2.1.1
- perf: reduce try block size
- perf: remove bitwise operations
* perf: enable strict mode
* perf: remove argument reassignment
* perf: remove delete call
1.12.4 / 2015-05-10
===================
* deps: debug@~2.2.0
* deps: qs@2.4.2
- Fix allowing parameters like `constructor`
* deps: on-finished@~2.2.1
* deps: raw-body@~2.0.1
- Fix a false-positive when unpiping in Node.js 0.8
- deps: bytes@2.0.1
* deps: type-is@~1.6.2
- deps: mime-types@~2.0.11
1.12.3 / 2015-04-15
===================
* Slight efficiency improvement when not debugging
* deps: depd@~1.0.1
* deps: iconv-lite@0.4.8
- Add encoding alias UNICODE-1-1-UTF-7
* deps: raw-body@1.3.4
- Fix hanging callback if request aborts during read
- deps: iconv-lite@0.4.8
1.12.2 / 2015-03-16
===================
* deps: qs@2.4.1
- Fix error when parameter `hasOwnProperty` is present
1.12.1 / 2015-03-15
===================
* deps: debug@~2.1.3
- Fix high intensity foreground color for bold
- deps: ms@0.7.0
* deps: type-is@~1.6.1
- deps: mime-types@~2.0.10
1.12.0 / 2015-02-13
===================
* add `debug` messages
* accept a function for the `type` option
* use `content-type` to parse `Content-Type` headers
* deps: iconv-lite@0.4.7
- Gracefully support enumerables on `Object.prototype`
* deps: raw-body@1.3.3
- deps: iconv-lite@0.4.7
* deps: type-is@~1.6.0
- fix argument reassignment
- fix false-positives in `hasBody` `Transfer-Encoding` check
- support wildcard for both type and subtype (`*/*`)
- deps: mime-types@~2.0.9
1.11.0 / 2015-01-30
===================
* make internal `extended: true` depth limit infinity
* deps: type-is@~1.5.6
- deps: mime-types@~2.0.8
1.10.2 / 2015-01-20
===================
* deps: iconv-lite@0.4.6
- Fix rare aliases of single-byte encodings
* deps: raw-body@1.3.2
- deps: iconv-lite@0.4.6
1.10.1 / 2015-01-01
===================
* deps: on-finished@~2.2.0
* deps: type-is@~1.5.5
- deps: mime-types@~2.0.7
1.10.0 / 2014-12-02
===================
* make internal `extended: true` array limit dynamic
1.9.3 / 2014-11-21
==================
* deps: iconv-lite@0.4.5
- Fix Windows-31J and X-SJIS encoding support
* deps: qs@2.3.3
- Fix `arrayLimit` behavior
* deps: raw-body@1.3.1
- deps: iconv-lite@0.4.5
* deps: type-is@~1.5.3
- deps: mime-types@~2.0.3
1.9.2 / 2014-10-27
==================
* deps: qs@2.3.2
- Fix parsing of mixed objects and values
1.9.1 / 2014-10-22
==================
* deps: on-finished@~2.1.1
- Fix handling of pipelined requests
* deps: qs@2.3.0
- Fix parsing of mixed implicit and explicit arrays
* deps: type-is@~1.5.2
- deps: mime-types@~2.0.2
1.9.0 / 2014-09-24
==================
* include the charset in "unsupported charset" error message
* include the encoding in "unsupported content encoding" error message
* deps: depd@~1.0.0
1.8.4 / 2014-09-23
==================
* fix content encoding to be case-insensitive
1.8.3 / 2014-09-19
==================
* deps: qs@2.2.4
- Fix issue with object keys starting with numbers truncated
1.8.2 / 2014-09-15
==================
* deps: depd@0.4.5
1.8.1 / 2014-09-07
==================
* deps: media-typer@0.3.0
* deps: type-is@~1.5.1
1.8.0 / 2014-09-05
==================
* make empty-body-handling consistent between chunked requests
- empty `json` produces `{}`
- empty `raw` produces `new Buffer(0)`
- empty `text` produces `''`
- empty `urlencoded` produces `{}`
* deps: qs@2.2.3
- Fix issue where first empty value in array is discarded
* deps: type-is@~1.5.0
- fix `hasbody` to be true for `content-length: 0`
1.7.0 / 2014-09-01
==================
* add `parameterLimit` option to `urlencoded` parser
* change `urlencoded` extended array limit to 100
* respond with 413 when over `parameterLimit` in `urlencoded`
1.6.7 / 2014-08-29
==================
* deps: qs@2.2.2
- Remove unnecessary cloning
1.6.6 / 2014-08-27
==================
* deps: qs@2.2.0
- Array parsing fix
- Performance improvements
1.6.5 / 2014-08-16
==================
* deps: on-finished@2.1.0
1.6.4 / 2014-08-14
==================
* deps: qs@1.2.2
1.6.3 / 2014-08-10
==================
* deps: qs@1.2.1
1.6.2 / 2014-08-07
==================
* deps: qs@1.2.0
- Fix parsing array of objects
1.6.1 / 2014-08-06
==================
* deps: qs@1.1.0
- Accept urlencoded square brackets
- Accept empty values in implicit array notation
1.6.0 / 2014-08-05
==================
* deps: qs@1.0.2
- Complete rewrite
- Limits array length to 20
- Limits object depth to 5
- Limits parameters to 1,000
1.5.2 / 2014-07-27
==================
* deps: depd@0.4.4
- Work-around v8 generating empty stack traces
1.5.1 / 2014-07-26
==================
* deps: depd@0.4.3
- Fix exception when global `Error.stackTraceLimit` is too low
1.5.0 / 2014-07-20
==================
* deps: depd@0.4.2
- Add `TRACE_DEPRECATION` environment variable
- Remove non-standard grey color from color output
- Support `--no-deprecation` argument
- Support `--trace-deprecation` argument
* deps: iconv-lite@0.4.4
- Added encoding UTF-7
* deps: raw-body@1.3.0
- deps: iconv-lite@0.4.4
- Added encoding UTF-7
- Fix `Cannot switch to old mode now` error on Node.js 0.10+
* deps: type-is@~1.3.2
1.4.3 / 2014-06-19
==================
* deps: type-is@1.3.1
- fix global variable leak
1.4.2 / 2014-06-19
==================
* deps: type-is@1.3.0
- improve type parsing
1.4.1 / 2014-06-19
==================
* fix urlencoded extended deprecation message
1.4.0 / 2014-06-19
==================
* add `text` parser
* add `raw` parser
* check accepted charset in content-type (accepts utf-8)
* check accepted encoding in content-encoding (accepts identity)
* deprecate `bodyParser()` middleware; use `.json()` and `.urlencoded()` as needed
* deprecate `urlencoded()` without provided `extended` option
* lazy-load urlencoded parsers
* parsers split into files for reduced mem usage
* support gzip and deflate bodies
- set `inflate: false` to turn off
* deps: raw-body@1.2.2
- Support all encodings from `iconv-lite`
1.3.1 / 2014-06-11
==================
* deps: type-is@1.2.1
- Switch dependency from mime to mime-types@1.0.0
1.3.0 / 2014-05-31
==================
* add `extended` option to urlencoded parser
1.2.2 / 2014-05-27
==================
* deps: raw-body@1.1.6
- assert stream encoding on node.js 0.8
- assert stream encoding on node.js < 0.10.6
- deps: bytes@1
1.2.1 / 2014-05-26
==================
* invoke `next(err)` after request fully read
- prevents hung responses and socket hang ups
1.2.0 / 2014-05-11
==================
* add `verify` option
* deps: type-is@1.2.0
- support suffix matching
1.1.2 / 2014-05-11
==================
* improve json parser speed
1.1.1 / 2014-05-11
==================
* fix repeated limit parsing with every request
1.1.0 / 2014-05-10
==================
* add `type` option
* deps: pin for safety and consistency
1.0.2 / 2014-04-14
==================
* use `type-is` module
1.0.1 / 2014-03-20
==================
* lower default limits to 100kb

@ -0,0 +1,23 @@
(The MIT License)
Copyright (c) 2014 Jonathan Ong <me@jongleberry.com>
Copyright (c) 2014-2015 Douglas Christopher Wilson <doug@somethingdoug.com>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,464 @@
# body-parser
[![NPM Version][npm-image]][npm-url]
[![NPM Downloads][downloads-image]][downloads-url]
[![Build Status][github-actions-ci-image]][github-actions-ci-url]
[![Test Coverage][coveralls-image]][coveralls-url]
Node.js body parsing middleware.
Parse incoming request bodies in a middleware before your handlers, available
under the `req.body` property.
**Note** As `req.body`'s shape is based on user-controlled input, all
properties and values in this object are untrusted and should be validated
before trusting. For example, `req.body.foo.toString()` may fail in multiple
ways: the `foo` property may be missing or may not be a string, and
`toString` may not be a function but instead a string or other user input.
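As a quick, hypothetical illustration (the route, field name, and in-scope `app`/`bodyParser` are assumptions of this sketch, not part of this module), a handler might validate the body before using it:
```js
app.post('/comments', bodyParser.json(), function (req, res) {
  // req.body may be {} or hold unexpected types; check before trusting it
  if (typeof req.body.text !== 'string' || req.body.text.length === 0) {
    return res.status(400).send('expected a non-empty "text" string')
  }
  res.send('received: ' + req.body.text)
})
```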
[Learn about the anatomy of an HTTP transaction in Node.js](https://nodejs.org/en/docs/guides/anatomy-of-an-http-transaction/).
_This does not handle multipart bodies_, due to their complex and typically
large nature. For multipart bodies, you may be interested in the following
modules:
* [busboy](https://www.npmjs.org/package/busboy#readme) and
[connect-busboy](https://www.npmjs.org/package/connect-busboy#readme)
* [multiparty](https://www.npmjs.org/package/multiparty#readme) and
[connect-multiparty](https://www.npmjs.org/package/connect-multiparty#readme)
* [formidable](https://www.npmjs.org/package/formidable#readme)
* [multer](https://www.npmjs.org/package/multer#readme)
This module provides the following parsers:
* [JSON body parser](#bodyparserjsonoptions)
* [Raw body parser](#bodyparserrawoptions)
* [Text body parser](#bodyparsertextoptions)
* [URL-encoded form body parser](#bodyparserurlencodedoptions)
Other body parsers you might be interested in:
- [body](https://www.npmjs.org/package/body#readme)
- [co-body](https://www.npmjs.org/package/co-body#readme)
## Installation
```sh
$ npm install body-parser
```
## API
```js
var bodyParser = require('body-parser')
```
The `bodyParser` object exposes various factories to create middlewares. All
middlewares will populate the `req.body` property with the parsed body when
the `Content-Type` request header matches the `type` option, or an empty
object (`{}`) if there was no body to parse, the `Content-Type` was not matched,
or an error occurred.
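For instance (a minimal, hypothetical route), a handler can rely on `req.body` always being an object, even when nothing was parsed:
```js
var express = require('express')
var bodyParser = require('body-parser')

var app = express()
app.use(bodyParser.json())

app.post('/echo', function (req, res) {
  // if the Content-Type did not match application/json, req.body is just {}
  res.json(req.body)
})
```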
The various errors returned by this module are described in the
[errors section](#errors).
### bodyParser.json([options])
Returns middleware that only parses `json` and only looks at requests where
the `Content-Type` header matches the `type` option. This parser accepts any
Unicode encoding of the body and supports automatic inflation of `gzip` and
`deflate` encodings.
A new `body` object containing the parsed data is populated on the `request`
object after the middleware (i.e. `req.body`).
#### Options
The `json` function takes an optional `options` object that may contain any of
the following keys:
##### inflate
When set to `true`, then deflated (compressed) bodies will be inflated; when
`false`, deflated bodies are rejected. Defaults to `true`.
##### limit
Controls the maximum request body size. If this is a number, then the value
specifies the number of bytes; if it is a string, the value is passed to the
[bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults
to `'100kb'`.
##### reviver
The `reviver` option is passed directly to `JSON.parse` as the second
argument. You can find more information on this argument
[in the MDN documentation about JSON.parse](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Example.3A_Using_the_reviver_parameter).
##### strict
When set to `true`, will only accept arrays and objects; when `false` will
accept anything `JSON.parse` accepts. Defaults to `true`.
##### type
The `type` option is used to determine what media type the middleware will
parse. This option can be a string, array of strings, or a function. If not a
function, `type` option is passed directly to the
[type-is](https://www.npmjs.org/package/type-is#readme) library and this can
be an extension name (like `json`), a mime type (like `application/json`), or
a mime type with a wildcard (like `*/*` or `*/json`). If a function, the `type`
option is called as `fn(req)` and the request is parsed if it returns a truthy
value. Defaults to `application/json`.
##### verify
The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`,
where `buf` is a `Buffer` of the raw request body and `encoding` is the
encoding of the request. The parsing can be aborted by throwing an error.
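A minimal sketch combining the options above (the values here are arbitrary examples, not defaults):
```js
var bodyParser = require('body-parser')

var apiJsonParser = bodyParser.json({
  limit: '1mb',                         // allow bodies up to 1mb
  strict: true,                         // only accept objects and arrays
  type: ['json', 'application/*+json'], // handed to the type-is library
  verify: function (req, res, buf, encoding) {
    // buf is the raw Buffer; throwing aborts with an 'entity.verify.failed' error
    if (buf.length === 0) throw new Error('empty body not allowed')
  }
})
```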
### bodyParser.raw([options])
Returns middleware that parses all bodies as a `Buffer` and only looks at
requests where the `Content-Type` header matches the `type` option. This
parser supports automatic inflation of `gzip` and `deflate` encodings.
A new `body` object containing the parsed data is populated on the `request`
object after the middleware (i.e. `req.body`). This will be a `Buffer` object
of the body.
#### Options
The `raw` function takes an optional `options` object that may contain any of
the following keys:
##### inflate
When set to `true`, then deflated (compressed) bodies will be inflated; when
`false`, deflated bodies are rejected. Defaults to `true`.
##### limit
Controls the maximum request body size. If this is a number, then the value
specifies the number of bytes; if it is a string, the value is passed to the
[bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults
to `'100kb'`.
##### type
The `type` option is used to determine what media type the middleware will
parse. This option can be a string, array of strings, or a function.
If not a function, `type` option is passed directly to the
[type-is](https://www.npmjs.org/package/type-is#readme) library and this
can be an extension name (like `bin`), a mime type (like
`application/octet-stream`), or a mime type with a wildcard (like `*/*` or
`application/*`). If a function, the `type` option is called as `fn(req)`
and the request is parsed if it returns a truthy value. Defaults to
`application/octet-stream`.
##### verify
The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`,
where `buf` is a `Buffer` of the raw request body and `encoding` is the
encoding of the request. The parsing can be aborted by throwing an error.
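A small, hypothetical example (the media type, limit, and route are made up) of reading a raw body as a `Buffer`:
```js
var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// hypothetical binary media type, 5mb cap
var webhookParser = bodyParser.raw({ type: 'application/vnd.example+octet-stream', limit: '5mb' })

app.post('/webhook', webhookParser, function (req, res) {
  // req.body is a Buffer with the raw bytes when the Content-Type matched
  if (!Buffer.isBuffer(req.body)) return res.status(415).send('unexpected content type')
  res.send('received ' + req.body.length + ' bytes')
})
```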
### bodyParser.text([options])
Returns middleware that parses all bodies as a string and only looks at
requests where the `Content-Type` header matches the `type` option. This
parser supports automatic inflation of `gzip` and `deflate` encodings.
A new `body` string containing the parsed data is populated on the `request`
object after the middleware (i.e. `req.body`). This will be a string of the
body.
#### Options
The `text` function takes an optional `options` object that may contain any of
the following keys:
##### defaultCharset
Specify the default character set for the text content if the charset is not
specified in the `Content-Type` header of the request. Defaults to `utf-8`.
##### inflate
When set to `true`, then deflated (compressed) bodies will be inflated; when
`false`, deflated bodies are rejected. Defaults to `true`.
##### limit
Controls the maximum request body size. If this is a number, then the value
specifies the number of bytes; if it is a string, the value is passed to the
[bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults
to `'100kb'`.
##### type
The `type` option is used to determine what media type the middleware will
parse. This option can be a string, array of strings, or a function. If not
a function, `type` option is passed directly to the
[type-is](https://www.npmjs.org/package/type-is#readme) library and this can
be an extension name (like `txt`), a mime type (like `text/plain`), or a mime
type with a wildcard (like `*/*` or `text/*`). If a function, the `type`
option is called as `fn(req)` and the request is parsed if it returns a
truthy value. Defaults to `text/plain`.
##### verify
The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`,
where `buf` is a `Buffer` of the raw request body and `encoding` is the
encoding of the request. The parsing can be aborted by throwing an error.
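A short, hypothetical example (the media type and limit are illustrative) of parsing text bodies as strings:
```js
var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// parse text/markdown bodies, up to 500kb, defaulting to utf-8
app.use(bodyParser.text({ type: 'text/markdown', limit: '500kb', defaultCharset: 'utf-8' }))

app.post('/render', function (req, res) {
  // req.body is a string when the Content-Type matched, otherwise {}
  res.type('text/plain').send(typeof req.body === 'string' ? req.body : '')
})
```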
### bodyParser.urlencoded([options])
Returns middleware that only parses `urlencoded` bodies and only looks at
requests where the `Content-Type` header matches the `type` option. This
parser accepts only UTF-8 encoding of the body and supports automatic
inflation of `gzip` and `deflate` encodings.
A new `body` object containing the parsed data is populated on the `request`
object after the middleware (i.e. `req.body`). This object will contain
key-value pairs, where the value can be a string or array (when `extended` is
`false`), or any type (when `extended` is `true`).
#### Options
The `urlencoded` function takes an optional `options` object that may contain
any of the following keys:
##### extended
The `extended` option allows you to choose between parsing the URL-encoded data
with the `querystring` library (when `false`) or the `qs` library (when
`true`). The "extended" syntax allows rich objects and arrays to be encoded
into the URL-encoded format, allowing for a JSON-like experience with
URL-encoded data. For more information, please
[see the qs library](https://www.npmjs.org/package/qs#readme).
Defaults to `true`, but using the default has been deprecated. Please
research the difference between `qs` and `querystring` and choose the
appropriate setting.
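A rough illustration of the difference, using the `qs` and `querystring` libraries directly (this roughly mirrors what the two settings do internally):
```js
var qs = require('qs')
var querystring = require('querystring')

var body = 'user[name]=alice&user[roles][0]=admin'

// extended: true -> qs builds nested objects and arrays
qs.parse(body)
// { user: { name: 'alice', roles: ['admin'] } }

// extended: false -> querystring keeps the keys flat
querystring.parse(body)
// { 'user[name]': 'alice', 'user[roles][0]': 'admin' }
```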
##### inflate
When set to `true`, then deflated (compressed) bodies will be inflated; when
`false`, deflated bodies are rejected. Defaults to `true`.
##### limit
Controls the maximum request body size. If this is a number, then the value
specifies the number of bytes; if it is a string, the value is passed to the
[bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults
to `'100kb'`.
##### parameterLimit
The `parameterLimit` option controls the maximum number of parameters that
are allowed in the URL-encoded data. If a request contains more parameters
than this value, a 413 will be returned to the client. Defaults to `1000`.
##### type
The `type` option is used to determine what media type the middleware will
parse. This option can be a string, array of strings, or a function. If not
a function, `type` option is passed directly to the
[type-is](https://www.npmjs.org/package/type-is#readme) library and this can
be an extension name (like `urlencoded`), a mime type (like
`application/x-www-form-urlencoded`), or a mime type with a wildcard (like
`*/x-www-form-urlencoded`). If a function, the `type` option is called as
`fn(req)` and the request is parsed if it returns a truthy value. Defaults
to `application/x-www-form-urlencoded`.
##### verify
The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`,
where `buf` is a `Buffer` of the raw request body and `encoding` is the
encoding of the request. The parsing can be aborted by throwing an error.
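A combined, hypothetical example of the options above (the values and route are illustrative):
```js
var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// flat key/value form parser: at most 50 fields, bodies up to 200kb
app.use(bodyParser.urlencoded({
  extended: false,
  parameterLimit: 50,
  limit: '200kb'
}))

app.post('/signup', function (req, res) {
  // with extended: false, values are strings or arrays of strings
  res.send('welcome, ' + req.body.username)
})
```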
## Errors
The middlewares provided by this module create errors using the
[`http-errors` module](https://www.npmjs.com/package/http-errors). The errors
will typically have a `status`/`statusCode` property that contains the suggested
HTTP response code, an `expose` property to determine if the `message` property
should be displayed to the client, a `type` property to determine the type of
error without matching against the `message`, and a `body` property containing
the read body, if available.
The following are the common errors created, though any error can come through
for various reasons.
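As a sketch (assuming an Express `app` with the parsers above already installed), an error handler can branch on these properties:
```js
// register after the body parsers and routes
app.use(function (err, req, res, next) {
  if (err.type === 'entity.too.large') {
    // limit is the configured byte limit for this parser
    return res.status(err.status).send('payload exceeds ' + err.limit + ' bytes')
  }
  // expose indicates whether the message is safe to show to the client
  res.status(err.status || 500).send(err.expose ? err.message : 'invalid request body')
})
```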
### content encoding unsupported
This error will occur when the request had a `Content-Encoding` header that
contained an encoding other than `identity` but the `inflate` option was set
to `false`. The `status` property is set to `415`, the `type` property is set
to `'encoding.unsupported'`, and the `encoding` property will be set to the
encoding that is unsupported.
### entity parse failed
This error will occur when the request contained an entity that could not be
parsed by the middleware. The `status` property is set to `400`, the `type`
property is set to `'entity.parse.failed'`, and the `body` property is set to
the entity value that failed parsing.
### entity verify failed
This error will occur when the request contained an entity that failed
verification by the defined `verify` option. The `status` property is
set to `403`, the `type` property is set to `'entity.verify.failed'`, and the
`body` property is set to the entity value that failed verification.
### request aborted
This error will occur when the request is aborted by the client before reading
the body has finished. The `received` property will be set to the number of
bytes received before the request was aborted and the `expected` property is
set to the number of expected bytes. The `status` property is set to `400`
and `type` property is set to `'request.aborted'`.
### request entity too large
This error will occur when the request body's size is larger than the "limit"
option. The `limit` property will be set to the byte limit and the `length`
property will be set to the request body's length. The `status` property is
set to `413` and the `type` property is set to `'entity.too.large'`.
### request size did not match content length
This error will occur when the request's length did not match the length from
the `Content-Length` header. This typically occurs when the request is malformed,
most often because the `Content-Length` header was calculated based on characters
instead of bytes. The `status` property is set to `400` and the `type` property
is set to `'request.size.invalid'`.
### stream encoding should not be set
This error will occur when something called the `req.setEncoding` method prior
to this middleware. This module operates directly on bytes only and you cannot
call `req.setEncoding` when using this module. The `status` property is set to
`500` and the `type` property is set to `'stream.encoding.set'`.
### stream is not readable
This error will occur when the request is no longer readable when this middleware
attempts to read it. This typically means something other than a middleware from
this module already read the request body and this middleware was also configured
to read the same request. The `status` property is set to `500` and the `type`
property is set to `'stream.not.readable'`.
### too many parameters
This error will occur when the content of the request exceeds the configured
`parameterLimit` for the `urlencoded` parser. The `status` property is set to
`413` and the `type` property is set to `'parameters.too.many'`.
### unsupported charset "BOGUS"
This error will occur when the request had a charset parameter in the
`Content-Type` header, but the `iconv-lite` module does not support it or the
parser does not support it. The charset is contained in the message as well
as in the `charset` property. The `status` property is set to `415`, the
`type` property is set to `'charset.unsupported'`, and the `charset` property
is set to the charset that is unsupported.
### unsupported content encoding "bogus"
This error will occur when the request had a `Content-Encoding` header that
contained an unsupported encoding. The encoding is contained in the message
as well as in the `encoding` property. The `status` property is set to `415`,
the `type` property is set to `'encoding.unsupported'`, and the `encoding`
property is set to the encoding that is unsupported.
## Examples
### Express/Connect top-level generic
This example demonstrates adding a generic JSON and URL-encoded parser as a
top-level middleware, which will parse the bodies of all incoming requests.
This is the simplest setup.
```js
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))
// parse application/json
app.use(bodyParser.json())
app.use(function (req, res) {
res.setHeader('Content-Type', 'text/plain')
res.write('you posted:\n')
res.end(JSON.stringify(req.body, null, 2))
})
```
### Express route-specific
This example demonstrates adding body parsers specifically to the routes that
need them. In general, this is the recommended way to use body-parser with
Express.
```js
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// create application/json parser
var jsonParser = bodyParser.json()
// create application/x-www-form-urlencoded parser
var urlencodedParser = bodyParser.urlencoded({ extended: false })
// POST /login gets urlencoded bodies
app.post('/login', urlencodedParser, function (req, res) {
res.send('welcome, ' + req.body.username)
})
// POST /api/users gets JSON bodies
app.post('/api/users', jsonParser, function (req, res) {
// create user in req.body
})
```
### Change accepted type for parsers
All the parsers accept a `type` option which allows you to change the
`Content-Type` that the middleware will parse.
```js
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// parse various different custom JSON types as JSON
app.use(bodyParser.json({ type: 'application/*+json' }))
// parse some custom thing into a Buffer
app.use(bodyParser.raw({ type: 'application/vnd.custom-type' }))
// parse an HTML body into a string
app.use(bodyParser.text({ type: 'text/html' }))
```
## License
[MIT](LICENSE)
[npm-image]: https://img.shields.io/npm/v/body-parser.svg
[npm-url]: https://npmjs.org/package/body-parser
[coveralls-image]: https://img.shields.io/coveralls/expressjs/body-parser/master.svg
[coveralls-url]: https://coveralls.io/r/expressjs/body-parser?branch=master
[downloads-image]: https://img.shields.io/npm/dm/body-parser.svg
[downloads-url]: https://npmjs.org/package/body-parser
[github-actions-ci-image]: https://img.shields.io/github/workflow/status/expressjs/body-parser/ci/master?label=ci
[github-actions-ci-url]: https://github.com/expressjs/body-parser/actions/workflows/ci.yml

@ -0,0 +1,157 @@
/*!
* body-parser
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
* @private
*/
var deprecate = require('depd')('body-parser')
/**
* Cache of loaded parsers.
* @private
*/
var parsers = Object.create(null)
/**
* @typedef Parsers
* @type {function}
* @property {function} json
* @property {function} raw
* @property {function} text
* @property {function} urlencoded
*/
/**
* Module exports.
* @type {Parsers}
*/
exports = module.exports = deprecate.function(bodyParser,
'bodyParser: use individual json/urlencoded middlewares')
/**
* JSON parser.
* @public
*/
Object.defineProperty(exports, 'json', {
configurable: true,
enumerable: true,
get: createParserGetter('json')
})
/**
* Raw parser.
* @public
*/
Object.defineProperty(exports, 'raw', {
configurable: true,
enumerable: true,
get: createParserGetter('raw')
})
/**
* Text parser.
* @public
*/
Object.defineProperty(exports, 'text', {
configurable: true,
enumerable: true,
get: createParserGetter('text')
})
/**
* URL-encoded parser.
* @public
*/
Object.defineProperty(exports, 'urlencoded', {
configurable: true,
enumerable: true,
get: createParserGetter('urlencoded')
})
/**
* Create a middleware to parse json and urlencoded bodies.
*
* @param {object} [options]
* @return {function}
* @deprecated
* @public
*/
function bodyParser (options) {
var opts = {}
// exclude type option
if (options) {
for (var prop in options) {
if (prop !== 'type') {
opts[prop] = options[prop]
}
}
}
var _urlencoded = exports.urlencoded(opts)
var _json = exports.json(opts)
return function bodyParser (req, res, next) {
_json(req, res, function (err) {
if (err) return next(err)
_urlencoded(req, res, next)
})
}
}
/**
* Create a getter for loading a parser.
* @private
*/
function createParserGetter (name) {
return function get () {
return loadParser(name)
}
}
/**
* Load a parser module.
* @private
*/
function loadParser (parserName) {
var parser = parsers[parserName]
if (parser !== undefined) {
return parser
}
// this uses a switch for static require analysis
switch (parserName) {
case 'json':
parser = require('./lib/types/json')
break
case 'raw':
parser = require('./lib/types/raw')
break
case 'text':
parser = require('./lib/types/text')
break
case 'urlencoded':
parser = require('./lib/types/urlencoded')
break
}
// store to prevent invoking require()
return (parsers[parserName] = parser)
}

@ -0,0 +1,205 @@
/*!
* body-parser
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
* @private
*/
var createError = require('http-errors')
var destroy = require('destroy')
var getBody = require('raw-body')
var iconv = require('iconv-lite')
var onFinished = require('on-finished')
var unpipe = require('unpipe')
var zlib = require('zlib')
/**
* Module exports.
*/
module.exports = read
/**
* Read a request into a buffer and parse.
*
* @param {object} req
* @param {object} res
* @param {function} next
* @param {function} parse
* @param {function} debug
* @param {object} options
* @private
*/
function read (req, res, next, parse, debug, options) {
var length
var opts = options
var stream
// flag as parsed
req._body = true
// read options
var encoding = opts.encoding !== null
? opts.encoding
: null
var verify = opts.verify
try {
// get the content stream
stream = contentstream(req, debug, opts.inflate)
length = stream.length
stream.length = undefined
} catch (err) {
return next(err)
}
// set raw-body options
opts.length = length
opts.encoding = verify
? null
: encoding
// assert charset is supported
if (opts.encoding === null && encoding !== null && !iconv.encodingExists(encoding)) {
return next(createError(415, 'unsupported charset "' + encoding.toUpperCase() + '"', {
charset: encoding.toLowerCase(),
type: 'charset.unsupported'
}))
}
// read body
debug('read body')
getBody(stream, opts, function (error, body) {
if (error) {
var _error
if (error.type === 'encoding.unsupported') {
// echo back charset
_error = createError(415, 'unsupported charset "' + encoding.toUpperCase() + '"', {
charset: encoding.toLowerCase(),
type: 'charset.unsupported'
})
} else {
// set status code on error
_error = createError(400, error)
}
// unpipe from stream and destroy
if (stream !== req) {
unpipe(req)
destroy(stream, true)
}
// read off entire request
dump(req, function onfinished () {
next(createError(400, _error))
})
return
}
// verify
if (verify) {
try {
debug('verify body')
verify(req, res, body, encoding)
} catch (err) {
next(createError(403, err, {
body: body,
type: err.type || 'entity.verify.failed'
}))
return
}
}
// parse
var str = body
try {
debug('parse body')
str = typeof body !== 'string' && encoding !== null
? iconv.decode(body, encoding)
: body
req.body = parse(str)
} catch (err) {
next(createError(400, err, {
body: str,
type: err.type || 'entity.parse.failed'
}))
return
}
next()
})
}
/**
* Get the content stream of the request.
*
* @param {object} req
* @param {function} debug
* @param {boolean} [inflate=true]
* @return {object}
* @api private
*/
function contentstream (req, debug, inflate) {
var encoding = (req.headers['content-encoding'] || 'identity').toLowerCase()
var length = req.headers['content-length']
var stream
debug('content-encoding "%s"', encoding)
if (inflate === false && encoding !== 'identity') {
throw createError(415, 'content encoding unsupported', {
encoding: encoding,
type: 'encoding.unsupported'
})
}
switch (encoding) {
case 'deflate':
stream = zlib.createInflate()
debug('inflate body')
req.pipe(stream)
break
case 'gzip':
stream = zlib.createGunzip()
debug('gunzip body')
req.pipe(stream)
break
case 'identity':
stream = req
stream.length = length
break
default:
throw createError(415, 'unsupported content encoding "' + encoding + '"', {
encoding: encoding,
type: 'encoding.unsupported'
})
}
return stream
}
/**
* Dump the contents of a request.
*
* @param {object} req
* @param {function} callback
* @api private
*/
function dump (req, callback) {
if (onFinished.isFinished(req)) {
callback(null)
} else {
onFinished(req, callback)
req.resume()
}
}

@ -0,0 +1,236 @@
/*!
* body-parser
* Copyright(c) 2014 Jonathan Ong
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
* @private
*/
var bytes = require('bytes')
var contentType = require('content-type')
var createError = require('http-errors')
var debug = require('debug')('body-parser:json')
var read = require('../read')
var typeis = require('type-is')
/**
* Module exports.
*/
module.exports = json
/**
* RegExp to match the first non-space in a string.
*
* Allowed whitespace is defined in RFC 7159:
*
* ws = *(
* %x20 / ; Space
* %x09 / ; Horizontal tab
* %x0A / ; Line feed or New line
* %x0D ) ; Carriage return
*/
var FIRST_CHAR_REGEXP = /^[\x20\x09\x0a\x0d]*([^\x20\x09\x0a\x0d])/ // eslint-disable-line no-control-regex
/**
* Create a middleware to parse JSON bodies.
*
* @param {object} [options]
* @return {function}
* @public
*/
function json (options) {
var opts = options || {}
var limit = typeof opts.limit !== 'number'
? bytes.parse(opts.limit || '100kb')
: opts.limit
var inflate = opts.inflate !== false
var reviver = opts.reviver
var strict = opts.strict !== false
var type = opts.type || 'application/json'
var verify = opts.verify || false
if (verify !== false && typeof verify !== 'function') {
throw new TypeError('option verify must be function')
}
// create the appropriate type checking function
var shouldParse = typeof type !== 'function'
? typeChecker(type)
: type
function parse (body) {
if (body.length === 0) {
// special-case empty json body, as it's a common client-side mistake
// TODO: maybe make this configurable or part of "strict" option
return {}
}
if (strict) {
var first = firstchar(body)
if (first !== '{' && first !== '[') {
debug('strict violation')
throw createStrictSyntaxError(body, first)
}
}
try {
debug('parse json')
return JSON.parse(body, reviver)
} catch (e) {
throw normalizeJsonSyntaxError(e, {
message: e.message,
stack: e.stack
})
}
}
return function jsonParser (req, res, next) {
if (req._body) {
debug('body already parsed')
next()
return
}
req.body = req.body || {}
// skip requests without bodies
if (!typeis.hasBody(req)) {
debug('skip empty body')
next()
return
}
debug('content-type %j', req.headers['content-type'])
// determine if request should be parsed
if (!shouldParse(req)) {
debug('skip parsing')
next()
return
}
// assert charset per RFC 7159 sec 8.1
var charset = getCharset(req) || 'utf-8'
if (charset.slice(0, 4) !== 'utf-') {
debug('invalid charset')
next(createError(415, 'unsupported charset "' + charset.toUpperCase() + '"', {
charset: charset,
type: 'charset.unsupported'
}))
return
}
// read
read(req, res, next, parse, debug, {
encoding: charset,
inflate: inflate,
limit: limit,
verify: verify
})
}
}
/**
* Create strict violation syntax error matching native error.
*
* @param {string} str
* @param {string} char
* @return {Error}
* @private
*/
function createStrictSyntaxError (str, char) {
var index = str.indexOf(char)
var partial = index !== -1
? str.substring(0, index) + '#'
: ''
try {
JSON.parse(partial); /* istanbul ignore next */ throw new SyntaxError('strict violation')
} catch (e) {
return normalizeJsonSyntaxError(e, {
message: e.message.replace('#', char),
stack: e.stack
})
}
}
/**
* Get the first non-whitespace character in a string.
*
* @param {string} str
* @return {function}
* @private
*/
function firstchar (str) {
var match = FIRST_CHAR_REGEXP.exec(str)
return match
? match[1]
: undefined
}
/**
* Get the charset of a request.
*
* @param {object} req
* @api private
*/
function getCharset (req) {
try {
return (contentType.parse(req).parameters.charset || '').toLowerCase()
} catch (e) {
return undefined
}
}
/**
* Normalize a SyntaxError for JSON.parse.
*
* @param {SyntaxError} error
* @param {object} obj
* @return {SyntaxError}
*/
function normalizeJsonSyntaxError (error, obj) {
var keys = Object.getOwnPropertyNames(error)
for (var i = 0; i < keys.length; i++) {
var key = keys[i]
if (key !== 'stack' && key !== 'message') {
delete error[key]
}
}
// replace stack before message for Node.js 0.10 and below
error.stack = obj.stack.replace(error.message, obj.message)
error.message = obj.message
return error
}
/**
* Get the simple type checker.
*
* @param {string} type
* @return {function}
*/
function typeChecker (type) {
return function checkType (req) {
return Boolean(typeis(req, type))
}
}

@ -0,0 +1,101 @@
/*!
* body-parser
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
*/
var bytes = require('bytes')
var debug = require('debug')('body-parser:raw')
var read = require('../read')
var typeis = require('type-is')
/**
* Module exports.
*/
module.exports = raw
/**
* Create a middleware to parse raw bodies.
*
* @param {object} [options]
* @return {function}
* @api public
*/
function raw (options) {
var opts = options || {}
var inflate = opts.inflate !== false
var limit = typeof opts.limit !== 'number'
? bytes.parse(opts.limit || '100kb')
: opts.limit
var type = opts.type || 'application/octet-stream'
var verify = opts.verify || false
if (verify !== false && typeof verify !== 'function') {
throw new TypeError('option verify must be function')
}
// create the appropriate type checking function
var shouldParse = typeof type !== 'function'
? typeChecker(type)
: type
function parse (buf) {
return buf
}
return function rawParser (req, res, next) {
if (req._body) {
debug('body already parsed')
next()
return
}
req.body = req.body || {}
// skip requests without bodies
if (!typeis.hasBody(req)) {
debug('skip empty body')
next()
return
}
debug('content-type %j', req.headers['content-type'])
// determine if request should be parsed
if (!shouldParse(req)) {
debug('skip parsing')
next()
return
}
// read
read(req, res, next, parse, debug, {
encoding: null,
inflate: inflate,
limit: limit,
verify: verify
})
}
}
/**
* Get the simple type checker.
*
* @param {string} type
* @return {function}
*/
function typeChecker (type) {
return function checkType (req) {
return Boolean(typeis(req, type))
}
}

@ -0,0 +1,121 @@
/*!
* body-parser
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
*/
var bytes = require('bytes')
var contentType = require('content-type')
var debug = require('debug')('body-parser:text')
var read = require('../read')
var typeis = require('type-is')
/**
* Module exports.
*/
module.exports = text
/**
* Create a middleware to parse text bodies.
*
* @param {object} [options]
* @return {function}
* @api public
*/
function text (options) {
var opts = options || {}
var defaultCharset = opts.defaultCharset || 'utf-8'
var inflate = opts.inflate !== false
var limit = typeof opts.limit !== 'number'
? bytes.parse(opts.limit || '100kb')
: opts.limit
var type = opts.type || 'text/plain'
var verify = opts.verify || false
if (verify !== false && typeof verify !== 'function') {
throw new TypeError('option verify must be function')
}
// create the appropriate type checking function
var shouldParse = typeof type !== 'function'
? typeChecker(type)
: type
function parse (buf) {
return buf
}
return function textParser (req, res, next) {
if (req._body) {
debug('body already parsed')
next()
return
}
req.body = req.body || {}
// skip requests without bodies
if (!typeis.hasBody(req)) {
debug('skip empty body')
next()
return
}
debug('content-type %j', req.headers['content-type'])
// determine if request should be parsed
if (!shouldParse(req)) {
debug('skip parsing')
next()
return
}
// get charset
var charset = getCharset(req) || defaultCharset
// read
read(req, res, next, parse, debug, {
encoding: charset,
inflate: inflate,
limit: limit,
verify: verify
})
}
}
/**
* Get the charset of a request.
*
* @param {object} req
* @api private
*/
function getCharset (req) {
try {
return (contentType.parse(req).parameters.charset || '').toLowerCase()
} catch (e) {
return undefined
}
}
/**
* Get the simple type checker.
*
* @param {string} type
* @return {function}
*/
function typeChecker (type) {
return function checkType (req) {
return Boolean(typeis(req, type))
}
}

@ -0,0 +1,284 @@
/*!
* body-parser
* Copyright(c) 2014 Jonathan Ong
* Copyright(c) 2014-2015 Douglas Christopher Wilson
* MIT Licensed
*/
'use strict'
/**
* Module dependencies.
* @private
*/
var bytes = require('bytes')
var contentType = require('content-type')
var createError = require('http-errors')
var debug = require('debug')('body-parser:urlencoded')
var deprecate = require('depd')('body-parser')
var read = require('../read')
var typeis = require('type-is')
/**
* Module exports.
*/
module.exports = urlencoded
/**
* Cache of parser modules.
*/
var parsers = Object.create(null)
/**
* Create a middleware to parse urlencoded bodies.
*
* @param {object} [options]
* @return {function}
* @public
*/
function urlencoded (options) {
var opts = options || {}
// notice because option default will flip in next major
if (opts.extended === undefined) {
deprecate('undefined extended: provide extended option')
}
var extended = opts.extended !== false
var inflate = opts.inflate !== false
var limit = typeof opts.limit !== 'number'
? bytes.parse(opts.limit || '100kb')
: opts.limit
var type = opts.type || 'application/x-www-form-urlencoded'
var verify = opts.verify || false
if (verify !== false && typeof verify !== 'function') {
throw new TypeError('option verify must be function')
}
// create the appropriate query parser
var queryparse = extended
? extendedparser(opts)
: simpleparser(opts)
// create the appropriate type checking function
var shouldParse = typeof type !== 'function'
? typeChecker(type)
: type
function parse (body) {
return body.length
? queryparse(body)
: {}
}
return function urlencodedParser (req, res, next) {
if (req._body) {
debug('body already parsed')
next()
return
}
req.body = req.body || {}
// skip requests without bodies
if (!typeis.hasBody(req)) {
debug('skip empty body')
next()
return
}
debug('content-type %j', req.headers['content-type'])
// determine if request should be parsed
if (!shouldParse(req)) {
debug('skip parsing')
next()
return
}
// assert charset
var charset = getCharset(req) || 'utf-8'
if (charset !== 'utf-8') {
debug('invalid charset')
next(createError(415, 'unsupported charset "' + charset.toUpperCase() + '"', {
charset: charset,
type: 'charset.unsupported'
}))
return
}
// read
read(req, res, next, parse, debug, {
debug: debug,
encoding: charset,
inflate: inflate,
limit: limit,
verify: verify
})
}
}
/**
* Get the extended query parser.
*
* @param {object} options
*/
function extendedparser (options) {
var parameterLimit = options.parameterLimit !== undefined
? options.parameterLimit
: 1000
var parse = parser('qs')
if (isNaN(parameterLimit) || parameterLimit < 1) {
throw new TypeError('option parameterLimit must be a positive number')
}
if (isFinite(parameterLimit)) {
parameterLimit = parameterLimit | 0
}
return function queryparse (body) {
var paramCount = parameterCount(body, parameterLimit)
if (paramCount === undefined) {
debug('too many parameters')
throw createError(413, 'too many parameters', {
type: 'parameters.too.many'
})
}
var arrayLimit = Math.max(100, paramCount)
debug('parse extended urlencoding')
return parse(body, {
allowPrototypes: true,
arrayLimit: arrayLimit,
depth: Infinity,
parameterLimit: parameterLimit
})
}
}
/**
* Get the charset of a request.
*
* @param {object} req
* @api private
*/
function getCharset (req) {
try {
return (contentType.parse(req).parameters.charset || '').toLowerCase()
} catch (e) {
return undefined
}
}
/**
* Count the number of parameters, stopping once limit reached
*
* @param {string} body
* @param {number} limit
* @api private
*/
function parameterCount (body, limit) {
var count = 0
var index = 0
while ((index = body.indexOf('&', index)) !== -1) {
count++
index++
if (count === limit) {
return undefined
}
}
return count
}
/**
* Get parser for module name dynamically.
*
* @param {string} name
* @return {function}
* @api private
*/
function parser (name) {
var mod = parsers[name]
if (mod !== undefined) {
return mod.parse
}
// this uses a switch for static require analysis
switch (name) {
case 'qs':
mod = require('qs')
break
case 'querystring':
mod = require('querystring')
break
}
// store to prevent invoking require()
parsers[name] = mod
return mod.parse
}
/**
* Get the simple query parser.
*
* @param {object} options
*/
function simpleparser (options) {
var parameterLimit = options.parameterLimit !== undefined
? options.parameterLimit
: 1000
var parse = parser('querystring')
if (isNaN(parameterLimit) || parameterLimit < 1) {
throw new TypeError('option parameterLimit must be a positive number')
}
if (isFinite(parameterLimit)) {
parameterLimit = parameterLimit | 0
}
return function queryparse (body) {
var paramCount = parameterCount(body, parameterLimit)
if (paramCount === undefined) {
debug('too many parameters')
throw createError(413, 'too many parameters', {
type: 'parameters.too.many'
})
}
debug('parse urlencoding')
return parse(body, undefined, undefined, { maxKeys: parameterLimit })
}
}
/**
* Get the simple type checker.
*
* @param {string} type
* @return {function}
*/
function typeChecker (type) {
return function checkType (req) {
return Boolean(typeis(req, type))
}
}

@ -0,0 +1,56 @@
{
"name": "body-parser",
"description": "Node.js body parsing middleware",
"version": "1.20.0",
"contributors": [
"Douglas Christopher Wilson <doug@somethingdoug.com>",
"Jonathan Ong <me@jongleberry.com> (http://jongleberry.com)"
],
"license": "MIT",
"repository": "expressjs/body-parser",
"dependencies": {
"bytes": "3.1.2",
"content-type": "~1.0.4",
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"on-finished": "2.4.1",
"qs": "6.10.3",
"raw-body": "2.5.1",
"type-is": "~1.6.18",
"unpipe": "1.0.0"
},
"devDependencies": {
"eslint": "7.32.0",
"eslint-config-standard": "14.1.1",
"eslint-plugin-import": "2.25.4",
"eslint-plugin-markdown": "2.2.1",
"eslint-plugin-node": "11.1.0",
"eslint-plugin-promise": "5.2.0",
"eslint-plugin-standard": "4.1.0",
"methods": "1.1.2",
"mocha": "9.2.2",
"nyc": "15.1.0",
"safe-buffer": "5.2.1",
"supertest": "6.2.2"
},
"files": [
"lib/",
"LICENSE",
"HISTORY.md",
"SECURITY.md",
"index.js"
],
"engines": {
"node": ">= 0.8",
"npm": "1.2.8000 || >= 1.4.16"
},
"scripts": {
"lint": "eslint .",
"test": "mocha --require test/support/env --reporter spec --check-leaks --bail test/",
"test-ci": "nyc --reporter=lcov --reporter=text npm test",
"test-cov": "nyc --reporter=html --reporter=text npm test"
}
}

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2016, 2018 Linus Unnebäck
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
