scikit-learn Study Notes

Version 0.24.2


 

Start with this e-book, an introduction written by a Harvard author born after 1995: https://dafriedman97.github.io/mlbook/content/c1/concept.html

 

This Chinese-language version is also decent: https://lulaoshi.info/machine-learning/linear-model/minimise-loss-function.html

 


Terminology reference:

linear regression: 线性回归
loss function: 损失函数
Poisson distribution: 泊松分布
Poisson function: 泊松函数
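To connect the first two terms above: fitting a linear regression means minimizing a loss function. A minimal sketch in plain Python, using gradient descent on the mean squared error loss (the data points are made up for illustration):

```python
# Toy data generated from y = 2x + 1 (illustrative only).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

def mse(w, b):
    """Mean squared error loss for the model y ≈ w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def fit(steps=5000, lr=0.01):
    """Minimize the MSE loss by gradient descent on w and b."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the MSE with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = fit()  # converges close to w=2, b=1
```

scikit-learn's `LinearRegression` solves the same problem in closed form; the loop above just makes the "minimize the loss" idea explicit.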
———————————
An introductory example of a classifier: https://www.cnblogs.com/qcloud1001/p/9405730.html
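Before reading the tutorial, the core idea of a classifier can be shown with a toy nearest-centroid classifier in plain Python (scikit-learn ships the same idea as `sklearn.neighbors.NearestCentroid`); the data here is made up:

```python
# Nearest-centroid classifier: predict the class whose mean point
# (centroid) is closest to the sample.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def fit(X, y):
    """Compute one centroid per class label."""
    classes = sorted(set(y))
    return {c: centroid([x for x, label in zip(X, y) if label == c])
            for c in classes}

def predict(model, x):
    """Return the label of the nearest centroid (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda c: dist2(model[c], x))

# Two well-separated clusters, labeled "a" and "b".
X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 4.8)]
y = ["a", "a", "b", "b"]
model = fit(X, y)
```

`predict(model, (1.1, 0.9))` lands in cluster "a"; the tutorial linked above does the same thing with real features and a real estimator.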

 

How to write MySQL 5.1 stored procedures in HeidiSQL

Example:

 

How to build a Python scheduled crawler with login

A web crawler is also known as a spider.

Python has these libraries:

  • BeautifulSoup: Beautiful Soup is a library for parsing HTML and XML documents. Requests (which handles HTTP sessions and makes HTTP requests) combined with BeautifulSoup (a parsing library) is the best toolset for small and quick web scraping jobs. For simpler, static pages with little JS-related complexity, this tool is probably what you’re looking for. If you want to know more about BeautifulSoup, please refer to my previous guide on Extracting Data from HTML with BeautifulSoup.

    lxml is a high-performance, straightforward, fast, and feature-rich parsing library which is another prominent alternative to BeautifulSoup.

  • Scrapy: Scrapy is a web crawling framework that provides a complete tool for scraping. In Scrapy, we create Spiders, which are Python classes that define how a particular site (or group of sites) will be scraped. So, if you want to build a robust, concurrent, scalable, large-scale scraper, then Scrapy is an excellent choice. Also, Scrapy comes with a bunch of middlewares for cookies, redirects, sessions, caching, etc. that help you deal with the different complexities you might come across. If you want to know more about Scrapy, please refer to my previous guide on Crawling the Web with Python and Scrapy.

  • Selenium: For heavy-JS-rendered pages or very sophisticated websites, Selenium WebDriver is the best tool to choose. Selenium is a tool that automates web browsers, also known as a web driver. With it, you can open an automated Google Chrome/Mozilla Firefox window that visits a URL and navigates the links. However, it is not as efficient as the tools we have discussed so far. This is the tool to use when all other doors to web scraping are closed and you still want the data that matters to you. If you want to know more about Selenium, please refer to Web Scraping with Selenium.
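The parsing step these libraries perform can be sketched with the standard library alone; `html.parser` below does roughly what `soup.find_all('a')` does in BeautifulSoup, just with a less friendly API (the HTML snippet is made up):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, similar in spirit to
    BeautifulSoup's soup.find_all('a')."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p><a href="/one">one</a> text <a href="/two">two</a></p>'
parser = LinkExtractor()
parser.feed(html)
# parser.links is now ["/one", "/two"]
```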

 

A simple Scrapy example: https://www.digitalocean.com/community/tutorials/how-to-crawl-a-web-page-with-scrapy-and-python-3

The example above does not require login.

 


If login is required, use Scrapy's FormRequest.

Take https://ktu3333.asuscomm.com:9085/enLogin.htm as an example.

Login tested successfully.
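What FormRequest does under the hood is an HTTP form POST. A stdlib sketch of that flow (the field names `username`/`password` are assumptions — inspect the real login form to find the actual names; the network call is defined but not executed here):

```python
import urllib.parse
import urllib.request

LOGIN_URL = "https://ktu3333.asuscomm.com:9085/enLogin.htm"

def encode_form(fields):
    """URL-encode the login form body, as a browser (or FormRequest) would."""
    return urllib.parse.urlencode(fields).encode("ascii")

def login(user, password):
    """POST the login form and return the response (hypothetical field names)."""
    body = encode_form({"username": user, "password": password})
    req = urllib.request.Request(LOGIN_URL, data=body, method="POST")
    return urllib.request.urlopen(req)  # performs the network call
```

In Scrapy itself the idiomatic way is `FormRequest.from_response(response, formdata={...}, callback=...)`, which also picks up hidden form fields from the page.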


Scrapy only fetches static content; when the target page contains JS and AJAX, it needs to be used together with Selenium and a WebDriver.

For the reason, see: https://www.geeksforgeeks.org/scrape-content-from-dynamic-websites/


How to install ChromeDriver on macOS:

https://www.swtestacademy.com/install-chrome-driver-on-mac/

 


2021-07-27: no longer using Scrapy for login, because after Scrapy logs in it is not in the same session as Selenium, so log in directly with Selenium instead.


Use XPath to find elements; note the syntax when the XPath takes parameters.
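A sketch of the direct-Selenium login described above (Selenium 4 API; the URL, form field names, and submit button locator are assumptions), with an f-string helper for parameterized XPath. Note the quoting: this simple helper breaks if the parameter itself contains a single quote.

```python
def input_xpath(name):
    """Build an XPath for an <input> by name attribute; the parameter
    is interpolated with an f-string (assumes no quotes in `name`)."""
    return f"//input[@name='{name}']"

def login(url, user, password):
    """Open a browser, fill in the login form, and return the driver."""
    # Imported inside the function so the XPath helper above is usable
    # even without Selenium installed; requires chromedriver on PATH.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get(url)
    driver.find_element(By.XPATH, input_xpath("username")).send_keys(user)
    driver.find_element(By.XPATH, input_xpath("password")).send_keys(password)
    driver.find_element(By.XPATH, "//input[@type='submit']").click()
    return driver  # reuse this driver: it holds the logged-in session
```

Keeping the logged-in `driver` around is what solves the session problem noted above.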


Code that runs so far (the scheduling feature is not yet added), Python version 3.8.6:
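For the missing scheduling feature, the stdlib `sched` module is enough — a sketch where the job re-schedules itself after each run (`task` is a stand-in for the actual crawl function; for an hourly crawl you would pass `interval=3600`):

```python
import sched
import time

def run_periodically(task, interval, iterations):
    """Run `task` every `interval` seconds, `iterations` times, blocking."""
    s = sched.scheduler(time.time, time.sleep)

    def step(remaining):
        task()
        if remaining > 1:
            # Re-schedule the next run.
            s.enter(interval, 1, step, argument=(remaining - 1,))

    s.enter(interval, 1, step, argument=(iterations,))
    s.run()

# Quick demo with a tiny interval: records three runs.
runs = []
run_periodically(lambda: runs.append(1), interval=0.01, iterations=3)
```

For a long-lived crawler, cron or `launchd` on macOS is the more robust choice; this keeps everything in one script.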

 


Improved version: put the results into a JSON array.
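Collecting results into a JSON array is just appending dicts to a list and dumping it with the stdlib `json` module (the record fields here are made up):

```python
import json

results = []

def add_result(name, value):
    """Append one scraped record to the in-memory result list."""
    results.append({"name": name, "value": value})

def to_json():
    """Serialize all records as a JSON array; keep non-ASCII readable."""
    return json.dumps(results, ensure_ascii=False, indent=2)

add_result("temperature", 21.5)
add_result("humidity", 60)
```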

 

How to use Wireshark on a Mac to capture packets from a Flutter web dev page in Chrome

Wireshark version 3.4.5

macOS version 11.3.1

Create a new file sslkeylog.log with permission 777 in /users/mac/documents

Configure Wireshark accordingly (point its TLS (Pre)-Master-Secret log filename preference at that file):

 

 

 

Run from the command line: /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log

This starts a new Chrome instance; then open Wireshark and you can capture this Chrome's HTTP and HTTPS packets.


 

But Flutter starts the web app with:

flutter run -d chrome

How do you pass arguments with this method? For example, like this:

flutter-web-admin-dashboard-ecommerce-main % flutter run -d chrome --chrome-args="--user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log"

 

This page raises the same question: https://github.com/dart-lang/webdev/issues/1080

 

The workaround: first run flutter run -d chrome, then open the Flutter page's address in the Chrome instance started with:

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log

 

 

Flutter: how to add the GetX package to an existing project

  • First, install the package

  • Run

get package

  • Execute

to launch the CLI tool

 

  • Execute

and check whether it prints the GetX version; if the version displays correctly, everything up to this point is working

  • For example, with an existing file at /lib/widgets/layout/sms.dart

to add a controller for the sms.dart file, you can execute:

 

The commands above come from: https://github.com/jonataslaw/get_cli

 

 

 

 

Flutter: how to make a DataTable adapt its width

SizedBox.expand results in the DataTable taking an infinite height which the SingleChildScrollView won’t like. Since you only want to span the width of the parent, you can use a LayoutBuilder to get the size of the parent you care about and then wrap the DataTable in a ConstrainedBox.

From: https://stackoverflow.com/questions/56625052/how-to-make-a-multi-column-flutter-datatable-widget-span-the-full-width

 

 

How to check whether a Flutter TextField contains Unicode characters

 

Source: https://stackoverflow.com/questions/55607305/how-can-i-check-if-a-textfield-contains-unicode-characters-in-flutter-dart

 

After modification it becomes:

 

 

This page is also worth a look: https://dev.to/stack-labs/flutter-utf8-textfield-length-limiter-and-char-counter-31o7
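The underlying check is a regular expression matching characters outside the ASCII range; in Dart it would be a `RegExp` with the same pattern, sketched here in Python for illustration:

```python
import re

# Any character outside the 7-bit ASCII range (0x00-0x7F).
NON_ASCII = re.compile(r"[^\x00-\x7F]")

def contains_unicode(text):
    """True if the text contains any non-ASCII character."""
    return NON_ASCII.search(text) is not None
```

In a Flutter TextField the same pattern can be used either for validation or inside an input formatter to reject such characters as they are typed.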

How to build for the web with Flutter

This command starts web development:

  • Check the current channel

  • Switch the channel

  • The order is: switch the channel first, then start web
  • How to implement submenus in the left navigation bar

https://stackoverflow.com/questions/45559580/expansion-panel-list-in-flutter

  • How to draw charts on a web page

https://whereisdarran.com/2020/02/charts-for-flutter-and-flutter-web/

  • How to execute JS

https://medium.com/flutter-community/using-javascript-code-in-flutter-web-903de54a2000

https://fireship.io/snippets/using-js-with-flutter-web/

 

  • Drawing a table, and a table with pagination

https://medium.com/codechai/flutter-web-and-paginateddatatable-3779da7683e

 

  • How to build a Bootstrap-style page
Bootstrap-style means the page stretches and shrinks as it is resized, automatically adapting to the device size; the layout may change with the page size.
Use this package: responsive_builder: ^0.3.0
———————————
The following is modeled on movider