Entering gradle ——> run ./gradlew
For all the girls I loved
Version 0.24.2
Start with this e-book, an introduction written by a Harvard grad born after 1995: https://dafriedman97.github.io/mlbook/content/c1/concept.html
This Chinese-language version is also decent: https://lulaoshi.info/machine-learning/linear-model/minimise-loss-function.html
Terminology mapping: linear regression = 线性回归, loss function = 损失函数
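As a quick illustration of what minimising a loss function means, here is a sketch (the function name and data are mine, not from the books above) that fits a one-variable linear regression by gradient descent on the mean-squared-error loss:

```python
def fit_line(xs, ys, lr=0.01, steps=2000):
    # Minimise the MSE loss of y = w*x + b by gradient descent.
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1; the fit should recover roughly w=2, b=1.
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```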
Example:
```sql
DELIMITER $$
CREATE PROCEDURE loopTables111 ()
BEGIN
    DECLARE done INT;
    DECLARE TableName VARCHAR(17);
    DECLARE TablesCursor CURSOR FOR
        SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
        WHERE SUBSTRING_INDEX(TABLE_NAME, '_', -1) = '20210826';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN TablesCursor;
    MainLoop: LOOP
        FETCH TablesCursor INTO TableName;
        -- without this check the loop never terminates once the cursor is exhausted
        IF done = 1 THEN
            LEAVE MainLoop;
        END IF;
        SELECT TableName;
    END LOOP;
    CLOSE TablesCursor;
END$$
DELIMITER ;
```
A web crawler (网络爬虫) is also called a spider (網路蜘蛛).
Python has several libraries for this:
BeautifulSoup: Beautiful Soup is a library for parsing HTML and XML documents. Requests (which handles HTTP sessions and makes HTTP requests) in combination with BeautifulSoup (a parsing library) is the best tooling for small, quick web-scraping jobs. For scraping simpler, static pages with fewer JS-related complexities, this tool is probably what you're looking for. If you want to know more about BeautifulSoup, please refer to my previous guide on Extracting Data from HTML with BeautifulSoup.
lxml is a high-performance, fast, feature-rich parsing library and another prominent alternative to BeautifulSoup.
Scrapy: Scrapy is a web crawling framework that provides a complete tool for scraping. In Scrapy, we create Spiders, which are Python classes that define how a particular site or set of sites will be scraped. So, if you want to build a robust, concurrent, scalable, large-scale scraper, then Scrapy is an excellent choice. Scrapy also comes with a bunch of middlewares for cookies, redirects, sessions, caching, etc. that help you deal with the different complexities you might come across. If you want to know more about Scrapy, please refer to my previous guide on Crawling the Web with Python and Scrapy.
A simple Scrapy example: https://www.digitalocean.com/community/tutorials/how-to-crawl-a-web-page-with-scrapy-and-python-3
The example above does not require login.
If login is required, use Scrapy's FormRequest.
Take https://ktu3333.asuscomm.com:9085/enLogin.htm as an example.
Tested: login succeeds.
Scrapy only fetches static content; since the target page uses JS and AJAX, it has to be combined with Selenium and a WebDriver.
For the reason, see: https://www.geeksforgeeks.org/scrape-content-from-dynamic-websites/
How to install ChromeDriver on macOS: https://www.swtestacademy.com/install-chrome-driver-on-mac/
2021-07-27: no longer using Scrapy for login, because after Scrapy logs in it is not in the same session as Selenium, so just log in directly with Selenium.
Find elements with XPath; note how to write an XPath expression when it takes a parameter.
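A small sketch of that parameterised-XPath idea (the helper name is mine): the row and column indices have to be spliced into the expression as strings, since XPath indices are 1-based and Python variables cannot appear inside the quoted literal directly.

```python
def cell_xpath(table_id, row, col):
    # Build the XPath for cell (row, col) of the table with the given id.
    # XPath indices are 1-based: tr[1] is the first row.
    return f'//*[@id="{table_id}"]//tr[{row}]/td[{col}]'

print(cell_xpath("OverviewInfo", 1, 2))
# → //*[@id="OverviewInfo"]//tr[1]/td[2]
```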
The code that runs so far (the scheduling feature is not added yet; Python version 3.8.6):
```python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time

options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
driver = webdriver.Chrome(chrome_options=options)
# driver = webdriver.Chrome('/usr/local/bin/chromedriver')

driver.get("https://ktu3333.asuscomm.com:9085/enLogin.htm")
time.sleep(5)
print("login page finish loaded")

# fill in the username and password, then click the login button
driver.find_element_by_id("loginname").send_keys("TheStringOfUsername")
driver.find_element_by_id("loginpass").send_keys("TheStringOfPassword")
driver.find_element_by_id("login_button").click()
time.sleep(5)
print("status page finish loaded")

driver.get("https://ktu3333.asuscomm.com:9085/enHBSim.htm")
time.sleep(20)
print("redirect success")

try:
    tbody = driver.find_element_by_xpath('//*[@id="OverviewInfo"]')
    trows = driver.find_elements_by_xpath('//*[@id="OverviewInfo"]/tr')
    print("the tbody exist")
    print("total rows is :")
    print(len(trows))
    for i in range(1, len(trows) + 1):
        # XPath row indices are 1-based, hence the interpolated str(i)
        col2 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[2]')
        col3 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[3]')
        col4 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[4]')
        col5 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[5]')
        col6 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[6]')
        print(col2.text, '\t', col3.text, '\t', col4.text, '\t', col5.text, '\t', col6.text)
except NoSuchElementException:
    print("Element does not exist")

driver.close()
```
Improved version: collect the results into a JSON array.
```python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time
import json

options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
driver = webdriver.Chrome(chrome_options=options)

driver.get("https://ktu3333.asuscomm.com:9085/enLogin.htm")
time.sleep(5)
print("login page finish loaded")

# fill in the credentials, then click the login button
driver.find_element_by_id("loginname").send_keys("StringOfUserName")
driver.find_element_by_id("loginpass").send_keys("StringOfPassword")
driver.find_element_by_id("login_button").click()
time.sleep(5)
print("status page finish loaded")

driver.get("https://ktu3333.asuscomm.com:9085/enHBSim.htm")
time.sleep(20)
print("redirect success")

try:
    tbody = driver.find_element_by_xpath('//*[@id="OverviewInfo"]')
    trows = driver.find_elements_by_xpath('//*[@id="OverviewInfo"]/tr')
    print("the tbody exist")
    print("total rows is :")
    print(len(trows))

    totalList = []
    for i in range(1, len(trows) + 1):
        # XPath row indices are 1-based, hence the interpolated str(i)
        col2 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[2]')
        col3 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[3]')
        col4 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[4]')
        col5 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[5]')
        col6 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[6]')
        print(col2.text, '\t', col3.text, '\t', col4.text, '\t', col5.text, '\t', col6.text)

        singleRecord = {'SIM': col2.text,
                        'Port Status': col3.text,
                        'Phone Number': col4.text,
                        'Last matched Balance': col5.text,
                        'Calculated Balance': col6.text}
        totalList.append(singleRecord)

    to_json = json.dumps(totalList)
    print(to_json)
except NoSuchElementException:
    print("Element does not exist")

driver.close()
```
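The records-to-JSON step can be sketched independently of Selenium (the function and variable names here are mine, for illustration):

```python
import json

def rows_to_json(rows):
    # Each row is a (sim, port_status, phone, last_balance, calc_balance) tuple,
    # matching the five table columns scraped above.
    keys = ['SIM', 'Port Status', 'Phone Number',
            'Last matched Balance', 'Calculated Balance']
    return json.dumps([dict(zip(keys, row)) for row in rows])

print(rows_to_json([("1", "up", "5550100", "10", "9")]))
```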
Wireshark version: 3.4.5
macOS version: 11.3.1
Create a file named sslkeylog.log in /users/mac/documents, with permissions 777.
Configure Wireshark accordingly (point its TLS (Pre)-Master-Secret log filename preference at sslkeylog.log).
Run on the command line:

```shell
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log
```
This launches a fresh Chrome instance; open Wireshark and you can then capture that Chrome's HTTP and HTTPS packets.
However, Flutter launches its web app with:

```shell
flutter run -d chrome
```

How do you pass Chrome arguments with this method? Something like:

```shell
flutter run -d chrome --chrome-args="--user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log"
```
This page raises the same question: https://github.com/dart-lang/webdev/issues/1080
The workaround: first run flutter run -d chrome, then open the Flutter page's address in the Chrome instance started with:

```shell
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log
```
```yaml
get: ^4.1.4
```
```shell
flutter pub get -v
```

Fetch the package.
```shell
flutter pub global activate get_cli
```

Activate the CLI tool.
```shell
get -v
```

Check whether it prints the GetX version; if the version shows correctly, everything up to this point is working.
To add a controller for the sms.dart file, you can run:
```shell
get create controller:bulksms on widgets/layoutget
```
The command above comes from: https://github.com/jonataslaw/get_cli
SizedBox.expand results in the DataTable taking an infinite height, which the SingleChildScrollView won't like. Since you only want to span the width of the parent, you can use a LayoutBuilder to get the size of the parent you care about and then wrap the DataTable in a ConstrainedBox.
```dart
Widget build(BuildContext context) {
  return Scaffold(
    body: LayoutBuilder(
      builder: (context, constraints) => SingleChildScrollView(
        child: Column(
          children: [
            const Text('My Text'),
            Container(
              alignment: Alignment.topLeft,
              child: SingleChildScrollView(
                scrollDirection: Axis.horizontal,
                child: ConstrainedBox(
                  constraints: BoxConstraints(minWidth: constraints.minWidth),
                  child: DataTable(columns: [], rows: []),
                ),
              ),
            ),
          ],
        ),
      ),
    ),
  );
}
```
string.codeUnits is an array of UTF-16 code units, so it could look like this:

```dart
int maxLengthOfTextField(String text) {
  final int maxBits = 128;
  List<int> unicodeSymbols =
      text.codeUnits.where((ch) => ch > maxBits).toList();
  return unicodeSymbols.length > 0 ? 160 : 70;
}

final textFieldController = TextEditingController();

TextField(
  controller: textFieldController,
  maxLength: maxLengthOfTextField(textFieldController.text),
);
```
After modification it becomes:
```dart
child: TextField(
  decoration: InputDecoration(
    hintStyle: TextStyle(fontSize: 17),
    hintText: 'Search your trips',
    // suffixIcon: Icon(Icons.search),
    counterText: "",
    border: InputBorder.none,
    contentPadding: EdgeInsets.all(20),
  ),
  maxLength: 480,
  controller: textController,
  onChanged: _onChanged,
),
```
```dart
_onChanged(String value) {
  final int maxBits = 128;
  List<int> unicodeSymbols =
      value.codeUnits.where((ch) => ch > maxBits).toList();
  if (unicodeSymbols.length > 0) {
    setState(() {
      ifContainUnicode = "unicode";
    });
  } else {
    setState(() {
      ifContainUnicode = "bit";
    });
  }
  setState(() {
    charLength = value.length;
  });
}
```
This page is also worth a look: https://dev.to/stack-labs/flutter-utf8-textfield-length-limiter-and-char-counter-31o7
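The same code-unit check can be written in plain Python (the function name is mine; like the Dart above, anything above code unit 128 counts as "unicode"):

```python
def contains_non_ascii(text):
    # Mirrors the Dart check: any code unit above 128 marks the text as unicode.
    return any(ord(ch) > 128 for ch in text)

print(contains_non_ascii("hello"))  # → False
print(contains_non_ascii("你好"))    # → True
```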
```shell
flutter config --enable-web
```

This command enables web development.
```shell
flutter channel
```
```shell
flutter channel master
```
https://stackoverflow.com/questions/45559580/expansion-panel-list-in-flutter
https://whereisdarran.com/2020/02/charts-for-flutter-and-flutter-web/
https://medium.com/flutter-community/using-javascript-code-in-flutter-web-903de54a2000
https://fireship.io/snippets/using-js-with-flutter-web/
https://medium.com/codechai/flutter-web-and-paginateddatatable-3779da7683e