1. learning_curve(): mainly used to diagnose whether a model is overfitting
2. validation_curve(): mainly used to examine how the model's accuracy changes with different values of a parameter

The examples below come from the book Python Machine Learning; I have changed some of the parameters.

learning_curve

import matplotlib.pyplot as plt
import numpy as np

from sklearn.model_selection import learning_curve
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


# The book's original pipeline used LogisticRegression (hence the name pipe_lr);
# I swapped in an RBF-kernel SVC.
pipe_lr = Pipeline([('scl', StandardScaler()),
                    ('pca', PCA()),
                    ('svc', SVC(kernel='rbf')),
#                     ('clf', LogisticRegression(penalty='l2', random_state=0, solver='lbfgs')),
                    ])

# X_train, y_train come from your own train/test split
# (the book's example uses the Breast Cancer Wisconsin dataset).
train_sizes, train_scores, test_scores =\
                learning_curve(estimator=pipe_lr,
                               X=X_train,
                               y=y_train,
                               train_sizes=np.linspace(0.1, 1.0, 10),
                               cv=10,
                               n_jobs=1)

train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

plt.plot(train_sizes, train_mean,
         color='blue', marker='o',
         markersize=5, label='training accuracy')

plt.fill_between(train_sizes,
                 train_mean + train_std,
                 train_mean - train_std,
                 alpha=0.15, color='blue')

plt.plot(train_sizes, test_mean,
         color='green', linestyle='--',
         marker='s', markersize=5,
         label='validation accuracy')

plt.fill_between(train_sizes,
                 test_mean + test_std,
                 test_mean - test_std,
                 alpha=0.15, color='green')

plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.savefig('learning_curve.png', dpi=300)
plt.show()

As the resulting figure shows, the blue training curve is clearly more accurate than the green validation curve, which indicates overfitting; one way to address it is to add more training data.

validation_curve

from sklearn.model_selection import validation_curve


param_range = ['linear', 'sigmoid', 'poly', 'rbf']
train_scores, test_scores = validation_curve(
                estimator=pipe_lr, 
                X=X_train, 
                y=y_train, 
                param_name='svc__kernel', 
                param_range=param_range)

train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

plt.plot(param_range, train_mean, 
         color='blue', marker='o', 
         markersize=5, label='training accuracy')

plt.fill_between(param_range, train_mean + train_std,
                 train_mean - train_std, alpha=0.15,
                 color='blue')

plt.plot(param_range, test_mean, 
         color='green', linestyle='--', 
         marker='s', markersize=5, 
         label='validation accuracy')

plt.fill_between(param_range, 
                 test_mean + test_std,
                 test_mean - test_std, 
                 alpha=0.15, color='green')

plt.grid()
# The kernel names are categorical, so the book's log-scaled x-axis
# (used for its numeric parameter C) no longer applies.
plt.legend(loc='lower right')
plt.xlabel('Kernel')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.savefig('validation_curve.png', dpi=300)
plt.show()

In the code above I set param_range to ['linear','sigmoid','poly','rbf'] to test how the model's accuracy differs across kernels. One thing to note: because we built the model with a Pipeline, the parameter must be addressed through its pipeline step, so in param_name='svc__kernel' the name starts with the step name svc, followed by two underscores (__) and then the parameter name kernel.
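
If you are unsure which parameter names a pipeline exposes, you can list them with get_params() (a quick check using the pipe_lr defined above):

# Every tunable parameter of the pipeline, in '<step>__<param>' form;
# 'svc__kernel' should appear in this list.
print(sorted(pipe_lr.get_params().keys()))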

LabelEncoder vs OrdinalEncoder

The two do essentially the same job of mapping categories to integers. The real difference is what they are meant for: LabelEncoder encodes target labels and expects a 1-D array (y), while OrdinalEncoder encodes categorical features and expects a 2-D array (X, one column per feature).

Below are the examples from the official documentation:

LabelEncoder

>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> le.fit([1, 2, 2, 6])
LabelEncoder()
>>> le.classes_
array([1, 2, 6])
>>> le.transform([1, 1, 2, 6]) 
array([0, 0, 1, 2]...)
>>> le.inverse_transform([0, 0, 1, 2])
array([1, 1, 2, 6])
>>> le = preprocessing.LabelEncoder()
>>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
LabelEncoder()
>>> list(le.classes_)
['amsterdam', 'paris', 'tokyo']
>>> le.transform(["tokyo", "tokyo", "paris"]) 
array([2, 2, 1]...)
>>> list(le.inverse_transform([2, 2, 1]))
['tokyo', 'tokyo', 'paris']

OrdinalEncoder

>>> from sklearn.preprocessing import OrdinalEncoder
>>> enc = OrdinalEncoder()
>>> X = [['Male', 1], ['Female', 3], ['Female', 2]]
>>> enc.fit(X)
... 
OrdinalEncoder(categories='auto', dtype=<... 'numpy.float64'>)
>>> enc.categories_
[array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)]
>>> enc.transform([['Female', 3], ['Male', 1]])
array([[0., 2.],
       [1., 0.]])
>>> enc.inverse_transform([[1, 0], [0, 1]])
array([['Male', 1],
       ['Female', 2]], dtype=object)

References:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
https://datascience.stackexchange.com/questions/39317/difference-between-ordinalencoder-and-labelencoder

Previously I wrote about deploying Flask with Gunicorn on the BaoTa (宝塔) panel; today I'll cover deploying Flask with Supervisor and Gunicorn. Why do we need Supervisor?

Supervisor is a process-management system. It starts the processes it manages as its own child processes via fork/exec, so when a child process dies unexpectedly, the parent can accurately capture the information about the abnormal exit.
Reference: https://blog.csdn.net/guolindonggld/article/details/83386920

My earlier sites always ran fine, so I never had a reason to use Supervisor. Today, though, I uploaded a rebuilt site, got it running, and logged out of SSH, only to find the site stopped as soon as the session ended... After a lot of googling I learned that Supervisor is needed to manage the Flask process in the background so it does not exit with the SSH session.

Step 1: install Supervisor

sudo apt install supervisor

Step 2: create a conf file

After installing Supervisor you will find a supervisor directory under /etc. Next, create a file named xxx.conf in /etc/supervisor/conf.d/; this file holds the commands used to run your app.

[program:hello_world]
directory=/home/ubuntu/hello_world
command=/home/ubuntu/.env/bin/gunicorn app:app -b localhost:8000
autostart=true
autorestart=true
stderr_logfile=/var/log/hello_world/hello_world.err.log
stdout_logfile=/var/log/hello_world/hello_world.out.log

directory is the location of your project.
command points to where your gunicorn lives; for example, I installed virtualenv on my VPS, so my gunicorn is under /xxxx/py3env/bin.
stderr_logfile and stdout_logfile are the paths of the log files; create a folder named hello_world under /var/log/ first so that Supervisor can write hello_world.err.log and hello_world.out.log.
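
For example (assuming the directory does not exist yet):

$ sudo mkdir -p /var/log/hello_world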

Step 3: run it

Once everything above is in place, we can start it up:

$ sudo supervisorctl reread
$ sudo service supervisor restart
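
Alternatively, after adding or editing conf files, you can apply just the changes without restarting the whole Supervisor daemon:

$ sudo supervisorctl update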

Check the running status:

$ sudo supervisorctl status
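
To restart only this app later (the program name comes from the [program:hello_world] section header):

$ sudo supervisorctl restart hello_world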

And that's it.

Reference: https://medium.com/ymedialabs-innovation/deploy-flask-app-with-nginx-using-gunicorn-and-supervisor-d7a93aa07c18

A site I've been building uses Tabulator for front-end data display. I had been using version 3.5, but after moving to 4.0, the JS threw the error $(...).tabulator is not a function; it turned out the new version no longer depends on jQuery.

The core code of version 4.0 of Tabulator is now dependency free! That means no more jQuery, which means there are a few changes that need to be made to your existing code to get on board with the new way of doing things.

Here is the new way to initialize a table:

var table = new Tabulator("#example-table", {
    //table setup options
});

Note that the Tabulator script tag must be placed before the jQuery one, otherwise you will get errors.

<script type="text/javascript" src="https://unpkg.com/tabulator-tables@4.0/dist/js/tabulator.min.js"></script>

Reference:
http://tabulator.info/docs/4.0/upgrade

These past few days I've been working through Chapter 2 of "Hands-On Machine Learning with Scikit-Learn and TensorFlow", and something maddening happened. One step needs to divide median_income into 5 strata, lumping everything above 5 into category 5, with this code:

    housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
    housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

I stared at housing["income_cat"] < 5 for ages, convinced it was written backwards, before realizing that mask and where do exactly opposite things: where keeps values where the condition is True and replaces the rest, while mask replaces values where the condition is True.

  s = pd.Series(range(5))

  s.where(s > 1, 10)
  0    10.0
  1    10.0
  2     2.0
  3     3.0
  4     4.0

  s.mask(s > 1, 10)
  0     0.0
  1     1.0
  2    10.0
  3    10.0
  4    10.0

  df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
  m = df % 3 == 0
  # df.where(m, np.array([1,2,3,4,5]).reshape(-1, 5))  # raises: shape (1, 5) does not match df's (5, 2)
  df.where(m, -df)
     A  B
  0  0 -1
  1 -2  3
  2 -4 -5
  3  6 -7
  4 -8  9
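
To state the relationship precisely, s.mask(cond, other) behaves like s.where(~cond, other), since mask replaces where the condition is True and where replaces where it is False. A minimal check:

  import pandas as pd

  s = pd.Series(range(5))
  # Negating the condition makes mask and where agree on every element.
  assert s.mask(s > 1, 10).equals(s.where(~(s > 1), 10))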

Reference: https://blog.csdn.net/dss_dssssd/article/details/82818587 (author: 依斐, via CSDN)