I found N supposedly "simplest" solutions online; in the end only one of them actually was. Recording it here:

1. In Jupyter Notebook, Spyder, or another Python IDE, locate matplotlib's matplotlibrc file:

import matplotlib
matplotlib.matplotlib_fname()

Output:

'c:\users\local\programs\python\python37\lib\site-packages\matplotlib\mpl-data\matplotlibrc'

2. Edit the matplotlibrc file

Find the commented line

#font.family: sans-serif

and add a new line directly below it:

font.family: Microsoft YaHei

Done!
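If you would rather not edit matplotlibrc at all, the same font can also be set per session through rcParams — a sketch, assuming Microsoft YaHei is installed (on other systems substitute a locally available CJK font such as SimHei or Noto Sans CJK):

```python
import matplotlib
import matplotlib.pyplot as plt

# same setting the matplotlibrc edit above makes, but for this session only
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = ['Microsoft YaHei']
# keep the minus sign rendering correctly alongside a CJK font
plt.rcParams['axes.unicode_minus'] = False

# the config file this post edits can always be located like this
print(matplotlib.matplotlib_fname())
```

The rcParams route is handy for scripts you share, since it does not depend on everyone having edited their matplotlibrc the same way.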

I installed the BT Panel (宝塔) on a VPS, but no matter how I changed the settings in the panel, the time and time zone were always wrong. In the end I had to change them over SSH anyway, which turned out to be quick and easy.

Run the following commands over SSH:

rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Then run the following command to check that the change took effect:

date
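On systemd-based distributions there is also a one-command alternative to the symlink dance — a sketch, assuming timedatectl is available:

```shell
# one command that replaces both steps above (systemd distributions):
# timedatectl set-timezone Asia/Shanghai

# verify: the symlink target and the resulting local time
ls -l /etc/localtime || true
date
date +%Z
```

Either way, `date` should now report CST (UTC+8) for Asia/Shanghai.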

Reference:
https://blog.csdn.net/zx711166/article/details/78839431

  1. learning_curve(): mainly used to judge whether a model is overfitting
  2. validation_curve(): mainly used to check the model's accuracy under different values of a parameter

Below are the examples from the Python Machine Learning book, with some parameters changed by me.

learning_curve

import matplotlib.pyplot as plt
import numpy as np

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# load the Breast Cancer Wisconsin data so the snippet runs stand-alone
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

pipe_lr = Pipeline([('scl', StandardScaler()),
                    ('pca', PCA()),
                    ('svc', SVC(kernel='rbf')),
#                     ('clf', LogisticRegression(penalty='l2', random_state=0, solver='lbfgs')),
                    ])

train_sizes, train_scores, test_scores =\
                learning_curve(estimator=pipe_lr,
                               X=X_train,
                               y=y_train,
                               train_sizes=np.linspace(0.1, 1.0, 10),
                               cv=10,
                               n_jobs=1)

train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

plt.plot(train_sizes, train_mean,
         color='blue', marker='o',
         markersize=5, label='training accuracy')

plt.fill_between(train_sizes,
                 train_mean + train_std,
                 train_mean - train_std,
                 alpha=0.15, color='blue')

plt.plot(train_sizes, test_mean,
         color='green', linestyle='--',
         marker='s', markersize=5,
         label='validation accuracy')

plt.fill_between(train_sizes,
                 test_mean + test_std,
                 test_mean - test_std,
                 alpha=0.15, color='green')

plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.savefig('learning_curve.png', dpi=300)
plt.show()

As the figure below shows, the accuracy of the blue training curve is clearly higher than that of the green validation curve, which indicates overfitting; one remedy is to collect more training data.
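The same diagnosis can be made numerically from the arrays learning_curve returns, without plotting anything — a minimal sketch on the same dataset, assuming scikit-learn is installed:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = Pipeline([('scl', StandardScaler()), ('svc', SVC(kernel='rbf'))])

sizes, train_scores, test_scores = learning_curve(
    pipe, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5, n_jobs=1)

# a gap that stays large as the training set grows is the overfitting signal
gap = train_scores.mean(axis=1) - test_scores.mean(axis=1)
print(gap)
```

If the last entries of `gap` shrink toward zero, more data is helping; if they plateau at a large value, the model itself needs regularizing.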

validation_curve

from sklearn.model_selection import validation_curve



param_range = ['linear','sigmoid','poly','rbf']
train_scores, test_scores = validation_curve(
                estimator=pipe_lr, 
                X=X_train, 
                y=y_train, 
                param_name='svc__kernel', 
                param_range=param_range)

train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

plt.plot(param_range, train_mean, 
         color='blue', marker='o', 
         markersize=5, label='training accuracy')

plt.fill_between(param_range, train_mean + train_std,
                 train_mean - train_std, alpha=0.15,
                 color='blue')

plt.plot(param_range, test_mean, 
         color='green', linestyle='--', 
         marker='s', markersize=5, 
         label='validation accuracy')

plt.fill_between(param_range, 
                 test_mean + test_std,
                 test_mean - test_std, 
                 alpha=0.15, color='green')

plt.grid()
plt.legend(loc='lower right')
plt.xlabel('SVC kernel')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.savefig('validation_curve.png', dpi=300)
plt.show()

In the code above, I set param_range to ['linear','sigmoid','poly','rbf'] to test how the model's accuracy differs across kernels. One thing to note: because we used a Pipeline earlier, the parameter name must be written as param_name='svc__kernel' — the step name svc, then the kernel parameter that belongs to that step, joined by a double underscore __.
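The valid double-underscore names, including the svc__kernel used above, can be listed from the pipeline itself — a sketch with a pipeline like the one above:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([('scl', StandardScaler()), ('svc', SVC(kernel='rbf'))])

# every tunable parameter is exposed as <step name>__<parameter>
names = sorted(pipe.get_params().keys())
print([n for n in names if n.startswith('svc__')][:5])

# the same convention works for setting parameters directly
pipe.set_params(svc__kernel='linear')
print(pipe.named_steps['svc'].kernel)
```

This is also why GridSearchCV parameter grids over a Pipeline use the same `step__param` keys.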

The two do essentially the same job, but they target different inputs: LabelEncoder encodes a 1-D array of target labels as integers, while OrdinalEncoder encodes a 2-D array of features, learning one integer encoding per column.

Here are the examples from the official documentation:

LabelEncoder

>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> le.fit([1, 2, 2, 6])
LabelEncoder()
>>> le.classes_
array([1, 2, 6])
>>> le.transform([1, 1, 2, 6]) 
array([0, 0, 1, 2]...)
>>> le.inverse_transform([0, 0, 1, 2])
array([1, 1, 2, 6])
>>> le = preprocessing.LabelEncoder()
>>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
LabelEncoder()
>>> list(le.classes_)
['amsterdam', 'paris', 'tokyo']
>>> le.transform(["tokyo", "tokyo", "paris"]) 
array([2, 2, 1]...)
>>> list(le.inverse_transform([2, 2, 1]))
['tokyo', 'tokyo', 'paris']

OrdinalEncoder

>>> from sklearn.preprocessing import OrdinalEncoder
>>> enc = OrdinalEncoder()
>>> X = [['Male', 1], ['Female', 3], ['Female', 2]]
>>> enc.fit(X)
... 
OrdinalEncoder(categories='auto', dtype=<... 'numpy.float64'>)
>>> enc.categories_
[array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)]
>>> enc.transform([['Female', 3], ['Male', 1]])
array([[0., 2.],
       [1., 0.]])
>>> enc.inverse_transform([[1, 0], [0, 1]])
array([['Male', 1],
       ['Female', 2]], dtype=object)
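The practical difference in the two examples above is the input shape — a quick check, assuming scikit-learn is installed:

```python
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder

# LabelEncoder: a 1-D array of target labels, classes sorted alphabetically
le = LabelEncoder()
y = le.fit_transform(['paris', 'paris', 'tokyo', 'amsterdam'])
print(y)            # [1 1 2 0]

# OrdinalEncoder: a 2-D array of features, one encoding learned per column
oe = OrdinalEncoder()
X = oe.fit_transform([['Male', 'S'], ['Female', 'M'], ['Female', 'L']])
print(X.shape)      # (3, 2)
```

Passing the 2-D feature array to LabelEncoder (or a 1-D target to OrdinalEncoder) raises an error, which is the quickest way to tell them apart in practice.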

References:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
https://datascience.stackexchange.com/questions/39317/difference-between-ordinalencoder-and-labelencoder