1 Installing MySQL 8 on Windows 10
Go to the MySQL site https://dev.mysql.com/ -> DOWNLOADS -> MySQL Community (GPL) Downloads -> MySQL Community Server and download the Windows zip archive.
Unzip it and add the bin directory (xxxx\bin) to the Path environment variable.
Create a my.ini in the root of the extracted folder:
[mysqld]
# listen on port 3306
port=3306
# MySQL install directory -- put your own path here
basedir=F:\APP\MySQL\mysql-8.0.31-winx64
# directory where MySQL stores its data
datadir=F:\APP\MySQL\mysql-8.0.31-winx64\Data
# maximum number of connections
max_connections=200
# allowed number of failed connection attempts
max_connect_errors=10
# server character set, default utf8
character-set-server=utf8
# default storage engine for new tables
default-storage-engine=INNODB
# authenticate with the mysql_native_password plugin by default
default_authentication_plugin=mysql_native_password

[mysql]
# default character set for the mysql client
default-character-set=utf8

[client]
# default port the client uses to connect to the server
port=3306
default-character-set=utf8
Open cmd as administrator and run:
mysqld --initialize --console (the last line of the output contains the initial root password),
mysqld --install, then net start mysql.
Change the password:
mysql -u root -p, then ALTER USER 'root'@'localhost' IDENTIFIED BY 'new password';
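As a quick sanity check after changing the password, you can log back in and query the server (a transcript sketch; output varies by install):

```
mysql -u root -p
-- after logging in with the new password:
SELECT VERSION();
SHOW DATABASES;
```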
2 Installing Redis on Windows 10
Official site: http://redis.io/download
There is no Windows build on the official site; download one from GitHub instead:
GitHub: https://github.com/MicrosoftArchive/redis/releases/tag/win-3.2.100 -- download the zip file,
unzip it, and set a password in redis.windows.conf: requirepass xxxx,
then register the Windows service with redis-server --service-install redis.windows.conf and start it with redis-server --service-start
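To confirm the service is up and the requirepass value works, redis-cli from the same folder should answer PONG (xxxx stands for the password set above):

```
redis-cli.exe -h 127.0.0.1 -p 6379 -a xxxx ping
```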
Note: to remove the service, run sc delete Redis in an administrator cmd.
3 Installing Kafka on Windows 10
Extract it close to a drive root; Kafka reports errors when the path tree is too deep.
Download: https://kafka.apache.org/downloads -- Scala 2.13 - kafka_2.13-3.2.1.tgz (asc, sha512).
Unzip it, then set log.dirs=<kafka root>\kafka-logs in server.properties and dataDir=<kafka root>\zookeeper-data in zookeeper.properties.
From the root directory run: .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
From the root directory run: .\bin\windows\kafka-server-start.bat .\config\server.properties
Create a topic: .\bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
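To check the broker end to end, the console clients bundled under bin\windows can exchange a message on the new topic (run each in its own window; type into the producer, the consumer echoes it back):

```
.\bin\windows\kafka-console-producer.bat --bootstrap-server localhost:9092 --topic test
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
```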
4 Installing Nacos on Windows 10
Nacos download: https://gitcode.net/mirrors/alibaba/nacos?utm_source=csdn_github_accelerator
Nacos / Spring Cloud version mapping: https://github.com/alibaba/spring-cloud-alibaba/wiki/版本说明
Download the nacos-server-x.x.x.zip file, unzip it, change startup.cmd to set MODE="standalone", then double-click startup.cmd to start it.
Console: http://127.0.0.1:8848/nacos/#/login (the default account and password are both nacos).
5 Installing MinIO on Windows 10
Download from the official site: https://min.io/download#/windows -- get the MinIO server.
Unzip it, create a data folder in the root directory (the example below assumes it is extracted to D:\minio), and create a start.cmd script in the root directory.
The script contents:
@echo off
echo.
echo [INFO] Starting the MinIO file server.
echo.
title minio
cd %~dp0
cd D:\minio
minio.exe server D:\minio\data --console-address ":9990"
pause
Double-click start.cmd to start it.
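The script above starts MinIO without credentials, so it falls back to the built-in minioadmin/minioadmin account and warns about it. To set your own, add environment variables before the minio.exe line; recent releases read MINIO_ROOT_USER/MINIO_ROOT_PASSWORD, while older ones used MINIO_ACCESS_KEY/MINIO_SECRET_KEY (the admin/admin123 values here just mirror the minio settings used later in these notes):

```
set MINIO_ROOT_USER=admin
set MINIO_ROOT_PASSWORD=admin123
```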
Setting up a Spring Boot project:
In the Maven repository, pick a spring-boot-starter-parent version that matches your JDK version, create an empty Maven project named springboottest,
and configure the pom file:
<properties>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
</properties>

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.11</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
Create the application startup class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringboottestApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringboottestApplication.class, args);
    }
}
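With spring-boot-starter-web on the classpath the app serves on port 8080 by default; if that clashes with another service, a minimal src/main/resources/application.yaml (this file is not part of the original notes, it is just a sketch) can override it:

```yaml
server:
  port: 8081
```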
Setting up a Spring Cloud project:
Create an empty Maven project named springcloudtest,
delete the src folder, and import the dependencies (the module created below uses the base package org.example.springboot1):
<properties>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.6.11</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>2021.0.4.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
Create a module named springboot1 and add its dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-boot-starter</artifactId>
        <version>3.5.1</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
        <version>1.1.20</version>
    </dependency>
    <dependency>
        <groupId>p6spy</groupId>
        <artifactId>p6spy</artifactId>
        <version>3.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.redisson</groupId>
        <artifactId>redisson-spring-boot-starter</artifactId>
        <version>3.16.8</version>
    </dependency>
    <dependency>
        <groupId>io.minio</groupId>
        <artifactId>minio</artifactId>
        <version>8.3.4</version>
    </dependency>
    <dependency>
        <groupId>com.squareup.okhttp3</groupId>
        <artifactId>okhttp</artifactId>
        <version>4.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-amqp</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
    <dependency>
        <groupId>com.github.ulisesbocchio</groupId>
        <artifactId>jasypt-spring-boot-starter</artifactId>
        <version>3.0.3</version>
    </dependency>
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>5.4.3</version>
    </dependency>
    <dependency>
        <groupId>jakarta.validation</groupId>
        <artifactId>jakarta.validation-api</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-bootstrap</artifactId>
        <version>3.0.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-high-level-client</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <mainClass>org.example.springboot1.Springboot1Application</mainClass>
            </configuration>
        </plugin>
    </plugins>
</build>

bootstrap.yaml:

spring:
  application:
    name: helpservice
  profiles:
    active: dev

bootstrap-dev.yaml:

version: 123
mybatis:
  mapper-locations: classpath:mapper/*.xml
  type-aliases-package: com.hsz.help.entity.po
  # enable underscore-to-camel-case mapping
  configuration:
    map-underscore-to-camel-case: true
logging:
  level.com.hsz.help.mapper: debug
server:
  port: 8087
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/helpservice?characterEncoding=UTF-8&useSSL=false
    username: root
    password: 123456
  redis:
    database: 0
    host: 127.0.0.1
    port: 6379
    password: 123456
    # spring-boot 1.x defaults to jedis; spring-boot 2.x defaults to lettuce, which is thread-safe
    lettuce:
      pool:
        # maximum idle connections in the pool, default 8
        max-idle: 8
        # minimum idle connections in the pool, default 0
        min-idle: 0
        # maximum connections in the pool, default 8; a negative value means no limit
        max-active: 2000
        # maximum blocking wait time of the pool (a negative value means no limit), default -1
        max-wait: -1
  cache:
    type: redis
#  rabbitmq:
#    host: localhost
#    port: 5672
#    username: rabbitmq
#    password: 123456
#    virtual-host: / #/ems
#  kafka:
#    bootstrap-servers: localhost:9092
#    producer:
#      # number of retries
#      retries: 0
#      # ack level: how many partition replicas must be written before the producer gets an ack (0, 1, all/-1)
#      acks: 1
#      # batch size
#      batch-size: 16384
#      # producer-side buffer size
#      buffer-memory: 33554432
#      key-serializer: org.apache.kafka.common.serialization.StringSerializer
#      value-serializer: org.apache.kafka.common.serialization.StringSerializer
#    consumer:
#      # default consumer group ID
#      group-id: mentugroup
#      # whether to auto-commit offsets
#      enable-auto-commit: true
#      # offset commit delay (how long after receiving a message the offset is committed)
#      auto-commit-interval: 100
#      # earliest: consume from the committed offset when one exists, otherwise from the beginning
#      # latest: consume from the committed offset when one exists, otherwise only newly produced data
#      # none: consume after the committed offsets when every partition has one; throw if any partition lacks one
#      auto-offset-reset: latest
#      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
#      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
#  servlet:
#    multipart:
#      max-file-size: 200MB
#      max-request-size: 200MB
#minio:
#  endpoint: http://localhost:9000
#  accessKey: admin
#  secretKey: admin123
#  bucketName: bytos

RedisConfig:

import org.redisson.spring.data.connection.RedissonConnectionFactory;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.RedisSerializer;

@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    @Bean
    @Primary
    public RedisTemplate<String, Object> redisTemplate(RedissonConnectionFactory redissonConnectionFactory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(redissonConnectionFactory);
        redisTemplate.setKeySerializer(RedisSerializer.string());
        redisTemplate.setValueSerializer(RedisSerializer.json());
        redisTemplate.setHashKeySerializer(RedisSerializer.string());
        redisTemplate.setHashValueSerializer(RedisSerializer.string());
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }

    @Bean
    @Primary
    public CacheManager cacheManager(RedissonConnectionFactory redissonConnectionFactory) {
        return RedisCacheManager.builder(redissonConnectionFactory)
                .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                        .serializeKeysWith(RedisSerializationContext
                                .SerializationPair.fromSerializer(RedisSerializer.string()))
                        .serializeValuesWith(RedisSerializationContext
                                .SerializationPair.fromSerializer(RedisSerializer.json()))
                        .disableCachingNullValues())
                .build();
    }
}

RedissonConfig:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RedissonConfig {

    @Value("${spring.redis.host}")
    private String redisHost;

    @Value("${spring.redis.port}")
    private String redisPort;

    @Value("${spring.redis.password}")
    private String redisPassword;

    @Bean
    public RedissonClient redissonClient() {
        Config config = new Config();
        // single-server mode: set the redis address and password
        config.useSingleServer()
                .setAddress("redis://" + redisHost + ":" + redisPort)
                .setPassword(redisPassword);
        return Redisson.create(config);
    }
}
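Note that although section 4 installs Nacos, the bootstrap configuration above never points the service at it. If the service should register with that Nacos instance, a fragment along these lines would go into bootstrap.yaml (a sketch only: it assumes the spring-cloud-starter-alibaba-nacos-discovery dependency, which the module's dependency list above does not include, and a local Nacos on the default port):

```yaml
spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
```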